From patchwork Thu Jan 7 22:58:35 2021
X-Patchwork-Submitter: James Smart
X-Patchwork-Id: 358596
From: James Smart
To: linux-scsi@vger.kernel.org
Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner
Subject: [PATCH v7 01/31] elx: libefc_sli: SLI-4 register offsets and field
 definitions
Date: Thu, 7 Jan 2021 14:58:35 -0800
Message-Id: <20210107225905.18186-2-jsmart2021@gmail.com>
In-Reply-To: <20210107225905.18186-1-jsmart2021@gmail.com>
References: <20210107225905.18186-1-jsmart2021@gmail.com>
X-Mailing-List: linux-scsi@vger.kernel.org

This is the initial patch for the new Emulex target mode SCSI driver
sources.

This patch:
- Creates the new Emulex source-level directory drivers/scsi/elx and adds
  the directory to the MAINTAINERS file.
- Creates the first library subdirectory, drivers/scsi/elx/libefc_sli.
  This library is a SLI-4 interface library.
- Starts the population of the libefc_sli library with SLI-4 hardware
  register offsets and field definitions.

Co-developed-by: Ram Vegesna
Signed-off-by: Ram Vegesna
Signed-off-by: James Smart
Reviewed-by: Hannes Reinecke
Reviewed-by: Daniel Wagner
---
 MAINTAINERS                        |   9 +
 drivers/scsi/elx/libefc_sli/sli4.c |  21 ++
 drivers/scsi/elx/libefc_sli/sli4.h | 308 +++++++++++++++++++++++++++++
 3 files changed, 338 insertions(+)
 create mode 100644 drivers/scsi/elx/libefc_sli/sli4.c
 create mode 100644 drivers/scsi/elx/libefc_sli/sli4.h

diff --git a/MAINTAINERS b/MAINTAINERS
index c021a3704857..c0d56b0ed018 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -6509,6 +6509,15 @@ S:	Supported
 W:	http://www.broadcom.com
 F:	drivers/scsi/lpfc/
 
+EMULEX/BROADCOM EFCT FC/FCOE SCSI TARGET DRIVER
+M:	James Smart
+M:	Ram Vegesna
+L:	linux-scsi@vger.kernel.org
+L:	target-devel@vger.kernel.org
+S:	Supported
+W:	http://www.broadcom.com
+F:	drivers/scsi/elx/
+
 ENE CB710 FLASH CARD READER DRIVER
 M:	Michał Mirosław
 S:	Maintained

diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
new file mode 100644
index 000000000000..5396e14e7bf8
--- /dev/null
+++ b/drivers/scsi/elx/libefc_sli/sli4.c
@@ -0,0 +1,21 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+/**
+ * All common (i.e. transport-independent) SLI-4 functions are implemented
+ * in this file.
+ */
+#include "sli4.h"
+
+static struct sli4_asic_entry_t sli4_asic_table[] = {
+	{ SLI4_ASIC_REV_B0, SLI4_ASIC_GEN_5},
+	{ SLI4_ASIC_REV_D0, SLI4_ASIC_GEN_5},
+	{ SLI4_ASIC_REV_A3, SLI4_ASIC_GEN_6},
+	{ SLI4_ASIC_REV_A0, SLI4_ASIC_GEN_6},
+	{ SLI4_ASIC_REV_A1, SLI4_ASIC_GEN_6},
+	{ SLI4_ASIC_REV_A3, SLI4_ASIC_GEN_6},
+	{ SLI4_ASIC_REV_A1, SLI4_ASIC_GEN_7},
+};
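The table above pairs each supported ASIC revision with its chip
generation. As a rough sketch of how a setup path could validate the
ASIC_ID register contents against this table (the helper name and exact
flow are assumptions for illustration, not code from this patch; the
field layout follows the SLI4_ASIC_* definitions added below):

static bool example_asic_supported(u32 asic_id_reg)
{
	/* generation in bits 15:8, revision assumed in bits 7:0 */
	u32 family = asic_id_reg & SLI4_ASIC_GEN_MASK;
	u32 rev_id = asic_id_reg & 0xff;
	unsigned int i;

	for (i = 0; i < ARRAY_SIZE(sli4_asic_table); i++) {
		if (sli4_asic_table[i].rev_id == rev_id &&
		    sli4_asic_table[i].family == family)
			return true;
	}
	return false;
}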
diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h
new file mode 100644
index 000000000000..f8e4b2355721
--- /dev/null
+++ b/drivers/scsi/elx/libefc_sli/sli4.h
@@ -0,0 +1,308 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ *
+ */
+
+/*
+ * All common SLI-4 structures and function prototypes.
+ */
+
+#ifndef _SLI4_H
+#define _SLI4_H
+
+#include <linux/pci.h>
+#include <linux/delay.h>
+#include "scsi/fc/fc_els.h"
+#include "scsi/fc/fc_fs.h"
+#include "../include/efc_common.h"
+
+/*************************************************************************
+ * Common SLI-4 register offsets and field definitions
+ */
+
+/* SLI_INTF - SLI Interface Definition Register */
+#define SLI4_INTF_REG	0x0058
+enum sli4_intf {
+	SLI4_INTF_REV_SHIFT = 4,
+	SLI4_INTF_REV_MASK = 0xf0,
+
+	SLI4_INTF_REV_S3 = 0x30,
+	SLI4_INTF_REV_S4 = 0x40,
+
+	SLI4_INTF_FAMILY_SHIFT = 8,
+	SLI4_INTF_FAMILY_MASK = 0x0f00,
+
+	SLI4_FAMILY_CHECK_ASIC_TYPE = 0x0f00,
+
+	SLI4_INTF_IF_TYPE_SHIFT = 12,
+	SLI4_INTF_IF_TYPE_MASK = 0xf000,
+
+	SLI4_INTF_IF_TYPE_2 = 0x2000,
+	SLI4_INTF_IF_TYPE_6 = 0x6000,
+
+	SLI4_INTF_VALID_SHIFT = 29,
+	SLI4_INTF_VALID_MASK = 0xe0000000,
+
+	SLI4_INTF_VALID_VALUE = 0xc0000000,
+};
+
+/* ASIC_ID - SLI ASIC Type and Revision Register */
+#define SLI4_ASIC_ID_REG	0x009c
+enum sli4_asic {
+	SLI4_ASIC_GEN_SHIFT = 8,
+	SLI4_ASIC_GEN_MASK = 0xff00,
+	SLI4_ASIC_GEN_5 = 0x0b00,
+	SLI4_ASIC_GEN_6 = 0x0c00,
+	SLI4_ASIC_GEN_7 = 0x0d00,
+};
+
+enum sli4_asic_revisions {
+	SLI4_ASIC_REV_A0 = 0x00,
+	SLI4_ASIC_REV_A1 = 0x01,
+	SLI4_ASIC_REV_A2 = 0x02,
+	SLI4_ASIC_REV_A3 = 0x03,
+	SLI4_ASIC_REV_B0 = 0x10,
+	SLI4_ASIC_REV_B1 = 0x11,
+	SLI4_ASIC_REV_B2 = 0x12,
+	SLI4_ASIC_REV_C0 = 0x20,
+	SLI4_ASIC_REV_C1 = 0x21,
+	SLI4_ASIC_REV_C2 = 0x22,
+	SLI4_ASIC_REV_D0 = 0x30,
+};
+
+struct sli4_asic_entry_t {
+	u32 rev_id;
+	u32 family;
+};
+
+/* BMBX - Bootstrap Mailbox Register */
+#define SLI4_BMBX_REG	0x0160
+enum sli4_bmbx {
+	SLI4_BMBX_MASK_HI = 0x3,
+	SLI4_BMBX_MASK_LO = 0xf,
+	SLI4_BMBX_RDY = 1 << 0,
+	SLI4_BMBX_HI = 1 << 1,
+	SLI4_BMBX_SIZE = 256,
+};
+
+static inline u32
+sli_bmbx_write_hi(u64 addr)
+{
+	u32 val;
+
+	val = upper_32_bits(addr) & ~SLI4_BMBX_MASK_HI;
+	val |= SLI4_BMBX_HI;
+
+	return val;
+}
+
+static inline u32
+sli_bmbx_write_lo(u64 addr)
+{
+	u32 val;
+
+	val = (upper_32_bits(addr) & SLI4_BMBX_MASK_HI) << 30;
+	val |= ((addr) & ~SLI4_BMBX_MASK_LO) >> 2;
+
+	return val;
+}
+
+/* SLIPORT_CONTROL - SLI Port Control Register */
+#define SLI4_PORT_CTRL_REG	0x0408
+enum sli4_port_ctrl {
+	SLI4_PORT_CTRL_IP = 1u << 27,
+	SLI4_PORT_CTRL_IDIS = 1u << 22,
+	SLI4_PORT_CTRL_FDD = 1u << 31,
+};
+
+/* SLI4_SLIPORT_ERROR - SLI Port Error Register */
+#define SLI4_PORT_ERROR1	0x040c
+#define SLI4_PORT_ERROR2	0x0410
+
+/* EQCQ_DOORBELL - EQ and CQ Doorbell Register */
+#define SLI4_EQCQ_DB_REG	0x120
+enum sli4_eqcq_e {
+	SLI4_EQ_ID_LO_MASK = 0x01ff,
+
+	SLI4_CQ_ID_LO_MASK = 0x03ff,
+
+	SLI4_EQCQ_CI_EQ = 0x0200,
+
+	SLI4_EQCQ_QT_EQ = 0x00000400,
+	SLI4_EQCQ_QT_CQ = 0x00000000,
+
+	SLI4_EQCQ_ID_HI_SHIFT = 11,
+	SLI4_EQCQ_ID_HI_MASK = 0xf800,
+
+	SLI4_EQCQ_NUM_SHIFT = 16,
+	SLI4_EQCQ_NUM_MASK = 0x1fff0000,
+
+	SLI4_EQCQ_ARM = 0x20000000,
+	SLI4_EQCQ_UNARM = 0x00000000,
+};
+
+static inline u32
+sli_format_eq_db_data(u16 num_popped, u16 id, u32 arm)
+{
+	u32 reg;
+
+	reg = (id & SLI4_EQ_ID_LO_MASK) | SLI4_EQCQ_QT_EQ;
+	reg |= (((id) >> 9) << SLI4_EQCQ_ID_HI_SHIFT) & SLI4_EQCQ_ID_HI_MASK;
+	reg |= ((num_popped) << SLI4_EQCQ_NUM_SHIFT) & SLI4_EQCQ_NUM_MASK;
+	reg |= arm | SLI4_EQCQ_CI_EQ;
+
+	return reg;
+}
+
+static inline u32
+sli_format_cq_db_data(u16 num_popped, u16 id, u32 arm)
+{
+	u32 reg;
+
+	reg = ((id) & SLI4_CQ_ID_LO_MASK) | SLI4_EQCQ_QT_CQ;
+	reg |= (((id) >> 10) << SLI4_EQCQ_ID_HI_SHIFT) & SLI4_EQCQ_ID_HI_MASK;
+	reg |= ((num_popped) << SLI4_EQCQ_NUM_SHIFT) & SLI4_EQCQ_NUM_MASK;
+	reg |= arm;
+
+	return reg;
+}
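The two helpers above only compute the doorbell dword; the caller writes
it to SLI4_EQCQ_DB_REG. A minimal sketch of re-arming an event queue
after servicing it (the wrapper function is hypothetical; reg[0] is the
BAR 0 mapping used by the register accessors later in this series):

static void example_rearm_eq(struct sli4 *sli4, u16 eq_id, u16 num_popped)
{
	/* consume num_popped EQEs and re-arm interrupts for this EQ */
	u32 db = sli_format_eq_db_data(num_popped, eq_id, SLI4_EQCQ_ARM);

	writel(db, sli4->reg[0] + SLI4_EQCQ_DB_REG);
}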
+
+/* EQ_DOORBELL - EQ Doorbell Register for IF_TYPE = 6 */
+#define SLI4_IF6_EQ_DB_REG	0x120
+enum sli4_eq_e {
+	SLI4_IF6_EQ_ID_MASK = 0x0fff,
+
+	SLI4_IF6_EQ_NUM_SHIFT = 16,
+	SLI4_IF6_EQ_NUM_MASK = 0x1fff0000,
+};
+
+static inline u32
+sli_format_if6_eq_db_data(u16 num_popped, u16 id, u32 arm)
+{
+	u32 reg;
+
+	reg = id & SLI4_IF6_EQ_ID_MASK;
+	reg |= (num_popped << SLI4_IF6_EQ_NUM_SHIFT) & SLI4_IF6_EQ_NUM_MASK;
+	reg |= arm;
+
+	return reg;
+}
+
+/* CQ_DOORBELL - CQ Doorbell Register for IF_TYPE = 6 */
+#define SLI4_IF6_CQ_DB_REG	0xc0
+enum sli4_cq_e {
+	SLI4_IF6_CQ_ID_MASK = 0xffff,
+
+	SLI4_IF6_CQ_NUM_SHIFT = 16,
+	SLI4_IF6_CQ_NUM_MASK = 0x1fff0000,
+};
+
+static inline u32
+sli_format_if6_cq_db_data(u16 num_popped, u16 id, u32 arm)
+{
+	u32 reg;
+
+	reg = id & SLI4_IF6_CQ_ID_MASK;
+	reg |= ((num_popped) << SLI4_IF6_CQ_NUM_SHIFT) & SLI4_IF6_CQ_NUM_MASK;
+	reg |= arm;
+
+	return reg;
+}
+
+/* MQ_DOORBELL - MQ Doorbell Register */
+#define SLI4_MQ_DB_REG		0x0140
+#define SLI4_IF6_MQ_DB_REG	0x0160
+enum sli4_mq_e {
+	SLI4_MQ_ID_MASK = 0xffff,
+
+	SLI4_MQ_NUM_SHIFT = 16,
+	SLI4_MQ_NUM_MASK = 0x3fff0000,
+};
+
+static inline u32
+sli_format_mq_db_data(u16 id)
+{
+	u32 reg;
+
+	reg = id & SLI4_MQ_ID_MASK;
+	reg |= (1 << SLI4_MQ_NUM_SHIFT) & SLI4_MQ_NUM_MASK;
+
+	return reg;
+}
+
+/* RQ_DOORBELL - RQ Doorbell Register */
+#define SLI4_RQ_DB_REG		0x0a0
+#define SLI4_IF6_RQ_DB_REG	0x0080
+enum sli4_rq_e {
+	SLI4_RQ_DB_ID_MASK = 0xffff,
+
+	SLI4_RQ_DB_NUM_SHIFT = 16,
+	SLI4_RQ_DB_NUM_MASK = 0x3fff0000,
+};
+
+static inline u32
+sli_format_rq_db_data(u16 id)
+{
+	u32 reg;
+
+	reg = id & SLI4_RQ_DB_ID_MASK;
+	reg |= (1 << SLI4_RQ_DB_NUM_SHIFT) & SLI4_RQ_DB_NUM_MASK;
+
+	return reg;
+}
+
+/* WQ_DOORBELL - WQ Doorbell Register */
+#define SLI4_IO_WQ_DB_REG	0x040
+#define SLI4_IF6_WQ_DB_REG	0x040
+enum sli4_wq_e {
+	SLI4_WQ_ID_MASK = 0xffff,
+
+	SLI4_WQ_IDX_SHIFT = 16,
+	SLI4_WQ_IDX_MASK = 0xff0000,
+
+	SLI4_WQ_NUM_SHIFT = 24,
+	SLI4_WQ_NUM_MASK = 0x0ff00000,
+};
+
+static inline u32
+sli_format_wq_db_data(u16 id)
+{
+	u32 reg;
+
+	reg = id & SLI4_WQ_ID_MASK;
+	reg |= (1 << SLI4_WQ_NUM_SHIFT) & SLI4_WQ_NUM_MASK;
+
+	return reg;
+}
+
+/* SLIPORT_STATUS - SLI Port Status Register */
+#define SLI4_PORT_STATUS_REGOFF	0x0404
+enum sli4_port_status {
+	SLI4_PORT_STATUS_FDP = 1u << 21,
+	SLI4_PORT_STATUS_RDY = 1u << 23,
+	SLI4_PORT_STATUS_RN = 1u << 24,
+	SLI4_PORT_STATUS_DIP = 1u << 25,
+	SLI4_PORT_STATUS_OTI = 1u << 29,
+	SLI4_PORT_STATUS_ERR = 1u << 31,
+};
+
+#define SLI4_PHYDEV_CTRL_REG	0x0414
+#define SLI4_PHYDEV_CTRL_FRST	(1 << 1)
+#define SLI4_PHYDEV_CTRL_DD	(1 << 2)
+
+/* Register name enums */
+enum sli4_regname_en {
+	SLI4_REG_BMBX,
+	SLI4_REG_EQ_DOORBELL,
+	SLI4_REG_CQ_DOORBELL,
+	SLI4_REG_RQ_DOORBELL,
+	SLI4_REG_IO_WQ_DOORBELL,
+	SLI4_REG_MQ_DOORBELL,
+	SLI4_REG_PHYSDEV_CONTROL,
+	SLI4_REG_PORT_CONTROL,
+	SLI4_REG_PORT_ERROR1,
+	SLI4_REG_PORT_ERROR2,
+	SLI4_REG_PORT_SEMAPHORE,
+	SLI4_REG_PORT_STATUS,
+	SLI4_REG_UNKWOWN	/* must be last */
+};
+
+struct sli4_reg {
+	u32 rset;
+	u32 off;
+};
+
+#endif /* !_SLI4_H */
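Note that IF_TYPE 2 ports share one combined EQ/CQ doorbell while
IF_TYPE 6 ports split them into dedicated registers. A hypothetical
helper showing how the SLI_INTF fields defined earlier would steer that
choice (illustration only — the driver presumably records these offsets
in a struct sli4_reg table keyed by enum sli4_regname_en rather than
branching like this):

static u32 example_cq_db_offset(u32 sli_intf)
{
	/* SLI_INTF must read back as valid before trusting any field */
	if ((sli_intf & SLI4_INTF_VALID_MASK) != SLI4_INTF_VALID_VALUE)
		return 0;

	if ((sli_intf & SLI4_INTF_IF_TYPE_MASK) == SLI4_INTF_IF_TYPE_6)
		return SLI4_IF6_CQ_DB_REG;	/* dedicated CQ doorbell */

	return SLI4_EQCQ_DB_REG;		/* shared EQ/CQ doorbell */
}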
From patchwork Thu Jan 7 22:58:37 2021
X-Patchwork-Submitter: James Smart
X-Patchwork-Id: 358595
From: James Smart
To: linux-scsi@vger.kernel.org
Cc: James Smart, Ram Vegesna, Daniel Wagner
Subject: [PATCH v7 03/31] elx: libefc_sli: Data structures and defines for
 mbox commands
Date: Thu, 7 Jan 2021 14:58:37 -0800
Message-Id: <20210107225905.18186-4-jsmart2021@gmail.com>
In-Reply-To: <20210107225905.18186-1-jsmart2021@gmail.com>
References: <20210107225905.18186-1-jsmart2021@gmail.com>
X-Mailing-List: linux-scsi@vger.kernel.org

This patch continues the libefc_sli SLI-4 library
population. This patch adds definitions for SLI-4 mailbox commands and responses. Co-developed-by: Ram Vegesna Signed-off-by: Ram Vegesna Signed-off-by: James Smart Reviewed-by: Daniel Wagner --- drivers/scsi/elx/libefc_sli/sli4.h | 1628 +++++++++++++++++++++++++++- 1 file changed, 1627 insertions(+), 1 deletion(-) diff --git a/drivers/scsi/elx/libefc_sli/sli4.h b/drivers/scsi/elx/libefc_sli/sli4.h index d194b566b588..39b6fceed85e 100644 --- a/drivers/scsi/elx/libefc_sli/sli4.h +++ b/drivers/scsi/elx/libefc_sli/sli4.h @@ -1990,6 +1990,1520 @@ enum sli4_els_cmd_type { SLI4_ELS_REQUEST64_CMD_FABRIC = 0x0d, }; +#define SLI_PAGE_SIZE SZ_4K + +#define SLI4_BMBX_TIMEOUT_MSEC 30000 +#define SLI4_FW_READY_TIMEOUT_MSEC 30000 + +#define SLI4_BMBX_DELAY_US 1000 /* 1 ms */ +#define SLI4_INIT_PORT_DELAY_US 10000 /* 10 ms */ + +static inline u32 +sli_page_count(size_t bytes, u32 page_size) +{ + if (!page_size) + return 0; + + return (bytes + (page_size - 1)) >> __ffs(page_size); +} + +/************************************************************************* + * SLI-4 mailbox command formats and definitions + */ + +struct sli4_mbox_command_header { + u8 resvd0; + u8 command; + __le16 status; /* Port writes to indicate success/fail */ +}; + +enum sli4_mbx_cmd_value { + SLI4_MBX_CMD_CONFIG_LINK = 0x07, + SLI4_MBX_CMD_DUMP = 0x17, + SLI4_MBX_CMD_DOWN_LINK = 0x06, + SLI4_MBX_CMD_INIT_LINK = 0x05, + SLI4_MBX_CMD_INIT_VFI = 0xa3, + SLI4_MBX_CMD_INIT_VPI = 0xa4, + SLI4_MBX_CMD_POST_XRI = 0xa7, + SLI4_MBX_CMD_RELEASE_XRI = 0xac, + SLI4_MBX_CMD_READ_CONFIG = 0x0b, + SLI4_MBX_CMD_READ_STATUS = 0x0e, + SLI4_MBX_CMD_READ_NVPARMS = 0x02, + SLI4_MBX_CMD_READ_REV = 0x11, + SLI4_MBX_CMD_READ_LNK_STAT = 0x12, + SLI4_MBX_CMD_READ_SPARM64 = 0x8d, + SLI4_MBX_CMD_READ_TOPOLOGY = 0x95, + SLI4_MBX_CMD_REG_FCFI = 0xa0, + SLI4_MBX_CMD_REG_FCFI_MRQ = 0xaf, + SLI4_MBX_CMD_REG_RPI = 0x93, + SLI4_MBX_CMD_REG_RX_RQ = 0xa6, + SLI4_MBX_CMD_REG_VFI = 0x9f, + SLI4_MBX_CMD_REG_VPI = 0x96, + SLI4_MBX_CMD_RQST_FEATURES = 0x9d, + SLI4_MBX_CMD_SLI_CONFIG = 0x9b, + SLI4_MBX_CMD_UNREG_FCFI = 0xa2, + SLI4_MBX_CMD_UNREG_RPI = 0x14, + SLI4_MBX_CMD_UNREG_VFI = 0xa1, + SLI4_MBX_CMD_UNREG_VPI = 0x97, + SLI4_MBX_CMD_WRITE_NVPARMS = 0x03, + SLI4_MBX_CMD_CFG_AUTO_XFER_RDY = 0xad, +}; + +enum sli4_mbx_status { + SLI4_MBX_STATUS_SUCCESS = 0x0000, + SLI4_MBX_STATUS_FAILURE = 0x0001, + SLI4_MBX_STATUS_RPI_NOT_REG = 0x1400, +}; + +/* CONFIG_LINK - configure link-oriented parameters, + * such as default N_Port_ID address and various timers + */ +enum sli4_cmd_config_link_flags { + SLI4_CFG_LINK_BBSCN = 0xf00, + SLI4_CFG_LINK_CSCN = 0x1000, +}; + +struct sli4_cmd_config_link { + struct sli4_mbox_command_header hdr; + u8 maxbbc; + u8 rsvd5; + u8 rsvd6; + u8 rsvd7; + u8 alpa; + __le16 n_port_id; + u8 rsvd11; + __le32 rsvd12; + __le32 e_d_tov; + __le32 lp_tov; + __le32 r_a_tov; + __le32 r_t_tov; + __le32 al_tov; + __le32 rsvd36; + __le32 bbscn_dword; +}; + +#define SLI4_DUMP4_TYPE 0xf + +#define SLI4_WKI_TAG_SAT_TEM 0x1040 + +struct sli4_cmd_dump4 { + struct sli4_mbox_command_header hdr; + __le32 type_dword; + __le16 wki_selection; + __le16 rsvd10; + __le32 rsvd12; + __le32 returned_byte_cnt; + __le32 resp_data[59]; +}; + +/* INIT_LINK - initialize the link for a FC port */ +enum sli4_init_link_flags { + SLI4_INIT_LINK_F_LOOPBACK = 1 << 0, + + SLI4_INIT_LINK_F_P2P_ONLY = 1 << 1, + SLI4_INIT_LINK_F_FCAL_ONLY = 2 << 1, + SLI4_INIT_LINK_F_FCAL_FAIL_OVER = 0 << 1, + SLI4_INIT_LINK_F_P2P_FAIL_OVER = 1 << 1, + + SLI4_INIT_LINK_F_UNFAIR = 1 << 6, + 
SLI4_INIT_LINK_F_NO_LIRP = 1 << 7, + SLI4_INIT_LINK_F_LOOP_VALID_CHK = 1 << 8, + SLI4_INIT_LINK_F_NO_LISA = 1 << 9, + SLI4_INIT_LINK_F_FAIL_OVER = 1 << 10, + SLI4_INIT_LINK_F_FIXED_SPEED = 1 << 11, + SLI4_INIT_LINK_F_PICK_HI_ALPA = 1 << 15, + +}; + +enum sli4_fc_link_speed { + SLI4_LINK_SPEED_1G = 1, + SLI4_LINK_SPEED_2G, + SLI4_LINK_SPEED_AUTO_1_2, + SLI4_LINK_SPEED_4G, + SLI4_LINK_SPEED_AUTO_4_1, + SLI4_LINK_SPEED_AUTO_4_2, + SLI4_LINK_SPEED_AUTO_4_2_1, + SLI4_LINK_SPEED_8G, + SLI4_LINK_SPEED_AUTO_8_1, + SLI4_LINK_SPEED_AUTO_8_2, + SLI4_LINK_SPEED_AUTO_8_2_1, + SLI4_LINK_SPEED_AUTO_8_4, + SLI4_LINK_SPEED_AUTO_8_4_1, + SLI4_LINK_SPEED_AUTO_8_4_2, + SLI4_LINK_SPEED_10G, + SLI4_LINK_SPEED_16G, + SLI4_LINK_SPEED_AUTO_16_8_4, + SLI4_LINK_SPEED_AUTO_16_8, + SLI4_LINK_SPEED_32G, + SLI4_LINK_SPEED_AUTO_32_16_8, + SLI4_LINK_SPEED_AUTO_32_16, + SLI4_LINK_SPEED_64G, + SLI4_LINK_SPEED_AUTO_64_32_16, + SLI4_LINK_SPEED_AUTO_64_32, + SLI4_LINK_SPEED_128G, + SLI4_LINK_SPEED_AUTO_128_64_32, + SLI4_LINK_SPEED_AUTO_128_64, +}; + +struct sli4_cmd_init_link { + struct sli4_mbox_command_header hdr; + __le32 sel_reset_al_pa_dword; + __le32 flags0; + __le32 link_speed_sel_code; +}; + +/* INIT_VFI - initialize the VFI resource */ +enum sli4_init_vfi_flags { + SLI4_INIT_VFI_FLAG_VP = 0x1000, + SLI4_INIT_VFI_FLAG_VF = 0x2000, + SLI4_INIT_VFI_FLAG_VT = 0x4000, + SLI4_INIT_VFI_FLAG_VR = 0x8000, + + SLI4_INIT_VFI_VFID = 0x1fff, + SLI4_INIT_VFI_PRI = 0xe000, + + SLI4_INIT_VFI_HOP_COUNT = 0xff000000, +}; + +struct sli4_cmd_init_vfi { + struct sli4_mbox_command_header hdr; + __le16 vfi; + __le16 flags0_word; + __le16 fcfi; + __le16 vpi; + __le32 vf_id_pri_dword; + __le32 hop_cnt_dword; +}; + +/* INIT_VPI - initialize the VPI resource */ +struct sli4_cmd_init_vpi { + struct sli4_mbox_command_header hdr; + __le16 vpi; + __le16 vfi; +}; + +/* POST_XRI - post XRI resources to the SLI Port */ +enum sli4_post_xri_flags { + SLI4_POST_XRI_COUNT = 0xfff, + SLI4_POST_XRI_FLAG_ENX = 0x1000, + SLI4_POST_XRI_FLAG_DL = 0x2000, + SLI4_POST_XRI_FLAG_DI = 0x4000, + SLI4_POST_XRI_FLAG_VAL = 0x8000, +}; + +struct sli4_cmd_post_xri { + struct sli4_mbox_command_header hdr; + __le16 xri_base; + __le16 xri_count_flags; +}; + +/* RELEASE_XRI - Release XRI resources from the SLI Port */ +enum sli4_release_xri_flags { + SLI4_RELEASE_XRI_REL_XRI_CNT = 0x1f, + SLI4_RELEASE_XRI_COUNT = 0x1f, +}; + +struct sli4_cmd_release_xri { + struct sli4_mbox_command_header hdr; + __le16 rel_xri_count_word; + __le16 xri_count_word; + + struct { + __le16 xri_tag0; + __le16 xri_tag1; + } xri_tbl[62]; +}; + +/* READ_CONFIG - read SLI port configuration parameters */ +struct sli4_cmd_read_config { + struct sli4_mbox_command_header hdr; +}; + +enum sli4_read_cfg_resp_flags { + SLI4_READ_CFG_RESP_RESOURCE_EXT = 0x80000000, /* DW1 */ + SLI4_READ_CFG_RESP_TOPOLOGY = 0xff000000, /* DW2 */ +}; + +enum sli4_read_cfg_topo { + SLI4_READ_CFG_TOPO_FC = 0x1, /* FC topology unknown */ + SLI4_READ_CFG_TOPO_NON_FC_AL = 0x2, /* FC point-to-point or fabric */ + SLI4_READ_CFG_TOPO_FC_AL = 0x3, /* FC-AL topology */ +}; + +struct sli4_rsp_read_config { + struct sli4_mbox_command_header hdr; + __le32 ext_dword; + __le32 topology_dword; + __le32 resvd8; + __le16 e_d_tov; + __le16 resvd14; + __le32 resvd16; + __le16 r_a_tov; + __le16 resvd22; + __le32 resvd24; + __le32 resvd28; + __le16 lmt; + __le16 resvd34; + __le32 resvd36; + __le32 resvd40; + __le16 xri_base; + __le16 xri_count; + __le16 rpi_base; + __le16 rpi_count; + __le16 vpi_base; + __le16 vpi_count; + __le16 vfi_base; + __le16 
vfi_count; + __le16 resvd60; + __le16 fcfi_count; + __le16 rq_count; + __le16 eq_count; + __le16 wq_count; + __le16 cq_count; + __le32 pad[45]; +}; + +/* READ_NVPARMS - read SLI port configuration parameters */ +enum sli4_read_nvparms_flags { + SLI4_READ_NVPARAMS_HARD_ALPA = 0xff, + SLI4_READ_NVPARAMS_PREFERRED_D_ID = 0xffffff00, +}; + +struct sli4_cmd_read_nvparms { + struct sli4_mbox_command_header hdr; + __le32 resvd0; + __le32 resvd4; + __le32 resvd8; + __le32 resvd12; + u8 wwpn[8]; + u8 wwnn[8]; + __le32 hard_alpa_d_id; +}; + +/* WRITE_NVPARMS - write SLI port configuration parameters */ +struct sli4_cmd_write_nvparms { + struct sli4_mbox_command_header hdr; + __le32 resvd0; + __le32 resvd4; + __le32 resvd8; + __le32 resvd12; + u8 wwpn[8]; + u8 wwnn[8]; + __le32 hard_alpa_d_id; +}; + +/* READ_REV - read the Port revision levels */ +enum { + SLI4_READ_REV_FLAG_SLI_LEVEL = 0xf, + SLI4_READ_REV_FLAG_FCOEM = 0x10, + SLI4_READ_REV_FLAG_CEEV = 0x60, + SLI4_READ_REV_FLAG_VPD = 0x2000, + + SLI4_READ_REV_AVAILABLE_LENGTH = 0xffffff, +}; + +struct sli4_cmd_read_rev { + struct sli4_mbox_command_header hdr; + __le16 resvd0; + __le16 flags0_word; + __le32 first_hw_rev; + __le32 second_hw_rev; + __le32 resvd12; + __le32 third_hw_rev; + u8 fc_ph_low; + u8 fc_ph_high; + u8 feature_level_low; + u8 feature_level_high; + __le32 resvd24; + __le32 first_fw_id; + u8 first_fw_name[16]; + __le32 second_fw_id; + u8 second_fw_name[16]; + __le32 rsvd18[30]; + __le32 available_length_dword; + struct sli4_dmaaddr hostbuf; + __le32 returned_vpd_length; + __le32 actual_vpd_length; +}; + +/* READ_SPARM64 - read the Port service parameters */ +#define SLI4_READ_SPARM64_WWPN_OFFSET (4 * sizeof(u32)) +#define SLI4_READ_SPARM64_WWNN_OFFSET (6 * sizeof(u32)) + +struct sli4_cmd_read_sparm64 { + struct sli4_mbox_command_header hdr; + __le32 resvd0; + __le32 resvd4; + struct sli4_bde bde_64; + __le16 vpi; + __le16 resvd22; + __le16 port_name_start; + __le16 port_name_len; + __le16 node_name_start; + __le16 node_name_len; +}; + +/* READ_TOPOLOGY - read the link event information */ +enum sli4_read_topo_e { + SLI4_READTOPO_ATTEN_TYPE = 0xff, + SLI4_READTOPO_FLAG_IL = 0x100, + SLI4_READTOPO_FLAG_PB_RECVD = 0x200, + + SLI4_READTOPO_LINKSTATE_RECV = 0x3, + SLI4_READTOPO_LINKSTATE_TRANS = 0xc, + SLI4_READTOPO_LINKSTATE_MACHINE = 0xf0, + SLI4_READTOPO_LINKSTATE_SPEED = 0xff00, + SLI4_READTOPO_LINKSTATE_TF = 0x40000000, + SLI4_READTOPO_LINKSTATE_LU = 0x80000000, + + SLI4_READTOPO_SCN_BBSCN = 0xf, + SLI4_READTOPO_SCN_CBBSCN = 0xf0, + + SLI4_READTOPO_R_T_TOV = 0x1ff, + SLI4_READTOPO_AL_TOV = 0xf000, + + SLI4_READTOPO_PB_FLAG = 0x80, + + SLI4_READTOPO_INIT_N_PORTID = 0xffffff, +}; + +#define SLI4_MIN_LOOP_MAP_BYTES 128 + +struct sli4_cmd_read_topology { + struct sli4_mbox_command_header hdr; + __le32 event_tag; + __le32 dw2_attentype; + u8 topology; + u8 lip_type; + u8 lip_al_ps; + u8 al_pa_granted; + struct sli4_bde bde_loop_map; + __le32 linkdown_state; + __le32 currlink_state; + u8 max_bbc; + u8 init_bbc; + u8 scn_flags; + u8 rsvd39; + __le16 dw10w0_al_rt_tov; + __le16 lp_tov; + u8 acquired_al_pa; + u8 pb_flags; + __le16 specified_al_pa; + __le32 dw12_init_n_port_id; +}; + +enum sli4_read_topo_link { + SLI4_READ_TOPOLOGY_LINK_UP = 0x1, + SLI4_READ_TOPOLOGY_LINK_DOWN, + SLI4_READ_TOPOLOGY_LINK_NO_ALPA, +}; + +enum sli4_read_topo { + SLI4_READ_TOPO_UNKNOWN = 0x0, + SLI4_READ_TOPO_NON_FC_AL, + SLI4_READ_TOPO_FC_AL, +}; + +enum sli4_read_topo_speed { + SLI4_READ_TOPOLOGY_SPEED_NONE = 0x00, + SLI4_READ_TOPOLOGY_SPEED_1G = 0x04, + 
SLI4_READ_TOPOLOGY_SPEED_2G = 0x08, + SLI4_READ_TOPOLOGY_SPEED_4G = 0x10, + SLI4_READ_TOPOLOGY_SPEED_8G = 0x20, + SLI4_READ_TOPOLOGY_SPEED_10G = 0x40, + SLI4_READ_TOPOLOGY_SPEED_16G = 0x80, + SLI4_READ_TOPOLOGY_SPEED_32G = 0x90, + SLI4_READ_TOPOLOGY_SPEED_64G = 0xa0, + SLI4_READ_TOPOLOGY_SPEED_128G = 0xb0, +}; + +/* REG_FCFI - activate a FC Forwarder */ +struct sli4_cmd_reg_fcfi_rq_cfg { + u8 r_ctl_mask; + u8 r_ctl_match; + u8 type_mask; + u8 type_match; +}; + +enum sli4_regfcfi_tag { + SLI4_REGFCFI_VLAN_TAG = 0xfff, + SLI4_REGFCFI_VLANTAG_VALID = 0x1000, +}; + +#define SLI4_CMD_REG_FCFI_NUM_RQ_CFG 4 +struct sli4_cmd_reg_fcfi { + struct sli4_mbox_command_header hdr; + __le16 fcf_index; + __le16 fcfi; + __le16 rqid1; + __le16 rqid0; + __le16 rqid3; + __le16 rqid2; + struct sli4_cmd_reg_fcfi_rq_cfg + rq_cfg[SLI4_CMD_REG_FCFI_NUM_RQ_CFG]; + __le32 dw8_vlan; +}; + +#define SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG 4 +#define SLI4_CMD_REG_FCFI_MRQ_MAX_NUM_RQ 32 +#define SLI4_CMD_REG_FCFI_SET_FCFI_MODE 0 +#define SLI4_CMD_REG_FCFI_SET_MRQ_MODE 1 + +enum sli4_reg_fcfi_mrq { + SLI4_REGFCFI_MRQ_VLAN_TAG = 0xfff, + SLI4_REGFCFI_MRQ_VLANTAG_VALID = 0x1000, + SLI4_REGFCFI_MRQ_MODE = 0x2000, + + SLI4_REGFCFI_MRQ_MASK_NUM_PAIRS = 0xff, + SLI4_REGFCFI_MRQ_FILTER_BITMASK = 0xf00, + SLI4_REGFCFI_MRQ_RQ_SEL_POLICY = 0xf000, +}; + +struct sli4_cmd_reg_fcfi_mrq { + struct sli4_mbox_command_header hdr; + __le16 fcf_index; + __le16 fcfi; + __le16 rqid1; + __le16 rqid0; + __le16 rqid3; + __le16 rqid2; + struct sli4_cmd_reg_fcfi_rq_cfg + rq_cfg[SLI4_CMD_REG_FCFI_MRQ_NUM_RQ_CFG]; + __le32 dw8_vlan; + __le32 dw9_mrqflags; +}; + +struct sli4_cmd_rq_cfg { + __le16 rq_id; + u8 r_ctl_mask; + u8 r_ctl_match; + u8 type_mask; + u8 type_match; +}; + +/* REG_RPI - register a Remote Port Indicator */ +enum sli4_reg_rpi { + SLI4_REGRPI_REMOTE_N_PORTID = 0xffffff, /* DW2 */ + SLI4_REGRPI_UPD = 0x1000000, + SLI4_REGRPI_ETOW = 0x8000000, + SLI4_REGRPI_TERP = 0x20000000, + SLI4_REGRPI_CI = 0x80000000, +}; + +struct sli4_cmd_reg_rpi { + struct sli4_mbox_command_header hdr; + __le16 rpi; + __le16 rsvd2; + __le32 dw2_rportid_flags; + struct sli4_bde bde_64; + __le16 vpi; + __le16 rsvd26; +}; + +#define SLI4_REG_RPI_BUF_LEN 0x70 + +/* REG_VFI - register a Virtual Fabric Indicator */ +enum sli_reg_vfi { + SLI4_REGVFI_VP = 0x1000, /* DW1 */ + SLI4_REGVFI_UPD = 0x2000, + + SLI4_REGVFI_LOCAL_N_PORTID = 0xffffff, /* DW10 */ +}; + +struct sli4_cmd_reg_vfi { + struct sli4_mbox_command_header hdr; + __le16 vfi; + __le16 dw0w1_flags; + __le16 fcfi; + __le16 vpi; + u8 wwpn[8]; + struct sli4_bde sparm; + __le32 e_d_tov; + __le32 r_a_tov; + __le32 dw10_lportid_flags; +}; + +/* REG_VPI - register a Virtual Port Indicator */ +enum sli4_reg_vpi { + SLI4_REGVPI_LOCAL_N_PORTID = 0xffffff, + SLI4_REGVPI_UPD = 0x1000000, +}; + +struct sli4_cmd_reg_vpi { + struct sli4_mbox_command_header hdr; + __le32 rsvd0; + __le32 dw2_lportid_flags; + u8 wwpn[8]; + __le32 rsvd12; + __le16 vpi; + __le16 vfi; +}; + +/* REQUEST_FEATURES - request / query SLI features */ +enum sli4_req_features_flags { + SLI4_REQFEAT_QRY = 0x1, /* Dw1 */ + + SLI4_REQFEAT_IAAB = 1 << 0, /* DW2 & DW3 */ + SLI4_REQFEAT_NPIV = 1 << 1, + SLI4_REQFEAT_DIF = 1 << 2, + SLI4_REQFEAT_VF = 1 << 3, + SLI4_REQFEAT_FCPI = 1 << 4, + SLI4_REQFEAT_FCPT = 1 << 5, + SLI4_REQFEAT_FCPC = 1 << 6, + SLI4_REQFEAT_RSVD = 1 << 7, + SLI4_REQFEAT_RQD = 1 << 8, + SLI4_REQFEAT_IAAR = 1 << 9, + SLI4_REQFEAT_HLM = 1 << 10, + SLI4_REQFEAT_PERFH = 1 << 11, + SLI4_REQFEAT_RXSEQ = 1 << 12, + SLI4_REQFEAT_RXRI = 1 << 13, + 
SLI4_REQFEAT_DCL2 = 1 << 14, + SLI4_REQFEAT_RSCO = 1 << 15, + SLI4_REQFEAT_MRQP = 1 << 16, +}; + +struct sli4_cmd_request_features { + struct sli4_mbox_command_header hdr; + __le32 dw1_qry; + __le32 cmd; + __le32 resp; +}; + +/* + * SLI_CONFIG - submit a configuration command to Port + * + * Command is either embedded as part of the payload (embed) or located + * in a separate memory buffer (mem) + */ +enum sli4_sli_config { + SLI4_SLICONF_EMB = 0x1, /* DW1 */ + SLI4_SLICONF_PMDCMD_SHIFT = 3, + SLI4_SLICONF_PMDCMD_MASK = 0xf8, + SLI4_SLICONF_PMDCMD_VAL_1 = 8, + SLI4_SLICONF_PMDCNT = 0xf8, + + SLI4_SLICONF_PMD_LEN = 0x00ffffff, +}; + +struct sli4_cmd_sli_config { + struct sli4_mbox_command_header hdr; + __le32 dw1_flags; + __le32 payload_len; + __le32 rsvd12[3]; + union { + u8 embed[58 * sizeof(u32)]; + struct sli4_bufptr mem; + } payload; +}; + +/* READ_STATUS - read tx/rx status of a particular port */ +#define SLI4_READSTATUS_CLEAR_COUNTERS 0x1 + +struct sli4_cmd_read_status { + struct sli4_mbox_command_header hdr; + __le32 dw1_flags; + __le32 rsvd4; + __le32 trans_kbyte_cnt; + __le32 recv_kbyte_cnt; + __le32 trans_frame_cnt; + __le32 recv_frame_cnt; + __le32 trans_seq_cnt; + __le32 recv_seq_cnt; + __le32 tot_exchanges_orig; + __le32 tot_exchanges_resp; + __le32 recv_p_bsy_cnt; + __le32 recv_f_bsy_cnt; + __le32 no_rq_buf_dropped_frames_cnt; + __le32 empty_rq_timeout_cnt; + __le32 no_xri_dropped_frames_cnt; + __le32 empty_xri_pool_cnt; +}; + +/* READ_LNK_STAT - read link status of a particular port */ +enum sli4_read_link_stats_flags { + SLI4_READ_LNKSTAT_REC = 1u << 0, + SLI4_READ_LNKSTAT_GEC = 1u << 1, + SLI4_READ_LNKSTAT_W02OF = 1u << 2, + SLI4_READ_LNKSTAT_W03OF = 1u << 3, + SLI4_READ_LNKSTAT_W04OF = 1u << 4, + SLI4_READ_LNKSTAT_W05OF = 1u << 5, + SLI4_READ_LNKSTAT_W06OF = 1u << 6, + SLI4_READ_LNKSTAT_W07OF = 1u << 7, + SLI4_READ_LNKSTAT_W08OF = 1u << 8, + SLI4_READ_LNKSTAT_W09OF = 1u << 9, + SLI4_READ_LNKSTAT_W10OF = 1u << 10, + SLI4_READ_LNKSTAT_W11OF = 1u << 11, + SLI4_READ_LNKSTAT_W12OF = 1u << 12, + SLI4_READ_LNKSTAT_W13OF = 1u << 13, + SLI4_READ_LNKSTAT_W14OF = 1u << 14, + SLI4_READ_LNKSTAT_W15OF = 1u << 15, + SLI4_READ_LNKSTAT_W16OF = 1u << 16, + SLI4_READ_LNKSTAT_W17OF = 1u << 17, + SLI4_READ_LNKSTAT_W18OF = 1u << 18, + SLI4_READ_LNKSTAT_W19OF = 1u << 19, + SLI4_READ_LNKSTAT_W20OF = 1u << 20, + SLI4_READ_LNKSTAT_W21OF = 1u << 21, + SLI4_READ_LNKSTAT_CLRC = 1u << 30, + SLI4_READ_LNKSTAT_CLOF = 1u << 31, +}; + +struct sli4_cmd_read_link_stats { + struct sli4_mbox_command_header hdr; + __le32 dw1_flags; + __le32 linkfail_errcnt; + __le32 losssync_errcnt; + __le32 losssignal_errcnt; + __le32 primseq_errcnt; + __le32 inval_txword_errcnt; + __le32 crc_errcnt; + __le32 primseq_eventtimeout_cnt; + __le32 elastic_bufoverrun_errcnt; + __le32 arbit_fc_al_timeout_cnt; + __le32 adv_rx_buftor_to_buf_credit; + __le32 curr_rx_buf_to_buf_credit; + __le32 adv_tx_buf_to_buf_credit; + __le32 curr_tx_buf_to_buf_credit; + __le32 rx_eofa_cnt; + __le32 rx_eofdti_cnt; + __le32 rx_eofni_cnt; + __le32 rx_soff_cnt; + __le32 rx_dropped_no_aer_cnt; + __le32 rx_dropped_no_avail_rpi_rescnt; + __le32 rx_dropped_no_avail_xri_rescnt; +}; + +/* Format a WQE with WQ_ID Association performance hint */ +static inline void +sli_set_wq_id_association(void *entry, u16 q_id) +{ + u32 *wqe = entry; + + /* + * Set Word 10, bit 0 to zero + * Set Word 10, bits 15:1 to the WQ ID + */ + wqe[10] &= ~0xffff; + wqe[10] |= q_id << 1; +} + +/* UNREG_FCFI - unregister a FCFI */ +struct sli4_cmd_unreg_fcfi { + struct 
sli4_mbox_command_header hdr; + __le32 rsvd0; + __le16 fcfi; + __le16 rsvd6; +}; + +/* UNREG_RPI - unregister one or more RPI */ +enum sli4_unreg_rpi { + SLI4_UNREG_RPI_DP = 0x2000, + SLI4_UNREG_RPI_II_SHIFT = 14, + SLI4_UNREG_RPI_II_MASK = 0xc000, + SLI4_UNREG_RPI_II_RPI = 0x0000, + SLI4_UNREG_RPI_II_VPI = 0x4000, + SLI4_UNREG_RPI_II_VFI = 0x8000, + SLI4_UNREG_RPI_II_FCFI = 0xc000, + + SLI4_UNREG_RPI_DEST_N_PORTID_MASK = 0x00ffffff, +}; + +struct sli4_cmd_unreg_rpi { + struct sli4_mbox_command_header hdr; + __le16 index; + __le16 dw1w1_flags; + __le32 dw2_dest_n_portid; +}; + +/* UNREG_VFI - unregister one or more VFI */ +enum sli4_unreg_vfi { + SLI4_UNREG_VFI_II_SHIFT = 14, + SLI4_UNREG_VFI_II_MASK = 0xc000, + SLI4_UNREG_VFI_II_VFI = 0x0000, + SLI4_UNREG_VFI_II_FCFI = 0xc000, +}; + +struct sli4_cmd_unreg_vfi { + struct sli4_mbox_command_header hdr; + __le32 rsvd0; + __le16 index; + __le16 dw2_flags; +}; + +enum sli4_unreg_type { + SLI4_UNREG_TYPE_PORT, + SLI4_UNREG_TYPE_DOMAIN, + SLI4_UNREG_TYPE_FCF, + SLI4_UNREG_TYPE_ALL +}; + +/* UNREG_VPI - unregister one or more VPI */ +enum sli4_unreg_vpi { + SLI4_UNREG_VPI_II_SHIFT = 14, + SLI4_UNREG_VPI_II_MASK = 0xc000, + SLI4_UNREG_VPI_II_VPI = 0x0000, + SLI4_UNREG_VPI_II_VFI = 0x8000, + SLI4_UNREG_VPI_II_FCFI = 0xc000, +}; + +struct sli4_cmd_unreg_vpi { + struct sli4_mbox_command_header hdr; + __le32 rsvd0; + __le16 index; + __le16 dw2w0_flags; +}; + +/* AUTO_XFER_RDY - Configure the auto-generate XFER-RDY feature */ +struct sli4_cmd_config_auto_xfer_rdy { + struct sli4_mbox_command_header hdr; + __le32 rsvd0; + __le32 max_burst_len; +}; + +#define SLI4_CONFIG_AUTO_XFERRDY_BLKSIZE 0xffff + +struct sli4_cmd_config_auto_xfer_rdy_hp { + struct sli4_mbox_command_header hdr; + __le32 rsvd0; + __le32 max_burst_len; + __le32 dw3_esoc_flags; + __le16 block_size; + __le16 rsvd14; +}; + +/************************************************************************* + * SLI-4 common configuration command formats and definitions + */ + +/* + * Subsystem values. + */ +enum sli4_subsystem { + SLI4_SUBSYSTEM_COMMON = 0x01, + SLI4_SUBSYSTEM_LOWLEVEL = 0x0b, + SLI4_SUBSYSTEM_FC = 0x0c, + SLI4_SUBSYSTEM_DMTF = 0x11, +}; + +#define SLI4_OPC_LOWLEVEL_SET_WATCHDOG 0X36 + +/* + * Common opcode (OPC) values. 
+ */ +enum sli4_cmn_opcode { + SLI4_CMN_FUNCTION_RESET = 0x3d, + SLI4_CMN_CREATE_CQ = 0x0c, + SLI4_CMN_CREATE_CQ_SET = 0x1d, + SLI4_CMN_DESTROY_CQ = 0x36, + SLI4_CMN_MODIFY_EQ_DELAY = 0x29, + SLI4_CMN_CREATE_EQ = 0x0d, + SLI4_CMN_DESTROY_EQ = 0x37, + SLI4_CMN_CREATE_MQ_EXT = 0x5a, + SLI4_CMN_DESTROY_MQ = 0x35, + SLI4_CMN_GET_CNTL_ATTRIBUTES = 0x20, + SLI4_CMN_NOP = 0x21, + SLI4_CMN_GET_RSC_EXTENT_INFO = 0x9a, + SLI4_CMN_GET_SLI4_PARAMS = 0xb5, + SLI4_CMN_QUERY_FW_CONFIG = 0x3a, + SLI4_CMN_GET_PORT_NAME = 0x4d, + + SLI4_CMN_WRITE_FLASHROM = 0x07, + /* TRANSCEIVER Data */ + SLI4_CMN_READ_TRANS_DATA = 0x49, + SLI4_CMN_GET_CNTL_ADDL_ATTRS = 0x79, + SLI4_CMN_GET_FUNCTION_CFG = 0xa0, + SLI4_CMN_GET_PROFILE_CFG = 0xa4, + SLI4_CMN_SET_PROFILE_CFG = 0xa5, + SLI4_CMN_GET_PROFILE_LIST = 0xa6, + SLI4_CMN_GET_ACTIVE_PROFILE = 0xa7, + SLI4_CMN_SET_ACTIVE_PROFILE = 0xa8, + SLI4_CMN_READ_OBJECT = 0xab, + SLI4_CMN_WRITE_OBJECT = 0xac, + SLI4_CMN_DELETE_OBJECT = 0xae, + SLI4_CMN_READ_OBJECT_LIST = 0xad, + SLI4_CMN_SET_DUMP_LOCATION = 0xb8, + SLI4_CMN_SET_FEATURES = 0xbf, + SLI4_CMN_GET_RECFG_LINK_INFO = 0xc9, + SLI4_CMN_SET_RECNG_LINK_ID = 0xca, +}; + +/* DMTF opcode (OPC) values */ +#define DMTF_EXEC_CLP_CMD 0x01 + +/* + * COMMON_FUNCTION_RESET + * + * Resets the Port, returning it to a power-on state. This configuration + * command does not have a payload and should set/expect the lengths to + * be zero. + */ +struct sli4_rqst_cmn_function_reset { + struct sli4_rqst_hdr hdr; +}; + +struct sli4_rsp_cmn_function_reset { + struct sli4_rsp_hdr hdr; +}; + + +/* + * COMMON_GET_CNTL_ATTRIBUTES + * + * Query for information about the SLI Port + */ +enum sli4_cntrl_attr_flags { + SLI4_CNTL_ATTR_PORTNUM = 0x3f, + SLI4_CNTL_ATTR_PORTTYPE = 0xc0, +}; + +struct sli4_rsp_cmn_get_cntl_attributes { + struct sli4_rsp_hdr hdr; + u8 version_str[32]; + u8 manufacturer_name[32]; + __le32 supported_modes; + u8 eprom_version_lo; + u8 eprom_version_hi; + __le16 rsvd17; + __le32 mbx_ds_version; + __le32 ep_fw_ds_version; + u8 ncsi_version_str[12]; + __le32 def_extended_timeout; + u8 model_number[32]; + u8 description[64]; + u8 serial_number[32]; + u8 ip_version_str[32]; + u8 fw_version_str[32]; + u8 bios_version_str[32]; + u8 redboot_version_str[32]; + u8 driver_version_str[32]; + u8 fw_on_flash_version_str[32]; + __le32 functionalities_supported; + __le16 max_cdb_length; + u8 asic_revision; + u8 generational_guid0; + __le32 generational_guid1_12[3]; + __le16 generational_guid13_14; + u8 generational_guid15; + u8 hba_port_count; + __le16 default_link_down_timeout; + u8 iscsi_version_min_max; + u8 multifunctional_device; + u8 cache_valid; + u8 hba_status; + u8 max_domains_supported; + u8 port_num_type_flags; + __le32 firmware_post_status; + __le32 hba_mtu; + u8 iscsi_features; + u8 rsvd121[3]; + __le16 pci_vendor_id; + __le16 pci_device_id; + __le16 pci_sub_vendor_id; + __le16 pci_sub_system_id; + u8 pci_bus_number; + u8 pci_device_number; + u8 pci_function_number; + u8 interface_type; + __le64 unique_identifier; + u8 number_of_netfilters; + u8 rsvd122[3]; +}; + +/* + * COMMON_GET_CNTL_ATTRIBUTES + * + * This command queries the controller information from the Flash ROM. 
+ */ +struct sli4_rqst_cmn_get_cntl_addl_attributes { + struct sli4_rqst_hdr hdr; +}; + +struct sli4_rsp_cmn_get_cntl_addl_attributes { + struct sli4_rsp_hdr hdr; + __le16 ipl_file_number; + u8 ipl_file_version; + u8 rsvd4; + u8 on_die_temperature; + u8 rsvd5[3]; + __le32 driver_advanced_features_supported; + __le32 rsvd7[4]; + char universal_bios_version[32]; + char x86_bios_version[32]; + char efi_bios_version[32]; + char fcode_version[32]; + char uefi_bios_version[32]; + char uefi_nic_version[32]; + char uefi_fcode_version[32]; + char uefi_iscsi_version[32]; + char iscsi_x86_bios_version[32]; + char pxe_x86_bios_version[32]; + u8 default_wwpn[8]; + u8 ext_phy_version[32]; + u8 fc_universal_bios_version[32]; + u8 fc_x86_bios_version[32]; + u8 fc_efi_bios_version[32]; + u8 fc_fcode_version[32]; + u8 ext_phy_crc_label[8]; + u8 ipl_file_name[16]; + u8 rsvd139[72]; +}; + +/* + * COMMON_NOP + * + * This command does not do anything; it only returns + * the payload in the completion. + */ +struct sli4_rqst_cmn_nop { + struct sli4_rqst_hdr hdr; + __le32 context[2]; +}; + +struct sli4_rsp_cmn_nop { + struct sli4_rsp_hdr hdr; + __le32 context[2]; +}; + +struct sli4_rqst_cmn_get_resource_extent_info { + struct sli4_rqst_hdr hdr; + __le16 resource_type; + __le16 rsvd16; +}; + +enum sli4_rsc_type { + SLI4_RSC_TYPE_VFI = 0x20, + SLI4_RSC_TYPE_VPI = 0x21, + SLI4_RSC_TYPE_RPI = 0x22, + SLI4_RSC_TYPE_XRI = 0x23, +}; + +struct sli4_rsp_cmn_get_resource_extent_info { + struct sli4_rsp_hdr hdr; + __le16 resource_extent_count; + __le16 resource_extent_size; +}; + +#define SLI4_128BYTE_WQE_SUPPORT 0x02 + +#define GET_Q_CNT_METHOD(m) \ + (((m) & SLI4_PARAM_Q_CNT_MTHD_MASK) >> SLI4_PARAM_Q_CNT_MTHD_SHFT) +#define GET_Q_CREATE_VERSION(v) \ + (((v) & SLI4_PARAM_QV_MASK) >> SLI4_PARAM_QV_SHIFT) + +enum sli4_rsp_get_params_e { + /*GENERIC*/ + SLI4_PARAM_Q_CNT_MTHD_SHFT = 24, + SLI4_PARAM_Q_CNT_MTHD_MASK = 0xf << 24, + SLI4_PARAM_QV_SHIFT = 14, + SLI4_PARAM_QV_MASK = 3 << 14, + + /* DW4 */ + SLI4_PARAM_PROTO_TYPE_MASK = 0xff, + /* DW5 */ + SLI4_PARAM_FT = 1 << 0, + SLI4_PARAM_SLI_REV_MASK = 0xf << 4, + SLI4_PARAM_SLI_FAM_MASK = 0xf << 8, + SLI4_PARAM_IF_TYPE_MASK = 0xf << 12, + SLI4_PARAM_SLI_HINT1_MASK = 0xff << 16, + SLI4_PARAM_SLI_HINT2_MASK = 0x1f << 24, + /* DW6 */ + SLI4_PARAM_EQ_PAGE_CNT_MASK = 0xf << 0, + SLI4_PARAM_EQE_SZS_MASK = 0xf << 8, + SLI4_PARAM_EQ_PAGE_SZS_MASK = 0xff << 16, + /* DW8 */ + SLI4_PARAM_CQ_PAGE_CNT_MASK = 0xf << 0, + SLI4_PARAM_CQE_SZS_MASK = 0xf << 8, + SLI4_PARAM_CQ_PAGE_SZS_MASK = 0xff << 16, + /* DW10 */ + SLI4_PARAM_MQ_PAGE_CNT_MASK = 0xf << 0, + SLI4_PARAM_MQ_PAGE_SZS_MASK = 0xff << 16, + /* DW12 */ + SLI4_PARAM_WQ_PAGE_CNT_MASK = 0xf << 0, + SLI4_PARAM_WQE_SZS_MASK = 0xf << 8, + SLI4_PARAM_WQ_PAGE_SZS_MASK = 0xff << 16, + /* DW14 */ + SLI4_PARAM_RQ_PAGE_CNT_MASK = 0xf << 0, + SLI4_PARAM_RQE_SZS_MASK = 0xf << 8, + SLI4_PARAM_RQ_PAGE_SZS_MASK = 0xff << 16, + /* DW15W1*/ + SLI4_PARAM_RQ_DB_WINDOW_MASK = 0xf000, + /* DW16 */ + SLI4_PARAM_FC = 1 << 0, + SLI4_PARAM_EXT = 1 << 1, + SLI4_PARAM_HDRR = 1 << 2, + SLI4_PARAM_SGLR = 1 << 3, + SLI4_PARAM_FBRR = 1 << 4, + SLI4_PARAM_AREG = 1 << 5, + SLI4_PARAM_TGT = 1 << 6, + SLI4_PARAM_TERP = 1 << 7, + SLI4_PARAM_ASSI = 1 << 8, + SLI4_PARAM_WCHN = 1 << 9, + SLI4_PARAM_TCCA = 1 << 10, + SLI4_PARAM_TRTY = 1 << 11, + SLI4_PARAM_TRIR = 1 << 12, + SLI4_PARAM_PHOFF = 1 << 13, + SLI4_PARAM_PHON = 1 << 14, + SLI4_PARAM_PHWQ = 1 << 15, + SLI4_PARAM_BOUND_4GA = 1 << 16, + SLI4_PARAM_RXC = 1 << 17, + SLI4_PARAM_HLM = 1 << 18, + SLI4_PARAM_IPR = 1 << 19, 
+ SLI4_PARAM_RXRI = 1 << 20, + SLI4_PARAM_SGLC = 1 << 21, + SLI4_PARAM_TIMM = 1 << 22, + SLI4_PARAM_TSMM = 1 << 23, + SLI4_PARAM_OAS = 1 << 25, + SLI4_PARAM_LC = 1 << 26, + SLI4_PARAM_AGXF = 1 << 27, + SLI4_PARAM_LOOPBACK_MASK = 0xf << 28, + /* DW18 */ + SLI4_PARAM_SGL_PAGE_CNT_MASK = 0xf << 0, + SLI4_PARAM_SGL_PAGE_SZS_MASK = 0xff << 8, + SLI4_PARAM_SGL_PP_ALIGN_MASK = 0xff << 16, +}; + +struct sli4_rqst_cmn_get_sli4_params { + struct sli4_rqst_hdr hdr; +}; + +struct sli4_rsp_cmn_get_sli4_params { + struct sli4_rsp_hdr hdr; + __le32 dw4_protocol_type; + __le32 dw5_sli; + __le32 dw6_eq_page_cnt; + __le16 eqe_count_mask; + __le16 rsvd26; + __le32 dw8_cq_page_cnt; + __le16 cqe_count_mask; + __le16 rsvd34; + __le32 dw10_mq_page_cnt; + __le16 mqe_count_mask; + __le16 rsvd42; + __le32 dw12_wq_page_cnt; + __le16 wqe_count_mask; + __le16 rsvd50; + __le32 dw14_rq_page_cnt; + __le16 rqe_count_mask; + __le16 dw15w1_rq_db_window; + __le32 dw16_loopback_scope; + __le32 sge_supported_length; + __le32 dw18_sgl_page_cnt; + __le16 min_rq_buffer_size; + __le16 rsvd75; + __le32 max_rq_buffer_size; + __le16 physical_xri_max; + __le16 physical_rpi_max; + __le16 physical_vpi_max; + __le16 physical_vfi_max; + __le32 rsvd88; + __le16 frag_num_field_offset; + __le16 frag_num_field_size; + __le16 sgl_index_field_offset; + __le16 sgl_index_field_size; + __le32 chain_sge_initial_value_lo; + __le32 chain_sge_initial_value_hi; +}; + +/*Port Types*/ +enum sli4_port_types { + SLI4_PORT_TYPE_ETH = 0, + SLI4_PORT_TYPE_FC = 1, +}; + +struct sli4_rqst_cmn_get_port_name { + struct sli4_rqst_hdr hdr; + u8 port_type; + u8 rsvd4[3]; +}; + +struct sli4_rsp_cmn_get_port_name { + struct sli4_rsp_hdr hdr; + char port_name[4]; +}; + +struct sli4_rqst_cmn_write_flashrom { + struct sli4_rqst_hdr hdr; + __le32 flash_rom_access_opcode; + __le32 flash_rom_access_operation_type; + __le32 data_buffer_size; + __le32 offset; + u8 data_buffer[4]; +}; + +/* + * COMMON_READ_TRANSCEIVER_DATA + * + * This command reads SFF transceiver data(Format is defined + * by the SFF-8472 specification). 
+ */ +struct sli4_rqst_cmn_read_transceiver_data { + struct sli4_rqst_hdr hdr; + __le32 page_number; + __le32 port; +}; + +struct sli4_rsp_cmn_read_transceiver_data { + struct sli4_rsp_hdr hdr; + __le32 page_number; + __le32 port; + u8 page_data[128]; + u8 page_data_2[128]; +}; + +#define SLI4_REQ_DESIRE_READLEN 0xffffff + +struct sli4_rqst_cmn_read_object { + struct sli4_rqst_hdr hdr; + __le32 desired_read_length_dword; + __le32 read_offset; + u8 object_name[104]; + __le32 host_buffer_descriptor_count; + struct sli4_bde host_buffer_descriptor[0]; +}; + +#define RSP_COM_READ_OBJ_EOF 0x80000000 + +struct sli4_rsp_cmn_read_object { + struct sli4_rsp_hdr hdr; + __le32 actual_read_length; + __le32 eof_dword; +}; + +enum sli4_rqst_write_object_flags { + SLI4_RQ_DES_WRITE_LEN = 0xffffff, + SLI4_RQ_DES_WRITE_LEN_NOC = 0x40000000, + SLI4_RQ_DES_WRITE_LEN_EOF = 0x80000000, +}; + +struct sli4_rqst_cmn_write_object { + struct sli4_rqst_hdr hdr; + __le32 desired_write_len_dword; + __le32 write_offset; + u8 object_name[104]; + __le32 host_buffer_descriptor_count; + struct sli4_bde host_buffer_descriptor[0]; +}; + +#define RSP_CHANGE_STATUS 0xff + +struct sli4_rsp_cmn_write_object { + struct sli4_rsp_hdr hdr; + __le32 actual_write_length; + __le32 change_status_dword; +}; + +struct sli4_rqst_cmn_delete_object { + struct sli4_rqst_hdr hdr; + __le32 rsvd4; + __le32 rsvd5; + u8 object_name[104]; +}; + +#define SLI4_RQ_OBJ_LIST_READ_LEN 0xffffff + +struct sli4_rqst_cmn_read_object_list { + struct sli4_rqst_hdr hdr; + __le32 desired_read_length_dword; + __le32 read_offset; + u8 object_name[104]; + __le32 host_buffer_descriptor_count; + struct sli4_bde host_buffer_descriptor[0]; +}; + +enum sli4_rqst_set_dump_flags { + SLI4_CMN_SET_DUMP_BUFFER_LEN = 0xffffff, + SLI4_CMN_SET_DUMP_FDB = 0x20000000, + SLI4_CMN_SET_DUMP_BLP = 0x40000000, + SLI4_CMN_SET_DUMP_QRY = 0x80000000, +}; + +struct sli4_rqst_cmn_set_dump_location { + struct sli4_rqst_hdr hdr; + __le32 buffer_length_dword; + __le32 buf_addr_low; + __le32 buf_addr_high; +}; + +struct sli4_rsp_cmn_set_dump_location { + struct sli4_rsp_hdr hdr; + __le32 buffer_length_dword; +}; + +enum sli4_dump_level { + SLI4_DUMP_LEVEL_NONE, + SLI4_CHIP_LEVEL_DUMP, + SLI4_FUNC_DESC_DUMP, +}; + +enum sli4_dump_state { + SLI4_DUMP_STATE_NONE, + SLI4_CHIP_DUMP_STATE_VALID, + SLI4_FUNC_DUMP_STATE_VALID, +}; + +enum sli4_dump_status { + SLI4_DUMP_READY_STATUS_NOT_READY, + SLI4_DUMP_READY_STATUS_DD_PRESENT, + SLI4_DUMP_READY_STATUS_FDB_PRESENT, + SLI4_DUMP_READY_STATUS_SKIP_DUMP, + SLI4_DUMP_READY_STATUS_FAILED = -1, +}; + +enum sli4_set_features { + SLI4_SET_FEATURES_DIF_SEED = 0x01, + SLI4_SET_FEATURES_XRI_TIMER = 0x03, + SLI4_SET_FEATURES_MAX_PCIE_SPEED = 0x04, + SLI4_SET_FEATURES_FCTL_CHECK = 0x05, + SLI4_SET_FEATURES_FEC = 0x06, + SLI4_SET_FEATURES_PCIE_RECV_DETECT = 0x07, + SLI4_SET_FEATURES_DIF_MEMORY_MODE = 0x08, + SLI4_SET_FEATURES_DISABLE_SLI_PORT_PAUSE_STATE = 0x09, + SLI4_SET_FEATURES_ENABLE_PCIE_OPTIONS = 0x0a, + SLI4_SET_FEAT_CFG_AUTO_XFER_RDY_T10PI = 0x0c, + SLI4_SET_FEATURES_ENABLE_MULTI_RECEIVE_QUEUE = 0x0d, + SLI4_SET_FEATURES_SET_FTD_XFER_HINT = 0x0f, + SLI4_SET_FEATURES_SLI_PORT_HEALTH_CHECK = 0x11, +}; + +struct sli4_rqst_cmn_set_features { + struct sli4_rqst_hdr hdr; + __le32 feature; + __le32 param_len; + __le32 params[8]; +}; + +struct sli4_rqst_cmn_set_features_dif_seed { + __le16 seed; + __le16 rsvd16; +}; + +enum sli4_rqst_set_mrq_features { + SLI4_RQ_MULTIRQ_ISR = 0x1, + SLI4_RQ_MULTIRQ_AUTOGEN_XFER_RDY = 0x2, + + SLI4_RQ_MULTIRQ_NUM_RQS = 0xff, + 
SLI4_RQ_MULTIRQ_RQ_SELECT = 0xf00, +}; + +struct sli4_rqst_cmn_set_features_multirq { + __le32 auto_gen_xfer_dword; + __le32 num_rqs_dword; +}; + +enum sli4_rqst_health_check_flags { + SLI4_RQ_HEALTH_CHECK_ENABLE = 0x1, + SLI4_RQ_HEALTH_CHECK_QUERY = 0x2, +}; + +struct sli4_rqst_cmn_set_features_health_check { + __le32 health_check_dword; +}; + +struct sli4_rqst_cmn_set_features_set_fdt_xfer_hint { + __le32 fdt_xfer_hint; +}; + +struct sli4_rqst_dmtf_exec_clp_cmd { + struct sli4_rqst_hdr hdr; + __le32 cmd_buf_length; + __le32 resp_buf_length; + __le32 cmd_buf_addr_low; + __le32 cmd_buf_addr_high; + __le32 resp_buf_addr_low; + __le32 resp_buf_addr_high; +}; + +struct sli4_rsp_dmtf_exec_clp_cmd { + struct sli4_rsp_hdr hdr; + __le32 rsvd4; + __le32 resp_length; + __le32 rsvd6; + __le32 rsvd7; + __le32 rsvd8; + __le32 rsvd9; + __le32 clp_status; + __le32 clp_detailed_status; +}; + +#define SLI4_PROTOCOL_FC 0x10 +#define SLI4_PROTOCOL_DEFAULT 0xff + +struct sli4_rspource_descriptor_v1 { + u8 descriptor_type; + u8 descriptor_length; + __le16 rsvd16; + __le32 type_specific[0]; +}; + +enum sli4_pcie_desc_flags { + SLI4_PCIE_DESC_IMM = 0x4000, + SLI4_PCIE_DESC_NOSV = 0x8000, + + SLI4_PCIE_DESC_PF_NO = 0x3ff0000, + + SLI4_PCIE_DESC_MISSN_ROLE = 0xff, + SLI4_PCIE_DESC_PCHG = 0x8000000, + SLI4_PCIE_DESC_SCHG = 0x10000000, + SLI4_PCIE_DESC_XCHG = 0x20000000, + SLI4_PCIE_DESC_XROM = 0xc0000000 +}; + +struct sli4_pcie_resource_descriptor_v1 { + u8 descriptor_type; + u8 descriptor_length; + __le16 imm_nosv_dword; + __le32 pf_number_dword; + __le32 rsvd3; + u8 sriov_state; + u8 pf_state; + u8 pf_type; + u8 rsvd4; + __le16 number_of_vfs; + __le16 rsvd5; + __le32 mission_roles_dword; + __le32 rsvd7[16]; +}; + +struct sli4_rqst_cmn_get_function_config { + struct sli4_rqst_hdr hdr; +}; + +struct sli4_rsp_cmn_get_function_config { + struct sli4_rsp_hdr hdr; + __le32 desc_count; + __le32 desc[54]; +}; + +/* Link Config Descriptor for link config functions */ +struct sli4_link_config_descriptor { + u8 link_config_id; + u8 rsvd1[3]; + __le32 config_description[8]; +}; + +#define MAX_LINK_DES 10 + +struct sli4_rqst_cmn_get_reconfig_link_info { + struct sli4_rqst_hdr hdr; +}; + +struct sli4_rsp_cmn_get_reconfig_link_info { + struct sli4_rsp_hdr hdr; + u8 active_link_config_id; + u8 rsvd17; + u8 next_link_config_id; + u8 rsvd19; + __le32 link_configuration_descriptor_count; + struct sli4_link_config_descriptor + desc[MAX_LINK_DES]; +}; + +enum sli4_set_reconfig_link_flags { + SLI4_SET_RECONFIG_LINKID_NEXT = 0xff, + SLI4_SET_RECONFIG_LINKID_FD = 1u << 31, +}; + +struct sli4_rqst_cmn_set_reconfig_link_id { + struct sli4_rqst_hdr hdr; + __le32 dw4_flags; +}; + +struct sli4_rsp_cmn_set_reconfig_link_id { + struct sli4_rsp_hdr hdr; +}; + +struct sli4_rqst_lowlevel_set_watchdog { + struct sli4_rqst_hdr hdr; + __le16 watchdog_timeout; + __le16 rsvd18; +}; + +struct sli4_rsp_lowlevel_set_watchdog { + struct sli4_rsp_hdr hdr; + __le32 rsvd; +}; + +/* FC opcode (OPC) values */ +enum sli4_fc_opcodes { + SLI4_OPC_WQ_CREATE = 0x1, + SLI4_OPC_WQ_DESTROY = 0x2, + SLI4_OPC_POST_SGL_PAGES = 0x3, + SLI4_OPC_RQ_CREATE = 0x5, + SLI4_OPC_RQ_DESTROY = 0x6, + SLI4_OPC_READ_FCF_TABLE = 0x8, + SLI4_OPC_POST_HDR_TEMPLATES = 0xb, + SLI4_OPC_REDISCOVER_FCF = 0x10, +}; + +/* Use the default CQ associated with the WQ */ +#define SLI4_CQ_DEFAULT 0xffff + +/* + * POST_SGL_PAGES + * + * Register the scatter gather list (SGL) memory and + * associate it with an XRI. 
+ */ +struct sli4_rqst_post_sgl_pages { + struct sli4_rqst_hdr hdr; + __le16 xri_start; + __le16 xri_count; + struct { + __le32 page0_low; + __le32 page0_high; + __le32 page1_low; + __le32 page1_high; + } page_set[10]; +}; + +struct sli4_rsp_post_sgl_pages { + struct sli4_rsp_hdr hdr; +}; + +struct sli4_rqst_post_hdr_templates { + struct sli4_rqst_hdr hdr; + __le16 rpi_offset; + __le16 page_count; + struct sli4_dmaaddr page_descriptor[0]; +}; + +#define SLI4_HDR_TEMPLATE_SIZE 64 + +enum sli4_io_flags { +/* The XRI associated with this IO is already active */ + SLI4_IO_CONTINUATION = 1 << 0, +/* Automatically generate a good RSP frame */ + SLI4_IO_AUTO_GOOD_RESPONSE = 1 << 1, + SLI4_IO_NO_ABORT = 1 << 2, +/* Set the DNRX bit because no auto xref rdy buffer is posted */ + SLI4_IO_DNRX = 1 << 3, +}; + +enum sli4_callback { + SLI4_CB_LINK, + SLI4_CB_MAX, +}; + +enum sli4_link_status { + SLI4_LINK_STATUS_UP, + SLI4_LINK_STATUS_DOWN, + SLI4_LINK_STATUS_NO_ALPA, + SLI4_LINK_STATUS_MAX, +}; + +enum sli4_link_topology { + SLI4_LINK_TOPO_NON_FC_AL = 1, + SLI4_LINK_TOPO_FC_AL, + SLI4_LINK_TOPO_LOOPBACK_INTERNAL, + SLI4_LINK_TOPO_LOOPBACK_EXTERNAL, + SLI4_LINK_TOPO_NONE, + SLI4_LINK_TOPO_MAX, +}; + +enum sli4_link_medium { + SLI4_LINK_MEDIUM_ETHERNET, + SLI4_LINK_MEDIUM_FC, + SLI4_LINK_MEDIUM_MAX, +}; /******Driver specific structures******/ struct sli4_queue { @@ -2020,7 +3534,7 @@ struct sli4_queue { } u; }; -/* Prameters used to populate WQE*/ +/* Parameters used to populate WQE*/ struct sli_bls_params { u32 s_id; u32 d_id; @@ -2081,4 +3595,116 @@ struct sli_fcp_tgt_params { u16 tag; }; +struct sli4_link_event { + enum sli4_link_status status; + enum sli4_link_topology topology; + enum sli4_link_medium medium; + u32 speed; + u8 *loop_map; + u32 fc_id; +}; + +enum sli4_resource { + SLI4_RSRC_VFI, + SLI4_RSRC_VPI, + SLI4_RSRC_RPI, + SLI4_RSRC_XRI, + SLI4_RSRC_FCFI, + SLI4_RSRC_MAX, +}; + +struct sli4_extent { + u32 number; + u32 size; + u32 n_alloc; + u32 *base; + unsigned long *use_map; + u32 map_size; +}; + +struct sli4_queue_info { + u16 max_qcount[SLI4_QTYPE_MAX]; + u32 max_qentries[SLI4_QTYPE_MAX]; + u16 count_mask[SLI4_QTYPE_MAX]; + u16 count_method[SLI4_QTYPE_MAX]; + u32 qpage_count[SLI4_QTYPE_MAX]; +}; + +struct sli4_params { + u8 has_extents; + u8 auto_reg; + u8 auto_xfer_rdy; + u8 hdr_template_req; + u8 perf_hint; + u8 perf_wq_id_association; + u8 cq_create_version; + u8 mq_create_version; + u8 high_login_mode; + u8 sgl_pre_registered; + u8 sgl_pre_reg_required; + u8 t10_dif_inline_capable; + u8 t10_dif_separate_capable; +}; + +struct sli4 { + void *os; + struct pci_dev *pci; + void __iomem *reg[PCI_STD_NUM_BARS]; + + u32 sli_rev; + u32 sli_family; + u32 if_type; + + u16 asic_type; + u16 asic_rev; + + u16 e_d_tov; + u16 r_a_tov; + struct sli4_queue_info qinfo; + u16 link_module_type; + u8 rq_batch; + u8 port_number; + char port_name[2]; + u16 rq_min_buf_size; + u32 rq_max_buf_size; + u8 topology; + u8 wwpn[8]; + u8 wwnn[8]; + u32 fw_rev[2]; + u8 fw_name[2][16]; + char ipl_name[16]; + u32 hw_rev[3]; + char modeldesc[64]; + char bios_version_string[32]; + u32 wqe_size; + u32 vpd_length; + /* + * Tracks the port resources using extents metaphor. For + * devices that don't implement extents (i.e. + * has_extents == FALSE), the code models each resource as + * a single large extent. 
+	 */
+	struct sli4_extent ext[SLI4_RSRC_MAX];
+	u32 features;
+	struct sli4_params params;
+	u32 sge_supported_length;
+	u32 sgl_page_sizes;
+	u32 max_sgl_pages;
+
+	/*
+	 * Callback functions
+	 */
+	int (*link)(void *ctx, void *event);
+	void *link_arg;
+
+	struct efc_dma bmbx;
+
+	/* Save pointer to physical memory descriptor for non-embedded
+	 * SLI_CONFIG commands for BMBX dumping purposes
+	 */
+	struct efc_dma *bmbx_non_emb_pmd;
+
+	struct efc_dma vpd_data;
+};
+
 #endif /* !_SLI4_H */
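The sli4_extent bookkeeping above models each resource pool (VFI, VPI,
RPI, XRI, FCFI) as one or more ID ranges guarded by a use bitmap. A
sketch of how an ID could be handed out from it — an illustrative
allocator under assumed base[]/size semantics, not the patch's own code;
find_first_zero_bit() and set_bit() are standard kernel bitmap helpers:

static int example_resource_alloc(struct sli4 *sli4,
				  enum sli4_resource rtype, u32 *rid)
{
	struct sli4_extent *ext = &sli4->ext[rtype];
	u32 bit = find_first_zero_bit(ext->use_map, ext->map_size);

	if (bit >= ext->map_size)
		return -ENOSPC;		/* pool exhausted */

	set_bit(bit, ext->use_map);
	ext->n_alloc++;
	/* base[] is assumed to hold the first ID of each extent of
	 * 'size' entries
	 */
	*rid = ext->base[bit / ext->size] + (bit % ext->size);
	return 0;
}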
From: James Smart
To: linux-scsi@vger.kernel.org
Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner
Subject: [PATCH v7 06/31] elx: libefc_sli: bmbx routines and SLI config commands
Date: Thu, 7 Jan 2021 14:58:40 -0800
Message-Id: <20210107225905.18186-7-jsmart2021@gmail.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210107225905.18186-1-jsmart2021@gmail.com>
References: <20210107225905.18186-1-jsmart2021@gmail.com>
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: linux-scsi@vger.kernel.org

This patch continues the libefc_sli SLI-4 library population. It adds
routines that create the mailbox commands used during adapter
initialization, and APIs to issue mailbox commands to the adapter
through the bootstrap mailbox register.

Co-developed-by: Ram Vegesna
Signed-off-by: Ram Vegesna
Signed-off-by: James Smart
Reviewed-by: Hannes Reinecke
Reviewed-by: Daniel Wagner
---
 drivers/scsi/elx/libefc_sli/sli4.c | 1166 ++++++++++++++++++++++++++++
 1 file changed, 1166 insertions(+)

diff --git a/drivers/scsi/elx/libefc_sli/sli4.c b/drivers/scsi/elx/libefc_sli/sli4.c
index 6b71779274c4..e41070fb21d6 100644
--- a/drivers/scsi/elx/libefc_sli/sli4.c
+++ b/drivers/scsi/elx/libefc_sli/sli4.c
@@ -2866,3 +2866,1169 @@ sli_fc_rqe_rqid_and_index(struct sli4 *sli4, u8 *cqe, u16 *rq_id, u32 *index)
 	return rc;
 }
+
+static int
+sli_bmbx_wait(struct sli4 *sli4, u32 msec)
+{
+	u32 val;
+	unsigned long end;
+
+	/* Wait for the bootstrap mailbox to report "ready" */
+	end = jiffies + msecs_to_jiffies(msec);
+	do {
+		val = readl(sli4->reg[0] + SLI4_BMBX_REG);
+		if (val & SLI4_BMBX_RDY)
+			return EFC_SUCCESS;
+
+		usleep_range(1000, 2000);
+	} while (time_before(jiffies, end));
+
+	return EFC_FAIL;
+}
+
+static int
+sli_bmbx_write(struct sli4 *sli4)
+{
+	u32 val;
+
+	/* write buffer location to bootstrap mailbox register */
+	val = sli_bmbx_write_hi(sli4->bmbx.phys);
+	writel(val, (sli4->reg[0] + SLI4_BMBX_REG));
+
+	if (sli_bmbx_wait(sli4, SLI4_BMBX_DELAY_US)) {
+		efc_log_crit(sli4, "BMBX WRITE_HI failed\n");
+		return EFC_FAIL;
+	}
+	val = sli_bmbx_write_lo(sli4->bmbx.phys);
+	writel(val, (sli4->reg[0] + SLI4_BMBX_REG));
+
+	/* wait for SLI Port to set ready bit */
+	return sli_bmbx_wait(sli4, SLI4_BMBX_TIMEOUT_MSEC);
+}
+
+int
+sli_bmbx_command(struct sli4 *sli4)
+{
+	void *cqe = (u8 *)sli4->bmbx.virt + SLI4_BMBX_SIZE;
+
+	if (sli_fw_error_status(sli4) > 0) {
+		efc_log_crit(sli4, "Chip is in an error state - Mailbox command rejected");
+		efc_log_crit(sli4, " status=%#x error1=%#x error2=%#x\n",
+			     sli_reg_read_status(sli4),
+			     sli_reg_read_err1(sli4),
+			     sli_reg_read_err2(sli4));
+		return EFC_FAIL;
+	}
+
+	/* Submit a command to the bootstrap mailbox and check the status */
+	if (sli_bmbx_write(sli4)) {
+		efc_log_crit(sli4, "bmbx write fail phys=%pad reg=%#x\n",
+			     &sli4->bmbx.phys,
+			     readl(sli4->reg[0] + SLI4_BMBX_REG));
+		return EFC_FAIL;
+	}
+
+	/* check completion queue entry status */
+	if (le32_to_cpu(((struct sli4_mcqe *)cqe)->dw3_flags) &
+	    SLI4_MCQE_VALID) {
+		return sli_cqe_mq(sli4, cqe);
+	}
+	efc_log_crit(sli4, "invalid or wrong type\n");
+	return EFC_FAIL;
+}
+
+int
+sli_cmd_config_link(struct sli4 *sli4, void *buf)
+{
+	struct sli4_cmd_config_link
*config_link = buf; + + memset(buf, 0, SLI4_BMBX_SIZE); + + config_link->hdr.command = SLI4_MBX_CMD_CONFIG_LINK; + + /* Port interprets zero in a field as "use default value" */ + + return EFC_SUCCESS; +} + +int +sli_cmd_down_link(struct sli4 *sli4, void *buf) +{ + struct sli4_mbox_command_header *hdr = buf; + + memset(buf, 0, SLI4_BMBX_SIZE); + + hdr->command = SLI4_MBX_CMD_DOWN_LINK; + + /* Port interprets zero in a field as "use default value" */ + + return EFC_SUCCESS; +} + +int +sli_cmd_dump_type4(struct sli4 *sli4, void *buf, u16 wki) +{ + struct sli4_cmd_dump4 *cmd = buf; + + memset(buf, 0, SLI4_BMBX_SIZE); + + cmd->hdr.command = SLI4_MBX_CMD_DUMP; + cmd->type_dword = cpu_to_le32(0x4); + cmd->wki_selection = cpu_to_le16(wki); + return EFC_SUCCESS; +} + +int +sli_cmd_common_read_transceiver_data(struct sli4 *sli4, void *buf, u32 page_num, + struct efc_dma *dma) +{ + struct sli4_rqst_cmn_read_transceiver_data *req = NULL; + u32 psize; + + if (!dma) + psize = SLI4_CFG_PYLD_LENGTH(cmn_read_transceiver_data); + else + psize = dma->size; + + req = sli_config_cmd_init(sli4, buf, psize, dma); + if (!req) + return EFC_FAIL; + + sli_cmd_fill_hdr(&req->hdr, SLI4_CMN_READ_TRANS_DATA, + SLI4_SUBSYSTEM_COMMON, CMD_V0, + SLI4_RQST_PYLD_LEN(cmn_read_transceiver_data)); + + req->page_number = cpu_to_le32(page_num); + req->port = cpu_to_le32(sli4->port_number); + + return EFC_SUCCESS; +} + +int +sli_cmd_read_link_stats(struct sli4 *sli4, void *buf, u8 req_ext_counters, + u8 clear_overflow_flags, + u8 clear_all_counters) +{ + struct sli4_cmd_read_link_stats *cmd = buf; + u32 flags; + + memset(buf, 0, SLI4_BMBX_SIZE); + + cmd->hdr.command = SLI4_MBX_CMD_READ_LNK_STAT; + + flags = 0; + if (req_ext_counters) + flags |= SLI4_READ_LNKSTAT_REC; + if (clear_all_counters) + flags |= SLI4_READ_LNKSTAT_CLRC; + if (clear_overflow_flags) + flags |= SLI4_READ_LNKSTAT_CLOF; + + cmd->dw1_flags = cpu_to_le32(flags); + return EFC_SUCCESS; +} + +int +sli_cmd_read_status(struct sli4 *sli4, void *buf, u8 clear_counters) +{ + struct sli4_cmd_read_status *cmd = buf; + u32 flags = 0; + + memset(buf, 0, SLI4_BMBX_SIZE); + + cmd->hdr.command = SLI4_MBX_CMD_READ_STATUS; + if (clear_counters) + flags |= SLI4_READSTATUS_CLEAR_COUNTERS; + else + flags &= ~SLI4_READSTATUS_CLEAR_COUNTERS; + + cmd->dw1_flags = cpu_to_le32(flags); + return EFC_SUCCESS; +} + +int +sli_cmd_init_link(struct sli4 *sli4, void *buf, u32 speed, u8 reset_alpa) +{ + struct sli4_cmd_init_link *init_link = buf; + u32 flags = 0; + + memset(buf, 0, SLI4_BMBX_SIZE); + + init_link->hdr.command = SLI4_MBX_CMD_INIT_LINK; + + init_link->sel_reset_al_pa_dword = + cpu_to_le32(reset_alpa); + flags &= ~SLI4_INIT_LINK_F_LOOPBACK; + + init_link->link_speed_sel_code = cpu_to_le32(speed); + switch (speed) { + case SLI4_LINK_SPEED_1G: + case SLI4_LINK_SPEED_2G: + case SLI4_LINK_SPEED_4G: + case SLI4_LINK_SPEED_8G: + case SLI4_LINK_SPEED_16G: + case SLI4_LINK_SPEED_32G: + case SLI4_LINK_SPEED_64G: + flags |= SLI4_INIT_LINK_F_FIXED_SPEED; + break; + case SLI4_LINK_SPEED_10G: + efc_log_info(sli4, "unsupported FC speed %d\n", speed); + init_link->flags0 = cpu_to_le32(flags); + return EFC_FAIL; + } + + switch (sli4->topology) { + case SLI4_READ_CFG_TOPO_FC: + /* Attempt P2P but failover to FC-AL */ + flags |= SLI4_INIT_LINK_F_FAIL_OVER; + flags |= SLI4_INIT_LINK_F_P2P_FAIL_OVER; + break; + case SLI4_READ_CFG_TOPO_FC_AL: + flags |= SLI4_INIT_LINK_F_FCAL_ONLY; + if (speed == SLI4_LINK_SPEED_16G || + speed == SLI4_LINK_SPEED_32G) { + efc_log_info(sli4, "unsupported FC-AL speed %d\n", 
+ speed); + init_link->flags0 = cpu_to_le32(flags); + return EFC_FAIL; + } + break; + case SLI4_READ_CFG_TOPO_NON_FC_AL: + flags |= SLI4_INIT_LINK_F_P2P_ONLY; + break; + default: + + efc_log_info(sli4, "unsupported topology %#x\n", + sli4->topology); + + init_link->flags0 = cpu_to_le32(flags); + return EFC_FAIL; + } + + flags &= ~SLI4_INIT_LINK_F_UNFAIR; + flags &= ~SLI4_INIT_LINK_F_NO_LIRP; + flags &= ~SLI4_INIT_LINK_F_LOOP_VALID_CHK; + flags &= ~SLI4_INIT_LINK_F_NO_LISA; + flags &= ~SLI4_INIT_LINK_F_PICK_HI_ALPA; + init_link->flags0 = cpu_to_le32(flags); + + return EFC_SUCCESS; +} + +int +sli_cmd_init_vfi(struct sli4 *sli4, void *buf, u16 vfi, u16 fcfi, u16 vpi) +{ + struct sli4_cmd_init_vfi *init_vfi = buf; + u16 flags = 0; + + memset(buf, 0, SLI4_BMBX_SIZE); + + init_vfi->hdr.command = SLI4_MBX_CMD_INIT_VFI; + init_vfi->vfi = cpu_to_le16(vfi); + init_vfi->fcfi = cpu_to_le16(fcfi); + + /* + * If the VPI is valid, initialize it at the same time as + * the VFI + */ + if (vpi != U16_MAX) { + flags |= SLI4_INIT_VFI_FLAG_VP; + init_vfi->flags0_word = cpu_to_le16(flags); + init_vfi->vpi = cpu_to_le16(vpi); + } + + return EFC_SUCCESS; +} + +int +sli_cmd_init_vpi(struct sli4 *sli4, void *buf, u16 vpi, u16 vfi) +{ + struct sli4_cmd_init_vpi *init_vpi = buf; + + memset(buf, 0, SLI4_BMBX_SIZE); + + init_vpi->hdr.command = SLI4_MBX_CMD_INIT_VPI; + init_vpi->vpi = cpu_to_le16(vpi); + init_vpi->vfi = cpu_to_le16(vfi); + + return EFC_SUCCESS; +} + +int +sli_cmd_post_xri(struct sli4 *sli4, void *buf, u16 xri_base, u16 xri_count) +{ + struct sli4_cmd_post_xri *post_xri = buf; + u16 xri_count_flags = 0; + + memset(buf, 0, SLI4_BMBX_SIZE); + + post_xri->hdr.command = SLI4_MBX_CMD_POST_XRI; + post_xri->xri_base = cpu_to_le16(xri_base); + xri_count_flags = xri_count & SLI4_POST_XRI_COUNT; + xri_count_flags |= SLI4_POST_XRI_FLAG_ENX; + xri_count_flags |= SLI4_POST_XRI_FLAG_VAL; + post_xri->xri_count_flags = cpu_to_le16(xri_count_flags); + + return EFC_SUCCESS; +} + +int +sli_cmd_release_xri(struct sli4 *sli4, void *buf, u8 num_xri) +{ + struct sli4_cmd_release_xri *release_xri = buf; + + memset(buf, 0, SLI4_BMBX_SIZE); + + release_xri->hdr.command = SLI4_MBX_CMD_RELEASE_XRI; + release_xri->xri_count_word = cpu_to_le16(num_xri & + SLI4_RELEASE_XRI_COUNT); + + return EFC_SUCCESS; +} + +static int +sli_cmd_read_config(struct sli4 *sli4, void *buf) +{ + struct sli4_cmd_read_config *read_config = buf; + + memset(buf, 0, SLI4_BMBX_SIZE); + + read_config->hdr.command = SLI4_MBX_CMD_READ_CONFIG; + + return EFC_SUCCESS; +} + +int +sli_cmd_read_nvparms(struct sli4 *sli4, void *buf) +{ + struct sli4_cmd_read_nvparms *read_nvparms = buf; + + memset(buf, 0, SLI4_BMBX_SIZE); + + read_nvparms->hdr.command = SLI4_MBX_CMD_READ_NVPARMS; + + return EFC_SUCCESS; +} + +int +sli_cmd_write_nvparms(struct sli4 *sli4, void *buf, u8 *wwpn, u8 *wwnn, + u8 hard_alpa, u32 preferred_d_id) +{ + struct sli4_cmd_write_nvparms *write_nvparms = buf; + + memset(buf, 0, SLI4_BMBX_SIZE); + + write_nvparms->hdr.command = SLI4_MBX_CMD_WRITE_NVPARMS; + memcpy(write_nvparms->wwpn, wwpn, 8); + memcpy(write_nvparms->wwnn, wwnn, 8); + + write_nvparms->hard_alpa_d_id = + cpu_to_le32((preferred_d_id << 8) | hard_alpa); + return EFC_SUCCESS; +} + +static int +sli_cmd_read_rev(struct sli4 *sli4, void *buf, struct efc_dma *vpd) +{ + struct sli4_cmd_read_rev *read_rev = buf; + + memset(buf, 0, SLI4_BMBX_SIZE); + + read_rev->hdr.command = SLI4_MBX_CMD_READ_REV; + + if (vpd && vpd->size) { + read_rev->flags0_word |= cpu_to_le16(SLI4_READ_REV_FLAG_VPD); + + 
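+		/*
+		 * A VPD buffer was supplied: advertise how much of it is
+		 * available (clamped to the AVAILABLE_LENGTH field) and
+		 * hand its DMA address to the port below.
+		 */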
read_rev->available_length_dword = + cpu_to_le32(vpd->size & + SLI4_READ_REV_AVAILABLE_LENGTH); + + read_rev->hostbuf.low = + cpu_to_le32(lower_32_bits(vpd->phys)); + read_rev->hostbuf.high = + cpu_to_le32(upper_32_bits(vpd->phys)); + } + + return EFC_SUCCESS; +} + +int +sli_cmd_read_sparm64(struct sli4 *sli4, void *buf, struct efc_dma *dma, u16 vpi) +{ + struct sli4_cmd_read_sparm64 *read_sparm64 = buf; + + if (vpi == U16_MAX) { + efc_log_err(sli4, "special VPI not supported!!!\n"); + return EFC_FAIL; + } + + if (!dma || !dma->phys) { + efc_log_err(sli4, "bad DMA buffer\n"); + return EFC_FAIL; + } + + memset(buf, 0, SLI4_BMBX_SIZE); + + read_sparm64->hdr.command = SLI4_MBX_CMD_READ_SPARM64; + + read_sparm64->bde_64.bde_type_buflen = + cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) | + (dma->size & SLI4_BDE_LEN_MASK)); + read_sparm64->bde_64.u.data.low = + cpu_to_le32(lower_32_bits(dma->phys)); + read_sparm64->bde_64.u.data.high = + cpu_to_le32(upper_32_bits(dma->phys)); + + read_sparm64->vpi = cpu_to_le16(vpi); + + return EFC_SUCCESS; +} + +int +sli_cmd_read_topology(struct sli4 *sli4, void *buf, struct efc_dma *dma) +{ + struct sli4_cmd_read_topology *read_topo = buf; + + if (!dma || !dma->size) + return EFC_FAIL; + + if (dma->size < SLI4_MIN_LOOP_MAP_BYTES) { + efc_log_err(sli4, "loop map buffer too small %zx\n", dma->size); + return EFC_FAIL; + } + + memset(buf, 0, SLI4_BMBX_SIZE); + + read_topo->hdr.command = SLI4_MBX_CMD_READ_TOPOLOGY; + + memset(dma->virt, 0, dma->size); + + read_topo->bde_loop_map.bde_type_buflen = + cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) | + (dma->size & SLI4_BDE_LEN_MASK)); + read_topo->bde_loop_map.u.data.low = + cpu_to_le32(lower_32_bits(dma->phys)); + read_topo->bde_loop_map.u.data.high = + cpu_to_le32(upper_32_bits(dma->phys)); + + return EFC_SUCCESS; +} + +int +sli_cmd_reg_fcfi(struct sli4 *sli4, void *buf, u16 index, + struct sli4_cmd_rq_cfg *rq_cfg) +{ + struct sli4_cmd_reg_fcfi *reg_fcfi = buf; + u32 i; + + memset(buf, 0, SLI4_BMBX_SIZE); + + reg_fcfi->hdr.command = SLI4_MBX_CMD_REG_FCFI; + + reg_fcfi->fcf_index = cpu_to_le16(index); + + for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) { + switch (i) { + case 0: + reg_fcfi->rqid0 = rq_cfg[0].rq_id; + break; + case 1: + reg_fcfi->rqid1 = rq_cfg[1].rq_id; + break; + case 2: + reg_fcfi->rqid2 = rq_cfg[2].rq_id; + break; + case 3: + reg_fcfi->rqid3 = rq_cfg[3].rq_id; + break; + } + reg_fcfi->rq_cfg[i].r_ctl_mask = rq_cfg[i].r_ctl_mask; + reg_fcfi->rq_cfg[i].r_ctl_match = rq_cfg[i].r_ctl_match; + reg_fcfi->rq_cfg[i].type_mask = rq_cfg[i].type_mask; + reg_fcfi->rq_cfg[i].type_match = rq_cfg[i].type_match; + } + + return EFC_SUCCESS; +} + +int +sli_cmd_reg_fcfi_mrq(struct sli4 *sli4, void *buf, u8 mode, u16 fcf_index, + u8 rq_selection_policy, u8 mrq_bit_mask, u16 num_mrqs, + struct sli4_cmd_rq_cfg *rq_cfg) +{ + struct sli4_cmd_reg_fcfi_mrq *reg_fcfi_mrq = buf; + u32 i; + u32 mrq_flags = 0; + + memset(buf, 0, SLI4_BMBX_SIZE); + + reg_fcfi_mrq->hdr.command = SLI4_MBX_CMD_REG_FCFI_MRQ; + if (mode == SLI4_CMD_REG_FCFI_SET_FCFI_MODE) { + reg_fcfi_mrq->fcf_index = cpu_to_le16(fcf_index); + goto done; + } + + reg_fcfi_mrq->dw8_vlan = cpu_to_le32(SLI4_REGFCFI_MRQ_MODE); + + for (i = 0; i < SLI4_CMD_REG_FCFI_NUM_RQ_CFG; i++) { + reg_fcfi_mrq->rq_cfg[i].r_ctl_mask = rq_cfg[i].r_ctl_mask; + reg_fcfi_mrq->rq_cfg[i].r_ctl_match = rq_cfg[i].r_ctl_match; + reg_fcfi_mrq->rq_cfg[i].type_mask = rq_cfg[i].type_mask; + reg_fcfi_mrq->rq_cfg[i].type_match = rq_cfg[i].type_match; + + switch (i) { + case 3: + reg_fcfi_mrq->rqid3 = 
rq_cfg[i].rq_id; + break; + case 2: + reg_fcfi_mrq->rqid2 = rq_cfg[i].rq_id; + break; + case 1: + reg_fcfi_mrq->rqid1 = rq_cfg[i].rq_id; + break; + case 0: + reg_fcfi_mrq->rqid0 = rq_cfg[i].rq_id; + break; + } + } + + mrq_flags = num_mrqs & SLI4_REGFCFI_MRQ_MASK_NUM_PAIRS; + mrq_flags |= (mrq_bit_mask << 8); + mrq_flags |= (rq_selection_policy << 12); + reg_fcfi_mrq->dw9_mrqflags = cpu_to_le32(mrq_flags); +done: + return EFC_SUCCESS; +} + +int +sli_cmd_reg_rpi(struct sli4 *sli4, void *buf, u32 rpi, u32 vpi, u32 fc_id, + struct efc_dma *dma, u8 update, u8 enable_t10_pi) +{ + struct sli4_cmd_reg_rpi *reg_rpi = buf; + u32 rportid_flags = 0; + + memset(buf, 0, SLI4_BMBX_SIZE); + + reg_rpi->hdr.command = SLI4_MBX_CMD_REG_RPI; + + reg_rpi->rpi = cpu_to_le16(rpi); + + rportid_flags = fc_id & SLI4_REGRPI_REMOTE_N_PORTID; + + if (update) + rportid_flags |= SLI4_REGRPI_UPD; + else + rportid_flags &= ~SLI4_REGRPI_UPD; + + if (enable_t10_pi) + rportid_flags |= SLI4_REGRPI_ETOW; + else + rportid_flags &= ~SLI4_REGRPI_ETOW; + + reg_rpi->dw2_rportid_flags = cpu_to_le32(rportid_flags); + + reg_rpi->bde_64.bde_type_buflen = + cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) | + (SLI4_REG_RPI_BUF_LEN & SLI4_BDE_LEN_MASK)); + reg_rpi->bde_64.u.data.low = + cpu_to_le32(lower_32_bits(dma->phys)); + reg_rpi->bde_64.u.data.high = + cpu_to_le32(upper_32_bits(dma->phys)); + + reg_rpi->vpi = cpu_to_le16(vpi); + + return EFC_SUCCESS; +} + +int +sli_cmd_reg_vfi(struct sli4 *sli4, void *buf, size_t size, + u16 vfi, u16 fcfi, struct efc_dma dma, + u16 vpi, __be64 sli_wwpn, u32 fc_id) +{ + struct sli4_cmd_reg_vfi *reg_vfi = buf; + + memset(buf, 0, SLI4_BMBX_SIZE); + + reg_vfi->hdr.command = SLI4_MBX_CMD_REG_VFI; + + reg_vfi->vfi = cpu_to_le16(vfi); + + reg_vfi->fcfi = cpu_to_le16(fcfi); + + reg_vfi->sparm.bde_type_buflen = + cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) | + (SLI4_REG_RPI_BUF_LEN & SLI4_BDE_LEN_MASK)); + reg_vfi->sparm.u.data.low = + cpu_to_le32(lower_32_bits(dma.phys)); + reg_vfi->sparm.u.data.high = + cpu_to_le32(upper_32_bits(dma.phys)); + + reg_vfi->e_d_tov = cpu_to_le32(sli4->e_d_tov); + reg_vfi->r_a_tov = cpu_to_le32(sli4->r_a_tov); + + reg_vfi->dw0w1_flags |= cpu_to_le16(SLI4_REGVFI_VP); + reg_vfi->vpi = cpu_to_le16(vpi); + memcpy(reg_vfi->wwpn, &sli_wwpn, sizeof(reg_vfi->wwpn)); + reg_vfi->dw10_lportid_flags = cpu_to_le32(fc_id); + + return EFC_SUCCESS; +} + +int +sli_cmd_reg_vpi(struct sli4 *sli4, void *buf, u32 fc_id, __be64 sli_wwpn, + u16 vpi, u16 vfi, bool update) +{ + struct sli4_cmd_reg_vpi *reg_vpi = buf; + u32 flags = 0; + + memset(buf, 0, SLI4_BMBX_SIZE); + + reg_vpi->hdr.command = SLI4_MBX_CMD_REG_VPI; + + flags = (fc_id & SLI4_REGVPI_LOCAL_N_PORTID); + if (update) + flags |= SLI4_REGVPI_UPD; + else + flags &= ~SLI4_REGVPI_UPD; + + reg_vpi->dw2_lportid_flags = cpu_to_le32(flags); + memcpy(reg_vpi->wwpn, &sli_wwpn, sizeof(reg_vpi->wwpn)); + reg_vpi->vpi = cpu_to_le16(vpi); + reg_vpi->vfi = cpu_to_le16(vfi); + + return EFC_SUCCESS; +} + +static int +sli_cmd_request_features(struct sli4 *sli4, void *buf, u32 features_mask, + bool query) +{ + struct sli4_cmd_request_features *req_features = buf; + + memset(buf, 0, SLI4_BMBX_SIZE); + + req_features->hdr.command = SLI4_MBX_CMD_RQST_FEATURES; + + if (query) + req_features->dw1_qry = cpu_to_le32(SLI4_REQFEAT_QRY); + + req_features->cmd = cpu_to_le32(features_mask); + + return EFC_SUCCESS; +} + +int +sli_cmd_unreg_fcfi(struct sli4 *sli4, void *buf, u16 indicator) +{ + struct sli4_cmd_unreg_fcfi *unreg_fcfi = buf; + + memset(buf, 0, SLI4_BMBX_SIZE); + + 
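+	/*
+	 * UNREG_FCFI needs only the FCFI index; every other field in the
+	 * bootstrap mailbox buffer stays zero from the memset above.
+	 */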
unreg_fcfi->hdr.command = SLI4_MBX_CMD_UNREG_FCFI; + unreg_fcfi->fcfi = cpu_to_le16(indicator); + + return EFC_SUCCESS; +} + +int +sli_cmd_unreg_rpi(struct sli4 *sli4, void *buf, u16 indicator, + enum sli4_resource which, u32 fc_id) +{ + struct sli4_cmd_unreg_rpi *unreg_rpi = buf; + u32 flags = 0; + + memset(buf, 0, SLI4_BMBX_SIZE); + + unreg_rpi->hdr.command = SLI4_MBX_CMD_UNREG_RPI; + switch (which) { + case SLI4_RSRC_RPI: + flags |= SLI4_UNREG_RPI_II_RPI; + if (fc_id == U32_MAX) + break; + + flags |= SLI4_UNREG_RPI_DP; + unreg_rpi->dw2_dest_n_portid = + cpu_to_le32(fc_id & SLI4_UNREG_RPI_DEST_N_PORTID_MASK); + break; + case SLI4_RSRC_VPI: + flags |= SLI4_UNREG_RPI_II_VPI; + break; + case SLI4_RSRC_VFI: + flags |= SLI4_UNREG_RPI_II_VFI; + break; + case SLI4_RSRC_FCFI: + flags |= SLI4_UNREG_RPI_II_FCFI; + break; + default: + efc_log_info(sli4, "unknown type %#x\n", which); + return EFC_FAIL; + } + + unreg_rpi->dw1w1_flags = cpu_to_le16(flags); + unreg_rpi->index = cpu_to_le16(indicator); + + return EFC_SUCCESS; +} + +int +sli_cmd_unreg_vfi(struct sli4 *sli4, void *buf, u16 index, u32 which) +{ + struct sli4_cmd_unreg_vfi *unreg_vfi = buf; + + memset(buf, 0, SLI4_BMBX_SIZE); + + unreg_vfi->hdr.command = SLI4_MBX_CMD_UNREG_VFI; + switch (which) { + case SLI4_UNREG_TYPE_DOMAIN: + unreg_vfi->index = cpu_to_le16(index); + break; + case SLI4_UNREG_TYPE_FCF: + unreg_vfi->index = cpu_to_le16(index); + break; + case SLI4_UNREG_TYPE_ALL: + unreg_vfi->index = cpu_to_le16(U32_MAX); + break; + default: + return EFC_FAIL; + } + + if (which != SLI4_UNREG_TYPE_DOMAIN) + unreg_vfi->dw2_flags = cpu_to_le16(SLI4_UNREG_VFI_II_FCFI); + + return EFC_SUCCESS; +} + +int +sli_cmd_unreg_vpi(struct sli4 *sli4, void *buf, u16 indicator, u32 which) +{ + struct sli4_cmd_unreg_vpi *unreg_vpi = buf; + u32 flags = 0; + + memset(buf, 0, SLI4_BMBX_SIZE); + + unreg_vpi->hdr.command = SLI4_MBX_CMD_UNREG_VPI; + unreg_vpi->index = cpu_to_le16(indicator); + switch (which) { + case SLI4_UNREG_TYPE_PORT: + flags |= SLI4_UNREG_VPI_II_VPI; + break; + case SLI4_UNREG_TYPE_DOMAIN: + flags |= SLI4_UNREG_VPI_II_VFI; + break; + case SLI4_UNREG_TYPE_FCF: + flags |= SLI4_UNREG_VPI_II_FCFI; + break; + case SLI4_UNREG_TYPE_ALL: + /* override indicator */ + unreg_vpi->index = cpu_to_le16(U32_MAX); + flags |= SLI4_UNREG_VPI_II_FCFI; + break; + default: + return EFC_FAIL; + } + + unreg_vpi->dw2w0_flags = cpu_to_le16(flags); + return EFC_SUCCESS; +} + +static int +sli_cmd_common_modify_eq_delay(struct sli4 *sli4, void *buf, + struct sli4_queue *q, int num_q, u32 shift, + u32 delay_mult) +{ + struct sli4_rqst_cmn_modify_eq_delay *req = NULL; + int i; + + req = sli_config_cmd_init(sli4, buf, + SLI4_CFG_PYLD_LENGTH(cmn_modify_eq_delay), NULL); + if (!req) + return EFC_FAIL; + + sli_cmd_fill_hdr(&req->hdr, SLI4_CMN_MODIFY_EQ_DELAY, + SLI4_SUBSYSTEM_COMMON, CMD_V0, + SLI4_RQST_PYLD_LEN(cmn_modify_eq_delay)); + req->num_eq = cpu_to_le32(num_q); + + for (i = 0; i < num_q; i++) { + req->eq_delay_record[i].eq_id = cpu_to_le32(q[i].id); + req->eq_delay_record[i].phase = cpu_to_le32(shift); + req->eq_delay_record[i].delay_multiplier = + cpu_to_le32(delay_mult); + } + + return EFC_SUCCESS; +} + +void +sli4_cmd_lowlevel_set_watchdog(struct sli4 *sli4, void *buf, + size_t size, u16 timeout) +{ + struct sli4_rqst_lowlevel_set_watchdog *req = NULL; + + req = sli_config_cmd_init(sli4, buf, + SLI4_CFG_PYLD_LENGTH(lowlevel_set_watchdog), NULL); + if (!req) + return; + + sli_cmd_fill_hdr(&req->hdr, SLI4_OPC_LOWLEVEL_SET_WATCHDOG, + SLI4_SUBSYSTEM_LOWLEVEL, CMD_V0, 
+ SLI4_RQST_PYLD_LEN(lowlevel_set_watchdog)); + req->watchdog_timeout = cpu_to_le16(timeout); +} + +static int +sli_cmd_common_get_cntl_attributes(struct sli4 *sli4, void *buf, + struct efc_dma *dma) +{ + struct sli4_rqst_hdr *hdr = NULL; + + hdr = sli_config_cmd_init(sli4, buf, SLI4_RQST_CMDSZ(hdr), dma); + if (!hdr) + return EFC_FAIL; + + hdr->opcode = SLI4_CMN_GET_CNTL_ATTRIBUTES; + hdr->subsystem = SLI4_SUBSYSTEM_COMMON; + hdr->request_length = cpu_to_le32(dma->size); + + return EFC_SUCCESS; +} + +static int +sli_cmd_common_get_cntl_addl_attributes(struct sli4 *sli4, void *buf, + struct efc_dma *dma) +{ + struct sli4_rqst_hdr *hdr = NULL; + + hdr = sli_config_cmd_init(sli4, buf, SLI4_RQST_CMDSZ(hdr), dma); + if (!hdr) + return EFC_FAIL; + + hdr->opcode = SLI4_CMN_GET_CNTL_ADDL_ATTRS; + hdr->subsystem = SLI4_SUBSYSTEM_COMMON; + hdr->request_length = cpu_to_le32(dma->size); + + return EFC_SUCCESS; +} + +int +sli_cmd_common_nop(struct sli4 *sli4, void *buf, uint64_t context) +{ + struct sli4_rqst_cmn_nop *nop = NULL; + + nop = sli_config_cmd_init(sli4, buf, SLI4_CFG_PYLD_LENGTH(cmn_nop), + NULL); + if (!nop) + return EFC_FAIL; + + sli_cmd_fill_hdr(&nop->hdr, SLI4_CMN_NOP, SLI4_SUBSYSTEM_COMMON, + CMD_V0, SLI4_RQST_PYLD_LEN(cmn_nop)); + + memcpy(&nop->context, &context, sizeof(context)); + + return EFC_SUCCESS; +} + +int +sli_cmd_common_get_resource_extent_info(struct sli4 *sli4, void *buf, u16 rtype) +{ + struct sli4_rqst_cmn_get_resource_extent_info *ext = NULL; + + ext = sli_config_cmd_init(sli4, buf, + SLI4_RQST_CMDSZ(cmn_get_resource_extent_info), NULL); + if (!ext) + return EFC_FAIL; + + sli_cmd_fill_hdr(&ext->hdr, SLI4_CMN_GET_RSC_EXTENT_INFO, + SLI4_SUBSYSTEM_COMMON, CMD_V0, + SLI4_RQST_PYLD_LEN(cmn_get_resource_extent_info)); + + ext->resource_type = cpu_to_le16(rtype); + + return EFC_SUCCESS; +} + +int +sli_cmd_common_get_sli4_parameters(struct sli4 *sli4, void *buf) +{ + struct sli4_rqst_hdr *hdr = NULL; + + hdr = sli_config_cmd_init(sli4, buf, + SLI4_CFG_PYLD_LENGTH(cmn_get_sli4_params), NULL); + if (!hdr) + return EFC_FAIL; + + hdr->opcode = SLI4_CMN_GET_SLI4_PARAMS; + hdr->subsystem = SLI4_SUBSYSTEM_COMMON; + hdr->request_length = SLI4_RQST_PYLD_LEN(cmn_get_sli4_params); + + return EFC_SUCCESS; +} + +static int +sli_cmd_common_get_port_name(struct sli4 *sli4, void *buf) +{ + struct sli4_rqst_cmn_get_port_name *pname; + + pname = sli_config_cmd_init(sli4, buf, + SLI4_CFG_PYLD_LENGTH(cmn_get_port_name), NULL); + if (!pname) + return EFC_FAIL; + + sli_cmd_fill_hdr(&pname->hdr, SLI4_CMN_GET_PORT_NAME, + SLI4_SUBSYSTEM_COMMON, CMD_V1, + SLI4_RQST_PYLD_LEN(cmn_get_port_name)); + + /* Set the port type value (ethernet=0, FC=1) for V1 commands */ + pname->port_type = SLI4_PORT_TYPE_FC; + + return EFC_SUCCESS; +} + +int +sli_cmd_common_write_object(struct sli4 *sli4, void *buf, u16 noc, + u16 eof, u32 desired_write_length, + u32 offset, char *obj_name, + struct efc_dma *dma) +{ + struct sli4_rqst_cmn_write_object *wr_obj = NULL; + struct sli4_bde *bde; + u32 dwflags = 0; + + wr_obj = sli_config_cmd_init(sli4, buf, + SLI4_RQST_CMDSZ(cmn_write_object) + sizeof(*bde), NULL); + if (!wr_obj) + return EFC_FAIL; + + sli_cmd_fill_hdr(&wr_obj->hdr, SLI4_CMN_WRITE_OBJECT, + SLI4_SUBSYSTEM_COMMON, CMD_V0, + SLI4_RQST_PYLD_LEN_VAR(cmn_write_object, sizeof(*bde))); + + if (noc) + dwflags |= SLI4_RQ_DES_WRITE_LEN_NOC; + if (eof) + dwflags |= SLI4_RQ_DES_WRITE_LEN_EOF; + dwflags |= (desired_write_length & SLI4_RQ_DES_WRITE_LEN); + + wr_obj->desired_write_len_dword = cpu_to_le32(dwflags); + + 
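+	/*
+	 * The object is addressed by name plus byte offset; the single
+	 * BDE built below points the port at the host data buffer.
+	 */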
wr_obj->write_offset = cpu_to_le32(offset); + strncpy(wr_obj->object_name, obj_name, sizeof(wr_obj->object_name) - 1); + wr_obj->host_buffer_descriptor_count = cpu_to_le32(1); + + bde = (struct sli4_bde *)wr_obj->host_buffer_descriptor; + + /* Setup to transfer xfer_size bytes to device */ + bde->bde_type_buflen = + cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) | + (desired_write_length & SLI4_BDE_LEN_MASK)); + bde->u.data.low = cpu_to_le32(lower_32_bits(dma->phys)); + bde->u.data.high = cpu_to_le32(upper_32_bits(dma->phys)); + + return EFC_SUCCESS; +} + +int +sli_cmd_common_delete_object(struct sli4 *sli4, void *buf, char *obj_name) +{ + struct sli4_rqst_cmn_delete_object *req = NULL; + + req = sli_config_cmd_init(sli4, buf, + SLI4_RQST_CMDSZ(cmn_delete_object), NULL); + if (!req) + return EFC_FAIL; + + sli_cmd_fill_hdr(&req->hdr, SLI4_CMN_DELETE_OBJECT, + SLI4_SUBSYSTEM_COMMON, CMD_V0, + SLI4_RQST_PYLD_LEN(cmn_delete_object)); + + strncpy(req->object_name, obj_name, sizeof(req->object_name) - 1); + return EFC_SUCCESS; +} + +int +sli_cmd_common_read_object(struct sli4 *sli4, void *buf, u32 desired_read_len, + u32 offset, char *obj_name, struct efc_dma *dma) +{ + struct sli4_rqst_cmn_read_object *rd_obj = NULL; + struct sli4_bde *bde; + + rd_obj = sli_config_cmd_init(sli4, buf, + SLI4_RQST_CMDSZ(cmn_read_object) + sizeof(*bde), NULL); + if (!rd_obj) + return EFC_FAIL; + + sli_cmd_fill_hdr(&rd_obj->hdr, SLI4_CMN_READ_OBJECT, + SLI4_SUBSYSTEM_COMMON, CMD_V0, + SLI4_RQST_PYLD_LEN_VAR(cmn_read_object, sizeof(*bde))); + rd_obj->desired_read_length_dword = + cpu_to_le32(desired_read_len & SLI4_REQ_DESIRE_READLEN); + + rd_obj->read_offset = cpu_to_le32(offset); + strncpy(rd_obj->object_name, obj_name, sizeof(rd_obj->object_name) - 1); + rd_obj->host_buffer_descriptor_count = cpu_to_le32(1); + + bde = (struct sli4_bde *)rd_obj->host_buffer_descriptor; + + /* Setup to transfer xfer_size bytes to device */ + bde->bde_type_buflen = + cpu_to_le32((SLI4_BDE_TYPE_VAL(64)) | + (desired_read_len & SLI4_BDE_LEN_MASK)); + if (dma) { + bde->u.data.low = cpu_to_le32(lower_32_bits(dma->phys)); + bde->u.data.high = cpu_to_le32(upper_32_bits(dma->phys)); + } else { + bde->u.data.low = 0; + bde->u.data.high = 0; + } + + return EFC_SUCCESS; +} + +int +sli_cmd_dmtf_exec_clp_cmd(struct sli4 *sli4, void *buf, struct efc_dma *cmd, + struct efc_dma *resp) +{ + struct sli4_rqst_dmtf_exec_clp_cmd *clp_cmd = NULL; + + clp_cmd = sli_config_cmd_init(sli4, buf, + SLI4_RQST_CMDSZ(dmtf_exec_clp_cmd), NULL); + if (!clp_cmd) + return EFC_FAIL; + + sli_cmd_fill_hdr(&clp_cmd->hdr, DMTF_EXEC_CLP_CMD, SLI4_SUBSYSTEM_DMTF, + CMD_V0, SLI4_RQST_PYLD_LEN(dmtf_exec_clp_cmd)); + + clp_cmd->cmd_buf_length = cpu_to_le32(cmd->size); + clp_cmd->cmd_buf_addr_low = cpu_to_le32(lower_32_bits(cmd->phys)); + clp_cmd->cmd_buf_addr_high = cpu_to_le32(upper_32_bits(cmd->phys)); + clp_cmd->resp_buf_length = cpu_to_le32(resp->size); + clp_cmd->resp_buf_addr_low = cpu_to_le32(lower_32_bits(resp->phys)); + clp_cmd->resp_buf_addr_high = cpu_to_le32(upper_32_bits(resp->phys)); + return EFC_SUCCESS; +} + +int +sli_cmd_common_set_dump_location(struct sli4 *sli4, void *buf, bool query, + bool is_buffer_list, + struct efc_dma *buffer, u8 fdb) +{ + struct sli4_rqst_cmn_set_dump_location *set_dump_loc = NULL; + u32 buffer_length_flag = 0; + + set_dump_loc = sli_config_cmd_init(sli4, buf, + SLI4_RQST_CMDSZ(cmn_set_dump_location), NULL); + if (!set_dump_loc) + return EFC_FAIL; + + sli_cmd_fill_hdr(&set_dump_loc->hdr, SLI4_CMN_SET_DUMP_LOCATION, + SLI4_SUBSYSTEM_COMMON, 
CMD_V0,
+			 SLI4_RQST_PYLD_LEN(cmn_set_dump_location));
+
+	if (is_buffer_list)
+		buffer_length_flag |= SLI4_CMN_SET_DUMP_BLP;
+
+	if (query)
+		buffer_length_flag |= SLI4_CMN_SET_DUMP_QRY;
+
+	if (fdb)
+		buffer_length_flag |= SLI4_CMN_SET_DUMP_FDB;
+
+	if (buffer) {
+		set_dump_loc->buf_addr_low =
+			cpu_to_le32(lower_32_bits(buffer->phys));
+		set_dump_loc->buf_addr_high =
+			cpu_to_le32(upper_32_bits(buffer->phys));
+
+		buffer_length_flag |=
+			buffer->len & SLI4_CMN_SET_DUMP_BUFFER_LEN;
+	} else {
+		set_dump_loc->buf_addr_low = 0;
+		set_dump_loc->buf_addr_high = 0;
+		set_dump_loc->buffer_length_dword = 0;
+	}
+	set_dump_loc->buffer_length_dword = cpu_to_le32(buffer_length_flag);
+	return EFC_SUCCESS;
+}
+
+int
+sli_cmd_common_set_features(struct sli4 *sli4, void *buf, u32 feature,
+			    u32 param_len, void *parameter)
+{
+	struct sli4_rqst_cmn_set_features *cmd = NULL;
+
+	cmd = sli_config_cmd_init(sli4, buf,
+				  SLI4_RQST_CMDSZ(cmn_set_features), NULL);
+	if (!cmd)
+		return EFC_FAIL;
+
+	sli_cmd_fill_hdr(&cmd->hdr, SLI4_CMN_SET_FEATURES,
+			 SLI4_SUBSYSTEM_COMMON, CMD_V0,
+			 SLI4_RQST_PYLD_LEN(cmn_set_features));
+
+	cmd->feature = cpu_to_le32(feature);
+	cmd->param_len = cpu_to_le32(param_len);
+	memcpy(cmd->params, parameter, param_len);
+
+	return EFC_SUCCESS;
+}
+
+int
+sli_cqe_mq(struct sli4 *sli4, void *buf)
+{
+	struct sli4_mcqe *mcqe = buf;
+	u32 dwflags = le32_to_cpu(mcqe->dw3_flags);
+	/*
+	 * Firmware can split mbx completions into two MCQEs: first with only
+	 * the "consumed" bit set and a second with the "complete" bit set.
+	 * Thus, ignore MCQE unless "complete" is set.
+	 */
+	if (!(dwflags & SLI4_MCQE_COMPLETED))
+		return SLI4_MCQE_STATUS_NOT_COMPLETED;
+
+	if (le16_to_cpu(mcqe->completion_status)) {
+		efc_log_info(sli4, "status(st=%#x ext=%#x con=%d cmp=%d ae=%d val=%d)\n",
+			     le16_to_cpu(mcqe->completion_status),
+			     le16_to_cpu(mcqe->extended_status),
+			     (dwflags & SLI4_MCQE_CONSUMED),
+			     (dwflags & SLI4_MCQE_COMPLETED),
+			     (dwflags & SLI4_MCQE_AE),
+			     (dwflags & SLI4_MCQE_VALID));
+	}
+
+	return le16_to_cpu(mcqe->completion_status);
+}
+
+int
+sli_cqe_async(struct sli4 *sli4, void *buf)
+{
+	struct sli4_acqe *acqe = buf;
+	int rc = EFC_FAIL;
+
+	if (!buf) {
+		efc_log_err(sli4, "bad parameter sli4=%p buf=%p\n", sli4, buf);
+		return EFC_FAIL;
+	}
+
+	switch (acqe->event_code) {
+	case SLI4_ACQE_EVENT_CODE_LINK_STATE:
+		efc_log_info(sli4, "Unsupported by FC link, evt code:%#x\n",
+			     acqe->event_code);
+		break;
+	case SLI4_ACQE_EVENT_CODE_GRP_5:
+		efc_log_info(sli4, "ACQE GRP5\n");
+		break;
+	case SLI4_ACQE_EVENT_CODE_SLI_PORT_EVENT:
+		efc_log_info(sli4, "ACQE SLI Port, type=0x%x, data1,2=0x%08x,0x%08x\n",
+			     acqe->event_type,
+			     le32_to_cpu(acqe->event_data[0]),
+			     le32_to_cpu(acqe->event_data[1]));
+		break;
+	case SLI4_ACQE_EVENT_CODE_FC_LINK_EVENT:
+		rc = sli_fc_process_link_attention(sli4, buf);
+		break;
+	default:
+		efc_log_info(sli4, "ACQE unknown=%#x\n", acqe->event_code);
+	}
+
+	return rc;
+}

From patchwork Thu Jan 7 22:58:42 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: James Smart
X-Patchwork-Id: 358593
From: James Smart
To: linux-scsi@vger.kernel.org
Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner
Subject: [PATCH v7 08/31] elx: libefc: Generic state machine framework
Date: Thu, 7 Jan 2021 14:58:42 -0800
Message-Id: <20210107225905.18186-9-jsmart2021@gmail.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210107225905.18186-1-jsmart2021@gmail.com>
References: <20210107225905.18186-1-jsmart2021@gmail.com>
MIME-Version: 1.0
Precedence: bulk
List-ID:
X-Mailing-List: linux-scsi@vger.kernel.org

This patch starts the population of the libefc library. The library
will contain common tasks usable by a target or initiator driver. The
library will also contain a FC discovery state machine interface.
This patch creates the library directory and adds definitions for the discovery state machine interface. Co-developed-by: Ram Vegesna Signed-off-by: Ram Vegesna Signed-off-by: James Smart Reviewed-by: Hannes Reinecke Reviewed-by: Daniel Wagner --- drivers/scsi/elx/libefc/efc_sm.c | 54 +++++++++ drivers/scsi/elx/libefc/efc_sm.h | 197 +++++++++++++++++++++++++++++++ 2 files changed, 251 insertions(+) create mode 100644 drivers/scsi/elx/libefc/efc_sm.c create mode 100644 drivers/scsi/elx/libefc/efc_sm.h diff --git a/drivers/scsi/elx/libefc/efc_sm.c b/drivers/scsi/elx/libefc/efc_sm.c new file mode 100644 index 000000000000..252399c19293 --- /dev/null +++ b/drivers/scsi/elx/libefc/efc_sm.c @@ -0,0 +1,54 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2021 Broadcom. All Rights Reserved. The term + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. + */ + +/* + * Generic state machine framework. + */ +#include "efc.h" +#include "efc_sm.h" + +/** + * efc_sm_post_event() - Post an event to a context. + * + * @ctx: State machine context + * @evt: Event to post + * @data: Event-specific data (if any) + */ +int +efc_sm_post_event(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *data) +{ + if (!ctx->current_state) + return EFC_FAIL; + + ctx->current_state(ctx, evt, data); + return EFC_SUCCESS; +} + +void +efc_sm_transition(struct efc_sm_ctx *ctx, + void (*state)(struct efc_sm_ctx *, + enum efc_sm_event, void *), void *data) + +{ + if (ctx->current_state == state) { + efc_sm_post_event(ctx, EFC_EVT_REENTER, data); + } else { + efc_sm_post_event(ctx, EFC_EVT_EXIT, data); + ctx->current_state = state; + efc_sm_post_event(ctx, EFC_EVT_ENTER, data); + } +} + +static char *event_name[] = EFC_SM_EVENT_NAME; + +const char *efc_sm_event_name(enum efc_sm_event evt) +{ + if (evt > EFC_EVT_LAST) + return "unknown"; + + return event_name[evt]; +} diff --git a/drivers/scsi/elx/libefc/efc_sm.h b/drivers/scsi/elx/libefc/efc_sm.h new file mode 100644 index 000000000000..e28835e7717d --- /dev/null +++ b/drivers/scsi/elx/libefc/efc_sm.h @@ -0,0 +1,197 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2021 Broadcom. All Rights Reserved. The term + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. + * + */ + +/** + * Generic state machine framework declarations. 
+ */ + +#ifndef _EFC_SM_H +#define _EFC_SM_H + +struct efc_sm_ctx; + +/* State Machine events */ +enum efc_sm_event { + /* Common Events */ + EFC_EVT_ENTER, + EFC_EVT_REENTER, + EFC_EVT_EXIT, + EFC_EVT_SHUTDOWN, + EFC_EVT_ALL_CHILD_NODES_FREE, + EFC_EVT_RESUME, + EFC_EVT_TIMER_EXPIRED, + + /* Domain Events */ + EFC_EVT_RESPONSE, + EFC_EVT_ERROR, + + EFC_EVT_DOMAIN_FOUND, + EFC_EVT_DOMAIN_ALLOC_OK, + EFC_EVT_DOMAIN_ALLOC_FAIL, + EFC_EVT_DOMAIN_REQ_ATTACH, + EFC_EVT_DOMAIN_ATTACH_OK, + EFC_EVT_DOMAIN_ATTACH_FAIL, + EFC_EVT_DOMAIN_LOST, + EFC_EVT_DOMAIN_FREE_OK, + EFC_EVT_DOMAIN_FREE_FAIL, + EFC_EVT_HW_DOMAIN_REQ_ATTACH, + EFC_EVT_HW_DOMAIN_REQ_FREE, + + /* Sport Events */ + EFC_EVT_NPORT_ALLOC_OK, + EFC_EVT_NPORT_ALLOC_FAIL, + EFC_EVT_NPORT_ATTACH_OK, + EFC_EVT_NPORT_ATTACH_FAIL, + EFC_EVT_NPORT_FREE_OK, + EFC_EVT_NPORT_FREE_FAIL, + EFC_EVT_NPORT_TOPOLOGY_NOTIFY, + EFC_EVT_HW_PORT_ALLOC_OK, + EFC_EVT_HW_PORT_ALLOC_FAIL, + EFC_EVT_HW_PORT_ATTACH_OK, + EFC_EVT_HW_PORT_REQ_ATTACH, + EFC_EVT_HW_PORT_REQ_FREE, + EFC_EVT_HW_PORT_FREE_OK, + + /* Login Events */ + EFC_EVT_SRRS_ELS_REQ_OK, + EFC_EVT_SRRS_ELS_CMPL_OK, + EFC_EVT_SRRS_ELS_REQ_FAIL, + EFC_EVT_SRRS_ELS_CMPL_FAIL, + EFC_EVT_SRRS_ELS_REQ_RJT, + EFC_EVT_NODE_ATTACH_OK, + EFC_EVT_NODE_ATTACH_FAIL, + EFC_EVT_NODE_FREE_OK, + EFC_EVT_NODE_FREE_FAIL, + EFC_EVT_ELS_FRAME, + EFC_EVT_ELS_REQ_TIMEOUT, + EFC_EVT_ELS_REQ_ABORTED, + /* request an ELS IO be aborted */ + EFC_EVT_ABORT_ELS, + /* ELS abort process complete */ + EFC_EVT_ELS_ABORT_CMPL, + + EFC_EVT_ABTS_RCVD, + + /* node is not in the GID_PT payload */ + EFC_EVT_NODE_MISSING, + /* node is allocated and in the GID_PT payload */ + EFC_EVT_NODE_REFOUND, + /* node shutting down due to PLOGI recvd (implicit logo) */ + EFC_EVT_SHUTDOWN_IMPLICIT_LOGO, + /* node shutting down due to LOGO recvd/sent (explicit logo) */ + EFC_EVT_SHUTDOWN_EXPLICIT_LOGO, + + EFC_EVT_PLOGI_RCVD, + EFC_EVT_FLOGI_RCVD, + EFC_EVT_LOGO_RCVD, + EFC_EVT_PRLI_RCVD, + EFC_EVT_PRLO_RCVD, + EFC_EVT_PDISC_RCVD, + EFC_EVT_FDISC_RCVD, + EFC_EVT_ADISC_RCVD, + EFC_EVT_RSCN_RCVD, + EFC_EVT_SCR_RCVD, + EFC_EVT_ELS_RCVD, + + EFC_EVT_FCP_CMD_RCVD, + + EFC_EVT_GIDPT_DELAY_EXPIRED, + + /* SCSI Target Server events */ + EFC_EVT_NODE_ACTIVE_IO_LIST_EMPTY, + EFC_EVT_NODE_DEL_INI_COMPLETE, + EFC_EVT_NODE_DEL_TGT_COMPLETE, + EFC_EVT_NODE_SESS_REG_OK, + EFC_EVT_NODE_SESS_REG_FAIL, + + /* Must be last */ + EFC_EVT_LAST +}; + +/* State Machine event name lookup array */ +#define EFC_SM_EVENT_NAME { \ + [EFC_EVT_ENTER] = "EFC_EVT_ENTER", \ + [EFC_EVT_REENTER] = "EFC_EVT_REENTER", \ + [EFC_EVT_EXIT] = "EFC_EVT_EXIT", \ + [EFC_EVT_SHUTDOWN] = "EFC_EVT_SHUTDOWN", \ + [EFC_EVT_ALL_CHILD_NODES_FREE] = "EFC_EVT_ALL_CHILD_NODES_FREE",\ + [EFC_EVT_RESUME] = "EFC_EVT_RESUME", \ + [EFC_EVT_TIMER_EXPIRED] = "EFC_EVT_TIMER_EXPIRED", \ + [EFC_EVT_RESPONSE] = "EFC_EVT_RESPONSE", \ + [EFC_EVT_ERROR] = "EFC_EVT_ERROR", \ + [EFC_EVT_DOMAIN_FOUND] = "EFC_EVT_DOMAIN_FOUND", \ + [EFC_EVT_DOMAIN_ALLOC_OK] = "EFC_EVT_DOMAIN_ALLOC_OK", \ + [EFC_EVT_DOMAIN_ALLOC_FAIL] = "EFC_EVT_DOMAIN_ALLOC_FAIL", \ + [EFC_EVT_DOMAIN_REQ_ATTACH] = "EFC_EVT_DOMAIN_REQ_ATTACH", \ + [EFC_EVT_DOMAIN_ATTACH_OK] = "EFC_EVT_DOMAIN_ATTACH_OK", \ + [EFC_EVT_DOMAIN_ATTACH_FAIL] = "EFC_EVT_DOMAIN_ATTACH_FAIL", \ + [EFC_EVT_DOMAIN_LOST] = "EFC_EVT_DOMAIN_LOST", \ + [EFC_EVT_DOMAIN_FREE_OK] = "EFC_EVT_DOMAIN_FREE_OK", \ + [EFC_EVT_DOMAIN_FREE_FAIL] = "EFC_EVT_DOMAIN_FREE_FAIL", \ + [EFC_EVT_HW_DOMAIN_REQ_ATTACH] = "EFC_EVT_HW_DOMAIN_REQ_ATTACH",\ + [EFC_EVT_HW_DOMAIN_REQ_FREE] = 
"EFC_EVT_HW_DOMAIN_REQ_FREE", \ + [EFC_EVT_NPORT_ALLOC_OK] = "EFC_EVT_NPORT_ALLOC_OK", \ + [EFC_EVT_NPORT_ALLOC_FAIL] = "EFC_EVT_NPORT_ALLOC_FAIL", \ + [EFC_EVT_NPORT_ATTACH_OK] = "EFC_EVT_NPORT_ATTACH_OK", \ + [EFC_EVT_NPORT_ATTACH_FAIL] = "EFC_EVT_NPORT_ATTACH_FAIL", \ + [EFC_EVT_NPORT_FREE_OK] = "EFC_EVT_NPORT_FREE_OK", \ + [EFC_EVT_NPORT_FREE_FAIL] = "EFC_EVT_NPORT_FREE_FAIL", \ + [EFC_EVT_NPORT_TOPOLOGY_NOTIFY] = "EFC_EVT_NPORT_TOPOLOGY_NOTIFY",\ + [EFC_EVT_HW_PORT_ALLOC_OK] = "EFC_EVT_HW_PORT_ALLOC_OK", \ + [EFC_EVT_HW_PORT_ALLOC_FAIL] = "EFC_EVT_HW_PORT_ALLOC_FAIL", \ + [EFC_EVT_HW_PORT_ATTACH_OK] = "EFC_EVT_HW_PORT_ATTACH_OK", \ + [EFC_EVT_HW_PORT_REQ_ATTACH] = "EFC_EVT_HW_PORT_REQ_ATTACH", \ + [EFC_EVT_HW_PORT_REQ_FREE] = "EFC_EVT_HW_PORT_REQ_FREE", \ + [EFC_EVT_HW_PORT_FREE_OK] = "EFC_EVT_HW_PORT_FREE_OK", \ + [EFC_EVT_SRRS_ELS_REQ_OK] = "EFC_EVT_SRRS_ELS_REQ_OK", \ + [EFC_EVT_SRRS_ELS_CMPL_OK] = "EFC_EVT_SRRS_ELS_CMPL_OK", \ + [EFC_EVT_SRRS_ELS_REQ_FAIL] = "EFC_EVT_SRRS_ELS_REQ_FAIL", \ + [EFC_EVT_SRRS_ELS_CMPL_FAIL] = "EFC_EVT_SRRS_ELS_CMPL_FAIL", \ + [EFC_EVT_SRRS_ELS_REQ_RJT] = "EFC_EVT_SRRS_ELS_REQ_RJT", \ + [EFC_EVT_NODE_ATTACH_OK] = "EFC_EVT_NODE_ATTACH_OK", \ + [EFC_EVT_NODE_ATTACH_FAIL] = "EFC_EVT_NODE_ATTACH_FAIL", \ + [EFC_EVT_NODE_FREE_OK] = "EFC_EVT_NODE_FREE_OK", \ + [EFC_EVT_NODE_FREE_FAIL] = "EFC_EVT_NODE_FREE_FAIL", \ + [EFC_EVT_ELS_FRAME] = "EFC_EVT_ELS_FRAME", \ + [EFC_EVT_ELS_REQ_TIMEOUT] = "EFC_EVT_ELS_REQ_TIMEOUT", \ + [EFC_EVT_ELS_REQ_ABORTED] = "EFC_EVT_ELS_REQ_ABORTED", \ + [EFC_EVT_ABORT_ELS] = "EFC_EVT_ABORT_ELS", \ + [EFC_EVT_ELS_ABORT_CMPL] = "EFC_EVT_ELS_ABORT_CMPL", \ + [EFC_EVT_ABTS_RCVD] = "EFC_EVT_ABTS_RCVD", \ + [EFC_EVT_NODE_MISSING] = "EFC_EVT_NODE_MISSING", \ + [EFC_EVT_NODE_REFOUND] = "EFC_EVT_NODE_REFOUND", \ + [EFC_EVT_SHUTDOWN_IMPLICIT_LOGO] = "EFC_EVT_SHUTDOWN_IMPLICIT_LOGO",\ + [EFC_EVT_SHUTDOWN_EXPLICIT_LOGO] = "EFC_EVT_SHUTDOWN_EXPLICIT_LOGO",\ + [EFC_EVT_PLOGI_RCVD] = "EFC_EVT_PLOGI_RCVD", \ + [EFC_EVT_FLOGI_RCVD] = "EFC_EVT_FLOGI_RCVD", \ + [EFC_EVT_LOGO_RCVD] = "EFC_EVT_LOGO_RCVD", \ + [EFC_EVT_PRLI_RCVD] = "EFC_EVT_PRLI_RCVD", \ + [EFC_EVT_PRLO_RCVD] = "EFC_EVT_PRLO_RCVD", \ + [EFC_EVT_PDISC_RCVD] = "EFC_EVT_PDISC_RCVD", \ + [EFC_EVT_FDISC_RCVD] = "EFC_EVT_FDISC_RCVD", \ + [EFC_EVT_ADISC_RCVD] = "EFC_EVT_ADISC_RCVD", \ + [EFC_EVT_RSCN_RCVD] = "EFC_EVT_RSCN_RCVD", \ + [EFC_EVT_SCR_RCVD] = "EFC_EVT_SCR_RCVD", \ + [EFC_EVT_ELS_RCVD] = "EFC_EVT_ELS_RCVD", \ + [EFC_EVT_FCP_CMD_RCVD] = "EFC_EVT_FCP_CMD_RCVD", \ + [EFC_EVT_NODE_DEL_INI_COMPLETE] = "EFC_EVT_NODE_DEL_INI_COMPLETE",\ + [EFC_EVT_NODE_DEL_TGT_COMPLETE] = "EFC_EVT_NODE_DEL_TGT_COMPLETE",\ + [EFC_EVT_LAST] = "EFC_EVT_LAST", \ +} + +int +efc_sm_post_event(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *data); +void +efc_sm_transition(struct efc_sm_ctx *ctx, + void (*state)(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg), + void *data); +void efc_sm_disable(struct efc_sm_ctx *ctx); +const char *efc_sm_event_name(enum efc_sm_event evt); + +#endif /* ! 
_EFC_SM_H */

From patchwork Thu Jan 7 22:58:44 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: James Smart
X-Patchwork-Id: 358592
From: James Smart
To: linux-scsi@vger.kernel.org
Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner
Subject: [PATCH v7 10/31] elx:
libefc: FC Domain state machine interfaces Date: Thu, 7 Jan 2021 14:58:44 -0800 Message-Id: <20210107225905.18186-11-jsmart2021@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210107225905.18186-1-jsmart2021@gmail.com> References: <20210107225905.18186-1-jsmart2021@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org This patch continues the libefc library population. This patch adds library interface definitions for: - FC Domain registration, allocation and deallocation sequence Co-developed-by: Ram Vegesna Signed-off-by: Ram Vegesna Signed-off-by: James Smart Reviewed-by: Hannes Reinecke Reviewed-by: Daniel Wagner --- drivers/scsi/elx/libefc/efc_domain.c | 1093 ++++++++++++++++++++++++++ drivers/scsi/elx/libefc/efc_domain.h | 54 ++ 2 files changed, 1147 insertions(+) create mode 100644 drivers/scsi/elx/libefc/efc_domain.c create mode 100644 drivers/scsi/elx/libefc/efc_domain.h diff --git a/drivers/scsi/elx/libefc/efc_domain.c b/drivers/scsi/elx/libefc/efc_domain.c new file mode 100644 index 000000000000..eddb98a601ad --- /dev/null +++ b/drivers/scsi/elx/libefc/efc_domain.c @@ -0,0 +1,1093 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2021 Broadcom. All Rights Reserved. The term + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. + */ + +/* + * domain_sm Domain State Machine: States + */ + +#include "efc.h" + +int +efc_domain_cb(void *arg, int event, void *data) +{ + struct efc *efc = arg; + struct efc_domain *domain = NULL; + int rc = 0; + unsigned long flags = 0; + + if (event != EFC_HW_DOMAIN_FOUND) + domain = data; + + /* Accept domain callback events from the user driver */ + spin_lock_irqsave(&efc->lock, flags); + switch (event) { + case EFC_HW_DOMAIN_FOUND: { + u64 fcf_wwn = 0; + struct efc_domain_record *drec = data; + + /* extract the fcf_wwn */ + fcf_wwn = be64_to_cpu(*((__be64 *)drec->wwn)); + + efc_log_debug(efc, "Domain found: wwn %016llX\n", fcf_wwn); + + /* lookup domain, or allocate a new one */ + domain = efc->domain; + if (!domain) { + domain = efc_domain_alloc(efc, fcf_wwn); + if (!domain) { + efc_log_err(efc, "efc_domain_alloc() failed\n"); + rc = -1; + break; + } + efc_sm_transition(&domain->drvsm, __efc_domain_init, + NULL); + } + efc_domain_post_event(domain, EFC_EVT_DOMAIN_FOUND, drec); + break; + } + + case EFC_HW_DOMAIN_LOST: + domain_trace(domain, "EFC_HW_DOMAIN_LOST:\n"); + efc->hold_frames = true; + efc_domain_post_event(domain, EFC_EVT_DOMAIN_LOST, NULL); + break; + + case EFC_HW_DOMAIN_ALLOC_OK: + domain_trace(domain, "EFC_HW_DOMAIN_ALLOC_OK:\n"); + efc_domain_post_event(domain, EFC_EVT_DOMAIN_ALLOC_OK, NULL); + break; + + case EFC_HW_DOMAIN_ALLOC_FAIL: + domain_trace(domain, "EFC_HW_DOMAIN_ALLOC_FAIL:\n"); + efc_domain_post_event(domain, EFC_EVT_DOMAIN_ALLOC_FAIL, + NULL); + break; + + case EFC_HW_DOMAIN_ATTACH_OK: + domain_trace(domain, "EFC_HW_DOMAIN_ATTACH_OK:\n"); + efc_domain_post_event(domain, EFC_EVT_DOMAIN_ATTACH_OK, NULL); + break; + + case EFC_HW_DOMAIN_ATTACH_FAIL: + domain_trace(domain, "EFC_HW_DOMAIN_ATTACH_FAIL:\n"); + efc_domain_post_event(domain, + EFC_EVT_DOMAIN_ATTACH_FAIL, NULL); + break; + + case EFC_HW_DOMAIN_FREE_OK: + domain_trace(domain, "EFC_HW_DOMAIN_FREE_OK:\n"); + efc_domain_post_event(domain, EFC_EVT_DOMAIN_FREE_OK, NULL); + break; + + case EFC_HW_DOMAIN_FREE_FAIL: + domain_trace(domain, "EFC_HW_DOMAIN_FREE_FAIL:\n"); + efc_domain_post_event(domain, EFC_EVT_DOMAIN_FREE_FAIL, NULL); + break; + + default: + efc_log_warn(efc, "unsupported event 
%#x\n", event); + } + spin_unlock_irqrestore(&efc->lock, flags); + + if (efc->domain && domain->req_accept_frames) { + domain->req_accept_frames = false; + efc->hold_frames = false; + } + + return rc; +} + +static void +_efc_domain_free(struct kref *arg) +{ + struct efc_domain *domain = container_of(arg, struct efc_domain, ref); + struct efc *efc = domain->efc; + + if (efc->domain_free_cb) + (*efc->domain_free_cb)(efc, efc->domain_free_cb_arg); + + kfree(domain); +} + +void +efc_domain_free(struct efc_domain *domain) +{ + struct efc *efc; + + efc = domain->efc; + + /* Hold frames to clear the domain pointer from the xport lookup */ + efc->hold_frames = false; + + efc_log_debug(efc, "Domain free: wwn %016llX\n", domain->fcf_wwn); + + xa_destroy(&domain->lookup); + efc->domain = NULL; + kref_put(&domain->ref, domain->release); +} + +struct efc_domain * +efc_domain_alloc(struct efc *efc, uint64_t fcf_wwn) +{ + struct efc_domain *domain; + + domain = kzalloc(sizeof(*domain), GFP_ATOMIC); + if (!domain) + return NULL; + + domain->efc = efc; + domain->drvsm.app = domain; + + /* initialize refcount */ + kref_init(&domain->ref); + domain->release = _efc_domain_free; + + xa_init(&domain->lookup); + + INIT_LIST_HEAD(&domain->nport_list); + efc->domain = domain; + domain->fcf_wwn = fcf_wwn; + efc_log_debug(efc, "Domain allocated: wwn %016llX\n", domain->fcf_wwn); + + return domain; +} + +void +efc_register_domain_free_cb(struct efc *efc, + void (*callback)(struct efc *efc, void *arg), + void *arg) +{ + /* Register a callback to be called when the domain is freed */ + efc->domain_free_cb = callback; + efc->domain_free_cb_arg = arg; + if (!efc->domain && callback) + (*callback)(efc, arg); +} + +static void +__efc_domain_common(const char *funcname, struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_domain *domain = ctx->app; + + switch (evt) { + case EFC_EVT_ENTER: + case EFC_EVT_REENTER: + case EFC_EVT_EXIT: + case EFC_EVT_ALL_CHILD_NODES_FREE: + /* + * this can arise if an FLOGI fails on the NPORT, + * and the NPORT is shutdown + */ + break; + default: + efc_log_warn(domain->efc, "%-20s %-20s not handled\n", + funcname, efc_sm_event_name(evt)); + } +} + +static void +__efc_domain_common_shutdown(const char *funcname, struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_domain *domain = ctx->app; + + switch (evt) { + case EFC_EVT_ENTER: + case EFC_EVT_REENTER: + case EFC_EVT_EXIT: + break; + case EFC_EVT_DOMAIN_FOUND: + /* save drec, mark domain_found_pending */ + memcpy(&domain->pending_drec, arg, + sizeof(domain->pending_drec)); + domain->domain_found_pending = true; + break; + case EFC_EVT_DOMAIN_LOST: + /* unmark domain_found_pending */ + domain->domain_found_pending = false; + break; + + default: + efc_log_warn(domain->efc, "%-20s %-20s not handled\n", + funcname, efc_sm_event_name(evt)); + } +} + +#define std_domain_state_decl(...)\ + struct efc_domain *domain = NULL;\ + struct efc *efc = NULL;\ + \ + WARN_ON(!ctx || !ctx->app);\ + domain = ctx->app;\ + WARN_ON(!domain->efc);\ + efc = domain->efc + +void +__efc_domain_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt, + void *arg) +{ + std_domain_state_decl(); + + domain_sm_trace(domain); + + switch (evt) { + case EFC_EVT_ENTER: + domain->attached = false; + break; + + case EFC_EVT_DOMAIN_FOUND: { + u32 i; + struct efc_domain_record *drec = arg; + struct efc_nport *nport; + + u64 my_wwnn = efc->req_wwnn; + u64 my_wwpn = efc->req_wwpn; + __be64 bewwpn; + + if (my_wwpn == 0 || my_wwnn == 0) { + 
efc_log_debug(efc, + "using default hardware WWN configuration\n"); + my_wwpn = efc->def_wwpn; + my_wwnn = efc->def_wwnn; + } + + efc_log_debug(efc, + "Creating base nport using WWPN %016llX WWNN %016llX\n", + my_wwpn, my_wwnn); + + /* Allocate a nport and transition to __efc_nport_allocated */ + nport = efc_nport_alloc(domain, my_wwpn, my_wwnn, U32_MAX, + efc->enable_ini, efc->enable_tgt); + + if (!nport) { + efc_log_err(efc, "efc_nport_alloc() failed\n"); + break; + } + efc_sm_transition(&nport->sm, __efc_nport_allocated, NULL); + + bewwpn = cpu_to_be64(nport->wwpn); + + /* allocate struct efc_nport object for local port + * Note: drec->fc_id is ALPA from read_topology only if loop + */ + if (efc_cmd_nport_alloc(efc, nport, NULL, (uint8_t *)&bewwpn)) { + efc_log_err(efc, "Can't allocate port\n"); + efc_nport_free(nport); + break; + } + + domain->is_loop = drec->is_loop; + + /* + * If the loop position map includes ALPA == 0, + * then we are in a public loop (NL_PORT) + * Note that the first element of the loopmap[] + * contains the count of elements, and if + * ALPA == 0 is present, it will occupy the first + * location after the count. + */ + domain->is_nlport = drec->map.loop[1] == 0x00; + + if (!domain->is_loop) { + /* Initiate HW domain alloc */ + if (efc_cmd_domain_alloc(efc, domain, drec->index)) { + efc_log_err(efc, + "Failed to initiate HW domain allocation\n"); + break; + } + efc_sm_transition(ctx, __efc_domain_wait_alloc, arg); + break; + } + + efc_log_debug(efc, "%s fc_id=%#x speed=%d\n", + drec->is_loop ? + (domain->is_nlport ? + "public-loop" : "loop") : "other", + drec->fc_id, drec->speed); + + nport->fc_id = drec->fc_id; + nport->topology = EFC_NPORT_TOPO_FC_AL; + snprintf(nport->display_name, sizeof(nport->display_name), + "s%06x", drec->fc_id); + + if (efc->enable_ini) { + u32 count = drec->map.loop[0]; + + efc_log_debug(efc, "%d position map entries\n", + count); + for (i = 1; i <= count; i++) { + if (drec->map.loop[i] != drec->fc_id) { + struct efc_node *node; + + efc_log_debug(efc, "%#x -> %#x\n", + drec->fc_id, + drec->map.loop[i]); + node = efc_node_alloc(nport, + drec->map.loop[i], + false, true); + if (!node) { + efc_log_err(efc, + "efc_node_alloc() failed\n"); + break; + } + efc_node_transition(node, + __efc_d_wait_loop, + NULL); + } + } + } + + /* Initiate HW domain alloc */ + if (efc_cmd_domain_alloc(efc, domain, drec->index)) { + efc_log_err(efc, + "Failed to initiate HW domain allocation\n"); + break; + } + efc_sm_transition(ctx, __efc_domain_wait_alloc, arg); + break; + } + default: + __efc_domain_common(__func__, ctx, evt, arg); + } +} + +void +__efc_domain_wait_alloc(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + std_domain_state_decl(); + + domain_sm_trace(domain); + + switch (evt) { + case EFC_EVT_DOMAIN_ALLOC_OK: { + struct fc_els_flogi *sp; + struct efc_nport *nport; + + nport = domain->nport; + if (WARN_ON(!nport)) + return; + + sp = (struct fc_els_flogi *)nport->service_params; + + /* Save the domain service parameters */ + memcpy(domain->service_params + 4, domain->dma.virt, + sizeof(struct fc_els_flogi) - 4); + memcpy(nport->service_params + 4, domain->dma.virt, + sizeof(struct fc_els_flogi) - 4); + + /* + * Update the nport's service parameters, + * user might have specified non-default names + */ + sp->fl_wwpn = cpu_to_be64(nport->wwpn); + sp->fl_wwnn = cpu_to_be64(nport->wwnn); + + /* + * Take the loop topology path, + * unless we are an NL_PORT (public loop) + */ + if (domain->is_loop && !domain->is_nlport) { + /* + * For 
loop, we already have our FC ID + * and don't need fabric login. + * Transition to the allocated state and + * post an event to attach to + * the domain. Note that this breaks the + * normal action/transition + * pattern here to avoid a race with the + * domain attach callback. + */ + /* sm: is_loop / domain_attach */ + efc_sm_transition(ctx, __efc_domain_allocated, NULL); + __efc_domain_attach_internal(domain, nport->fc_id); + break; + } + { + struct efc_node *node; + + /* alloc fabric node, send FLOGI */ + node = efc_node_find(nport, FC_FID_FLOGI); + if (node) { + efc_log_err(efc, + "Fabric Controller node already exists\n"); + break; + } + node = efc_node_alloc(nport, FC_FID_FLOGI, + false, false); + if (!node) { + efc_log_err(efc, + "Error: efc_node_alloc() failed\n"); + } else { + efc_node_transition(node, + __efc_fabric_init, NULL); + } + /* Accept frames */ + domain->req_accept_frames = true; + } + /* sm: / start fabric logins */ + efc_sm_transition(ctx, __efc_domain_allocated, NULL); + break; + } + + case EFC_EVT_DOMAIN_ALLOC_FAIL: + efc_log_err(efc, "%s recv'd waiting for DOMAIN_ALLOC_OK;", + efc_sm_event_name(evt)); + efc_log_err(efc, "shutting down domain\n"); + domain->req_domain_free = true; + break; + + case EFC_EVT_DOMAIN_FOUND: + /* Should not happen */ + break; + + case EFC_EVT_DOMAIN_LOST: + efc_log_debug(efc, + "%s received while waiting for hw_domain_alloc()\n", + efc_sm_event_name(evt)); + efc_sm_transition(ctx, __efc_domain_wait_domain_lost, NULL); + break; + + default: + __efc_domain_common(__func__, ctx, evt, arg); + } +} + +void +__efc_domain_allocated(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + std_domain_state_decl(); + + domain_sm_trace(domain); + + switch (evt) { + case EFC_EVT_DOMAIN_REQ_ATTACH: { + int rc = 0; + u32 fc_id; + + if (WARN_ON(!arg)) + return; + + fc_id = *((u32 *)arg); + efc_log_debug(efc, "Requesting hw domain attach fc_id x%x\n", + fc_id); + /* Update nport lookup */ + rc = xa_err(xa_store(&domain->lookup, fc_id, domain->nport, + GFP_ATOMIC)); + if (rc) { + efc_log_err(efc, "Sport lookup store failed: %d\n", rc); + return; + } + + /* Update display name for the nport */ + efc_node_fcid_display(fc_id, domain->nport->display_name, + sizeof(domain->nport->display_name)); + + /* Issue domain attach call */ + rc = efc_cmd_domain_attach(efc, domain, fc_id); + if (rc) { + efc_log_err(efc, "efc_hw_domain_attach failed: %d\n", + rc); + return; + } + /* sm: / domain_attach */ + efc_sm_transition(ctx, __efc_domain_wait_attach, NULL); + break; + } + + case EFC_EVT_DOMAIN_FOUND: + /* Should not happen */ + efc_log_err(efc, "%s: evt: %d should not happen\n", + __func__, evt); + break; + + case EFC_EVT_DOMAIN_LOST: { + efc_log_debug(efc, + "%s received while in EFC_EVT_DOMAIN_REQ_ATTACH\n", + efc_sm_event_name(evt)); + if (!list_empty(&domain->nport_list)) { + /* + * if there are nports, transition to + * wait state and send shutdown to each + * nport + */ + struct efc_nport *nport = NULL, *nport_next = NULL; + + efc_sm_transition(ctx, __efc_domain_wait_nports_free, + NULL); + list_for_each_entry_safe(nport, nport_next, + &domain->nport_list, + list_entry) { + efc_sm_post_event(&nport->sm, + EFC_EVT_SHUTDOWN, NULL); + } + } else { + /* no nports exist, free domain */ + efc_sm_transition(ctx, __efc_domain_wait_shutdown, + NULL); + if (efc_cmd_domain_free(efc, domain)) + efc_log_err(efc, "hw_domain_free failed\n"); + } + + break; + } + + default: + __efc_domain_common(__func__, ctx, evt, arg); + } +} + +void +__efc_domain_wait_attach(struct 
efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + std_domain_state_decl(); + + domain_sm_trace(domain); + + switch (evt) { + case EFC_EVT_DOMAIN_ATTACH_OK: { + struct efc_node *node = NULL; + struct efc_nport *nport, *next_nport; + unsigned long index; + + /* + * Set domain notify pending state to avoid + * duplicate domain event post + */ + domain->domain_notify_pend = true; + + /* Mark as attached */ + domain->attached = true; + + /* Transition to ready */ + /* sm: / forward event to all nports and nodes */ + efc_sm_transition(ctx, __efc_domain_ready, NULL); + + /* We have an FCFI, so we can accept frames */ + domain->req_accept_frames = true; + + /* + * Notify all nodes that the domain attach request + * has completed + * Note: nport will have already received notification + * of nport attached as a result of the HW's port attach. + */ + list_for_each_entry_safe(nport, next_nport, + &domain->nport_list, list_entry) { + xa_for_each(&nport->lookup, index, node) { + efc_node_post_event(node, + EFC_EVT_DOMAIN_ATTACH_OK, + NULL); + } + } + domain->domain_notify_pend = false; + break; + } + + case EFC_EVT_DOMAIN_ATTACH_FAIL: + efc_log_debug(efc, + "%s received while waiting for hw attach\n", + efc_sm_event_name(evt)); + break; + + case EFC_EVT_DOMAIN_FOUND: + /* Should not happen */ + efc_log_err(efc, "%s: evt: %d should not happen\n", + __func__, evt); + break; + + case EFC_EVT_DOMAIN_LOST: + /* + * Domain lost while waiting for an attach to complete, + * go to a state that waits for the domain attach to + * complete, then handle domain lost + */ + efc_sm_transition(ctx, __efc_domain_wait_domain_lost, NULL); + break; + + case EFC_EVT_DOMAIN_REQ_ATTACH: + /* + * In P2P we can get an attach request from + * the other FLOGI path, so drop this one + */ + break; + + default: + __efc_domain_common(__func__, ctx, evt, arg); + } +} + +void +__efc_domain_ready(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg) +{ + std_domain_state_decl(); + + domain_sm_trace(domain); + + switch (evt) { + case EFC_EVT_ENTER: { + /* start any pending vports */ + if (efc_vport_start(domain)) { + efc_log_debug(domain->efc, + "efc_vport_start didn't start vports\n"); + } + break; + } + case EFC_EVT_DOMAIN_LOST: { + if (!list_empty(&domain->nport_list)) { + /* + * if there are nports, transition to wait state + * and send shutdown to each nport + */ + struct efc_nport *nport = NULL, *nport_next = NULL; + + efc_sm_transition(ctx, __efc_domain_wait_nports_free, + NULL); + list_for_each_entry_safe(nport, nport_next, + &domain->nport_list, + list_entry) { + efc_sm_post_event(&nport->sm, + EFC_EVT_SHUTDOWN, NULL); + } + } else { + /* no nports exist, free domain */ + efc_sm_transition(ctx, __efc_domain_wait_shutdown, + NULL); + if (efc_cmd_domain_free(efc, domain)) + efc_log_err(efc, "hw_domain_free failed\n"); + } + break; + } + + case EFC_EVT_DOMAIN_FOUND: + /* Should not happen */ + efc_log_err(efc, "%s: evt: %d should not happen\n", + __func__, evt); + break; + + case EFC_EVT_DOMAIN_REQ_ATTACH: { + /* can happen during p2p */ + u32 fc_id; + + fc_id = *((u32 *)arg); + + /* Assume that the domain is attached */ + WARN_ON(!domain->attached); + + /* + * Verify that the requested FC_ID + * is the same as the one we're working with + */ + WARN_ON(domain->nport->fc_id != fc_id); + break; + } + + default: + __efc_domain_common(__func__, ctx, evt, arg); + } +} + +void +__efc_domain_wait_nports_free(struct efc_sm_ctx *ctx, enum efc_sm_event evt, + void *arg) +{ + std_domain_state_decl(); + + domain_sm_trace(domain); + 
+ /* Wait for nodes to free prior to the domain shutdown */ + switch (evt) { + case EFC_EVT_ALL_CHILD_NODES_FREE: { + int rc; + + /* sm: / efc_hw_domain_free */ + efc_sm_transition(ctx, __efc_domain_wait_shutdown, NULL); + + /* Request efc_hw_domain_free and wait for completion */ + rc = efc_cmd_domain_free(efc, domain); + if (rc) { + efc_log_err(efc, "efc_hw_domain_free() failed: %d\n", + rc); + } + break; + } + default: + __efc_domain_common_shutdown(__func__, ctx, evt, arg); + } +} + +void +__efc_domain_wait_shutdown(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + std_domain_state_decl(); + + domain_sm_trace(domain); + + switch (evt) { + case EFC_EVT_DOMAIN_FREE_OK: + /* sm: / domain_free */ + if (domain->domain_found_pending) { + /* + * save fcf_wwn and drec from this domain, + * free current domain and allocate + * a new one with the same fcf_wwn + * could use a SLI-4 "re-register VPI" + * operation here? + */ + u64 fcf_wwn = domain->fcf_wwn; + struct efc_domain_record drec = domain->pending_drec; + + efc_log_debug(efc, "Reallocating domain\n"); + domain->req_domain_free = true; + domain = efc_domain_alloc(efc, fcf_wwn); + + if (!domain) { + efc_log_err(efc, + "efc_domain_alloc() failed\n"); + return; + } + /* + * got a new domain; at this point, + * there are at least two domains + * once the req_domain_free flag is processed, + * the associated domain will be removed. + */ + efc_sm_transition(&domain->drvsm, __efc_domain_init, + NULL); + efc_sm_post_event(&domain->drvsm, + EFC_EVT_DOMAIN_FOUND, &drec); + } else { + domain->req_domain_free = true; + } + break; + default: + __efc_domain_common_shutdown(__func__, ctx, evt, arg); + } +} + +void +__efc_domain_wait_domain_lost(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + std_domain_state_decl(); + + domain_sm_trace(domain); + + /* + * Wait for the domain alloc/attach completion + * after receiving a domain lost. 
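+ * Any nports are then shut down; if none exist, the domain is freed.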
+ */ + switch (evt) { + case EFC_EVT_DOMAIN_ALLOC_OK: + case EFC_EVT_DOMAIN_ATTACH_OK: { + if (!list_empty(&domain->nport_list)) { + /* + * if there are nports, transition to + * wait state and send shutdown to each nport + */ + struct efc_nport *nport = NULL, *nport_next = NULL; + + efc_sm_transition(ctx, __efc_domain_wait_nports_free, + NULL); + list_for_each_entry_safe(nport, nport_next, + &domain->nport_list, + list_entry) { + efc_sm_post_event(&nport->sm, + EFC_EVT_SHUTDOWN, NULL); + } + } else { + /* no nports exist, free domain */ + efc_sm_transition(ctx, __efc_domain_wait_shutdown, + NULL); + if (efc_cmd_domain_free(efc, domain)) + efc_log_err(efc, "hw_domain_free() failed\n"); + } + break; + } + case EFC_EVT_DOMAIN_ALLOC_FAIL: + case EFC_EVT_DOMAIN_ATTACH_FAIL: + efc_log_err(efc, "[domain] %-20s: failed\n", + efc_sm_event_name(evt)); + break; + + default: + __efc_domain_common_shutdown(__func__, ctx, evt, arg); + } +} + +void +__efc_domain_attach_internal(struct efc_domain *domain, u32 s_id) +{ + memcpy(domain->dma.virt, + ((uint8_t *)domain->flogi_service_params) + 4, + sizeof(struct fc_els_flogi) - 4); + (void)efc_sm_post_event(&domain->drvsm, EFC_EVT_DOMAIN_REQ_ATTACH, + &s_id); +} + +void +efc_domain_attach(struct efc_domain *domain, u32 s_id) +{ + __efc_domain_attach_internal(domain, s_id); +} + +int +efc_domain_post_event(struct efc_domain *domain, + enum efc_sm_event event, void *arg) +{ + int rc; + bool req_domain_free; + + rc = efc_sm_post_event(&domain->drvsm, event, arg); + + req_domain_free = domain->req_domain_free; + domain->req_domain_free = false; + + if (req_domain_free) + efc_domain_free(domain); + + return rc; +} + +static void +efct_domain_process_pending(struct efc_domain *domain) +{ + struct efc *efc = domain->efc; + struct efc_hw_sequence *seq = NULL; + u32 processed = 0; + unsigned long flags = 0; + + for (;;) { + /* need to check for hold frames condition after each frame + * processed because any given frame could cause a transition + * to a state that holds frames + */ + if (efc->hold_frames) + break; + + /* Get next frame/sequence */ + spin_lock_irqsave(&efc->pend_frames_lock, flags); + + if (!list_empty(&efc->pend_frames)) { + seq = list_first_entry(&efc->pend_frames, + struct efc_hw_sequence, list_entry); + list_del(&seq->list_entry); + } + + if (!seq) { + processed = efc->pend_frames_processed; + efc->pend_frames_processed = 0; + spin_unlock_irqrestore(&efc->pend_frames_lock, flags); + break; + } + efc->pend_frames_processed++; + + spin_unlock_irqrestore(&efc->pend_frames_lock, flags); + + /* now dispatch frame(s) to dispatch function */ + if (efc_domain_dispatch_frame(domain, seq)) + efc->tt.hw_seq_free(efc, seq); + + seq = NULL; + } + + if (processed != 0) + efc_log_debug(efc, "%u domain frames held and processed\n", + processed); +} + +void +efc_dispatch_frame(struct efc *efc, struct efc_hw_sequence *seq) +{ + struct efc_domain *domain = efc->domain; + + /* + * If we are holding frames or the domain is not yet registered or + * there's already frames on the pending list, + * then add the new frame to pending list + */ + if (!domain || efc->hold_frames || !list_empty(&efc->pend_frames)) { + unsigned long flags = 0; + + spin_lock_irqsave(&efc->pend_frames_lock, flags); + INIT_LIST_HEAD(&seq->list_entry); + list_add_tail(&seq->list_entry, &efc->pend_frames); + spin_unlock_irqrestore(&efc->pend_frames_lock, flags); + + if (domain) { + /* immediately process pending frames */ + efct_domain_process_pending(domain); + } + } else { + /* + * We are not 
holding frames and pending list is empty, + * just process frame. A non-zero return means the frame + * was not handled, so clean up + */ + if (efc_domain_dispatch_frame(domain, seq)) + efc->tt.hw_seq_free(efc, seq); + } +} + +int +efc_domain_dispatch_frame(void *arg, struct efc_hw_sequence *seq) +{ + struct efc_domain *domain = (struct efc_domain *)arg; + struct efc *efc = domain->efc; + struct fc_frame_header *hdr; + struct efc_node *node = NULL; + struct efc_nport *nport = NULL; + unsigned long flags = 0; + u32 s_id, d_id, rc = EFC_HW_SEQ_FREE; + + if (!seq->header || !seq->header->dma.virt || !seq->payload->dma.virt) { + efc_log_err(efc, "Sequence header or payload is null\n"); + return rc; + } + + hdr = seq->header->dma.virt; + + /* extract the s_id and d_id */ + s_id = ntoh24(hdr->fh_s_id); + d_id = ntoh24(hdr->fh_d_id); + + spin_lock_irqsave(&efc->lock, flags); + + nport = efc_nport_find(domain, d_id); + if (!nport) { + if (hdr->fh_type == FC_TYPE_FCP) { + /* Drop frame */ + efc_log_warn(efc, "FCP frame with invalid d_id x%x\n", + d_id); + goto out; + } + + /* p2p will use this case */ + nport = domain->nport; + if (!nport || !kref_get_unless_zero(&nport->ref)) { + efc_log_err(efc, "Physical nport is NULL\n"); + goto out; + } + } + + /* Look up the node given the remote s_id */ + node = efc_node_find(nport, s_id); + + /* If not found, then create a new node */ + if (!node) { + /* + * If this is solicited data or control based on R_CTL and + * there is no node context, then we can drop the frame + */ + if ((hdr->fh_r_ctl == FC_RCTL_DD_SOL_DATA) || + (hdr->fh_r_ctl == FC_RCTL_DD_SOL_CTL)) { + efc_log_debug(efc, + "solicited data/ctrl frame without node, drop\n"); + goto out_release; + } + + node = efc_node_alloc(nport, s_id, false, false); + if (!node) { + efc_log_err(efc, "efc_node_alloc() failed\n"); + goto out_release; + } + /* don't send PLOGI on efc_d_init entry */ + efc_node_init_device(node, false); + } + + if (node->hold_frames || !list_empty(&node->pend_frames)) { + + /* add frame to node's pending list */ + spin_lock_irqsave(&node->pend_frames_lock, flags); + INIT_LIST_HEAD(&seq->list_entry); + list_add_tail(&seq->list_entry, &node->pend_frames); + spin_unlock_irqrestore(&node->pend_frames_lock, flags); + rc = EFC_HW_SEQ_HOLD; + goto out_release; + } + + /* now dispatch frame to the node frame handler */ + efc_node_dispatch_frame(node, seq); + +out_release: + kref_put(&nport->ref, nport->release); +out: + spin_unlock_irqrestore(&efc->lock, flags); + return rc; +} + +void +efc_node_dispatch_frame(void *arg, struct efc_hw_sequence *seq) +{ + struct fc_frame_header *hdr = seq->header->dma.virt; + u32 port_id; + struct efc_node *node = (struct efc_node *)arg; + struct efc *efc = node->efc; + + port_id = ntoh24(hdr->fh_s_id); + + if (WARN_ON(port_id != node->rnode.fc_id)) + return; + + if ((!(ntoh24(hdr->fh_f_ctl) & FC_FC_END_SEQ)) || + !(ntoh24(hdr->fh_f_ctl) & FC_FC_SEQ_INIT)) { + node_printf(node, + "Dropping frame hdr = %08x %08x %08x %08x %08x %08x\n", + cpu_to_be32(((u32 *)hdr)[0]), + cpu_to_be32(((u32 *)hdr)[1]), + cpu_to_be32(((u32 *)hdr)[2]), + cpu_to_be32(((u32 *)hdr)[3]), + cpu_to_be32(((u32 *)hdr)[4]), + cpu_to_be32(((u32 *)hdr)[5])); + return; + } + + switch (hdr->fh_r_ctl) { + case FC_RCTL_ELS_REQ: + case FC_RCTL_ELS_REP: + efc_node_recv_els_frame(node, seq); + break; + + case FC_RCTL_BA_ABTS: + case FC_RCTL_BA_ACC: + case FC_RCTL_BA_RJT: + case FC_RCTL_BA_NOP: + efc_log_err(efc, "Received ABTS:\n"); + break; + + case FC_RCTL_DD_UNSOL_CMD: + case FC_RCTL_DD_UNSOL_CTL:
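+ /* Unsolicited command/control: demultiplex on the FC-4 type */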
+ switch (hdr->fh_type) { + case FC_TYPE_FCP: + if ((hdr->fh_r_ctl & 0xf) == FC_RCTL_DD_UNSOL_CMD) { + if (!node->fcp_enabled) { + efc_node_recv_fcp_cmd(node, seq); + break; + } + efc_log_err(efc, + "Received FCP CMD. Dropping IO\n"); + } else if ((hdr->fh_r_ctl & 0xf) == + FC_RCTL_DD_SOL_DATA) { + node_printf(node, + "solicited data received. Dropping IO\n"); + } + break; + + case FC_TYPE_CT: + efc_node_recv_ct_frame(node, seq); + break; + default: + break; + } + break; + default: + efc_log_err(efc, "Unhandled frame rctl: %02x\n", hdr->fh_r_ctl); + } +} diff --git a/drivers/scsi/elx/libefc/efc_domain.h b/drivers/scsi/elx/libefc/efc_domain.h new file mode 100644 index 000000000000..5468ea7ab19b --- /dev/null +++ b/drivers/scsi/elx/libefc/efc_domain.h @@ -0,0 +1,54 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2021 Broadcom. All Rights Reserved. The term + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. + */ + +/* + * Declare driver's domain handler exported interface + */ + +#ifndef __EFCT_DOMAIN_H__ +#define __EFCT_DOMAIN_H__ + +struct efc_domain * +efc_domain_alloc(struct efc *efc, uint64_t fcf_wwn); +void +efc_domain_free(struct efc_domain *domain); + +void +__efc_domain_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg); +void +__efc_domain_wait_alloc(struct efc_sm_ctx *ctx, enum efc_sm_event evt, + void *arg); +void +__efc_domain_allocated(struct efc_sm_ctx *ctx, enum efc_sm_event evt, + void *arg); +void +__efc_domain_wait_attach(struct efc_sm_ctx *ctx, enum efc_sm_event evt, + void *arg); +void +__efc_domain_ready(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg); +void +__efc_domain_wait_nports_free(struct efc_sm_ctx *ctx, enum efc_sm_event evt, + void *arg); +void +__efc_domain_wait_shutdown(struct efc_sm_ctx *ctx, enum efc_sm_event evt, + void *arg); +void +__efc_domain_wait_domain_lost(struct efc_sm_ctx *ctx, enum efc_sm_event evt, + void *arg); +void +efc_domain_attach(struct efc_domain *domain, u32 s_id); +int +efc_domain_post_event(struct efc_domain *domain, enum efc_sm_event event, + void *arg); +void +__efc_domain_attach_internal(struct efc_domain *domain, u32 s_id); + +int +efc_domain_dispatch_frame(void *arg, struct efc_hw_sequence *seq); +void +efc_node_dispatch_frame(void *arg, struct efc_hw_sequence *seq); + +#endif /* __EFCT_DOMAIN_H__ */ From patchwork Thu Jan 7 22:58:45 2021 X-Patchwork-Submitter: James Smart X-Patchwork-Id: 358589
From: James Smart To: linux-scsi@vger.kernel.org Cc: James Smart , Ram Vegesna , Daniel Wagner Subject: [PATCH v7 11/31] elx: libefc: SLI and FC PORT state machine interfaces Date: Thu, 7 Jan 2021 14:58:45 -0800 Message-Id: <20210107225905.18186-12-jsmart2021@gmail.com> In-Reply-To: <20210107225905.18186-1-jsmart2021@gmail.com> References: <20210107225905.18186-1-jsmart2021@gmail.com> List-ID: X-Mailing-List: linux-scsi@vger.kernel.org This patch continues the libefc library population. This patch adds library interface definitions for: - SLI and FC port (aka n_port_id) registration, allocation and deallocation.
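As a rough usage sketch only (in practice the domain and fabric state machines drive these calls through events; 'domain', 'wwpn', 'wwnn' and 'fc_id' are assumed to be in scope, and error handling is elided):

    struct efc_nport *nport;

    /* Allocate the port object; U32_MAX means no FC_ID granted yet */
    nport = efc_nport_alloc(domain, wwpn, wwnn, U32_MAX,
                            false /* ini */, true /* tgt */);
    if (!nport)
        return -ENOMEM;

    /* Once FLOGI/FDISC yields an address, register it with the hw */
    if (efc_nport_attach(nport, fc_id))
        efc_nport_free(nport);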
Co-developed-by: Ram Vegesna Signed-off-by: Ram Vegesna Signed-off-by: James Smart Reviewed-by: Daniel Wagner --- drivers/scsi/elx/libefc/efc_nport.c | 792 ++++++++++++++++++++++++++++ drivers/scsi/elx/libefc/efc_nport.h | 50 ++ 2 files changed, 842 insertions(+) create mode 100644 drivers/scsi/elx/libefc/efc_nport.c create mode 100644 drivers/scsi/elx/libefc/efc_nport.h diff --git a/drivers/scsi/elx/libefc/efc_nport.c b/drivers/scsi/elx/libefc/efc_nport.c new file mode 100644 index 000000000000..ec84198fb209 --- /dev/null +++ b/drivers/scsi/elx/libefc/efc_nport.c @@ -0,0 +1,792 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2021 Broadcom. All Rights Reserved. The term + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. + */ + +/* + * NPORT + * + * Port object for physical port and NPIV ports. + */ + +/* + * NPORT REFERENCE COUNTING + * + * An nport reference should be taken when: + * - an nport is allocated + * - a vport populates its associated nport + * - a remote node is allocated + * - an unsolicited frame is processed + * The reference should be dropped when: + * - the unsolicited frame processing is done + * - the remote node is removed + * - the vport is removed + * - the nport is removed + */ + +#include "efc.h" + +void +efc_nport_cb(void *arg, int event, void *data) +{ + struct efc *efc = arg; + struct efc_nport *nport = data; + unsigned long flags = 0; + + efc_log_debug(efc, "nport event: %s\n", efc_sm_event_name(event)); + + spin_lock_irqsave(&efc->lock, flags); + efc_sm_post_event(&nport->sm, event, NULL); + spin_unlock_irqrestore(&efc->lock, flags); +} + +static struct efc_nport * +efc_nport_find_wwn(struct efc_domain *domain, uint64_t wwnn, uint64_t wwpn) +{ + struct efc_nport *nport = NULL; + + /* Find an nport, given the WWNN and WWPN */ + list_for_each_entry(nport, &domain->nport_list, list_entry) { + if (nport->wwnn == wwnn && nport->wwpn == wwpn) + return nport; + } + return NULL; +} + +static void +_efc_nport_free(struct kref *arg) +{ + struct efc_nport *nport = container_of(arg, struct efc_nport, ref); + + kfree(nport); +} + +struct efc_nport * +efc_nport_alloc(struct efc_domain *domain, uint64_t wwpn, uint64_t wwnn, + u32 fc_id, bool enable_ini, bool enable_tgt) +{ + struct efc_nport *nport; + + if (domain->efc->enable_ini) + enable_ini = 0; + + /* Return a failure if this nport has already been allocated */ + if ((wwpn != 0) || (wwnn != 0)) { + nport = efc_nport_find_wwn(domain, wwnn, wwpn); + if (nport) { + efc_log_err(domain->efc, + "Err: NPORT %016llX %016llX already allocated\n", + wwnn, wwpn); + return NULL; + } + } + + nport = kzalloc(sizeof(*nport), GFP_ATOMIC); + if (!nport) + return nport; + + /* initialize refcount */ + kref_init(&nport->ref); + nport->release = _efc_nport_free; + + nport->efc = domain->efc; + snprintf(nport->display_name, sizeof(nport->display_name), "------"); + nport->domain = domain; + xa_init(&nport->lookup); + nport->instance_index = domain->nport_count++; + nport->sm.app = nport; + nport->enable_ini = enable_ini; + nport->enable_tgt = enable_tgt; + nport->enable_rscn = (nport->enable_ini || + (nport->enable_tgt && enable_target_rscn(nport->efc))); + + /* Copy service parameters from domain */ + memcpy(nport->service_params, domain->service_params, + sizeof(struct fc_els_flogi)); + + /* Update requested fc_id */ + nport->fc_id = fc_id; + + /* Update the nport's service parameters for the new WWNs */ + nport->wwpn = wwpn; + nport->wwnn = wwnn; + snprintf(nport->wwnn_str, sizeof(nport->wwnn_str), "%016llX",
(unsigned long long)wwnn); + + /* + * if this is the "first" nport of the domain, + * then make it the "phys" nport + */ + if (list_empty(&domain->nport_list)) + domain->nport = nport; + + INIT_LIST_HEAD(&nport->list_entry); + list_add_tail(&nport->list_entry, &domain->nport_list); + + kref_get(&domain->ref); + + efc_log_debug(domain->efc, "New Nport [%s]\n", nport->display_name); + + return nport; +} + +void +efc_nport_free(struct efc_nport *nport) +{ + struct efc_domain *domain; + + if (!nport) + return; + + domain = nport->domain; + efc_log_debug(domain->efc, "[%s] free nport\n", nport->display_name); + list_del(&nport->list_entry); + /* + * if this is the physical nport, + * then clear it out of the domain + */ + if (nport == domain->nport) + domain->nport = NULL; + + xa_destroy(&nport->lookup); + xa_erase(&domain->lookup, nport->fc_id); + + if (list_empty(&domain->nport_list)) + efc_domain_post_event(domain, EFC_EVT_ALL_CHILD_NODES_FREE, + NULL); + + kref_put(&domain->ref, domain->release); + kref_put(&nport->ref, nport->release); + +} + +struct efc_nport * +efc_nport_find(struct efc_domain *domain, u32 d_id) +{ + struct efc_nport *nport; + + /* Find a nport object, given an FC_ID */ + nport = xa_load(&domain->lookup, d_id); + if (!nport || !kref_get_unless_zero(&nport->ref)) + return NULL; + + return nport; +} + +int +efc_nport_attach(struct efc_nport *nport, u32 fc_id) +{ + int rc; + struct efc_node *node; + struct efc *efc = nport->efc; + unsigned long index; + + /* Set our lookup */ + rc = xa_err(xa_store(&nport->domain->lookup, fc_id, nport, GFP_ATOMIC)); + if (rc) { + efc_log_err(efc, "Sport lookup store failed: %d\n", rc); + return rc; + } + + /* Update our display_name */ + efc_node_fcid_display(fc_id, nport->display_name, + sizeof(nport->display_name)); + + xa_for_each(&nport->lookup, index, node) { + efc_node_update_display_name(node); + } + + efc_log_debug(nport->efc, "[%s] attach nport: fc_id x%06x\n", + nport->display_name, fc_id); + + /* Register a nport, given an FC_ID */ + rc = efc_cmd_nport_attach(efc, nport, fc_id); + if (rc != EFC_HW_RTN_SUCCESS) { + efc_log_err(nport->efc, + "efc_hw_port_attach failed: %d\n", rc); + return EFC_FAIL; + } + return EFC_SUCCESS; +} + +static void +efc_nport_shutdown(struct efc_nport *nport) +{ + struct efc *efc = nport->efc; + struct efc_node *node; + unsigned long index; + + xa_for_each(&nport->lookup, index, node) { + if (!(node->rnode.fc_id == FC_FID_FLOGI && nport->is_vport)) { + efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL); + continue; + } + + /* + * If this is a vport, logout of the fabric + * controller so that it deletes the vport + * on the switch. 
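+ * A LOGO is attempted only while the link is up.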
+ */ + /* if link is down, don't send logo */ + if (efc->link_status == EFC_LINK_STATUS_DOWN) { + efc_node_post_event(node, EFC_EVT_SHUTDOWN, NULL); + continue; + } + + efc_log_debug(efc, "[%s] nport shutdown vport, send logo\n", + node->display_name); + + if (!efc_send_logo(node)) { + /* sent LOGO, wait for response */ + efc_node_transition(node, __efc_d_wait_logo_rsp, NULL); + continue; + } + + /* + * failed to send LOGO, + * go ahead and clean up the node anyway + */ + node_printf(node, "Failed to send LOGO\n"); + efc_node_post_event(node, EFC_EVT_SHUTDOWN_EXPLICIT_LOGO, NULL); + } +} + +static void +efc_vport_link_down(struct efc_nport *nport) +{ + struct efc *efc = nport->efc; + struct efc_vport_spec *vport; + + /* Clear the nport reference in the vport specification */ + list_for_each_entry(vport, &efc->vport_list, list_entry) { + if (vport->nport == nport) { + kref_put(&nport->ref, nport->release); + vport->nport = NULL; + break; + } + } +} + +static void +__efc_nport_common(const char *funcname, struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_nport *nport = ctx->app; + struct efc_domain *domain = nport->domain; + struct efc *efc = nport->efc; + + switch (evt) { + case EFC_EVT_ENTER: + case EFC_EVT_REENTER: + case EFC_EVT_EXIT: + case EFC_EVT_ALL_CHILD_NODES_FREE: + break; + case EFC_EVT_NPORT_ATTACH_OK: + efc_sm_transition(ctx, __efc_nport_attached, NULL); + break; + case EFC_EVT_SHUTDOWN: + /* Flag this nport as shutting down */ + nport->shutting_down = true; + + if (nport->is_vport) + efc_vport_link_down(nport); + + if (xa_empty(&nport->lookup)) { + /* Remove the nport from the domain's lookup table */ + xa_erase(&domain->lookup, nport->fc_id); + efc_sm_transition(ctx, __efc_nport_wait_port_free, + NULL); + if (efc_cmd_nport_free(efc, nport)) { + efc_log_debug(nport->efc, + "efc_hw_port_free failed\n"); + /* Not much we can do, free the nport anyway */ + efc_nport_free(nport); + } + } else { + /* sm: node list is not empty / shutdown nodes */ + efc_sm_transition(ctx, + __efc_nport_wait_shutdown, NULL); + efc_nport_shutdown(nport); + } + break; + default: + efc_log_debug(nport->efc, "[%s] %-20s %-20s not handled\n", + nport->display_name, funcname, + efc_sm_event_name(evt)); + } +} + +void +__efc_nport_allocated(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_nport *nport = ctx->app; + struct efc_domain *domain = nport->domain; + + nport_sm_trace(nport); + + switch (evt) { + /* the physical nport is attached */ + case EFC_EVT_NPORT_ATTACH_OK: + WARN_ON(nport != domain->nport); + efc_sm_transition(ctx, __efc_nport_attached, NULL); + break; + + case EFC_EVT_NPORT_ALLOC_OK: + /* ignore */ + break; + default: + __efc_nport_common(__func__, ctx, evt, arg); + } +} + +void +__efc_nport_vport_init(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_nport *nport = ctx->app; + struct efc *efc = nport->efc; + + nport_sm_trace(nport); + + switch (evt) { + case EFC_EVT_ENTER: { + __be64 be_wwpn = cpu_to_be64(nport->wwpn); + + if (nport->wwpn == 0) + efc_log_debug(efc, "vport: letting f/w select WWN\n"); + + if (nport->fc_id != U32_MAX) { + efc_log_debug(efc, "vport: hard coding port id: %x\n", + nport->fc_id); + } + + efc_sm_transition(ctx, __efc_nport_vport_wait_alloc, NULL); + /* If wwpn is zero, then we'll let the f/w select the WWN */ + if (efc_cmd_nport_alloc(efc, nport, nport->domain, + nport->wwpn == 0 ?
NULL : + (uint8_t *)&be_wwpn)) { + efc_log_err(efc, "Can't allocate port\n"); + break; + } + + break; + } + default: + __efc_nport_common(__func__, ctx, evt, arg); + } +} + +void +__efc_nport_vport_wait_alloc(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_nport *nport = ctx->app; + struct efc *efc = nport->efc; + + nport_sm_trace(nport); + + switch (evt) { + case EFC_EVT_NPORT_ALLOC_OK: { + struct fc_els_flogi *sp; + + sp = (struct fc_els_flogi *)nport->service_params; + /* + * If we let the f/w assign the WWNs, + * then update the nport WWNs with those returned by the hw + */ + if (nport->wwnn == 0) { + nport->wwnn = be64_to_cpu(nport->sli_wwnn); + nport->wwpn = be64_to_cpu(nport->sli_wwpn); + snprintf(nport->wwnn_str, sizeof(nport->wwnn_str), + "%016llX", nport->wwnn); + } + + /* Update the nport's service parameters */ + sp->fl_wwpn = cpu_to_be64(nport->wwpn); + sp->fl_wwnn = cpu_to_be64(nport->wwnn); + + /* + * if nport->fc_id is uninitialized, + * then request that the fabric node use FDISC + * to find an fc_id. + * Otherwise we're restoring vports, or we're in + * fabric emulation mode, so attach the fc_id + */ + if (nport->fc_id == U32_MAX) { + struct efc_node *fabric; + + fabric = efc_node_alloc(nport, FC_FID_FLOGI, false, + false); + if (!fabric) { + efc_log_err(efc, "efc_node_alloc() failed\n"); + return; + } + efc_node_transition(fabric, __efc_vport_fabric_init, + NULL); + } else { + snprintf(nport->wwnn_str, sizeof(nport->wwnn_str), + "%016llX", nport->wwnn); + efc_nport_attach(nport, nport->fc_id); + } + efc_sm_transition(ctx, __efc_nport_vport_allocated, NULL); + break; + } + default: + __efc_nport_common(__func__, ctx, evt, arg); + } +} + +void +__efc_nport_vport_allocated(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_nport *nport = ctx->app; + struct efc *efc = nport->efc; + + nport_sm_trace(nport); + + /* + * This state is entered after the nport is allocated; + * it then waits for a fabric node + * FDISC to complete, which requests a nport attach. + * The nport attach complete is handled in this state.
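+ * (EFC_EVT_NPORT_ATTACH_OK below is forwarded to the fabric node.)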
+ */ + switch (evt) { + case EFC_EVT_NPORT_ATTACH_OK: { + struct efc_node *node; + + /* Find our fabric node, and forward this event */ + node = efc_node_find(nport, FC_FID_FLOGI); + if (!node) { + efc_log_debug(efc, "can't find node %06x\n", + FC_FID_FLOGI); + break; + } + /* sm: / forward nport attach to fabric node */ + efc_node_post_event(node, evt, NULL); + efc_sm_transition(ctx, __efc_nport_attached, NULL); + break; + } + default: + __efc_nport_common(__func__, ctx, evt, arg); + } +} + +static void +efc_vport_update_spec(struct efc_nport *nport) +{ + struct efc *efc = nport->efc; + struct efc_vport_spec *vport; + unsigned long flags = 0; + + spin_lock_irqsave(&efc->vport_lock, flags); + list_for_each_entry(vport, &efc->vport_list, list_entry) { + if (vport->nport == nport) { + vport->wwnn = nport->wwnn; + vport->wwpn = nport->wwpn; + vport->tgt_data = nport->tgt_data; + vport->ini_data = nport->ini_data; + break; + } + } + spin_unlock_irqrestore(&efc->vport_lock, flags); +} + +void +__efc_nport_attached(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_nport *nport = ctx->app; + struct efc *efc = nport->efc; + + nport_sm_trace(nport); + + switch (evt) { + case EFC_EVT_ENTER: { + struct efc_node *node; + unsigned long index; + + efc_log_debug(efc, + "[%s] NPORT attached WWPN %016llX WWNN %016llX\n", + nport->display_name, + nport->wwpn, nport->wwnn); + + xa_for_each(&nport->lookup, index, node) { + efc_node_update_display_name(node); + } + + nport->tgt_id = nport->fc_id; + + efc->tt.new_nport(efc, nport); + + /* + * Update the vport (if it's not the physical nport) + * parameters + */ + if (nport->is_vport) + efc_vport_update_spec(nport); + break; + } + + case EFC_EVT_EXIT: + efc_log_debug(efc, + "[%s] NPORT detached WWPN %016llX WWNN %016llX\n", + nport->display_name, + nport->wwpn, nport->wwnn); + + efc->tt.del_nport(efc, nport); + break; + default: + __efc_nport_common(__func__, ctx, evt, arg); + } +} + + +void +__efc_nport_wait_shutdown(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_nport *nport = ctx->app; + struct efc_domain *domain = nport->domain; + struct efc *efc = nport->efc; + + nport_sm_trace(nport); + + switch (evt) { + case EFC_EVT_NPORT_ALLOC_OK: + case EFC_EVT_NPORT_ALLOC_FAIL: + case EFC_EVT_NPORT_ATTACH_OK: + case EFC_EVT_NPORT_ATTACH_FAIL: + /* ignore these events - just wait for the all free event */ + break; + + case EFC_EVT_ALL_CHILD_NODES_FREE: { + /* + * Remove the nport from the domain's + * lookup table + */ + xa_erase(&domain->lookup, nport->fc_id); + efc_sm_transition(ctx, __efc_nport_wait_port_free, NULL); + if (efc_cmd_nport_free(efc, nport)) { + efc_log_err(nport->efc, "efc_hw_port_free failed\n"); + /* Not much we can do, free the nport anyway */ + efc_nport_free(nport); + } + break; + } + default: + __efc_nport_common(__func__, ctx, evt, arg); + } +} + +void +__efc_nport_wait_port_free(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_nport *nport = ctx->app; + + nport_sm_trace(nport); + + switch (evt) { + case EFC_EVT_NPORT_ATTACH_OK: + /* Ignore as we are waiting for the free CB */ + break; + case EFC_EVT_NPORT_FREE_OK: { + /* All done, free myself */ + efc_nport_free(nport); + break; + } + default: + __efc_nport_common(__func__, ctx, evt, arg); + } +} + +static int +efc_vport_nport_alloc(struct efc_domain *domain, struct efc_vport_spec *vport) +{ + struct efc_nport *nport; + + lockdep_assert_held(&domain->efc->lock); + + nport = efc_nport_alloc(domain,
vport->wwpn, vport->wwnn, vport->fc_id, + vport->enable_ini, vport->enable_tgt); + vport->nport = nport; + if (!nport) + return EFC_FAIL; + + kref_get(&nport->ref); + nport->is_vport = true; + nport->tgt_data = vport->tgt_data; + nport->ini_data = vport->ini_data; + + efc_sm_transition(&nport->sm, __efc_nport_vport_init, NULL); + + return EFC_SUCCESS; +} + +int +efc_vport_start(struct efc_domain *domain) +{ + struct efc *efc = domain->efc; + struct efc_vport_spec *vport; + struct efc_vport_spec *next; + int rc = EFC_SUCCESS; + unsigned long flags = 0; + + /* Use the vport spec to find the associated vports and start them */ + spin_lock_irqsave(&efc->vport_lock, flags); + list_for_each_entry_safe(vport, next, &efc->vport_list, list_entry) { + if (!vport->nport) { + if (efc_vport_nport_alloc(domain, vport)) + rc = EFC_FAIL; + } + } + spin_unlock_irqrestore(&efc->vport_lock, flags); + + return rc; +} + +int +efc_nport_vport_new(struct efc_domain *domain, uint64_t wwpn, uint64_t wwnn, + u32 fc_id, bool ini, bool tgt, void *tgt_data, + void *ini_data) +{ + struct efc *efc = domain->efc; + struct efc_vport_spec *vport; + int rc = EFC_SUCCESS; + unsigned long flags = 0; + + if (ini && domain->efc->enable_ini == 0) { + efc_log_debug(efc, + "driver initiator functionality not enabled\n"); + return EFC_FAIL; + } + + if (tgt && domain->efc->enable_tgt == 0) { + efc_log_debug(efc, + "driver target functionality not enabled\n"); + return EFC_FAIL; + } + + /* + * Create a vport spec if we need to recreate + * this vport after a link up event + */ + vport = efc_vport_create_spec(domain->efc, wwnn, wwpn, fc_id, ini, tgt, + tgt_data, ini_data); + if (!vport) { + efc_log_err(efc, "failed to create vport object entry\n"); + return EFC_FAIL; + } + + spin_lock_irqsave(&efc->lock, flags); + rc = efc_vport_nport_alloc(domain, vport); + spin_unlock_irqrestore(&efc->lock, flags); + + return rc; +} + +int +efc_nport_vport_del(struct efc *efc, struct efc_domain *domain, + u64 wwpn, uint64_t wwnn) +{ + struct efc_nport *nport; + int found = 0; + struct efc_vport_spec *vport; + struct efc_vport_spec *next; + unsigned long flags = 0; + + spin_lock_irqsave(&efc->vport_lock, flags); + /* walk the efc_vport_list and remove from there */ + list_for_each_entry_safe(vport, next, &efc->vport_list, list_entry) { + if (vport->wwpn == wwpn && vport->wwnn == wwnn) { + list_del(&vport->list_entry); + kfree(vport); + break; + } + } + spin_unlock_irqrestore(&efc->vport_lock, flags); + + if (!domain) { + /* No domain means no nport to look for */ + return EFC_SUCCESS; + } + + spin_lock_irqsave(&efc->lock, flags); + list_for_each_entry(nport, &domain->nport_list, list_entry) { + if (nport->wwpn == wwpn && nport->wwnn == wwnn) { + found = 1; + break; + } + } + + if (found) { + kref_put(&nport->ref, nport->release); + /* Shutdown this NPORT */ + efc_sm_post_event(&nport->sm, EFC_EVT_SHUTDOWN, NULL); + } + spin_unlock_irqrestore(&efc->lock, flags); + return EFC_SUCCESS; +} + +void +efc_vport_del_all(struct efc *efc) +{ + struct efc_vport_spec *vport; + struct efc_vport_spec *next; + unsigned long flags = 0; + + spin_lock_irqsave(&efc->vport_lock, flags); + list_for_each_entry_safe(vport, next, &efc->vport_list, list_entry) { + list_del(&vport->list_entry); + kfree(vport); + } + spin_unlock_irqrestore(&efc->vport_lock, flags); +} + +struct efc_vport_spec * +efc_vport_create_spec(struct efc *efc, uint64_t wwnn, uint64_t wwpn, + u32 fc_id, bool enable_ini, + bool enable_tgt, void *tgt_data, void *ini_data) +{ + struct efc_vport_spec *vport; 
+ unsigned long flags = 0; + + /* + * walk the efc_vport_list and return failure + * if a valid (non-zero WWPN and WWNN) vport entry + * is already created + */ + spin_lock_irqsave(&efc->vport_lock, flags); + list_for_each_entry(vport, &efc->vport_list, list_entry) { + if ((wwpn && vport->wwpn == wwpn) && + (wwnn && vport->wwnn == wwnn)) { + efc_log_err(efc, + "Failed: VPORT %016llX %016llX already allocated\n", + wwnn, wwpn); + spin_unlock_irqrestore(&efc->vport_lock, flags); + return NULL; + } + } + + vport = kzalloc(sizeof(*vport), GFP_ATOMIC); + if (!vport) { + spin_unlock_irqrestore(&efc->vport_lock, flags); + return NULL; + } + + vport->wwnn = wwnn; + vport->wwpn = wwpn; + vport->fc_id = fc_id; + vport->enable_tgt = enable_tgt; + vport->enable_ini = enable_ini; + vport->tgt_data = tgt_data; + vport->ini_data = ini_data; + + INIT_LIST_HEAD(&vport->list_entry); + list_add_tail(&vport->list_entry, &efc->vport_list); + spin_unlock_irqrestore(&efc->vport_lock, flags); + return vport; +} diff --git a/drivers/scsi/elx/libefc/efc_nport.h b/drivers/scsi/elx/libefc/efc_nport.h new file mode 100644 index 000000000000..b575ea205bbf --- /dev/null +++ b/drivers/scsi/elx/libefc/efc_nport.h @@ -0,0 +1,50 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2021 Broadcom. All Rights Reserved. The term + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. + */ + +/** + * EFC FC port (NPORT) exported declarations + * + */ + +#ifndef __EFC_NPORT_H__ +#define __EFC_NPORT_H__ + +struct efc_nport * +efc_nport_find(struct efc_domain *domain, u32 d_id); +struct efc_nport * +efc_nport_alloc(struct efc_domain *domain, uint64_t wwpn, uint64_t wwnn, + u32 fc_id, bool enable_ini, bool enable_tgt); +void +efc_nport_free(struct efc_nport *nport); +int +efc_nport_attach(struct efc_nport *nport, u32 fc_id); + +void +__efc_nport_allocated(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_nport_wait_shutdown(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_nport_wait_port_free(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_nport_vport_init(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_nport_vport_wait_alloc(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_nport_vport_allocated(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_nport_attached(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); + +int +efc_vport_start(struct efc_domain *domain); + +#endif /* __EFC_NPORT_H__ */ From patchwork Thu Jan 7 22:58:47 2021 X-Patchwork-Submitter: James Smart X-Patchwork-Id: 358591
From: James Smart To: linux-scsi@vger.kernel.org Cc: James Smart , Ram Vegesna , Hannes Reinecke , Daniel Wagner Subject: [PATCH v7 13/31] elx: libefc: Fabric node state machine interfaces Date: Thu, 7 Jan 2021 14:58:47 -0800 Message-Id: <20210107225905.18186-14-jsmart2021@gmail.com> In-Reply-To: <20210107225905.18186-1-jsmart2021@gmail.com> References: <20210107225905.18186-1-jsmart2021@gmail.com> List-ID: X-Mailing-List: linux-scsi@vger.kernel.org This patch continues the libefc library population. This patch adds library interface definitions for: - Fabric node initialization and logins. - Name/Directory Services node. - Fabric Controller node to process RSCN events.
These are all interactions with remote ports that correspond to well-known fabric entities Co-developed-by: Ram Vegesna Signed-off-by: Ram Vegesna Signed-off-by: James Smart Reviewed-by: Hannes Reinecke Reviewed-by: Daniel Wagner --- drivers/scsi/elx/libefc/efc_fabric.c | 1565 ++++++++++++++++++++++++++ drivers/scsi/elx/libefc/efc_fabric.h | 116 ++ 2 files changed, 1681 insertions(+) create mode 100644 drivers/scsi/elx/libefc/efc_fabric.c create mode 100644 drivers/scsi/elx/libefc/efc_fabric.h diff --git a/drivers/scsi/elx/libefc/efc_fabric.c b/drivers/scsi/elx/libefc/efc_fabric.c new file mode 100644 index 000000000000..37ae8b685f1a --- /dev/null +++ b/drivers/scsi/elx/libefc/efc_fabric.c @@ -0,0 +1,1565 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2021 Broadcom. All Rights Reserved. The term + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. + */ + +/* + * This file implements remote node state machines for: + * - Fabric logins. + * - Fabric controller events. + * - Name/directory services interaction. + * - Point-to-point logins. + */ + +/* + * fabric_sm Node State Machine: Fabric States + * ns_sm Node State Machine: Name/Directory Services States + * p2p_sm Node State Machine: Point-to-Point Node States + */ + +#include "efc.h" + +static void +efc_fabric_initiate_shutdown(struct efc_node *node) +{ + struct efc *efc = node->efc; + + WRITE_ONCE(node->els_io_enabled, false); + + if (node->attached) { + int rc; + + /* issue hw node free; don't care if succeeds right away + * or sometime later, will check node->attached later in + * shutdown process + */ + rc = efc_cmd_node_detach(efc, &node->rnode); + if (rc != EFC_HW_RTN_SUCCESS && + rc != EFC_HW_RTN_SUCCESS_SYNC) { + node_printf(node, "Failed freeing HW node, rc=%d\n", + rc); + } + } + /* + * node has either been detached or is in the process of being detached, + * call common node's initiate cleanup function + */ + efc_node_initiate_cleanup(node); +} + +static void +__efc_fabric_common(const char *funcname, struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_node *node = NULL; + + node = ctx->app; + + switch (evt) { + case EFC_EVT_DOMAIN_ATTACH_OK: + break; + case EFC_EVT_SHUTDOWN: + node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT; + efc_fabric_initiate_shutdown(node); + break; + + default: + /* call default event handler common to all nodes */ + __efc_node_common(funcname, ctx, evt, arg); + } +} + +void +__efc_fabric_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt, + void *arg) +{ + struct efc_node *node = ctx->app; + struct efc *efc = node->efc; + + efc_node_evt_set(ctx, evt, __func__); + + node_sm_trace(); + + switch (evt) { + case EFC_EVT_REENTER: + efc_log_debug(efc, ">>> reenter !!\n"); + fallthrough; + + case EFC_EVT_ENTER: + /* send FLOGI */ + efc_send_flogi(node); + efc_node_transition(node, __efc_fabric_flogi_wait_rsp, NULL); + break; + + default: + __efc_fabric_common(__func__, ctx, evt, arg); + } +} + +void +efc_fabric_set_topology(struct efc_node *node, + enum efc_nport_topology topology) +{ + node->nport->topology = topology; +} + +void +efc_fabric_notify_topology(struct efc_node *node) +{ + struct efc_node *tmp_node; + enum efc_nport_topology topology = node->nport->topology; + unsigned long index; + + /* + * now loop through the nodes in the nport + * and send topology notification + */ + xa_for_each(&node->nport->lookup, index, tmp_node) { + if (tmp_node != node) { + efc_node_post_event(tmp_node, + EFC_EVT_NPORT_TOPOLOGY_NOTIFY, + (void *)topology); + } + } +} + 
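+/*
+ * The F_Port feature bit in the FLOGI LS_ACC common service parameters is
+ * set by a fabric F_Port; if it is clear, the responder is another N_Port
+ * and the link is point-to-point.
+ */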
+static bool efc_rnode_is_nport(struct fc_els_flogi *rsp) +{ + return !(ntohs(rsp->fl_csp.sp_features) & FC_SP_FT_FPORT); +} + +void +__efc_fabric_flogi_wait_rsp(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_node_cb *cbdata = arg; + struct efc_node *node = ctx->app; + + efc_node_evt_set(ctx, evt, __func__); + + node_sm_trace(); + + switch (evt) { + case EFC_EVT_SRRS_ELS_REQ_OK: { + if (efc_node_check_els_req(ctx, evt, arg, ELS_FLOGI, + __efc_fabric_common, __func__)) { + return; + } + WARN_ON(!node->els_req_cnt); + node->els_req_cnt--; + + memcpy(node->nport->domain->flogi_service_params, + cbdata->els_rsp.virt, + sizeof(struct fc_els_flogi)); + + /* Check to see if the fabric is an F_PORT or and N_PORT */ + if (!efc_rnode_is_nport(cbdata->els_rsp.virt)) { + /* sm: if not nport / efc_domain_attach */ + /* ext_status has the fc_id, attach domain */ + efc_fabric_set_topology(node, EFC_NPORT_TOPO_FABRIC); + efc_fabric_notify_topology(node); + WARN_ON(node->nport->domain->attached); + efc_domain_attach(node->nport->domain, + cbdata->ext_status); + efc_node_transition(node, + __efc_fabric_wait_domain_attach, + NULL); + break; + } + + /* sm: if nport and p2p_winner / efc_domain_attach */ + efc_fabric_set_topology(node, EFC_NPORT_TOPO_P2P); + if (efc_p2p_setup(node->nport)) { + node_printf(node, + "p2p setup failed, shutting down node\n"); + node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT; + efc_fabric_initiate_shutdown(node); + break; + } + + if (node->nport->p2p_winner) { + efc_node_transition(node, + __efc_p2p_wait_domain_attach, + NULL); + if (node->nport->domain->attached && + !node->nport->domain->domain_notify_pend) { + /* + * already attached, + * just send ATTACH_OK + */ + node_printf(node, + "p2p winner, domain already attached\n"); + efc_node_post_event(node, + EFC_EVT_DOMAIN_ATTACH_OK, + NULL); + } + } else { + /* + * peer is p2p winner; + * PLOGI will be received on the + * remote SID=1 node; + * this node has served its purpose + */ + node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT; + efc_fabric_initiate_shutdown(node); + } + + break; + } + + case EFC_EVT_ELS_REQ_ABORTED: + case EFC_EVT_SRRS_ELS_REQ_RJT: + case EFC_EVT_SRRS_ELS_REQ_FAIL: { + struct efc_nport *nport = node->nport; + /* + * with these errors, we have no recovery, + * so shutdown the nport, leave the link + * up and the domain ready + */ + if (efc_node_check_els_req(ctx, evt, arg, ELS_FLOGI, + __efc_fabric_common, __func__)) { + return; + } + node_printf(node, + "FLOGI failed evt=%s, shutting down nport [%s]\n", + efc_sm_event_name(evt), nport->display_name); + WARN_ON(!node->els_req_cnt); + node->els_req_cnt--; + efc_sm_post_event(&nport->sm, EFC_EVT_SHUTDOWN, NULL); + break; + } + + default: + __efc_fabric_common(__func__, ctx, evt, arg); + } +} + +void +__efc_vport_fabric_init(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_node *node = ctx->app; + + efc_node_evt_set(ctx, evt, __func__); + + node_sm_trace(); + + switch (evt) { + case EFC_EVT_ENTER: + /* sm: / send FDISC */ + efc_send_fdisc(node); + efc_node_transition(node, __efc_fabric_fdisc_wait_rsp, NULL); + break; + + default: + __efc_fabric_common(__func__, ctx, evt, arg); + } +} + +void +__efc_fabric_fdisc_wait_rsp(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_node_cb *cbdata = arg; + struct efc_node *node = ctx->app; + + efc_node_evt_set(ctx, evt, __func__); + + node_sm_trace(); + + switch (evt) { + case EFC_EVT_SRRS_ELS_REQ_OK: { + /* fc_id is in ext_status */ + if 
(efc_node_check_els_req(ctx, evt, arg, ELS_FDISC, + __efc_fabric_common, __func__)) { + return; + } + + WARN_ON(!node->els_req_cnt); + node->els_req_cnt--; + /* sm: / efc_nport_attach */ + efc_nport_attach(node->nport, cbdata->ext_status); + efc_node_transition(node, __efc_fabric_wait_domain_attach, + NULL); + break; + } + + case EFC_EVT_SRRS_ELS_REQ_RJT: + case EFC_EVT_SRRS_ELS_REQ_FAIL: { + if (efc_node_check_els_req(ctx, evt, arg, ELS_FDISC, + __efc_fabric_common, __func__)) { + return; + } + WARN_ON(!node->els_req_cnt); + node->els_req_cnt--; + efc_log_err(node->efc, "FDISC failed, shutting down nport\n"); + /* sm: / shutdown nport */ + efc_sm_post_event(&node->nport->sm, EFC_EVT_SHUTDOWN, NULL); + break; + } + + default: + __efc_fabric_common(__func__, ctx, evt, arg); + } +} + +static int +efc_start_ns_node(struct efc_nport *nport) +{ + struct efc_node *ns; + + /* Instantiate a name services node */ + ns = efc_node_find(nport, FC_FID_DIR_SERV); + if (!ns) { + ns = efc_node_alloc(nport, FC_FID_DIR_SERV, false, false); + if (!ns) + return EFC_FAIL; + } + /* + * for found ns, should we be transitioning from here? + * breaks transition only + * 1. from within state machine or + * 2. if after alloc + */ + if (ns->efc->nodedb_mask & EFC_NODEDB_PAUSE_NAMESERVER) + efc_node_pause(ns, __efc_ns_init); + else + efc_node_transition(ns, __efc_ns_init, NULL); + return EFC_SUCCESS; +} + +static int +efc_start_fabctl_node(struct efc_nport *nport) +{ + struct efc_node *fabctl; + + fabctl = efc_node_find(nport, FC_FID_FCTRL); + if (!fabctl) { + fabctl = efc_node_alloc(nport, FC_FID_FCTRL, + false, false); + if (!fabctl) + return EFC_FAIL; + } + /* + * for found ns, should we be transitioning from here? + * breaks transition only + * 1. from within state machine or + * 2. 
if after alloc + */ + efc_node_transition(fabctl, __efc_fabctl_init, NULL); + return EFC_SUCCESS; +} + +void +__efc_fabric_wait_domain_attach(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_node *node = ctx->app; + + efc_node_evt_set(ctx, evt, __func__); + + node_sm_trace(); + + switch (evt) { + case EFC_EVT_ENTER: + efc_node_hold_frames(node); + break; + + case EFC_EVT_EXIT: + efc_node_accept_frames(node); + break; + case EFC_EVT_DOMAIN_ATTACH_OK: + case EFC_EVT_NPORT_ATTACH_OK: { + int rc; + + rc = efc_start_ns_node(node->nport); + if (rc) + return; + + /* sm: if enable_ini / start fabctl node */ + /* Instantiate the fabric controller (sends SCR) */ + if (node->nport->enable_rscn) { + rc = efc_start_fabctl_node(node->nport); + if (rc) + return; + } + efc_node_transition(node, __efc_fabric_idle, NULL); + break; + } + default: + __efc_fabric_common(__func__, ctx, evt, arg); + } +} + +void +__efc_fabric_idle(struct efc_sm_ctx *ctx, enum efc_sm_event evt, + void *arg) +{ + struct efc_node *node = ctx->app; + + efc_node_evt_set(ctx, evt, __func__); + + node_sm_trace(); + + switch (evt) { + case EFC_EVT_DOMAIN_ATTACH_OK: + break; + default: + __efc_fabric_common(__func__, ctx, evt, arg); + } +} + +void +__efc_ns_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg) +{ + struct efc_node *node = ctx->app; + + efc_node_evt_set(ctx, evt, __func__); + + node_sm_trace(); + + switch (evt) { + case EFC_EVT_ENTER: + /* sm: / send PLOGI */ + efc_send_plogi(node); + efc_node_transition(node, __efc_ns_plogi_wait_rsp, NULL); + break; + default: + __efc_fabric_common(__func__, ctx, evt, arg); + } +} + +void +__efc_ns_plogi_wait_rsp(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_node_cb *cbdata = arg; + struct efc_node *node = ctx->app; + + efc_node_evt_set(ctx, evt, __func__); + + node_sm_trace(); + + switch (evt) { + case EFC_EVT_SRRS_ELS_REQ_OK: { + int rc; + + /* Save service parameters */ + if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI, + __efc_fabric_common, __func__)) { + return; + } + WARN_ON(!node->els_req_cnt); + node->els_req_cnt--; + /* sm: / save sparams, efc_node_attach */ + efc_node_save_sparms(node, cbdata->els_rsp.virt); + rc = efc_node_attach(node); + efc_node_transition(node, __efc_ns_wait_node_attach, NULL); + if (rc == EFC_HW_RTN_SUCCESS_SYNC) + efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK, + NULL); + break; + } + default: + __efc_fabric_common(__func__, ctx, evt, arg); + } +} + +void +__efc_ns_wait_node_attach(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_node *node = ctx->app; + + efc_node_evt_set(ctx, evt, __func__); + + node_sm_trace(); + + switch (evt) { + case EFC_EVT_ENTER: + efc_node_hold_frames(node); + break; + + case EFC_EVT_EXIT: + efc_node_accept_frames(node); + break; + + case EFC_EVT_NODE_ATTACH_OK: + node->attached = true; + /* sm: / send RFTID */ + efc_ns_send_rftid(node); + efc_node_transition(node, __efc_ns_rftid_wait_rsp, NULL); + break; + + case EFC_EVT_NODE_ATTACH_FAIL: + /* node attach failed, shutdown the node */ + node->attached = false; + node_printf(node, "Node attach failed\n"); + node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT; + efc_fabric_initiate_shutdown(node); + break; + + case EFC_EVT_SHUTDOWN: + node_printf(node, "Shutdown event received\n"); + node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT; + efc_node_transition(node, + __efc_fabric_wait_attach_evt_shutdown, + NULL); + break; + + /* + * if receive RSCN just ignore, + * we haven't sent GID_PT 
yet (ACC sent by fabctl node) + */ + case EFC_EVT_RSCN_RCVD: + break; + + default: + __efc_fabric_common(__func__, ctx, evt, arg); + } +} + +void +__efc_fabric_wait_attach_evt_shutdown(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_node *node = ctx->app; + + efc_node_evt_set(ctx, evt, __func__); + + node_sm_trace(); + + switch (evt) { + case EFC_EVT_ENTER: + efc_node_hold_frames(node); + break; + + case EFC_EVT_EXIT: + efc_node_accept_frames(node); + break; + + /* wait for any of these attach events and then shutdown */ + case EFC_EVT_NODE_ATTACH_OK: + node->attached = true; + node_printf(node, "Attach evt=%s, proceed to shutdown\n", + efc_sm_event_name(evt)); + efc_fabric_initiate_shutdown(node); + break; + + case EFC_EVT_NODE_ATTACH_FAIL: + node->attached = false; + node_printf(node, "Attach evt=%s, proceed to shutdown\n", + efc_sm_event_name(evt)); + efc_fabric_initiate_shutdown(node); + break; + + /* ignore shutdown event as we're already in shutdown path */ + case EFC_EVT_SHUTDOWN: + node_printf(node, "Shutdown event received\n"); + break; + + default: + __efc_fabric_common(__func__, ctx, evt, arg); + } +} + +void +__efc_ns_rftid_wait_rsp(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_node *node = ctx->app; + + efc_node_evt_set(ctx, evt, __func__); + + node_sm_trace(); + + switch (evt) { + case EFC_EVT_SRRS_ELS_REQ_OK: + if (efc_node_check_ns_req(ctx, evt, arg, FC_NS_RFT_ID, + __efc_fabric_common, __func__)) { + return; + } + WARN_ON(!node->els_req_cnt); + node->els_req_cnt--; + /* sm: / send RFFID */ + efc_ns_send_rffid(node); + efc_node_transition(node, __efc_ns_rffid_wait_rsp, NULL); + break; + + /* + * if receive RSCN just ignore, + * we haven't sent GID_PT yet (ACC sent by fabctl node) + */ + case EFC_EVT_RSCN_RCVD: + break; + + default: + __efc_fabric_common(__func__, ctx, evt, arg); + } +} + +void +__efc_ns_rffid_wait_rsp(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_node *node = ctx->app; + + efc_node_evt_set(ctx, evt, __func__); + + node_sm_trace(); + + /* + * Waits for an RFFID response event; + * if rscn enabled, a GIDPT name services request is issued. 
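+	 * RFF_ID registers the port's FC-4 features with the name server;
+	 * GID_PT then asks the name server for the IDs of all ports of a
+	 * given port type, which drives remote node discovery.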
+ */ + switch (evt) { + case EFC_EVT_SRRS_ELS_REQ_OK: { + if (efc_node_check_ns_req(ctx, evt, arg, FC_NS_RFF_ID, + __efc_fabric_common, __func__)) { + return; + } + WARN_ON(!node->els_req_cnt); + node->els_req_cnt--; + if (node->nport->enable_rscn) { + /* sm: if enable_rscn / send GIDPT */ + efc_ns_send_gidpt(node); + + efc_node_transition(node, __efc_ns_gidpt_wait_rsp, + NULL); + } else { + /* if 'T' only, we're done, go to idle */ + efc_node_transition(node, __efc_ns_idle, NULL); + } + break; + } + /* + * if receive RSCN just ignore, + * we haven't sent GID_PT yet (ACC sent by fabctl node) + */ + case EFC_EVT_RSCN_RCVD: + break; + + default: + __efc_fabric_common(__func__, ctx, evt, arg); + } +} + +static int +efc_process_gidpt_payload(struct efc_node *node, + void *data, u32 gidpt_len) +{ + u32 i, j; + struct efc_node *newnode; + struct efc_nport *nport = node->nport; + struct efc *efc = node->efc; + u32 port_id = 0, port_count, plist_count; + struct efc_node *n; + struct efc_node **active_nodes; + int residual; + struct { + struct fc_ct_hdr hdr; + struct fc_gid_pn_resp pn_rsp; + } *rsp; + struct fc_gid_pn_resp *gidpt; + unsigned long index; + + rsp = data; + gidpt = &rsp->pn_rsp; + residual = be16_to_cpu(rsp->hdr.ct_mr_size); + + if (residual != 0) + efc_log_debug(node->efc, "residual is %u words\n", residual); + + if (be16_to_cpu(rsp->hdr.ct_cmd) == FC_FS_RJT) { + node_printf(node, + "GIDPT request failed: rsn x%x rsn_expl x%x\n", + rsp->hdr.ct_reason, rsp->hdr.ct_explan); + return EFC_FAIL; + } + + plist_count = (gidpt_len - sizeof(struct fc_ct_hdr)) / sizeof(*gidpt); + + /* Count the number of nodes */ + port_count = 0; + xa_for_each(&nport->lookup, index, n) { + port_count++; + } + + /* Allocate a buffer for all nodes */ + active_nodes = kzalloc(port_count * sizeof(*active_nodes), GFP_ATOMIC); + if (!active_nodes) { + node_printf(node, "efc_malloc failed\n"); + return EFC_FAIL; + } + + /* Fill buffer with fc_id of active nodes */ + i = 0; + xa_for_each(&nport->lookup, index, n) { + port_id = n->rnode.fc_id; + switch (port_id) { + case FC_FID_FLOGI: + case FC_FID_FCTRL: + case FC_FID_DIR_SERV: + break; + default: + if (port_id != FC_FID_DOM_MGR) + active_nodes[i++] = n; + break; + } + } + + /* update the active nodes buffer */ + for (i = 0; i < plist_count; i++) { + hton24(gidpt[i].fp_fid, port_id); + + for (j = 0; j < port_count; j++) { + if (active_nodes[j] && + port_id == active_nodes[j]->rnode.fc_id) { + active_nodes[j] = NULL; + } + } + + if (gidpt[i].fp_resvd & FC_NS_FID_LAST) + break; + } + + /* Those remaining in the active_nodes[] are now gone ! */ + for (i = 0; i < port_count; i++) { + /* + * if we're an initiator and the remote node + * is a target, then post the node missing event. + * if we're target and we have enabled + * target RSCN, then post the node missing event. 
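+		 * Ports still left in active_nodes[] were not reported by
+		 * this GID_PT response and are therefore gone from the
+		 * fabric.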
+ */ + if (!active_nodes[i]) + continue; + + if ((node->nport->enable_ini && active_nodes[i]->targ) || + (node->nport->enable_tgt && enable_target_rscn(efc))) { + efc_node_post_event(active_nodes[i], + EFC_EVT_NODE_MISSING, NULL); + } else { + node_printf(node, + "GID_PT: skipping non-tgt port_id x%06x\n", + active_nodes[i]->rnode.fc_id); + } + } + kfree(active_nodes); + + for (i = 0; i < plist_count; i++) { + hton24(gidpt[i].fp_fid, port_id); + + /* Don't create node for ourselves */ + if (port_id == node->rnode.nport->fc_id) { + if (gidpt[i].fp_resvd & FC_NS_FID_LAST) + break; + continue; + } + + newnode = efc_node_find(nport, port_id); + if (!newnode) { + if (!node->nport->enable_ini) + continue; + + newnode = efc_node_alloc(nport, port_id, false, false); + if (!newnode) { + efc_log_err(efc, "efc_node_alloc() failed\n"); + return EFC_FAIL; + } + /* + * send PLOGI automatically + * if initiator + */ + efc_node_init_device(newnode, true); + } + + if (node->nport->enable_ini && newnode->targ) { + efc_node_post_event(newnode, EFC_EVT_NODE_REFOUND, + NULL); + } + + if (gidpt[i].fp_resvd & FC_NS_FID_LAST) + break; + } + return EFC_SUCCESS; +} + +void +__efc_ns_gidpt_wait_rsp(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_node_cb *cbdata = arg; + struct efc_node *node = ctx->app; + + efc_node_evt_set(ctx, evt, __func__); + + node_sm_trace(); + /* + * Wait for a GIDPT response from the name server. Process the FC_IDs + * that are reported by creating new remote ports, as needed. + */ + + switch (evt) { + case EFC_EVT_SRRS_ELS_REQ_OK: { + if (efc_node_check_ns_req(ctx, evt, arg, FC_NS_GID_PT, + __efc_fabric_common, __func__)) { + return; + } + WARN_ON(!node->els_req_cnt); + node->els_req_cnt--; + /* sm: / process GIDPT payload */ + efc_process_gidpt_payload(node, cbdata->els_rsp.virt, + cbdata->els_rsp.len); + efc_node_transition(node, __efc_ns_idle, NULL); + break; + } + + case EFC_EVT_SRRS_ELS_REQ_FAIL: { + /* not much we can do; will retry with the next RSCN */ + node_printf(node, "GID_PT failed to complete\n"); + WARN_ON(!node->els_req_cnt); + node->els_req_cnt--; + efc_node_transition(node, __efc_ns_idle, NULL); + break; + } + + /* if receive RSCN here, queue up another discovery processing */ + case EFC_EVT_RSCN_RCVD: { + node_printf(node, "RSCN received during GID_PT processing\n"); + node->rscn_pending = true; + break; + } + + default: + __efc_fabric_common(__func__, ctx, evt, arg); + } +} + +void +__efc_ns_idle(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg) +{ + struct efc_node *node = ctx->app; + struct efc *efc = node->efc; + + efc_node_evt_set(ctx, evt, __func__); + + node_sm_trace(); + + /* + * Wait for RSCN received events (posted from the fabric controller) + * and restart the GIDPT name services query and processing. 
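+	 * An RSCN that arrived while a GID_PT was still outstanding is
+	 * latched in node->rscn_pending and is acted on when this state
+	 * is entered.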
+ */ + + switch (evt) { + case EFC_EVT_ENTER: + if (!node->rscn_pending) + break; + + node_printf(node, "RSCN pending, restart discovery\n"); + node->rscn_pending = false; + fallthrough; + + case EFC_EVT_RSCN_RCVD: { + /* sm: / send GIDPT */ + /* + * If target RSCN processing is enabled, + * and this is target only (not initiator), + * and tgt_rscn_delay is non-zero, + * then we delay issuing the GID_PT + */ + if (efc->tgt_rscn_delay_msec != 0 && + !node->nport->enable_ini && node->nport->enable_tgt && + enable_target_rscn(efc)) { + efc_node_transition(node, __efc_ns_gidpt_delay, NULL); + } else { + efc_ns_send_gidpt(node); + efc_node_transition(node, __efc_ns_gidpt_wait_rsp, + NULL); + } + break; + } + + default: + __efc_fabric_common(__func__, ctx, evt, arg); + } +} + +static void +gidpt_delay_timer_cb(struct timer_list *t) +{ + struct efc_node *node = from_timer(node, t, gidpt_delay_timer); + + del_timer(&node->gidpt_delay_timer); + + efc_node_post_event(node, EFC_EVT_GIDPT_DELAY_EXPIRED, NULL); +} + +void +__efc_ns_gidpt_delay(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_node *node = ctx->app; + struct efc *efc = node->efc; + + efc_node_evt_set(ctx, evt, __func__); + + node_sm_trace(); + + switch (evt) { + case EFC_EVT_ENTER: { + u64 delay_msec, tmp; + + /* + * Compute the delay time. + * Set to tgt_rscn_delay, if the time since last GIDPT + * is less than tgt_rscn_period, then use tgt_rscn_period. + */ + delay_msec = efc->tgt_rscn_delay_msec; + tmp = jiffies_to_msecs(jiffies) - node->time_last_gidpt_msec; + if (tmp < efc->tgt_rscn_period_msec) + delay_msec = efc->tgt_rscn_period_msec; + + timer_setup(&node->gidpt_delay_timer, &gidpt_delay_timer_cb, + 0); + mod_timer(&node->gidpt_delay_timer, + jiffies + msecs_to_jiffies(delay_msec)); + + break; + } + + case EFC_EVT_GIDPT_DELAY_EXPIRED: + node->time_last_gidpt_msec = jiffies_to_msecs(jiffies); + + efc_ns_send_gidpt(node); + efc_node_transition(node, __efc_ns_gidpt_wait_rsp, NULL); + break; + + case EFC_EVT_RSCN_RCVD: { + efc_log_debug(efc, + "RSCN received while in GIDPT delay - no action\n"); + break; + } + + default: + __efc_fabric_common(__func__, ctx, evt, arg); + } +} + +void +__efc_fabctl_init(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_node *node = ctx->app; + + node_sm_trace(); + + switch (evt) { + case EFC_EVT_ENTER: + /* no need to login to fabric controller, just send SCR */ + efc_send_scr(node); + efc_node_transition(node, __efc_fabctl_wait_scr_rsp, NULL); + break; + + case EFC_EVT_NODE_ATTACH_OK: + node->attached = true; + break; + + default: + __efc_fabric_common(__func__, ctx, evt, arg); + } +} + +void +__efc_fabctl_wait_scr_rsp(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_node *node = ctx->app; + + efc_node_evt_set(ctx, evt, __func__); + + node_sm_trace(); + + /* + * Fabric controller node state machine: + * Wait for an SCR response from the fabric controller. 
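+	 * SCR (State Change Registration) asks the fabric controller to
+	 * deliver RSCNs for fabric events; once the SCR is accepted,
+	 * this node transitions to the ready state.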
+ */ + switch (evt) { + case EFC_EVT_SRRS_ELS_REQ_OK: + if (efc_node_check_els_req(ctx, evt, arg, ELS_SCR, + __efc_fabric_common, __func__)) { + return; + } + WARN_ON(!node->els_req_cnt); + node->els_req_cnt--; + efc_node_transition(node, __efc_fabctl_ready, NULL); + break; + + default: + __efc_fabric_common(__func__, ctx, evt, arg); + } +} + +static void +efc_process_rscn(struct efc_node *node, struct efc_node_cb *cbdata) +{ + struct efc *efc = node->efc; + struct efc_nport *nport = node->nport; + struct efc_node *ns; + + /* Forward this event to the name-services node */ + ns = efc_node_find(nport, FC_FID_DIR_SERV); + if (ns) + efc_node_post_event(ns, EFC_EVT_RSCN_RCVD, cbdata); + else + efc_log_warn(efc, "can't find name server node\n"); +} + +void +__efc_fabctl_ready(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_node_cb *cbdata = arg; + struct efc_node *node = ctx->app; + + efc_node_evt_set(ctx, evt, __func__); + + node_sm_trace(); + + /* + * Fabric controller node state machine: Ready. + * In this state, the fabric controller sends a RSCN, which is received + * by this node and is forwarded to the name services node object; and + * the RSCN LS_ACC is sent. + */ + switch (evt) { + case EFC_EVT_RSCN_RCVD: { + struct fc_frame_header *hdr = cbdata->header->dma.virt; + + /* + * sm: / process RSCN (forward to name services node), + * send LS_ACC + */ + efc_process_rscn(node, cbdata); + efc_send_ls_acc(node, be16_to_cpu(hdr->fh_ox_id)); + efc_node_transition(node, __efc_fabctl_wait_ls_acc_cmpl, + NULL); + break; + } + + default: + __efc_fabric_common(__func__, ctx, evt, arg); + } +} + +void +__efc_fabctl_wait_ls_acc_cmpl(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_node *node = ctx->app; + + efc_node_evt_set(ctx, evt, __func__); + + node_sm_trace(); + + switch (evt) { + case EFC_EVT_ENTER: + efc_node_hold_frames(node); + break; + + case EFC_EVT_EXIT: + efc_node_accept_frames(node); + break; + + case EFC_EVT_SRRS_ELS_CMPL_OK: + WARN_ON(!node->els_cmpl_cnt); + node->els_cmpl_cnt--; + efc_node_transition(node, __efc_fabctl_ready, NULL); + break; + + default: + __efc_fabric_common(__func__, ctx, evt, arg); + } +} + +static uint64_t +efc_get_wwpn(struct fc_els_flogi *sp) +{ + return be64_to_cpu(sp->fl_wwnn); +} + +static int +efc_rnode_is_winner(struct efc_nport *nport) +{ + struct fc_els_flogi *remote_sp; + u64 remote_wwpn; + u64 local_wwpn = nport->wwpn; + u64 wwn_bump = 0; + + remote_sp = (struct fc_els_flogi *)nport->domain->flogi_service_params; + remote_wwpn = efc_get_wwpn(remote_sp); + + local_wwpn ^= wwn_bump; + + efc_log_debug(nport->efc, "r: %llx\n", + be64_to_cpu(remote_sp->fl_wwpn)); + efc_log_debug(nport->efc, "l: %llx\n", local_wwpn); + + if (remote_wwpn == local_wwpn) { + efc_log_warn(nport->efc, + "WWPN of remote node [%08x %08x] matches local WWPN\n", + (u32)(local_wwpn >> 32ll), + (u32)local_wwpn); + return -1; + } + + return (remote_wwpn > local_wwpn); +} + +void +__efc_p2p_wait_domain_attach(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_node *node = ctx->app; + struct efc *efc = node->efc; + + efc_node_evt_set(ctx, evt, __func__); + + node_sm_trace(); + + switch (evt) { + case EFC_EVT_ENTER: + efc_node_hold_frames(node); + break; + + case EFC_EVT_EXIT: + efc_node_accept_frames(node); + break; + + case EFC_EVT_DOMAIN_ATTACH_OK: { + struct efc_nport *nport = node->nport; + struct efc_node *rnode; + + /* + * this transient node (SID=0 (recv'd FLOGI) + * or DID=fabric (sent FLOGI)) + * is 
the p2p winner, will use a separate node + * to send PLOGI to peer + */ + WARN_ON(!node->nport->p2p_winner); + + rnode = efc_node_find(nport, node->nport->p2p_remote_port_id); + if (rnode) { + /* + * the "other" transient p2p node has + * already kicked off the + * new node from which PLOGI is sent + */ + node_printf(node, + "Node with fc_id x%x already exists\n", + rnode->rnode.fc_id); + } else { + /* + * create new node (SID=1, DID=2) + * from which to send PLOGI + */ + rnode = efc_node_alloc(nport, + nport->p2p_remote_port_id, + false, false); + if (!rnode) { + efc_log_err(efc, "node alloc failed\n"); + return; + } + + efc_fabric_notify_topology(node); + /* sm: / allocate p2p remote node */ + efc_node_transition(rnode, __efc_p2p_rnode_init, + NULL); + } + + /* + * the transient node (SID=0 or DID=fabric) + * has served its purpose + */ + if (node->rnode.fc_id == 0) { + /* + * if this is the SID=0 node, + * move to the init state in case peer + * has restarted FLOGI discovery and FLOGI is pending + */ + /* don't send PLOGI on efc_d_init entry */ + efc_node_init_device(node, false); + } else { + /* + * if this is the DID=fabric node + * (we initiated FLOGI), shut it down + */ + node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT; + efc_fabric_initiate_shutdown(node); + } + break; + } + + default: + __efc_fabric_common(__func__, ctx, evt, arg); + } +} + +void +__efc_p2p_rnode_init(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_node_cb *cbdata = arg; + struct efc_node *node = ctx->app; + + efc_node_evt_set(ctx, evt, __func__); + + node_sm_trace(); + + switch (evt) { + case EFC_EVT_ENTER: + /* sm: / send PLOGI */ + efc_send_plogi(node); + efc_node_transition(node, __efc_p2p_wait_plogi_rsp, NULL); + break; + + case EFC_EVT_ABTS_RCVD: + /* sm: send BA_ACC */ + efc_send_bls_acc(node, cbdata->header->dma.virt); + + break; + + default: + __efc_fabric_common(__func__, ctx, evt, arg); + } +} + +void +__efc_p2p_wait_flogi_acc_cmpl(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_node_cb *cbdata = arg; + struct efc_node *node = ctx->app; + + efc_node_evt_set(ctx, evt, __func__); + + node_sm_trace(); + + switch (evt) { + case EFC_EVT_ENTER: + efc_node_hold_frames(node); + break; + + case EFC_EVT_EXIT: + efc_node_accept_frames(node); + break; + + case EFC_EVT_SRRS_ELS_CMPL_OK: + WARN_ON(!node->els_cmpl_cnt); + node->els_cmpl_cnt--; + + /* sm: if p2p_winner / domain_attach */ + if (node->nport->p2p_winner) { + efc_node_transition(node, + __efc_p2p_wait_domain_attach, + NULL); + if (!node->nport->domain->attached) { + node_printf(node, "Domain not attached\n"); + efc_domain_attach(node->nport->domain, + node->nport->p2p_port_id); + } else { + node_printf(node, "Domain already attached\n"); + efc_node_post_event(node, + EFC_EVT_DOMAIN_ATTACH_OK, + NULL); + } + } else { + /* this node has served its purpose; + * we'll expect a PLOGI on a separate + * node (remote SID=0x1); return this node + * to init state in case peer + * restarts discovery -- it may already + * have (pending frames may exist). 
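+			 * In p2p, the FLOGI winner assigns the port IDs
+			 * (local 1, remote 2) and initiates the PLOGI;
+			 * the loser just waits for the winner's PLOGI.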
+ */ + /* don't send PLOGI on efc_d_init entry */ + efc_node_init_device(node, false); + } + break; + + case EFC_EVT_SRRS_ELS_CMPL_FAIL: + /* + * LS_ACC failed, possibly due to link down; + * shutdown node and wait + * for FLOGI discovery to restart + */ + node_printf(node, "FLOGI LS_ACC failed, shutting down\n"); + WARN_ON(!node->els_cmpl_cnt); + node->els_cmpl_cnt--; + node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT; + efc_fabric_initiate_shutdown(node); + break; + + case EFC_EVT_ABTS_RCVD: { + /* sm: / send BA_ACC */ + efc_send_bls_acc(node, cbdata->header->dma.virt); + break; + } + + default: + __efc_fabric_common(__func__, ctx, evt, arg); + } +} + +void +__efc_p2p_wait_plogi_rsp(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_node_cb *cbdata = arg; + struct efc_node *node = ctx->app; + + efc_node_evt_set(ctx, evt, __func__); + + node_sm_trace(); + + switch (evt) { + case EFC_EVT_SRRS_ELS_REQ_OK: { + int rc; + + if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI, + __efc_fabric_common, __func__)) { + return; + } + WARN_ON(!node->els_req_cnt); + node->els_req_cnt--; + /* sm: / save sparams, efc_node_attach */ + efc_node_save_sparms(node, cbdata->els_rsp.virt); + rc = efc_node_attach(node); + efc_node_transition(node, __efc_p2p_wait_node_attach, NULL); + if (rc == EFC_HW_RTN_SUCCESS_SYNC) + efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK, + NULL); + break; + } + case EFC_EVT_SRRS_ELS_REQ_FAIL: { + if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI, + __efc_fabric_common, __func__)) { + return; + } + node_printf(node, "PLOGI failed, shutting down\n"); + WARN_ON(!node->els_req_cnt); + node->els_req_cnt--; + node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT; + efc_fabric_initiate_shutdown(node); + break; + } + + case EFC_EVT_PLOGI_RCVD: { + struct fc_frame_header *hdr = cbdata->header->dma.virt; + /* if we're in external loopback mode, just send LS_ACC */ + if (node->efc->external_loopback) { + efc_send_plogi_acc(node, be16_to_cpu(hdr->fh_ox_id)); + } else { + /* + * if this isn't external loopback, + * pass to default handler + */ + __efc_fabric_common(__func__, ctx, evt, arg); + } + break; + } + case EFC_EVT_PRLI_RCVD: + /* I, or I+T */ + /* sent PLOGI and before completion was seen, received the + * PRLI from the remote node (WCQEs and RCQEs come in on + * different queues and order of processing cannot be assumed) + * Save OXID so PRLI can be sent after the attach and continue + * to wait for PLOGI response + */ + efc_process_prli_payload(node, cbdata->payload->dma.virt); + efc_send_ls_acc_after_attach(node, + cbdata->header->dma.virt, + EFC_NODE_SEND_LS_ACC_PRLI); + efc_node_transition(node, __efc_p2p_wait_plogi_rsp_recvd_prli, + NULL); + break; + default: + __efc_fabric_common(__func__, ctx, evt, arg); + } +} + +void +__efc_p2p_wait_plogi_rsp_recvd_prli(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_node_cb *cbdata = arg; + struct efc_node *node = ctx->app; + + efc_node_evt_set(ctx, evt, __func__); + + node_sm_trace(); + + switch (evt) { + case EFC_EVT_ENTER: + /* + * Since we've received a PRLI, we have a port login and will + * just need to wait for the PLOGI response to do the node + * attach and then we can send the LS_ACC for the PRLI. If, + * during this time, we receive FCP_CMNDs (which is possible + * since we've already sent a PRLI and our peer may have + * accepted). + * At this time, we are not waiting on any other unsolicited + * frames to continue with the login process. 
Thus, it will not + * hurt to hold frames here. + */ + efc_node_hold_frames(node); + break; + + case EFC_EVT_EXIT: + efc_node_accept_frames(node); + break; + + case EFC_EVT_SRRS_ELS_REQ_OK: { /* PLOGI response received */ + int rc; + + /* Completion from PLOGI sent */ + if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI, + __efc_fabric_common, __func__)) { + return; + } + WARN_ON(!node->els_req_cnt); + node->els_req_cnt--; + /* sm: / save sparams, efc_node_attach */ + efc_node_save_sparms(node, cbdata->els_rsp.virt); + rc = efc_node_attach(node); + efc_node_transition(node, __efc_p2p_wait_node_attach, NULL); + if (rc == EFC_HW_RTN_SUCCESS_SYNC) + efc_node_post_event(node, EFC_EVT_NODE_ATTACH_OK, + NULL); + break; + } + case EFC_EVT_SRRS_ELS_REQ_FAIL: /* PLOGI response received */ + case EFC_EVT_SRRS_ELS_REQ_RJT: + /* PLOGI failed, shutdown the node */ + if (efc_node_check_els_req(ctx, evt, arg, ELS_PLOGI, + __efc_fabric_common, __func__)) { + return; + } + WARN_ON(!node->els_req_cnt); + node->els_req_cnt--; + node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT; + efc_fabric_initiate_shutdown(node); + break; + + default: + __efc_fabric_common(__func__, ctx, evt, arg); + } +} + +void +__efc_p2p_wait_node_attach(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg) +{ + struct efc_node_cb *cbdata = arg; + struct efc_node *node = ctx->app; + + efc_node_evt_set(ctx, evt, __func__); + + node_sm_trace(); + + switch (evt) { + case EFC_EVT_ENTER: + efc_node_hold_frames(node); + break; + + case EFC_EVT_EXIT: + efc_node_accept_frames(node); + break; + + case EFC_EVT_NODE_ATTACH_OK: + node->attached = true; + switch (node->send_ls_acc) { + case EFC_NODE_SEND_LS_ACC_PRLI: { + efc_d_send_prli_rsp(node->ls_acc_io, + node->ls_acc_oxid); + node->send_ls_acc = EFC_NODE_SEND_LS_ACC_NONE; + node->ls_acc_io = NULL; + break; + } + case EFC_NODE_SEND_LS_ACC_PLOGI: /* Can't happen in P2P */ + case EFC_NODE_SEND_LS_ACC_NONE: + default: + /* Normal case for I */ + /* sm: send_plogi_acc is not set / send PLOGI acc */ + efc_node_transition(node, __efc_d_port_logged_in, + NULL); + break; + } + break; + + case EFC_EVT_NODE_ATTACH_FAIL: + /* node attach failed, shutdown the node */ + node->attached = false; + node_printf(node, "Node attach failed\n"); + node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT; + efc_fabric_initiate_shutdown(node); + break; + + case EFC_EVT_SHUTDOWN: + node_printf(node, "%s received\n", efc_sm_event_name(evt)); + node->shutdown_reason = EFC_NODE_SHUTDOWN_DEFAULT; + efc_node_transition(node, + __efc_fabric_wait_attach_evt_shutdown, + NULL); + break; + case EFC_EVT_PRLI_RCVD: + node_printf(node, "%s: PRLI received before node is attached\n", + efc_sm_event_name(evt)); + efc_process_prli_payload(node, cbdata->payload->dma.virt); + efc_send_ls_acc_after_attach(node, + cbdata->header->dma.virt, + EFC_NODE_SEND_LS_ACC_PRLI); + break; + + default: + __efc_fabric_common(__func__, ctx, evt, arg); + } +} + +int +efc_p2p_setup(struct efc_nport *nport) +{ + struct efc *efc = nport->efc; + int rnode_winner; + + rnode_winner = efc_rnode_is_winner(nport); + + /* set nport flags to indicate p2p "winner" */ + if (rnode_winner == 1) { + nport->p2p_remote_port_id = 0; + nport->p2p_port_id = 0; + nport->p2p_winner = false; + } else if (rnode_winner == 0) { + nport->p2p_remote_port_id = 2; + nport->p2p_port_id = 1; + nport->p2p_winner = true; + } else { + /* no winner; only okay if external loopback enabled */ + if (nport->efc->external_loopback) { + /* + * External loopback mode enabled; + * local nport and 
remote node + * will be registered with an NPortID = 1; + */ + efc_log_debug(efc, + "External loopback mode enabled\n"); + nport->p2p_remote_port_id = 1; + nport->p2p_port_id = 1; + nport->p2p_winner = true; + } else { + efc_log_warn(efc, + "failed to determine p2p winner\n"); + return rnode_winner; + } + } + return EFC_SUCCESS; +} diff --git a/drivers/scsi/elx/libefc/efc_fabric.h b/drivers/scsi/elx/libefc/efc_fabric.h new file mode 100644 index 000000000000..b0947ae6fdca --- /dev/null +++ b/drivers/scsi/elx/libefc/efc_fabric.h @@ -0,0 +1,116 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2021 Broadcom. All Rights Reserved. The term + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. + */ + +/* + * Declarations for the interface exported by efc_fabric + */ + +#ifndef __EFCT_FABRIC_H__ +#define __EFCT_FABRIC_H__ +#include "scsi/fc/fc_els.h" +#include "scsi/fc/fc_fs.h" +#include "scsi/fc/fc_ns.h" + +void +__efc_fabric_init(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_fabric_flogi_wait_rsp(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_fabric_domain_attach_wait(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_fabric_wait_domain_attach(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); + +void +__efc_vport_fabric_init(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_fabric_fdisc_wait_rsp(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_fabric_wait_nport_attach(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); + +void +__efc_ns_init(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg); +void +__efc_ns_plogi_wait_rsp(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_ns_rftid_wait_rsp(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_ns_rffid_wait_rsp(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_ns_wait_node_attach(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_fabric_wait_attach_evt_shutdown(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_ns_logo_wait_rsp(struct efc_sm_ctx *ctx, + enum efc_sm_event, void *arg); +void +__efc_ns_gidpt_wait_rsp(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_ns_idle(struct efc_sm_ctx *ctx, enum efc_sm_event evt, void *arg); +void +__efc_ns_gidpt_delay(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_fabctl_init(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_fabctl_wait_node_attach(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_fabctl_wait_scr_rsp(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_fabctl_ready(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_fabctl_wait_ls_acc_cmpl(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_fabric_idle(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); + +void +__efc_p2p_rnode_init(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_p2p_domain_attach_wait(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_p2p_wait_flogi_acc_cmpl(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_p2p_wait_plogi_rsp(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_p2p_wait_plogi_rsp_recvd_prli(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void 
+__efc_p2p_wait_domain_attach(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); +void +__efc_p2p_wait_node_attach(struct efc_sm_ctx *ctx, + enum efc_sm_event evt, void *arg); + +int +efc_p2p_setup(struct efc_nport *nport); +void +efc_fabric_set_topology(struct efc_node *node, + enum efc_nport_topology topology); +void efc_fabric_notify_topology(struct efc_node *node); + +#endif /* __EFCT_FABRIC_H__ */ From patchwork Thu Jan 7 22:58:49 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: James Smart X-Patchwork-Id: 358590 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 09306C43381 for ; Thu, 7 Jan 2021 23:00:42 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id DC414235FF for ; Thu, 7 Jan 2021 23:00:41 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728666AbhAGXAk (ORCPT ); Thu, 7 Jan 2021 18:00:40 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:51920 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728557AbhAGXAj (ORCPT ); Thu, 7 Jan 2021 18:00:39 -0500 Received: from mail-pl1-x62b.google.com (mail-pl1-x62b.google.com [IPv6:2607:f8b0:4864:20::62b]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 426C2C0612A0 for ; Thu, 7 Jan 2021 14:59:26 -0800 (PST) Received: by mail-pl1-x62b.google.com with SMTP id be12so4640111plb.4 for ; Thu, 07 Jan 2021 14:59:26 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=Q/k3Ffne1RynA6tDI3Hnd5RLT9rDtpXQ9yh8aUydQ2g=; b=nDEXAk5u2d4rXK4MnLO1Czr348iuxOrwkFDJTOuR91g6n0sNebXf00yTeqO4jrdy1M PcOnVfd7j7ReIC+tCCKtSoobvpEz9yNjZpIEkT5ws6Wm+KSLnFVafqJ3pLqGedjH2IpY fTGeaVaNzcNxgDBSSdiFbrRvA8nGa6QZMQe3QXtdUKjP7kovh61Ypp5nDeq/57aA+J5g mCacEXeOHoCh2Xug1rYTZ/LFYpYQsitvP2MzBrElUANnv/ixolwJU9JHg79AyhUb7f5M qirMN3u4ZPQrA7RJjDykgIemAWZbt/3nyKvoulxA+2wIXACOKhnfG7dPA74wPpbIDV1X 4wwQ== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=Q/k3Ffne1RynA6tDI3Hnd5RLT9rDtpXQ9yh8aUydQ2g=; b=g6fTfubCVGbFW2dKBjnNsIvKj1blK6wZDT/ghmtaHyuMqV1PnXT8KVF7b6kaxkVZAe j5TeThMDkKNc9On0xVgBI9szdNGDWaXdjw42JOwsLqLxWVtTASuxUylykRXdhFjRyHcu o1dBRLyhSMDQoe5qLOIJ3SIcK5uHapktegYmzr8IDcZi/i7WEpwGUDQo6esQNy0VOuKx pW6aZ4xxZGLntiwVhyUzH0Paj8g4Y2jx7VO4LpSMeo5kGrBSQV5yIY623ebS4s7tYvZY 400mlXSRFGsm9BnsnpUUuCLN6CWvLXZ+EVrwomFErI5xUrRMGutUJqOApE3OtRodq7tC Iozg== X-Gm-Message-State: AOAM532iysveP2Z1aIjCW3aaMbjm1y66I+n7klBT/reNLgd2K9ldBkSw yVK10h0wgOhmJucWVPum2+j+VTcMPejD7Q== X-Google-Smtp-Source: ABdhPJxNvh3L47dXCSDmjWdBfAeXn9K25xCcMf15E3mlqKqL//IQGK9rPM9B8l6Fj56ba+aTbKcWBQ== X-Received: by 2002:a17:90a:970b:: with SMTP id x11mr721055pjo.16.1610060365250; Thu, 07 
Jan 2021 14:59:25 -0800 (PST) Received: from localhost.localdomain ([192.19.223.252]) by smtp.gmail.com with ESMTPSA id l197sm6881405pfd.97.2021.01.07.14.59.24 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 07 Jan 2021 14:59:24 -0800 (PST) From: James Smart To: linux-scsi@vger.kernel.org Cc: James Smart , Ram Vegesna Subject: [PATCH v7 15/31] elx: libefc: Extended link Service IO handling Date: Thu, 7 Jan 2021 14:58:49 -0800 Message-Id: <20210107225905.18186-16-jsmart2021@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210107225905.18186-1-jsmart2021@gmail.com> References: <20210107225905.18186-1-jsmart2021@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org This patch continues the libefc library population. This patch adds library interface definitions for: Functions to build and send ELS/CT/BLS commands and responses. Co-developed-by: Ram Vegesna Signed-off-by: Ram Vegesna Signed-off-by: James Smart Reviewed-by: Daniel Wagner --- v7: Change els functions to return status from efc_els_send_rsp() --- drivers/scsi/elx/libefc/efc_els.c | 1099 +++++++++++++++++++++++++++++ drivers/scsi/elx/libefc/efc_els.h | 107 +++ 2 files changed, 1206 insertions(+) create mode 100644 drivers/scsi/elx/libefc/efc_els.c create mode 100644 drivers/scsi/elx/libefc/efc_els.h diff --git a/drivers/scsi/elx/libefc/efc_els.c b/drivers/scsi/elx/libefc/efc_els.c new file mode 100644 index 000000000000..c8a8124d2c94 --- /dev/null +++ b/drivers/scsi/elx/libefc/efc_els.c @@ -0,0 +1,1099 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2021 Broadcom. All Rights Reserved. The term + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. + */ + +/* + * Functions to build and send ELS/CT/BLS commands and responses. + */ + +#include "efc.h" +#include "efc_els.h" +#include "../libefc_sli/sli4.h" + +#define EFC_LOG_ENABLE_ELS_TRACE(efc) \ + (((efc) != NULL) ? (((efc)->logmask & (1U << 1)) != 0) : 0) + +#define node_els_trace() \ + do { \ + if (EFC_LOG_ENABLE_ELS_TRACE(efc)) \ + efc_log_info(efc, "[%s] %-20s\n", \ + node->display_name, __func__); \ + } while (0) + +#define els_io_printf(els, fmt, ...) 
\ + efc_log_err((struct efc *)els->node->efc,\ + "[%s] %-8s " fmt, \ + els->node->display_name,\ + els->display_name, ##__VA_ARGS__) + +#define EFC_ELS_RSP_LEN 1024 +#define EFC_ELS_GID_PT_RSP_LEN 8096 + +struct efc_els_io_req * +efc_els_io_alloc(struct efc_node *node, u32 reqlen) +{ + return efc_els_io_alloc_size(node, reqlen, EFC_ELS_RSP_LEN); +} + +struct efc_els_io_req * +efc_els_io_alloc_size(struct efc_node *node, u32 reqlen, u32 rsplen) +{ + struct efc *efc; + struct efc_els_io_req *els; + unsigned long flags = 0; + + efc = node->efc; + + spin_lock_irqsave(&node->els_ios_lock, flags); + + if (!READ_ONCE(node->els_io_enabled)) { + efc_log_err(efc, "els io alloc disabled\n"); + spin_unlock_irqrestore(&node->els_ios_lock, flags); + return NULL; + } + + els = mempool_alloc(efc->els_io_pool, GFP_ATOMIC); + if (!els) { + atomic_add_return(1, &efc->els_io_alloc_failed_count); + spin_unlock_irqrestore(&node->els_ios_lock, flags); + return NULL; + } + + /* initialize refcount */ + kref_init(&els->ref); + els->release = _efc_els_io_free; + + /* populate generic io fields */ + els->node = node; + + /* now allocate DMA for request and response */ + els->io.req.size = reqlen; + els->io.req.virt = dma_alloc_coherent(&efc->pci->dev, els->io.req.size, + &els->io.req.phys, GFP_DMA); + if (!els->io.req.virt) { + mempool_free(els, efc->els_io_pool); + spin_unlock_irqrestore(&node->els_ios_lock, flags); + return NULL; + } + + els->io.rsp.size = rsplen; + els->io.rsp.virt = dma_alloc_coherent(&efc->pci->dev, els->io.rsp.size, + &els->io.rsp.phys, GFP_DMA); + if (!els->io.rsp.virt) { + dma_free_coherent(&efc->pci->dev, els->io.req.size, + els->io.req.virt, els->io.req.phys); + mempool_free(els, efc->els_io_pool); + els = NULL; + } + + if (els) { + /* initialize fields */ + els->els_retries_remaining = EFC_FC_ELS_DEFAULT_RETRIES; + + /* add els structure to ELS IO list */ + INIT_LIST_HEAD(&els->list_entry); + list_add_tail(&els->list_entry, &node->els_ios_list); + } + + spin_unlock_irqrestore(&node->els_ios_lock, flags); + return els; +} + +void +efc_els_io_free(struct efc_els_io_req *els) +{ + kref_put(&els->ref, els->release); +} + +void +_efc_els_io_free(struct kref *arg) +{ + struct efc_els_io_req *els = + container_of(arg, struct efc_els_io_req, ref); + struct efc *efc; + struct efc_node *node; + int send_empty_event = false; + unsigned long flags = 0; + + node = els->node; + efc = node->efc; + + spin_lock_irqsave(&node->els_ios_lock, flags); + + list_del(&els->list_entry); + /* Send list empty event if the IO allocator + * is disabled, and the list is empty + * If node->els_io_enabled was not checked, + * the event would be posted continually + */ + send_empty_event = (!READ_ONCE(node->els_io_enabled)) && + list_empty(&node->els_ios_list); + + spin_unlock_irqrestore(&node->els_ios_lock, flags); + + /* free ELS request and response buffers */ + dma_free_coherent(&efc->pci->dev, els->io.rsp.size, + els->io.rsp.virt, els->io.rsp.phys); + dma_free_coherent(&efc->pci->dev, els->io.req.size, + els->io.req.virt, els->io.req.phys); + + mempool_free(els, efc->els_io_pool); + + if (send_empty_event) + efc_scsi_io_list_empty(node->efc, node); +} + +static void +efc_els_retry(struct efc_els_io_req *els); + +static void +efc_els_delay_timer_cb(struct timer_list *t) +{ + struct efc_els_io_req *els = from_timer(els, t, delay_timer); + + /* Retry delay timer expired, retry the ELS request */ + efc_els_retry(els); +} + +static int +efc_els_req_cb(void *arg, u32 length, int status, u32 ext_status) +{ + struct 
efc_els_io_req *els; + struct efc_node *node; + struct efc *efc; + struct efc_node_cb cbdata; + u32 reason_code; + + els = arg; + node = els->node; + efc = node->efc; + + if (status) + els_io_printf(els, "status x%x ext x%x\n", status, ext_status); + + /* set the response len element of els->rsp */ + els->io.rsp.len = length; + + cbdata.status = status; + cbdata.ext_status = ext_status; + cbdata.header = NULL; + cbdata.els_rsp = els->io.rsp; + + /* set the response len element of els->rsp */ + cbdata.rsp_len = length; + + /* FW returns the number of bytes received on the link in + * the WCQE, not the amount placed in the buffer; use this info to + * check if there was an overrun. + */ + if (length > els->io.rsp.size) { + efc_log_warn(efc, + "ELS response returned len=%d > buflen=%zu\n", + length, els->io.rsp.size); + efc_els_io_cleanup(els, EFC_EVT_SRRS_ELS_REQ_FAIL, &cbdata); + return EFC_SUCCESS; + } + + /* Post event to ELS IO object */ + switch (status) { + case SLI4_FC_WCQE_STATUS_SUCCESS: + efc_els_io_cleanup(els, EFC_EVT_SRRS_ELS_REQ_OK, &cbdata); + break; + + case SLI4_FC_WCQE_STATUS_LS_RJT: + reason_code = (ext_status >> 16) & 0xff; + + /* delay and retry if reason code is Logical Busy */ + switch (reason_code) { + case ELS_RJT_BUSY: + els->node->els_req_cnt--; + els_io_printf(els, + "LS_RJT Logical Busy, delay and retry\n"); + timer_setup(&els->delay_timer, + efc_els_delay_timer_cb, 0); + mod_timer(&els->delay_timer, + jiffies + msecs_to_jiffies(5000)); + break; + default: + efc_els_io_cleanup(els, EFC_EVT_SRRS_ELS_REQ_RJT, + &cbdata); + break; + } + break; + + case SLI4_FC_WCQE_STATUS_LOCAL_REJECT: + switch (ext_status) { + case SLI4_FC_LOCAL_REJECT_SEQUENCE_TIMEOUT: + efc_els_retry(els); + break; + default: + efc_log_err(efc, "LOCAL_REJECT with ext status:%x\n", + ext_status); + efc_els_io_cleanup(els, EFC_EVT_SRRS_ELS_REQ_FAIL, + &cbdata); + break; + } + break; + default: /* Other error */ + efc_log_warn(efc, "els req failed status x%x, ext_status x%x\n", + status, ext_status); + efc_els_io_cleanup(els, EFC_EVT_SRRS_ELS_REQ_FAIL, &cbdata); + break; + } + + return EFC_SUCCESS; +} + +void efc_disc_io_complete(struct efc_disc_io *io, u32 len, u32 status, + u32 ext_status) +{ + struct efc_els_io_req *els = + container_of(io, struct efc_els_io_req, io); + + WARN_ON_ONCE(!els->cb); + + ((efc_hw_srrs_cb_t) els->cb) (els, len, status, ext_status); +} + +static int efc_els_send_req(struct efc_node *node, struct efc_els_io_req *els, + enum efc_disc_io_type io_type) +{ + int rc = EFC_SUCCESS; + struct efc *efc = node->efc; + struct efc_node_cb cbdata; + + /* update ELS request counter */ + els->node->els_req_cnt++; + + /* Prepare the IO request details */ + els->io.io_type = io_type; + els->io.xmit_len = els->io.req.size; + els->io.rsp_len = els->io.rsp.size; + els->io.rpi = node->rnode.indicator; + els->io.vpi = node->nport->indicator; + els->io.s_id = node->nport->fc_id; + els->io.d_id = node->rnode.fc_id; + + if (node->rnode.attached) + els->io.rpi_registered = true; + + els->cb = efc_els_req_cb; + + rc = efc->tt.send_els(efc, &els->io); + if (!rc) + return rc; + + cbdata.status = EFC_STATUS_INVALID; + cbdata.ext_status = EFC_STATUS_INVALID; + cbdata.els_rsp = els->io.rsp; + efc_log_err(efc, "efc_els_send failed: %d\n", rc); + efc_els_io_cleanup(els, EFC_EVT_SRRS_ELS_REQ_FAIL, &cbdata); + + return rc; +} + +static void +efc_els_retry(struct efc_els_io_req *els) +{ + struct efc *efc; + struct efc_node_cb cbdata; + u32 rc; + + efc = els->node->efc; + cbdata.status = EFC_STATUS_INVALID; 
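+	/* this callback data is consumed only if the retry cannot be sent */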
+ cbdata.ext_status = EFC_STATUS_INVALID; + cbdata.els_rsp = els->io.rsp; + + + if (els->els_retries_remaining) { + els->els_retries_remaining--; + rc = efc->tt.send_els(efc, &els->io); + } else { + rc = EFC_FAIL; + } + + if (rc) { + efc_log_err(efc, "ELS retries exhausted\n"); + efc_els_io_cleanup(els, EFC_EVT_SRRS_ELS_REQ_FAIL, &cbdata); + } +} + +static int +efc_els_acc_cb(void *arg, u32 length, int status, u32 ext_status) +{ + struct efc_els_io_req *els; + struct efc_node *node; + struct efc *efc; + struct efc_node_cb cbdata; + + els = arg; + node = els->node; + efc = node->efc; + + cbdata.status = status; + cbdata.ext_status = ext_status; + cbdata.header = NULL; + cbdata.els_rsp = els->io.rsp; + + /* Post node event */ + switch (status) { + case SLI4_FC_WCQE_STATUS_SUCCESS: + efc_els_io_cleanup(els, EFC_EVT_SRRS_ELS_CMPL_OK, &cbdata); + break; + + default: /* Other error */ + efc_log_warn(efc, "[%s] %-8s failed status x%x, ext x%x\n", + node->display_name, els->display_name, + status, ext_status); + efc_els_io_cleanup(els, EFC_EVT_SRRS_ELS_CMPL_FAIL, &cbdata); + break; + } + + return EFC_SUCCESS; +} + +static int +efc_els_send_rsp(struct efc_els_io_req *els, u32 rsplen) +{ + int rc = EFC_SUCCESS; + struct efc_node_cb cbdata; + struct efc_node *node = els->node; + struct efc *efc = node->efc; + + /* increment ELS completion counter */ + node->els_cmpl_cnt++; + + els->io.io_type = EFC_DISC_IO_ELS_RESP; + els->cb = efc_els_acc_cb; + + /* Prepare the IO request details */ + els->io.xmit_len = rsplen; + els->io.rsp_len = els->io.rsp.size; + els->io.rpi = node->rnode.indicator; + els->io.vpi = node->nport->indicator; + if (node->nport->fc_id != U32_MAX) + els->io.s_id = node->nport->fc_id; + else + els->io.s_id = els->io.iparam.els.s_id; + els->io.d_id = node->rnode.fc_id; + + if (node->attached) + els->io.rpi_registered = true; + + rc = efc->tt.send_els(efc, &els->io); + if (!rc) + return rc; + + cbdata.status = EFC_STATUS_INVALID; + cbdata.ext_status = EFC_STATUS_INVALID; + cbdata.els_rsp = els->io.rsp; + efc_els_io_cleanup(els, EFC_EVT_SRRS_ELS_CMPL_FAIL, &cbdata); + + return rc; +} + +int +efc_send_plogi(struct efc_node *node) +{ + struct efc_els_io_req *els; + struct efc *efc = node->efc; + struct fc_els_flogi *plogi; + + node_els_trace(); + + els = efc_els_io_alloc(node, sizeof(*plogi)); + if (!els) { + efc_log_err(efc, "IO alloc failed\n"); + return EFC_FAIL; + } + els->display_name = "plogi"; + + /* Build PLOGI request */ + plogi = els->io.req.virt; + + memcpy(plogi, node->nport->service_params, sizeof(*plogi)); + + plogi->fl_cmd = ELS_PLOGI; + memset(plogi->_fl_resvd, 0, sizeof(plogi->_fl_resvd)); + + return efc_els_send_req(node, els, EFC_DISC_IO_ELS_REQ); +} + +int +efc_send_flogi(struct efc_node *node) +{ + struct efc_els_io_req *els; + struct efc *efc; + struct fc_els_flogi *flogi; + + efc = node->efc; + + node_els_trace(); + + els = efc_els_io_alloc(node, sizeof(*flogi)); + if (!els) { + efc_log_err(efc, "IO alloc failed\n"); + return EFC_FAIL; + } + + els->display_name = "flogi"; + + /* Build FLOGI request */ + flogi = els->io.req.virt; + + memcpy(flogi, node->nport->service_params, sizeof(*flogi)); + flogi->fl_cmd = ELS_FLOGI; + memset(flogi->_fl_resvd, 0, sizeof(flogi->_fl_resvd)); + + return efc_els_send_req(node, els, EFC_DISC_IO_ELS_REQ); +} + +int +efc_send_fdisc(struct efc_node *node) +{ + struct efc_els_io_req *els; + struct efc *efc; + struct fc_els_flogi *fdisc; + + efc = node->efc; + + node_els_trace(); + + els = efc_els_io_alloc(node, sizeof(*fdisc)); + if (!els) 
{ + efc_log_err(efc, "IO alloc failed\n"); + return EFC_FAIL; + } + + els->display_name = "fdisc"; + + /* Build FDISC request */ + fdisc = els->io.req.virt; + + memcpy(fdisc, node->nport->service_params, sizeof(*fdisc)); + fdisc->fl_cmd = ELS_FDISC; + memset(fdisc->_fl_resvd, 0, sizeof(fdisc->_fl_resvd)); + + return efc_els_send_req(node, els, EFC_DISC_IO_ELS_REQ); +} + +int +efc_send_prli(struct efc_node *node) +{ + struct efc *efc = node->efc; + struct efc_els_io_req *els; + struct { + struct fc_els_prli prli; + struct fc_els_spp spp; + } *pp; + + node_els_trace(); + + els = efc_els_io_alloc(node, sizeof(*pp)); + if (!els) { + efc_log_err(efc, "IO alloc failed\n"); + return EFC_FAIL; + } + + els->display_name = "prli"; + + /* Build PRLI request */ + pp = els->io.req.virt; + + memset(pp, 0, sizeof(*pp)); + + pp->prli.prli_cmd = ELS_PRLI; + pp->prli.prli_spp_len = 16; + pp->prli.prli_len = cpu_to_be16(sizeof(*pp)); + pp->spp.spp_type = FC_TYPE_FCP; + pp->spp.spp_type_ext = 0; + pp->spp.spp_flags = FC_SPP_EST_IMG_PAIR; + pp->spp.spp_params = cpu_to_be32(FCP_SPPF_RD_XRDY_DIS | + (node->nport->enable_ini ? + FCP_SPPF_INIT_FCN : 0) | + (node->nport->enable_tgt ? + FCP_SPPF_TARG_FCN : 0)); + + return efc_els_send_req(node, els, EFC_DISC_IO_ELS_REQ); +} + +int +efc_send_logo(struct efc_node *node) +{ + struct efc *efc = node->efc; + struct efc_els_io_req *els; + struct fc_els_logo *logo; + struct fc_els_flogi *sparams; + + node_els_trace(); + + sparams = (struct fc_els_flogi *)node->nport->service_params; + + els = efc_els_io_alloc(node, sizeof(*logo)); + if (!els) { + efc_log_err(efc, "IO alloc failed\n"); + return EFC_FAIL; + } + + els->display_name = "logo"; + + /* Build LOGO request */ + + logo = els->io.req.virt; + + memset(logo, 0, sizeof(*logo)); + logo->fl_cmd = ELS_LOGO; + hton24(logo->fl_n_port_id, node->rnode.nport->fc_id); + logo->fl_n_port_wwn = sparams->fl_wwpn; + + return efc_els_send_req(node, els, EFC_DISC_IO_ELS_REQ); +} + +int +efc_send_adisc(struct efc_node *node) +{ + struct efc *efc = node->efc; + struct efc_els_io_req *els; + struct fc_els_adisc *adisc; + struct fc_els_flogi *sparams; + struct efc_nport *nport = node->nport; + + node_els_trace(); + + sparams = (struct fc_els_flogi *)node->nport->service_params; + + els = efc_els_io_alloc(node, sizeof(*adisc)); + if (!els) { + efc_log_err(efc, "IO alloc failed\n"); + return EFC_FAIL; + } + + els->display_name = "adisc"; + + /* Build ADISC request */ + + adisc = els->io.req.virt; + + memset(adisc, 0, sizeof(*adisc)); + adisc->adisc_cmd = ELS_ADISC; + hton24(adisc->adisc_hard_addr, nport->fc_id); + adisc->adisc_wwpn = sparams->fl_wwpn; + adisc->adisc_wwnn = sparams->fl_wwnn; + hton24(adisc->adisc_port_id, node->rnode.nport->fc_id); + + return efc_els_send_req(node, els, EFC_DISC_IO_ELS_REQ); +} + +int +efc_send_scr(struct efc_node *node) +{ + struct efc_els_io_req *els; + struct efc *efc = node->efc; + struct fc_els_scr *req; + + node_els_trace(); + + els = efc_els_io_alloc(node, sizeof(*req)); + if (!els) { + efc_log_err(efc, "IO alloc failed\n"); + return EFC_FAIL; + } + + els->display_name = "scr"; + + req = els->io.req.virt; + + memset(req, 0, sizeof(*req)); + req->scr_cmd = ELS_SCR; + req->scr_reg_func = ELS_SCRF_FULL; + + return efc_els_send_req(node, els, EFC_DISC_IO_ELS_REQ); +} + +int +efc_send_ls_rjt(struct efc_node *node, u32 ox_id, u32 reason_code, + u32 reason_code_expl, u32 vendor_unique) +{ + struct efc *efc = node->efc; + struct efc_els_io_req *els = NULL; + struct fc_els_ls_rjt *rjt; + + els = 
efc_els_io_alloc(node, sizeof(*rjt)); + if (!els) { + efc_log_err(efc, "els IO alloc failed\n"); + return EFC_FAIL; + } + + node_els_trace(); + + els->display_name = "ls_rjt"; + + memset(&els->io.iparam, 0, sizeof(els->io.iparam)); + els->io.iparam.els.ox_id = ox_id; + + rjt = els->io.req.virt; + memset(rjt, 0, sizeof(*rjt)); + + rjt->er_cmd = ELS_LS_RJT; + rjt->er_reason = reason_code; + rjt->er_explan = reason_code_expl; + + return efc_els_send_rsp(els, sizeof(*rjt)); +} + +int +efc_send_plogi_acc(struct efc_node *node, u32 ox_id) +{ + struct efc *efc = node->efc; + struct efc_els_io_req *els = NULL; + struct fc_els_flogi *plogi; + struct fc_els_flogi *req = (struct fc_els_flogi *)node->service_params; + + node_els_trace(); + + els = efc_els_io_alloc(node, sizeof(*plogi)); + if (!els) { + efc_log_err(efc, "els IO alloc failed\n"); + return EFC_FAIL; + } + + els->display_name = "plogi_acc"; + + memset(&els->io.iparam, 0, sizeof(els->io.iparam)); + els->io.iparam.els.ox_id = ox_id; + + plogi = els->io.req.virt; + + /* copy our port's service parameters to payload */ + memcpy(plogi, node->nport->service_params, sizeof(*plogi)); + plogi->fl_cmd = ELS_LS_ACC; + memset(plogi->_fl_resvd, 0, sizeof(plogi->_fl_resvd)); + + /* Set Application header support bit if requested */ + if (req->fl_csp.sp_features & cpu_to_be16(FC_SP_FT_BCAST)) + plogi->fl_csp.sp_features |= cpu_to_be16(FC_SP_FT_BCAST); + + return efc_els_send_rsp(els, sizeof(*plogi)); +} + +int +efc_send_flogi_p2p_acc(struct efc_node *node, u32 ox_id, u32 s_id) +{ + struct efc *efc = node->efc; + struct efc_els_io_req *els = NULL; + struct fc_els_flogi *flogi; + + node_els_trace(); + + els = efc_els_io_alloc(node, sizeof(*flogi)); + if (!els) { + efc_log_err(efc, "els IO alloc failed\n"); + return EFC_FAIL; + } + + els->display_name = "flogi_p2p_acc"; + + memset(&els->io.iparam, 0, sizeof(els->io.iparam)); + els->io.iparam.els.ox_id = ox_id; + els->io.iparam.els.s_id = s_id; + + flogi = els->io.req.virt; + + /* copy our port's service parameters to payload */ + memcpy(flogi, node->nport->service_params, sizeof(*flogi)); + flogi->fl_cmd = ELS_LS_ACC; + memset(flogi->_fl_resvd, 0, sizeof(flogi->_fl_resvd)); + + memset(flogi->fl_cssp, 0, sizeof(flogi->fl_cssp)); + + return efc_els_send_rsp(els, sizeof(*flogi)); +} + +int +efc_send_prli_acc(struct efc_node *node, u32 ox_id) +{ + struct efc *efc = node->efc; + struct efc_els_io_req *els = NULL; + struct { + struct fc_els_prli prli; + struct fc_els_spp spp; + } *pp; + + node_els_trace(); + + els = efc_els_io_alloc(node, sizeof(*pp)); + if (!els) { + efc_log_err(efc, "els IO alloc failed\n"); + return EFC_FAIL; + } + + els->display_name = "prli_acc"; + + memset(&els->io.iparam, 0, sizeof(els->io.iparam)); + els->io.iparam.els.ox_id = ox_id; + + pp = els->io.req.virt; + memset(pp, 0, sizeof(*pp)); + + pp->prli.prli_cmd = ELS_LS_ACC; + pp->prli.prli_spp_len = 0x10; + pp->prli.prli_len = cpu_to_be16(sizeof(*pp)); + pp->spp.spp_type = FC_TYPE_FCP; + pp->spp.spp_type_ext = 0; + pp->spp.spp_flags = FC_SPP_EST_IMG_PAIR | FC_SPP_RESP_ACK; + + pp->spp.spp_params = cpu_to_be32(FCP_SPPF_RD_XRDY_DIS | + (node->nport->enable_ini ? + FCP_SPPF_INIT_FCN : 0) | + (node->nport->enable_tgt ? 
+ FCP_SPPF_TARG_FCN : 0)); + + return efc_els_send_rsp(els, sizeof(*pp)); +} + +int +efc_send_prlo_acc(struct efc_node *node, u32 ox_id) +{ + struct efc *efc = node->efc; + struct efc_els_io_req *els = NULL; + struct { + struct fc_els_prlo prlo; + struct fc_els_spp spp; + } *pp; + + node_els_trace(); + + els = efc_els_io_alloc(node, sizeof(*pp)); + if (!els) { + efc_log_err(efc, "els IO alloc failed\n"); + return EFC_FAIL; + } + + els->display_name = "prlo_acc"; + + memset(&els->io.iparam, 0, sizeof(els->io.iparam)); + els->io.iparam.els.ox_id = ox_id; + + pp = els->io.req.virt; + memset(pp, 0, sizeof(*pp)); + pp->prlo.prlo_cmd = ELS_LS_ACC; + pp->prlo.prlo_obs = 0x10; + pp->prlo.prlo_len = cpu_to_be16(sizeof(*pp)); + + pp->spp.spp_type = FC_TYPE_FCP; + pp->spp.spp_type_ext = 0; + pp->spp.spp_flags = FC_SPP_RESP_ACK; + + return efc_els_send_rsp(els, sizeof(*pp)); +} + +int +efc_send_ls_acc(struct efc_node *node, u32 ox_id) +{ + struct efc *efc = node->efc; + struct efc_els_io_req *els = NULL; + struct fc_els_ls_acc *acc; + + node_els_trace(); + + els = efc_els_io_alloc(node, sizeof(*acc)); + if (!els) { + efc_log_err(efc, "els IO alloc failed\n"); + return EFC_FAIL; + } + + els->display_name = "ls_acc"; + + memset(&els->io.iparam, 0, sizeof(els->io.iparam)); + els->io.iparam.els.ox_id = ox_id; + + acc = els->io.req.virt; + memset(acc, 0, sizeof(*acc)); + + acc->la_cmd = ELS_LS_ACC; + + return efc_els_send_rsp(els, sizeof(*acc)); +} + +int +efc_send_logo_acc(struct efc_node *node, u32 ox_id) +{ + struct efc_els_io_req *els = NULL; + struct efc *efc = node->efc; + struct fc_els_ls_acc *logo; + + node_els_trace(); + + els = efc_els_io_alloc(node, sizeof(*logo)); + if (!els) { + efc_log_err(efc, "els IO alloc failed\n"); + return EFC_FAIL; + } + + els->display_name = "logo_acc"; + + memset(&els->io.iparam, 0, sizeof(els->io.iparam)); + els->io.iparam.els.ox_id = ox_id; + + logo = els->io.req.virt; + memset(logo, 0, sizeof(*logo)); + + logo->la_cmd = ELS_LS_ACC; + + return efc_els_send_rsp(els, sizeof(*logo)); +} + +int +efc_send_adisc_acc(struct efc_node *node, u32 ox_id) +{ + struct efc *efc = node->efc; + struct efc_els_io_req *els = NULL; + struct fc_els_adisc *adisc; + struct fc_els_flogi *sparams; + + node_els_trace(); + + els = efc_els_io_alloc(node, sizeof(*adisc)); + if (!els) { + efc_log_err(efc, "els IO alloc failed\n"); + return EFC_FAIL; + } + + els->display_name = "adisc_acc"; + + /* Go ahead and send the ELS_ACC */ + memset(&els->io.iparam, 0, sizeof(els->io.iparam)); + els->io.iparam.els.ox_id = ox_id; + + sparams = (struct fc_els_flogi *)node->nport->service_params; + adisc = els->io.req.virt; + memset(adisc, 0, sizeof(*adisc)); + adisc->adisc_cmd = ELS_LS_ACC; + adisc->adisc_wwpn = sparams->fl_wwpn; + adisc->adisc_wwnn = sparams->fl_wwnn; + hton24(adisc->adisc_port_id, node->rnode.nport->fc_id); + + return efc_els_send_rsp(els, sizeof(*adisc)); +} + +static inline void +fcct_build_req_header(struct fc_ct_hdr *hdr, u16 cmd, u16 max_size) +{ + hdr->ct_rev = FC_CT_REV; + hdr->ct_fs_type = FC_FST_DIR; + hdr->ct_fs_subtype = FC_NS_SUBTYPE; + hdr->ct_options = 0; + hdr->ct_cmd = cpu_to_be16(cmd); + /* words */ + hdr->ct_mr_size = cpu_to_be16(max_size / (sizeof(u32))); + hdr->ct_reason = 0; + hdr->ct_explan = 0; + hdr->ct_vendor = 0; +} + +int +efc_ns_send_rftid(struct efc_node *node) +{ + struct efc *efc = node->efc; + struct efc_els_io_req *els; + struct { + struct fc_ct_hdr hdr; + struct fc_ns_rft_id rftid; + } *ct; + + node_els_trace(); + + els = efc_els_io_alloc(node, 
sizeof(*ct)); + if (!els) { + efc_log_err(efc, "IO alloc failed\n"); + return EFC_FAIL; + } + + els->io.iparam.ct.r_ctl = FC_RCTL_ELS_REQ; + els->io.iparam.ct.type = FC_TYPE_CT; + els->io.iparam.ct.df_ctl = 0; + els->io.iparam.ct.timeout = EFC_FC_ELS_SEND_DEFAULT_TIMEOUT; + + els->display_name = "rftid"; + + ct = els->io.req.virt; + memset(ct, 0, sizeof(*ct)); + fcct_build_req_header(&ct->hdr, FC_NS_RFT_ID, + sizeof(struct fc_ns_rft_id)); + + hton24(ct->rftid.fr_fid.fp_fid, node->rnode.nport->fc_id); + ct->rftid.fr_fts.ff_type_map[FC_TYPE_FCP / FC_NS_BPW] = + cpu_to_be32(1 << (FC_TYPE_FCP % FC_NS_BPW)); + + return efc_els_send_req(node, els, EFC_DISC_IO_CT_REQ); +} + +int +efc_ns_send_rffid(struct efc_node *node) +{ + struct efc *efc = node->efc; + struct efc_els_io_req *els; + struct { + struct fc_ct_hdr hdr; + struct fc_ns_rff_id rffid; + } *ct; + + node_els_trace(); + + els = efc_els_io_alloc(node, sizeof(*ct)); + if (!els) { + efc_log_err(efc, "IO alloc failed\n"); + return EFC_FAIL; + } + + els->io.iparam.ct.r_ctl = FC_RCTL_ELS_REQ; + els->io.iparam.ct.type = FC_TYPE_CT; + els->io.iparam.ct.df_ctl = 0; + els->io.iparam.ct.timeout = EFC_FC_ELS_SEND_DEFAULT_TIMEOUT; + + els->display_name = "rffid"; + ct = els->io.req.virt; + + memset(ct, 0, sizeof(*ct)); + fcct_build_req_header(&ct->hdr, FC_NS_RFF_ID, + sizeof(struct fc_ns_rff_id)); + + hton24(ct->rffid.fr_fid.fp_fid, node->rnode.nport->fc_id); + if (node->nport->enable_ini) + ct->rffid.fr_feat |= FCP_FEAT_INIT; + if (node->nport->enable_tgt) + ct->rffid.fr_feat |= FCP_FEAT_TARG; + ct->rffid.fr_type = FC_TYPE_FCP; + + return efc_els_send_req(node, els, EFC_DISC_IO_CT_REQ); +} + +int +efc_ns_send_gidpt(struct efc_node *node) +{ + struct efc_els_io_req *els = NULL; + struct efc *efc = node->efc; + struct { + struct fc_ct_hdr hdr; + struct fc_ns_gid_pt gidpt; + } *ct; + + node_els_trace(); + + els = efc_els_io_alloc_size(node, sizeof(*ct), EFC_ELS_GID_PT_RSP_LEN); + if (!els) { + efc_log_err(efc, "IO alloc failed\n"); + return EFC_FAIL; + } + + els->io.iparam.ct.r_ctl = FC_RCTL_ELS_REQ; + els->io.iparam.ct.type = FC_TYPE_CT; + els->io.iparam.ct.df_ctl = 0; + els->io.iparam.ct.timeout = EFC_FC_ELS_SEND_DEFAULT_TIMEOUT; + + els->display_name = "gidpt"; + + ct = els->io.req.virt; + + memset(ct, 0, sizeof(*ct)); + fcct_build_req_header(&ct->hdr, FC_NS_GID_PT, + sizeof(struct fc_ns_gid_pt)); + + ct->gidpt.fn_pt_type = FC_TYPE_FCP; + + return efc_els_send_req(node, els, EFC_DISC_IO_CT_REQ); +} + +void +efc_els_io_cleanup(struct efc_els_io_req *els, int evt, void *arg) +{ + /* don't want further events that could come; e.g. 
abort requests + * from the node state machine; thus, disable state machine + */ + els->els_req_free = true; + efc_node_post_els_resp(els->node, evt, arg); + + efc_els_io_free(els); +} + +static int +efc_ct_acc_cb(void *arg, u32 length, int status, u32 ext_status) +{ + struct efc_els_io_req *els = arg; + + efc_els_io_free(els); + + return EFC_SUCCESS; +} + +int +efc_send_ct_rsp(struct efc *efc, struct efc_node *node, u16 ox_id, + struct fc_ct_hdr *ct_hdr, u32 cmd_rsp_code, + u32 reason_code, u32 reason_code_explanation) +{ + struct efc_els_io_req *els = NULL; + struct fc_ct_hdr *rsp = NULL; + + els = efc_els_io_alloc(node, 256); + if (!els) { + efc_log_err(efc, "IO alloc failed\n"); + return EFC_FAIL; + } + + rsp = els->io.rsp.virt; + + *rsp = *ct_hdr; + + fcct_build_req_header(rsp, cmd_rsp_code, 0); + rsp->ct_reason = reason_code; + rsp->ct_explan = reason_code_explanation; + + els->display_name = "ct_rsp"; + els->cb = efc_ct_acc_cb; + + /* Prepare the IO request details */ + els->io.io_type = EFC_DISC_IO_CT_RESP; + els->io.xmit_len = sizeof(*rsp); + + els->io.rpi = node->rnode.indicator; + els->io.d_id = node->rnode.fc_id; + + memset(&els->io.iparam, 0, sizeof(els->io.iparam)); + + els->io.iparam.ct.ox_id = ox_id; + els->io.iparam.ct.r_ctl = 3; + els->io.iparam.ct.type = FC_TYPE_CT; + els->io.iparam.ct.df_ctl = 0; + els->io.iparam.ct.timeout = 5; + + if (efc->tt.send_els(efc, &els->io)) { + efc_els_io_free(els); + return EFC_FAIL; + } + return EFC_SUCCESS; +} + +int +efc_send_bls_acc(struct efc_node *node, struct fc_frame_header *hdr) +{ + struct sli_bls_params bls; + struct fc_ba_acc *acc; + struct efc *efc = node->efc; + + memset(&bls, 0, sizeof(bls)); + bls.ox_id = be16_to_cpu(hdr->fh_ox_id); + bls.rx_id = be16_to_cpu(hdr->fh_rx_id); + bls.s_id = ntoh24(hdr->fh_d_id); + bls.d_id = node->rnode.fc_id; + bls.rpi = node->rnode.indicator; + bls.vpi = node->nport->indicator; + + acc = (void *)bls.payload; + acc->ba_ox_id = cpu_to_be16(bls.ox_id); + acc->ba_rx_id = cpu_to_be16(bls.rx_id); + acc->ba_high_seq_cnt = cpu_to_be16(U16_MAX); + + return efc->tt.send_bls(efc, FC_RCTL_BA_ACC, &bls); +} diff --git a/drivers/scsi/elx/libefc/efc_els.h b/drivers/scsi/elx/libefc/efc_els.h new file mode 100644 index 000000000000..29926e690ec4 --- /dev/null +++ b/drivers/scsi/elx/libefc/efc_els.h @@ -0,0 +1,107 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2021 Broadcom. All Rights Reserved. The term + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. 
+ */
+
+#ifndef __EFC_ELS_H__
+#define __EFC_ELS_H__
+
+#define EFC_STATUS_INVALID	INT_MAX
+#define EFC_ELS_IO_POOL_SZ	1024
+
+struct efc_els_io_req {
+	struct list_head	list_entry;
+	struct kref		ref;
+	void			(*release)(struct kref *arg);
+	struct efc_node		*node;
+	void			*cb;
+	uint32_t		els_retries_remaining;
+	bool			els_req_free;
+	struct timer_list	delay_timer;
+
+	const char		*display_name;
+
+	struct efc_disc_io	io;
+};
+
+typedef int(*efc_hw_srrs_cb_t)(void *arg, u32 length, int status,
+			       u32 ext_status);
+
+void _efc_els_io_free(struct kref *arg);
+struct efc_els_io_req *
+efc_els_io_alloc(struct efc_node *node, u32 reqlen);
+struct efc_els_io_req *
+efc_els_io_alloc_size(struct efc_node *node, u32 reqlen, u32 rsplen);
+void efc_els_io_free(struct efc_els_io_req *els);
+
+/* ELS command send */
+typedef void (*els_cb_t)(struct efc_node *node,
+			 struct efc_node_cb *cbdata, void *arg);
+int
+efc_send_plogi(struct efc_node *node);
+int
+efc_send_flogi(struct efc_node *node);
+int
+efc_send_fdisc(struct efc_node *node);
+int
+efc_send_prli(struct efc_node *node);
+int
+efc_send_prlo(struct efc_node *node);
+int
+efc_send_logo(struct efc_node *node);
+int
+efc_send_adisc(struct efc_node *node);
+int
+efc_send_pdisc(struct efc_node *node);
+int
+efc_send_scr(struct efc_node *node);
+int
+efc_ns_send_rftid(struct efc_node *node);
+int
+efc_ns_send_rffid(struct efc_node *node);
+int
+efc_ns_send_gidpt(struct efc_node *node);
+void
+efc_els_io_cleanup(struct efc_els_io_req *els, int evt, void *arg);
+
+/* ELS acc send */
+int
+efc_send_ls_acc(struct efc_node *node, u32 ox_id);
+int
+efc_send_ls_rjt(struct efc_node *node, u32 ox_id, u32 reason_code,
+		u32 reason_code_expl, u32 vendor_unique);
+int
+efc_send_flogi_p2p_acc(struct efc_node *node, u32 ox_id, u32 s_id);
+int
+efc_send_flogi_acc(struct efc_node *node, u32 ox_id, u32 is_fport);
+int
+efc_send_plogi_acc(struct efc_node *node, u32 ox_id);
+int
+efc_send_prli_acc(struct efc_node *node, u32 ox_id);
+int
+efc_send_logo_acc(struct efc_node *node, u32 ox_id);
+int
+efc_send_prlo_acc(struct efc_node *node, u32 ox_id);
+int
+efc_send_adisc_acc(struct efc_node *node, u32 ox_id);
+
+int
+efc_bls_send_acc_hdr(struct efc *efc, struct efc_node *node,
+		     struct fc_frame_header *hdr);
+int
+efc_bls_send_rjt_hdr(struct efc_els_io_req *io, struct fc_frame_header *hdr);
+
+int
+efc_els_io_list_empty(struct efc_node *node, struct list_head *list);
+
+/* CT */
+int
+efc_send_ct_rsp(struct efc *efc, struct efc_node *node, u16 ox_id,
+		struct fc_ct_hdr *ct_hdr, u32 cmd_rsp_code, u32 reason_code,
+		u32 reason_code_explanation);
+
+int
+efc_send_bls_acc(struct efc_node *node, struct fc_frame_header *hdr);
+
+#endif /* __EFC_ELS_H__ */
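For illustration only (not part of the patch): a node handler might use the
response helpers above roughly as follows. The dispatch function is
hypothetical, and the reject codes are the standard ones from
include/uapi/scsi/fc/fc_els.h.

/* Hypothetical sketch: answer an inbound ELS with the helpers above */
static void example_handle_inbound_els(struct efc_node *node, u32 cmd,
				       u32 ox_id)
{
	switch (cmd) {
	case ELS_LOGO:
		/* LOGO is always acceptable; send a plain LS_ACC */
		efc_send_logo_acc(node, ox_id);
		break;
	case ELS_PRLI:
		/* the PRLI accept carries the FCP service parameter page */
		efc_send_prli_acc(node, ox_id);
		break;
	default:
		/* reject anything unsupported; no vendor-unique data */
		efc_send_ls_rjt(node, ox_id, ELS_RJT_UNSUP,
				ELS_EXPL_NONE, 0);
		break;
	}
}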
From patchwork Thu Jan 7 22:58:50 2021
From: James Smart
To: linux-scsi@vger.kernel.org
Cc: James Smart, Ram Vegesna, Daniel Wagner
Subject: [PATCH v7 16/31] elx: libefc: Register discovery objects with hardware
Date: Thu, 7 Jan 2021 14:58:50 -0800
Message-Id: <20210107225905.18186-17-jsmart2021@gmail.com>
In-Reply-To: <20210107225905.18186-1-jsmart2021@gmail.com>

This patch continues the libefc library population.

This patch adds library interface definitions for:
- Registrations for VFI, VPI and RPI.
Co-developed-by: Ram Vegesna Signed-off-by: Ram Vegesna Signed-off-by: James Smart Reviewed-by: Daniel Wagner --- drivers/scsi/elx/libefc/efc_cmds.c | 855 +++++++++++++++++++++++++++++ drivers/scsi/elx/libefc/efc_cmds.h | 35 ++ 2 files changed, 890 insertions(+) create mode 100644 drivers/scsi/elx/libefc/efc_cmds.c create mode 100644 drivers/scsi/elx/libefc/efc_cmds.h diff --git a/drivers/scsi/elx/libefc/efc_cmds.c b/drivers/scsi/elx/libefc/efc_cmds.c new file mode 100644 index 000000000000..d2850d231a7a --- /dev/null +++ b/drivers/scsi/elx/libefc/efc_cmds.c @@ -0,0 +1,855 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2021 Broadcom. All Rights Reserved. The term + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. + */ + +#include "efclib.h" +#include "../libefc_sli/sli4.h" +#include "efc_cmds.h" +#include "efc_sm.h" + +static void +efc_nport_free_resources(struct efc_nport *nport, int evt, void *data) +{ + struct efc *efc = nport->efc; + + /* Clear the nport attached flag */ + nport->attached = false; + + /* Free the service parameters buffer */ + if (nport->dma.virt) { + dma_free_coherent(&efc->pci->dev, nport->dma.size, + nport->dma.virt, nport->dma.phys); + memset(&nport->dma, 0, sizeof(struct efc_dma)); + } + + /* Free the SLI resources */ + sli_resource_free(efc->sli, SLI4_RSRC_VPI, nport->indicator); + + efc_nport_cb(efc, evt, nport); +} + +static int +efc_nport_get_mbox_status(struct efc_nport *nport, u8 *mqe, int status) +{ + struct efc *efc = nport->efc; + struct sli4_mbox_command_header *hdr = + (struct sli4_mbox_command_header *)mqe; + + if (status || le16_to_cpu(hdr->status)) { + efc_log_debug(efc, "bad status vpi=%#x st=%x hdr=%x\n", + nport->indicator, status, le16_to_cpu(hdr->status)); + return EFC_FAIL; + } + + return EFC_SUCCESS; +} + +static int +efc_nport_free_unreg_vpi_cb(struct efc *efc, int status, u8 *mqe, void *arg) +{ + struct efc_nport *nport = arg; + int evt = EFC_EVT_NPORT_FREE_OK; + int rc; + + rc = efc_nport_get_mbox_status(nport, mqe, status); + if (rc) + evt = EFC_EVT_NPORT_FREE_FAIL; + + efc_nport_free_resources(nport, evt, mqe); + return rc; +} + +static void +efc_nport_free_unreg_vpi(struct efc_nport *nport) +{ + struct efc *efc = nport->efc; + int rc; + u8 data[SLI4_BMBX_SIZE]; + + rc = sli_cmd_unreg_vpi(efc->sli, data, nport->indicator, + SLI4_UNREG_TYPE_PORT); + if (rc) { + efc_log_err(efc, "UNREG_VPI format failure\n"); + efc_nport_free_resources(nport, EFC_EVT_NPORT_FREE_FAIL, data); + return; + } + + rc = efc->tt.issue_mbox_rqst(efc->base, data, + efc_nport_free_unreg_vpi_cb, nport); + if (rc) { + efc_log_err(efc, "UNREG_VPI command failure\n"); + efc_nport_free_resources(nport, EFC_EVT_NPORT_FREE_FAIL, data); + } +} + +static void +efc_nport_send_evt(struct efc_nport *nport, int evt, void *data) +{ + struct efc *efc = nport->efc; + + /* Now inform the registered callbacks */ + efc_nport_cb(efc, evt, nport); + + /* Set the nport attached flag */ + if (evt == EFC_EVT_NPORT_ATTACH_OK) + nport->attached = true; + + /* If there is a pending free request, then handle it now */ + if (nport->free_req_pending) + efc_nport_free_unreg_vpi(nport); +} + +static int +efc_nport_alloc_init_vpi_cb(struct efc *efc, int status, u8 *mqe, void *arg) +{ + struct efc_nport *nport = arg; + + if (efc_nport_get_mbox_status(nport, mqe, status)) { + efc_nport_free_resources(nport, EFC_EVT_NPORT_ALLOC_FAIL, mqe); + return EFC_FAIL; + } + + efc_nport_send_evt(nport, EFC_EVT_NPORT_ALLOC_OK, mqe); + return EFC_SUCCESS; +} + +static void 
+efc_nport_alloc_init_vpi(struct efc_nport *nport) +{ + struct efc *efc = nport->efc; + u8 data[SLI4_BMBX_SIZE]; + int rc; + + /* If there is a pending free request, then handle it now */ + if (nport->free_req_pending) { + efc_nport_free_resources(nport, EFC_EVT_NPORT_FREE_OK, data); + return; + } + + rc = sli_cmd_init_vpi(efc->sli, data, + nport->indicator, nport->domain->indicator); + if (rc) { + efc_log_err(efc, "INIT_VPI format failure\n"); + efc_nport_free_resources(nport, EFC_EVT_NPORT_ALLOC_FAIL, data); + return; + } + + rc = efc->tt.issue_mbox_rqst(efc->base, data, + efc_nport_alloc_init_vpi_cb, nport); + if (rc) { + efc_log_err(efc, "INIT_VPI command failure\n"); + efc_nport_free_resources(nport, EFC_EVT_NPORT_ALLOC_FAIL, data); + } +} + +static int +efc_nport_alloc_read_sparm64_cb(struct efc *efc, int status, u8 *mqe, void *arg) +{ + struct efc_nport *nport = arg; + u8 *payload = NULL; + + if (efc_nport_get_mbox_status(nport, mqe, status)) { + efc_nport_free_resources(nport, EFC_EVT_NPORT_ALLOC_FAIL, mqe); + return EFC_FAIL; + } + + payload = nport->dma.virt; + + memcpy(&nport->sli_wwpn, payload + SLI4_READ_SPARM64_WWPN_OFFSET, + sizeof(nport->sli_wwpn)); + memcpy(&nport->sli_wwnn, payload + SLI4_READ_SPARM64_WWNN_OFFSET, + sizeof(nport->sli_wwnn)); + + dma_free_coherent(&efc->pci->dev, nport->dma.size, nport->dma.virt, + nport->dma.phys); + memset(&nport->dma, 0, sizeof(struct efc_dma)); + efc_nport_alloc_init_vpi(nport); + return EFC_SUCCESS; +} + +static void +efc_nport_alloc_read_sparm64(struct efc *efc, struct efc_nport *nport) +{ + u8 data[SLI4_BMBX_SIZE]; + int rc; + + /* Allocate memory for the service parameters */ + nport->dma.size = EFC_SPARAM_DMA_SZ; + nport->dma.virt = dma_alloc_coherent(&efc->pci->dev, + nport->dma.size, &nport->dma.phys, + GFP_DMA); + if (!nport->dma.virt) { + efc_log_err(efc, "Failed to allocate DMA memory\n"); + efc_nport_free_resources(nport, EFC_EVT_NPORT_ALLOC_FAIL, data); + return; + } + + rc = sli_cmd_read_sparm64(efc->sli, data, + &nport->dma, nport->indicator); + if (rc) { + efc_log_err(efc, "READ_SPARM64 format failure\n"); + efc_nport_free_resources(nport, EFC_EVT_NPORT_ALLOC_FAIL, data); + return; + } + + rc = efc->tt.issue_mbox_rqst(efc->base, data, + efc_nport_alloc_read_sparm64_cb, nport); + if (rc) { + efc_log_err(efc, "READ_SPARM64 command failure\n"); + efc_nport_free_resources(nport, EFC_EVT_NPORT_ALLOC_FAIL, data); + } +} + +int +efc_cmd_nport_alloc(struct efc *efc, struct efc_nport *nport, + struct efc_domain *domain, u8 *wwpn) +{ + u32 index; + + nport->indicator = U32_MAX; + nport->free_req_pending = false; + + /* + * Check if the chip is in an error state (UE'd) before proceeding. + */ + if (sli_fw_error_status(efc->sli) > 0) { + efc_log_crit(efc, "Chip is in an error state - reset needed\n"); + return EFC_FAIL; + } + + if (wwpn) + memcpy(&nport->sli_wwpn, wwpn, sizeof(nport->sli_wwpn)); + + /* + * allocate a VPI object for the port and stores it in the + * indicator field of the port object. 
+	 */
+	if (sli_resource_alloc(efc->sli, SLI4_RSRC_VPI,
+			       &nport->indicator, &index)) {
+		efc_log_err(efc, "VPI allocation failure\n");
+		return EFC_FAIL;
+	}
+
+	if (domain) {
+		/*
+		 * If the WWPN is NULL, fetch the default
+		 * WWPN and WWNN before initializing the VPI
+		 */
+		if (!wwpn)
+			efc_nport_alloc_read_sparm64(efc, nport);
+		else
+			efc_nport_alloc_init_vpi(nport);
+	} else if (!wwpn) {
+		/* domain NULL and wwpn NULL */
+		efc_log_err(efc, "need WWN for physical port\n");
+		sli_resource_free(efc->sli, SLI4_RSRC_VPI, nport->indicator);
+		return EFC_FAIL;
+	}
+
+	return EFC_SUCCESS;
+}
+
+static int
+efc_nport_attach_reg_vpi_cb(struct efc *efc, int status, u8 *mqe,
+			    void *arg)
+{
+	struct efc_nport *nport = arg;
+
+	if (efc_nport_get_mbox_status(nport, mqe, status)) {
+		efc_nport_free_resources(nport, EFC_EVT_NPORT_ATTACH_FAIL, mqe);
+		return EFC_FAIL;
+	}
+
+	efc_nport_send_evt(nport, EFC_EVT_NPORT_ATTACH_OK, mqe);
+	return EFC_SUCCESS;
+}
+
+int
+efc_cmd_nport_attach(struct efc *efc, struct efc_nport *nport, u32 fc_id)
+{
+	u8 buf[SLI4_BMBX_SIZE];
+	int rc = EFC_SUCCESS;
+
+	if (!nport) {
+		efc_log_err(efc, "bad param(s) nport=%p\n", nport);
+		return EFC_FAIL;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(efc->sli) > 0) {
+		efc_log_crit(efc,
+			     "Chip is in an error state - reset needed\n");
+		return EFC_FAIL;
+	}
+
+	nport->fc_id = fc_id;
+
+	/* register previously-allocated VPI with the device */
+	rc = sli_cmd_reg_vpi(efc->sli, buf, nport->fc_id,
+			     nport->sli_wwpn, nport->indicator,
+			     nport->domain->indicator, false);
+	if (rc) {
+		efc_log_err(efc, "REG_VPI format failure\n");
+		efc_nport_free_resources(nport, EFC_EVT_NPORT_ATTACH_FAIL, buf);
+		return rc;
+	}
+
+	rc = efc->tt.issue_mbox_rqst(efc->base, buf,
+				     efc_nport_attach_reg_vpi_cb, nport);
+	if (rc) {
+		efc_log_err(efc, "REG_VPI command failure\n");
+		efc_nport_free_resources(nport, EFC_EVT_NPORT_ATTACH_FAIL, buf);
+	}
+
+	return rc;
+}
+
+int
+efc_cmd_nport_free(struct efc *efc, struct efc_nport *nport)
+{
+	if (!nport) {
+		efc_log_err(efc, "bad parameter(s) nport=%p\n", nport);
+		return EFC_FAIL;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(efc->sli) > 0) {
+		efc_log_crit(efc,
+			     "Chip is in an error state - reset needed\n");
+		return EFC_FAIL;
+	}
+
+	/* Issue the UNREG_VPI command to free the assigned VPI context */
+	if (nport->attached)
+		efc_nport_free_unreg_vpi(nport);
+	else
+		nport->free_req_pending = true;
+
+	return EFC_SUCCESS;
+}
+
+static int
+efc_domain_get_mbox_status(struct efc_domain *domain, u8 *mqe, int status)
+{
+	struct efc *efc = domain->efc;
+	struct sli4_mbox_command_header *hdr =
+		(struct sli4_mbox_command_header *)mqe;
+
+	if (status || le16_to_cpu(hdr->status)) {
+		efc_log_debug(efc, "bad status vfi=%#x st=%x hdr=%x\n",
+			      domain->indicator, status,
+			      le16_to_cpu(hdr->status));
+		return EFC_FAIL;
+	}
+
+	return EFC_SUCCESS;
+}
+
+static void
+efc_domain_free_resources(struct efc_domain *domain, int evt, void *data)
+{
+	struct efc *efc = domain->efc;
+
+	/* Free the service parameters buffer */
+	if (domain->dma.virt) {
+		dma_free_coherent(&efc->pci->dev,
+				  domain->dma.size, domain->dma.virt,
+				  domain->dma.phys);
+		memset(&domain->dma, 0, sizeof(struct efc_dma));
+	}
+
+	/* Free the SLI resources */
+	sli_resource_free(efc->sli, SLI4_RSRC_VFI, domain->indicator);
+
+	efc_domain_cb(efc, evt, domain);
+}
+
+static void
+efc_domain_send_nport_evt(struct efc_domain *domain,
+			  int port_evt, int domain_evt, void *data)
+{
+	struct efc *efc = domain->efc;
+
+	/* Send alloc/attach ok to the physical nport */
+	efc_nport_send_evt(domain->nport, port_evt, NULL);
+
+	/* Now inform the registered callbacks */
+	efc_domain_cb(efc, domain_evt, domain);
+}
+
+static int
+efc_domain_alloc_read_sparm64_cb(struct efc *efc, int status, u8 *mqe,
+				 void *arg)
+{
+	struct efc_domain *domain = arg;
+
+	if (efc_domain_get_mbox_status(domain, mqe, status)) {
+		efc_domain_free_resources(domain,
+					  EFC_HW_DOMAIN_ALLOC_FAIL, mqe);
+		return EFC_FAIL;
+	}
+
+	efc_domain_send_nport_evt(domain, EFC_EVT_NPORT_ALLOC_OK,
+				  EFC_HW_DOMAIN_ALLOC_OK, mqe);
+	return EFC_SUCCESS;
+}
+
+static void
+efc_domain_alloc_read_sparm64(struct efc_domain *domain)
+{
+	struct efc *efc = domain->efc;
+	u8 data[SLI4_BMBX_SIZE];
+	int rc;
+
+	rc = sli_cmd_read_sparm64(efc->sli, data, &domain->dma, 0);
+	if (rc) {
+		efc_log_err(efc, "READ_SPARM64 format failure\n");
+		efc_domain_free_resources(domain,
+					  EFC_HW_DOMAIN_ALLOC_FAIL, data);
+		return;
+	}
+
+	rc = efc->tt.issue_mbox_rqst(efc->base, data,
+				     efc_domain_alloc_read_sparm64_cb, domain);
+	if (rc) {
+		efc_log_err(efc, "READ_SPARM64 command failure\n");
+		efc_domain_free_resources(domain,
+					  EFC_HW_DOMAIN_ALLOC_FAIL, data);
+	}
+}
+
+static int
+efc_domain_alloc_init_vfi_cb(struct efc *efc, int status, u8 *mqe,
+			     void *arg)
+{
+	struct efc_domain *domain = arg;
+
+	if (efc_domain_get_mbox_status(domain, mqe, status)) {
+		efc_domain_free_resources(domain,
+					  EFC_HW_DOMAIN_ALLOC_FAIL, mqe);
+		return EFC_FAIL;
+	}
+
+	efc_domain_alloc_read_sparm64(domain);
+	return EFC_SUCCESS;
+}
+
+static void
+efc_domain_alloc_init_vfi(struct efc_domain *domain)
+{
+	struct efc *efc = domain->efc;
+	struct efc_nport *nport = domain->nport;
+	u8 data[SLI4_BMBX_SIZE];
+	int rc;
+
+	/*
+	 * For FC, the HW already registered an FCFI.
+	 * Copy FCF information into the domain and jump to INIT_VFI.
+ */ + domain->fcf_indicator = efc->fcfi; + rc = sli_cmd_init_vfi(efc->sli, data, domain->indicator, + domain->fcf_indicator, nport->indicator); + if (rc) { + efc_log_err(efc, "INIT_VFI format failure\n"); + efc_domain_free_resources(domain, + EFC_HW_DOMAIN_ALLOC_FAIL, data); + return; + } + + efc_log_err(efc, "%s issue mbox\n", __func__); + rc = efc->tt.issue_mbox_rqst(efc->base, data, + efc_domain_alloc_init_vfi_cb, domain); + if (rc) { + efc_log_err(efc, "INIT_VFI command failure\n"); + efc_domain_free_resources(domain, + EFC_HW_DOMAIN_ALLOC_FAIL, data); + } +} + +int +efc_cmd_domain_alloc(struct efc *efc, struct efc_domain *domain, u32 fcf) +{ + u32 index; + + if (!domain || !domain->nport) { + efc_log_err(efc, "bad parameter(s) domain=%p nport=%p\n", + domain, domain ? domain->nport : NULL); + return EFC_FAIL; + } + + /* + * Check if the chip is in an error state (UE'd) before proceeding. + */ + if (sli_fw_error_status(efc->sli) > 0) { + efc_log_crit(efc, + "Chip is in an error state - reset needed\n"); + return EFC_FAIL; + } + + /* allocate memory for the service parameters */ + domain->dma.size = EFC_SPARAM_DMA_SZ; + domain->dma.virt = dma_alloc_coherent(&efc->pci->dev, + domain->dma.size, + &domain->dma.phys, GFP_DMA); + if (!domain->dma.virt) { + efc_log_err(efc, "Failed to allocate DMA memory\n"); + return EFC_FAIL; + } + + domain->fcf = fcf; + domain->fcf_indicator = U32_MAX; + domain->indicator = U32_MAX; + + if (sli_resource_alloc(efc->sli, SLI4_RSRC_VFI, &domain->indicator, + &index)) { + efc_log_err(efc, "VFI allocation failure\n"); + + dma_free_coherent(&efc->pci->dev, + domain->dma.size, domain->dma.virt, + domain->dma.phys); + memset(&domain->dma, 0, sizeof(struct efc_dma)); + + return EFC_FAIL; + } + + efc_domain_alloc_init_vfi(domain); + return EFC_SUCCESS; +} + +static int +efc_domain_attach_reg_vfi_cb(struct efc *efc, int status, u8 *mqe, + void *arg) +{ + struct efc_domain *domain = arg; + + if (efc_domain_get_mbox_status(domain, mqe, status)) { + efc_domain_free_resources(domain, + EFC_HW_DOMAIN_ATTACH_FAIL, mqe); + return EFC_FAIL; + } + + efc_domain_send_nport_evt(domain, EFC_EVT_NPORT_ATTACH_OK, + EFC_HW_DOMAIN_ATTACH_OK, mqe); + return EFC_SUCCESS; +} + +int +efc_cmd_domain_attach(struct efc *efc, struct efc_domain *domain, u32 fc_id) +{ + u8 buf[SLI4_BMBX_SIZE]; + int rc = EFC_SUCCESS; + + if (!domain) { + efc_log_err(efc, "bad param(s) domain=%p\n", domain); + return EFC_FAIL; + } + + /* + * Check if the chip is in an error state (UE'd) before proceeding. 
+ */ + if (sli_fw_error_status(efc->sli) > 0) { + efc_log_crit(efc, + "Chip is in an error state - reset needed\n"); + return EFC_FAIL; + } + + domain->nport->fc_id = fc_id; + + rc = sli_cmd_reg_vfi(efc->sli, buf, SLI4_BMBX_SIZE, domain->indicator, + domain->fcf_indicator, domain->dma, + domain->nport->indicator, domain->nport->sli_wwpn, + domain->nport->fc_id); + if (rc) { + efc_log_err(efc, "REG_VFI format failure\n"); + goto cleanup; + } + + rc = efc->tt.issue_mbox_rqst(efc->base, buf, + efc_domain_attach_reg_vfi_cb, domain); + if (rc) { + efc_log_err(efc, "REG_VFI command failure\n"); + goto cleanup; + } + + return rc; + +cleanup: + efc_domain_free_resources(domain, EFC_HW_DOMAIN_ATTACH_FAIL, buf); + + return rc; +} + +static int +efc_domain_free_unreg_vfi_cb(struct efc *efc, int status, u8 *mqe, void *arg) +{ + struct efc_domain *domain = arg; + int evt = EFC_HW_DOMAIN_FREE_OK; + int rc; + + rc = efc_domain_get_mbox_status(domain, mqe, status); + if (rc) { + evt = EFC_HW_DOMAIN_FREE_FAIL; + rc = EFC_FAIL; + } + + efc_domain_free_resources(domain, evt, mqe); + return rc; +} + +static void +efc_domain_free_unreg_vfi(struct efc_domain *domain) +{ + struct efc *efc = domain->efc; + int rc; + u8 data[SLI4_BMBX_SIZE]; + + rc = sli_cmd_unreg_vfi(efc->sli, data, domain->indicator, + SLI4_UNREG_TYPE_DOMAIN); + if (rc) { + efc_log_err(efc, "UNREG_VFI format failure\n"); + goto cleanup; + } + + rc = efc->tt.issue_mbox_rqst(efc->base, data, + efc_domain_free_unreg_vfi_cb, domain); + if (rc) { + efc_log_err(efc, "UNREG_VFI command failure\n"); + goto cleanup; + } + + return; + +cleanup: + efc_domain_free_resources(domain, EFC_HW_DOMAIN_FREE_FAIL, data); +} + +int +efc_cmd_domain_free(struct efc *efc, struct efc_domain *domain) +{ + if (!domain) { + efc_log_err(efc, "bad parameter(s) domain=%p\n", domain); + return EFC_FAIL; + } + + /* + * Check if the chip is in an error state (UE'd) before proceeding. + */ + if (sli_fw_error_status(efc->sli) > 0) { + efc_log_crit(efc, + "Chip is in an error state - reset needed\n"); + return EFC_FAIL; + } + + efc_domain_free_unreg_vfi(domain); + return EFC_SUCCESS; +} + +int +efc_cmd_node_alloc(struct efc *efc, struct efc_remote_node *rnode, u32 fc_addr, + struct efc_nport *nport) +{ + /* Check for invalid indicator */ + if (rnode->indicator != U32_MAX) { + efc_log_err(efc, + "RPI allocation failure addr=%#x rpi=%#x\n", + fc_addr, rnode->indicator); + return EFC_FAIL; + } + + /* + * Check if the chip is in an error state (UE'd) before proceeding. 
+ */ + if (sli_fw_error_status(efc->sli) > 0) { + efc_log_crit(efc, + "Chip is in an error state - reset needed\n"); + return EFC_FAIL; + } + + /* NULL SLI port indicates an unallocated remote node */ + rnode->nport = NULL; + + if (sli_resource_alloc(efc->sli, SLI4_RSRC_RPI, + &rnode->indicator, &rnode->index)) { + efc_log_err(efc, "RPI allocation failure addr=%#x\n", + fc_addr); + return EFC_FAIL; + } + + rnode->fc_id = fc_addr; + rnode->nport = nport; + + return EFC_SUCCESS; +} + +static int +efc_cmd_node_attach_cb(struct efc *efc, int status, u8 *mqe, void *arg) +{ + struct efc_remote_node *rnode = arg; + struct sli4_mbox_command_header *hdr = + (struct sli4_mbox_command_header *)mqe; + int evt = 0; + + if (status || le16_to_cpu(hdr->status)) { + efc_log_debug(efc, "bad status cqe=%#x mqe=%#x\n", status, + le16_to_cpu(hdr->status)); + rnode->attached = false; + evt = EFC_EVT_NODE_ATTACH_FAIL; + } else { + rnode->attached = true; + evt = EFC_EVT_NODE_ATTACH_OK; + } + + efc_remote_node_cb(efc, evt, rnode); + + return EFC_SUCCESS; +} + +int +efc_cmd_node_attach(struct efc *efc, struct efc_remote_node *rnode, + struct efc_dma *sparms) +{ + int rc = EFC_FAIL; + u8 buf[SLI4_BMBX_SIZE]; + + if (!rnode || !sparms) { + efc_log_err(efc, "bad parameter(s) rnode=%p sparms=%p\n", + rnode, sparms); + return EFC_FAIL; + } + + /* + * Check if the chip is in an error state (UE'd) before proceeding. + */ + if (sli_fw_error_status(efc->sli) > 0) { + efc_log_crit(efc, "Chip is in an error state - reset needed\n"); + return EFC_FAIL; + } + + /* + * If the attach count is non-zero, this RPI has already been reg'd. + * Otherwise, register the RPI + */ + if (rnode->index == U32_MAX) { + efc_log_err(efc, "bad parameter rnode->index invalid\n"); + return EFC_FAIL; + } + + /* Update a remote node object with the remote port's service params */ + if (!sli_cmd_reg_rpi(efc->sli, buf, rnode->indicator, + rnode->nport->indicator, rnode->fc_id, sparms, 0, 0)) + rc = efc->tt.issue_mbox_rqst(efc->base, buf, + efc_cmd_node_attach_cb, rnode); + + return rc; +} + +int +efc_node_free_resources(struct efc *efc, struct efc_remote_node *rnode) +{ + int rc = EFC_SUCCESS; + + if (!rnode) { + efc_log_err(efc, "bad parameter rnode=%p\n", rnode); + return EFC_FAIL; + } + + if (rnode->nport) { + if (rnode->attached) { + efc_log_err(efc, "rnode is still attached\n"); + return EFC_FAIL; + } + if (rnode->indicator != U32_MAX) { + if (sli_resource_free(efc->sli, SLI4_RSRC_RPI, + rnode->indicator)) { + efc_log_err(efc, + "RPI free fail RPI %d addr=%#x\n", + rnode->indicator, rnode->fc_id); + rc = EFC_FAIL; + } else { + rnode->indicator = U32_MAX; + rnode->index = U32_MAX; + } + } + } + + return rc; +} + +static int +efc_cmd_node_free_cb(struct efc *efc, int status, u8 *mqe, void *arg) +{ + struct efc_remote_node *rnode = arg; + struct sli4_mbox_command_header *hdr = + (struct sli4_mbox_command_header *)mqe; + int evt = EFC_EVT_NODE_FREE_FAIL; + int rc = EFC_SUCCESS; + + if (status || le16_to_cpu(hdr->status)) { + efc_log_debug(efc, "bad status cqe=%#x mqe=%#x\n", status, + le16_to_cpu(hdr->status)); + + /* + * In certain cases, a non-zero MQE status is OK (all must be + * true): + * - node is attached + * - status is 0x1400 + */ + if (!rnode->attached || + (le16_to_cpu(hdr->status) != SLI4_MBX_STATUS_RPI_NOT_REG)) + rc = EFC_FAIL; + } + + if (!rc) { + rnode->attached = false; + evt = EFC_EVT_NODE_FREE_OK; + } + + efc_remote_node_cb(efc, evt, rnode); + + return rc; +} + +int +efc_cmd_node_detach(struct efc *efc, struct efc_remote_node 
*rnode)
+{
+	u8 buf[SLI4_BMBX_SIZE];
+	int rc = EFC_FAIL;
+
+	if (!rnode) {
+		efc_log_err(efc, "bad parameter rnode=%p\n", rnode);
+		return EFC_FAIL;
+	}
+
+	/*
+	 * Check if the chip is in an error state (UE'd) before proceeding.
+	 */
+	if (sli_fw_error_status(efc->sli) > 0) {
+		efc_log_crit(efc, "Chip is in an error state - reset needed\n");
+		return EFC_FAIL;
+	}
+
+	if (rnode->nport) {
+		if (!rnode->attached)
+			return EFC_FAIL;
+
+		rc = EFC_FAIL;
+
+		if (!sli_cmd_unreg_rpi(efc->sli, buf, rnode->indicator,
+				       SLI4_RSRC_RPI, U32_MAX))
+			rc = efc->tt.issue_mbox_rqst(efc->base, buf,
+						     efc_cmd_node_free_cb,
+						     rnode);
+
+		if (rc != EFC_SUCCESS) {
+			efc_log_err(efc, "UNREG_RPI failed\n");
+			rc = EFC_FAIL;
+		}
+	}
+
+	return rc;
+}
diff --git a/drivers/scsi/elx/libefc/efc_cmds.h b/drivers/scsi/elx/libefc/efc_cmds.h
new file mode 100644
index 000000000000..9c0287799e9e
--- /dev/null
+++ b/drivers/scsi/elx/libefc/efc_cmds.h
@@ -0,0 +1,35 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+ * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+ */
+
+#ifndef __EFC_CMDS_H__
+#define __EFC_CMDS_H__
+
+#define EFC_SPARAM_DMA_SZ	112
+int
+efc_cmd_nport_alloc(struct efc *efc, struct efc_nport *nport,
+		    struct efc_domain *domain, u8 *wwpn);
+int
+efc_cmd_nport_attach(struct efc *efc, struct efc_nport *nport, u32 fc_id);
+int
+efc_cmd_nport_free(struct efc *efc, struct efc_nport *nport);
+int
+efc_cmd_domain_alloc(struct efc *efc, struct efc_domain *domain, u32 fcf);
+int
+efc_cmd_domain_attach(struct efc *efc, struct efc_domain *domain, u32 fc_id);
+int
+efc_cmd_domain_free(struct efc *efc, struct efc_domain *domain);
+int
+efc_cmd_node_detach(struct efc *efc, struct efc_remote_node *rnode);
+int
+efc_node_free_resources(struct efc *efc, struct efc_remote_node *rnode);
+int
+efc_cmd_node_attach(struct efc *efc, struct efc_remote_node *rnode,
+		    struct efc_dma *sparms);
+int
+efc_cmd_node_alloc(struct efc *efc, struct efc_remote_node *rnode, u32 fc_addr,
+		   struct efc_nport *nport);
+
+#endif /* __EFC_CMDS_H__ */
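For illustration only (not part of the patch): the expected call order for
the nport commands above is alloc, then attach once the fabric has assigned
an FC_ID, then free on teardown. The sequence below is a hypothetical sketch;
in the driver each step is driven from the nport state machine, because the
completions (EFC_EVT_NPORT_ALLOC_OK, EFC_EVT_NPORT_ATTACH_OK, ...) arrive
asynchronously through efc_nport_cb().

/* Hypothetical sketch of the nport VPI lifecycle */
static int example_nport_lifecycle(struct efc *efc, struct efc_nport *nport,
				   struct efc_domain *domain, u32 fc_id)
{
	/* reserve a VPI; a NULL wwpn requests the default WWPN/WWNN */
	if (efc_cmd_nport_alloc(efc, nport, domain, NULL))
		return EFC_FAIL;

	/* ...wait for EFC_EVT_NPORT_ALLOC_OK, then register the VPI... */
	if (efc_cmd_nport_attach(efc, nport, fc_id))
		return EFC_FAIL;

	/* ...wait for EFC_EVT_NPORT_ATTACH_OK; on teardown, unregister */
	return efc_cmd_nport_free(efc, nport);
}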
From patchwork Thu Jan 7 22:58:53 2021
From: James Smart
To: linux-scsi@vger.kernel.org
Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner
Subject: [PATCH v7 19/31] elx: efct: Hardware queues creation and deletion
Date: Thu, 7 Jan 2021 14:58:53 -0800
Message-Id: <20210107225905.18186-20-jsmart2021@gmail.com>
In-Reply-To: <20210107225905.18186-1-jsmart2021@gmail.com>

This patch continues the efct driver population.

This patch adds driver definitions for:
Routines for queue creation, deletion, and configuration.
Co-developed-by: Ram Vegesna Signed-off-by: Ram Vegesna Signed-off-by: James Smart Reviewed-by: Hannes Reinecke Reviewed-by: Daniel Wagner --- drivers/scsi/elx/efct/efct_hw.h | 4 + drivers/scsi/elx/efct/efct_hw_queues.c | 679 +++++++++++++++++++++++++ 2 files changed, 683 insertions(+) create mode 100644 drivers/scsi/elx/efct/efct_hw_queues.c diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h index 44f248356d42..39cfe6658783 100644 --- a/drivers/scsi/elx/efct/efct_hw.h +++ b/drivers/scsi/elx/efct/efct_hw.h @@ -610,6 +610,10 @@ efct_hw_setup(struct efct_hw *hw, void *os, struct pci_dev *pdev); enum efct_hw_rtn efct_hw_init(struct efct_hw *hw); enum efct_hw_rtn efct_hw_parse_filter(struct efct_hw *hw, void *value); +enum efct_hw_rtn +efct_hw_init_queues(struct efct_hw *hw); +int +efct_hw_map_wq_cpu(struct efct_hw *hw); uint64_t efct_get_wwnn(struct efct_hw *hw); uint64_t diff --git a/drivers/scsi/elx/efct/efct_hw_queues.c b/drivers/scsi/elx/efct/efct_hw_queues.c new file mode 100644 index 000000000000..cefe56ba4a56 --- /dev/null +++ b/drivers/scsi/elx/efct/efct_hw_queues.c @@ -0,0 +1,679 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2021 Broadcom. All Rights Reserved. The term + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. + */ + +#include "efct_driver.h" +#include "efct_hw.h" +#include "efct_unsol.h" + +enum efct_hw_rtn +efct_hw_init_queues(struct efct_hw *hw) +{ + struct hw_eq *eq = NULL; + struct hw_cq *cq = NULL; + struct hw_wq *wq = NULL; + struct hw_mq *mq = NULL; + + struct hw_eq *eqs[EFCT_HW_MAX_NUM_EQ]; + struct hw_cq *cqs[EFCT_HW_MAX_NUM_EQ]; + struct hw_rq *rqs[EFCT_HW_MAX_NUM_EQ]; + u32 i = 0, j; + + hw->eq_count = 0; + hw->cq_count = 0; + hw->mq_count = 0; + hw->wq_count = 0; + hw->rq_count = 0; + hw->hw_rq_count = 0; + INIT_LIST_HEAD(&hw->eq_list); + + for (i = 0; i < hw->config.n_eq; i++) { + /* Create EQ */ + eq = efct_hw_new_eq(hw, EFCT_HW_EQ_DEPTH); + if (!eq) { + efct_hw_queue_teardown(hw); + return EFCT_HW_RTN_NO_MEMORY; + } + + eqs[i] = eq; + + /* Create one MQ */ + if (!i) { + cq = efct_hw_new_cq(eq, + hw->num_qentries[SLI4_QTYPE_CQ]); + if (!cq) { + efct_hw_queue_teardown(hw); + return EFCT_HW_RTN_NO_MEMORY; + } + + mq = efct_hw_new_mq(cq, EFCT_HW_MQ_DEPTH); + if (!mq) { + efct_hw_queue_teardown(hw); + return EFCT_HW_RTN_NO_MEMORY; + } + } + + /* Create WQ */ + cq = efct_hw_new_cq(eq, hw->num_qentries[SLI4_QTYPE_CQ]); + if (!cq) { + efct_hw_queue_teardown(hw); + return EFCT_HW_RTN_NO_MEMORY; + } + + wq = efct_hw_new_wq(cq, hw->num_qentries[SLI4_QTYPE_WQ]); + if (!wq) { + efct_hw_queue_teardown(hw); + return EFCT_HW_RTN_NO_MEMORY; + } + } + + /* Create CQ set */ + if (efct_hw_new_cq_set(eqs, cqs, i, hw->num_qentries[SLI4_QTYPE_CQ])) { + efct_hw_queue_teardown(hw); + return EFCT_HW_RTN_ERROR; + } + + /* Create RQ set */ + if (efct_hw_new_rq_set(cqs, rqs, i, EFCT_HW_RQ_ENTRIES_DEF)) { + efct_hw_queue_teardown(hw); + return EFCT_HW_RTN_ERROR; + } + + for (j = 0; j < i ; j++) { + rqs[j]->filter_mask = 0; + rqs[j]->is_mrq = true; + rqs[j]->base_mrq_id = rqs[0]->hdr->id; + } + + hw->hw_mrq_count = i; + + return EFCT_HW_RTN_SUCCESS; +} + +int +efct_hw_map_wq_cpu(struct efct_hw *hw) +{ + struct efct *efct = hw->os; + u32 cpu = 0, i; + + /* Init cpu_map array */ + hw->wq_cpu_array = kcalloc(num_possible_cpus(), sizeof(void *), + GFP_KERNEL); + if (!hw->wq_cpu_array) + return EFC_FAIL; + + for (i = 0; i < hw->config.n_eq; i++) { + const struct cpumask *maskp; + + /* Get a CPU mask for all CPUs affinitized to this vector 
*/ + maskp = pci_irq_get_affinity(efct->pci, i); + if (!maskp) { + efc_log_debug(efct, "maskp null for vector:%d\n", i); + continue; + } + + /* Loop through all CPUs associated with vector idx */ + for_each_cpu_and(cpu, maskp, cpu_present_mask) { + efc_log_debug(efct, "CPU:%d irq vector:%d\n", cpu, i); + hw->wq_cpu_array[cpu] = hw->hw_wq[i]; + } + } + + return EFC_SUCCESS; +} + +struct hw_eq * +efct_hw_new_eq(struct efct_hw *hw, u32 entry_count) +{ + struct hw_eq *eq = kzalloc(sizeof(*eq), GFP_KERNEL); + + if (!eq) + return NULL; + + eq->type = SLI4_QTYPE_EQ; + eq->hw = hw; + eq->entry_count = entry_count; + eq->instance = hw->eq_count++; + eq->queue = &hw->eq[eq->instance]; + INIT_LIST_HEAD(&eq->cq_list); + + if (sli_queue_alloc(&hw->sli, SLI4_QTYPE_EQ, eq->queue, entry_count, + NULL)) { + efc_log_err(hw->os, "EQ[%d] alloc failure\n", eq->instance); + kfree(eq); + return NULL; + } + + sli_eq_modify_delay(&hw->sli, eq->queue, 1, 0, 8); + hw->hw_eq[eq->instance] = eq; + INIT_LIST_HEAD(&eq->list_entry); + list_add_tail(&eq->list_entry, &hw->eq_list); + efc_log_debug(hw->os, "create eq[%2d] id %3d len %4d\n", eq->instance, + eq->queue->id, eq->entry_count); + return eq; +} + +struct hw_cq * +efct_hw_new_cq(struct hw_eq *eq, u32 entry_count) +{ + struct efct_hw *hw = eq->hw; + struct hw_cq *cq = kzalloc(sizeof(*cq), GFP_KERNEL); + + if (!cq) + return NULL; + + cq->eq = eq; + cq->type = SLI4_QTYPE_CQ; + cq->instance = eq->hw->cq_count++; + cq->entry_count = entry_count; + cq->queue = &hw->cq[cq->instance]; + + INIT_LIST_HEAD(&cq->q_list); + + if (sli_queue_alloc(&hw->sli, SLI4_QTYPE_CQ, cq->queue, + cq->entry_count, eq->queue)) { + efc_log_err(hw->os, "CQ[%d] allocation failure len=%d\n", + eq->instance, eq->entry_count); + kfree(cq); + return NULL; + } + + hw->hw_cq[cq->instance] = cq; + INIT_LIST_HEAD(&cq->list_entry); + list_add_tail(&cq->list_entry, &eq->cq_list); + efc_log_debug(hw->os, "create cq[%2d] id %3d len %4d\n", cq->instance, + cq->queue->id, cq->entry_count); + return cq; +} + +u32 +efct_hw_new_cq_set(struct hw_eq *eqs[], struct hw_cq *cqs[], + u32 num_cqs, u32 entry_count) +{ + u32 i; + struct efct_hw *hw = eqs[0]->hw; + struct sli4 *sli4 = &hw->sli; + struct hw_cq *cq = NULL; + struct sli4_queue *qs[SLI4_MAX_CQ_SET_COUNT]; + struct sli4_queue *assefct[SLI4_MAX_CQ_SET_COUNT]; + + /* Initialise CQS pointers to NULL */ + for (i = 0; i < num_cqs; i++) + cqs[i] = NULL; + + for (i = 0; i < num_cqs; i++) { + cq = kzalloc(sizeof(*cq), GFP_KERNEL); + if (!cq) + goto error; + + cqs[i] = cq; + cq->eq = eqs[i]; + cq->type = SLI4_QTYPE_CQ; + cq->instance = hw->cq_count++; + cq->entry_count = entry_count; + cq->queue = &hw->cq[cq->instance]; + qs[i] = cq->queue; + assefct[i] = eqs[i]->queue; + INIT_LIST_HEAD(&cq->q_list); + } + + if (sli_cq_alloc_set(sli4, qs, num_cqs, entry_count, assefct)) { + efc_log_err(hw->os, "Failed to create CQ Set.\n"); + goto error; + } + + for (i = 0; i < num_cqs; i++) { + hw->hw_cq[cqs[i]->instance] = cqs[i]; + INIT_LIST_HEAD(&cqs[i]->list_entry); + list_add_tail(&cqs[i]->list_entry, &cqs[i]->eq->cq_list); + } + + return EFC_SUCCESS; + +error: + for (i = 0; i < num_cqs; i++) { + kfree(cqs[i]); + cqs[i] = NULL; + } + return EFC_FAIL; +} + +struct hw_mq * +efct_hw_new_mq(struct hw_cq *cq, u32 entry_count) +{ + struct efct_hw *hw = cq->eq->hw; + struct hw_mq *mq = kzalloc(sizeof(*mq), GFP_KERNEL); + + if (!mq) + return NULL; + + mq->cq = cq; + mq->type = SLI4_QTYPE_MQ; + mq->instance = cq->eq->hw->mq_count++; + mq->entry_count = entry_count; + mq->entry_size = 
EFCT_HW_MQ_DEPTH; + mq->queue = &hw->mq[mq->instance]; + + if (sli_queue_alloc(&hw->sli, SLI4_QTYPE_MQ, mq->queue, mq->entry_size, + cq->queue)) { + efc_log_err(hw->os, "MQ allocation failure\n"); + kfree(mq); + return NULL; + } + + hw->hw_mq[mq->instance] = mq; + INIT_LIST_HEAD(&mq->list_entry); + list_add_tail(&mq->list_entry, &cq->q_list); + efc_log_debug(hw->os, "create mq[%2d] id %3d len %4d\n", mq->instance, + mq->queue->id, mq->entry_count); + return mq; +} + +struct hw_wq * +efct_hw_new_wq(struct hw_cq *cq, u32 entry_count) +{ + struct efct_hw *hw = cq->eq->hw; + struct hw_wq *wq = kzalloc(sizeof(*wq), GFP_KERNEL); + + if (!wq) + return NULL; + + wq->hw = cq->eq->hw; + wq->cq = cq; + wq->type = SLI4_QTYPE_WQ; + wq->instance = cq->eq->hw->wq_count++; + wq->entry_count = entry_count; + wq->queue = &hw->wq[wq->instance]; + wq->wqec_set_count = EFCT_HW_WQEC_SET_COUNT; + wq->wqec_count = wq->wqec_set_count; + wq->free_count = wq->entry_count - 1; + INIT_LIST_HEAD(&wq->pending_list); + + if (sli_queue_alloc(&hw->sli, SLI4_QTYPE_WQ, wq->queue, + wq->entry_count, cq->queue)) { + efc_log_err(hw->os, "WQ allocation failure\n"); + kfree(wq); + return NULL; + } + + hw->hw_wq[wq->instance] = wq; + INIT_LIST_HEAD(&wq->list_entry); + list_add_tail(&wq->list_entry, &cq->q_list); + efc_log_debug(hw->os, "create wq[%2d] id %3d len %4d cls %d\n", + wq->instance, wq->queue->id, wq->entry_count, wq->class); + return wq; +} + +u32 +efct_hw_new_rq_set(struct hw_cq *cqs[], struct hw_rq *rqs[], + u32 num_rq_pairs, u32 entry_count) +{ + struct efct_hw *hw = cqs[0]->eq->hw; + struct hw_rq *rq = NULL; + struct sli4_queue *qs[SLI4_MAX_RQ_SET_COUNT * 2] = { NULL }; + u32 i, q_count, size; + + /* Initialise RQS pointers */ + for (i = 0; i < num_rq_pairs; i++) + rqs[i] = NULL; + + /* + * Allocate an RQ object SET, where each element in set + * encapsulates 2 SLI queues (for rq pair) + */ + for (i = 0, q_count = 0; i < num_rq_pairs; i++, q_count += 2) { + rq = kzalloc(sizeof(*rq), GFP_KERNEL); + if (!rq) + goto error; + + rqs[i] = rq; + rq->instance = hw->hw_rq_count++; + rq->cq = cqs[i]; + rq->type = SLI4_QTYPE_RQ; + rq->entry_count = entry_count; + + /* Header RQ */ + rq->hdr = &hw->rq[hw->rq_count]; + rq->hdr_entry_size = EFCT_HW_RQ_HEADER_SIZE; + hw->hw_rq_lookup[hw->rq_count] = rq->instance; + hw->rq_count++; + qs[q_count] = rq->hdr; + + /* Data RQ */ + rq->data = &hw->rq[hw->rq_count]; + rq->data_entry_size = hw->config.rq_default_buffer_size; + hw->hw_rq_lookup[hw->rq_count] = rq->instance; + hw->rq_count++; + qs[q_count + 1] = rq->data; + + rq->rq_tracker = NULL; + } + + if (sli_fc_rq_set_alloc(&hw->sli, num_rq_pairs, qs, + cqs[0]->queue->id, + rqs[0]->entry_count, + rqs[0]->hdr_entry_size, + rqs[0]->data_entry_size)) { + efc_log_err(hw->os, "RQ Set alloc failure for base CQ=%d\n", + cqs[0]->queue->id); + goto error; + } + + for (i = 0; i < num_rq_pairs; i++) { + hw->hw_rq[rqs[i]->instance] = rqs[i]; + INIT_LIST_HEAD(&rqs[i]->list_entry); + list_add_tail(&rqs[i]->list_entry, &cqs[i]->q_list); + size = sizeof(struct efc_hw_sequence *) * rqs[i]->entry_count; + rqs[i]->rq_tracker = kzalloc(size, GFP_KERNEL); + if (!rqs[i]->rq_tracker) + goto error; + } + + return EFC_SUCCESS; + +error: + for (i = 0; i < num_rq_pairs; i++) { + if (rqs[i]) { + kfree(rqs[i]->rq_tracker); + kfree(rqs[i]); + } + } + + return EFC_FAIL; +} + +void +efct_hw_del_eq(struct hw_eq *eq) +{ + struct hw_cq *cq; + struct hw_cq *cq_next; + + if (!eq) + return; + + list_for_each_entry_safe(cq, cq_next, &eq->cq_list, list_entry) + 
efct_hw_del_cq(cq); + list_del(&eq->list_entry); + eq->hw->hw_eq[eq->instance] = NULL; + kfree(eq); +} + +void +efct_hw_del_cq(struct hw_cq *cq) +{ + struct hw_q *q; + struct hw_q *q_next; + + if (!cq) + return; + + list_for_each_entry_safe(q, q_next, &cq->q_list, list_entry) { + switch (q->type) { + case SLI4_QTYPE_MQ: + efct_hw_del_mq((struct hw_mq *)q); + break; + case SLI4_QTYPE_WQ: + efct_hw_del_wq((struct hw_wq *)q); + break; + case SLI4_QTYPE_RQ: + efct_hw_del_rq((struct hw_rq *)q); + break; + default: + break; + } + } + list_del(&cq->list_entry); + cq->eq->hw->hw_cq[cq->instance] = NULL; + kfree(cq); +} + +void +efct_hw_del_mq(struct hw_mq *mq) +{ + if (!mq) + return; + + list_del(&mq->list_entry); + mq->cq->eq->hw->hw_mq[mq->instance] = NULL; + kfree(mq); +} + +void +efct_hw_del_wq(struct hw_wq *wq) +{ + if (!wq) + return; + + list_del(&wq->list_entry); + wq->cq->eq->hw->hw_wq[wq->instance] = NULL; + kfree(wq); +} + +void +efct_hw_del_rq(struct hw_rq *rq) +{ + struct efct_hw *hw = NULL; + + if (!rq) + return; + /* Free RQ tracker */ + kfree(rq->rq_tracker); + rq->rq_tracker = NULL; + list_del(&rq->list_entry); + hw = rq->cq->eq->hw; + hw->hw_rq[rq->instance] = NULL; + kfree(rq); +} + +void +efct_hw_queue_teardown(struct efct_hw *hw) +{ + struct hw_eq *eq; + struct hw_eq *eq_next; + + if (!hw->eq_list.next) + return; + + list_for_each_entry_safe(eq, eq_next, &hw->eq_list, list_entry) + efct_hw_del_eq(eq); +} + +static inline int +efct_hw_rqpair_find(struct efct_hw *hw, u16 rq_id) +{ + return efct_hw_queue_hash_find(hw->rq_hash, rq_id); +} + +static struct efc_hw_sequence * +efct_hw_rqpair_get(struct efct_hw *hw, u16 rqindex, u16 bufindex) +{ + struct sli4_queue *rq_hdr = &hw->rq[rqindex]; + struct efc_hw_sequence *seq = NULL; + struct hw_rq *rq = hw->hw_rq[hw->hw_rq_lookup[rqindex]]; + unsigned long flags = 0; + + if (bufindex >= rq_hdr->length) { + efc_log_err(hw->os, + "RQidx %d bufidx %d exceed ring len %d for id %d\n", + rqindex, bufindex, rq_hdr->length, rq_hdr->id); + return NULL; + } + + /* rq_hdr lock also covers rqindex+1 queue */ + spin_lock_irqsave(&rq_hdr->lock, flags); + + seq = rq->rq_tracker[bufindex]; + rq->rq_tracker[bufindex] = NULL; + + if (!seq) { + efc_log_err(hw->os, + "RQbuf NULL, rqidx %d, bufidx %d, cur q idx = %d\n", + rqindex, bufindex, rq_hdr->index); + } + + spin_unlock_irqrestore(&rq_hdr->lock, flags); + return seq; +} + +int +efct_hw_rqpair_process_rq(struct efct_hw *hw, struct hw_cq *cq, + u8 *cqe) +{ + u16 rq_id; + u32 index; + int rqindex; + int rq_status; + u32 h_len; + u32 p_len; + struct efc_hw_sequence *seq; + struct hw_rq *rq; + + rq_status = sli_fc_rqe_rqid_and_index(&hw->sli, cqe, + &rq_id, &index); + if (rq_status != 0) { + switch (rq_status) { + case SLI4_FC_ASYNC_RQ_BUF_LEN_EXCEEDED: + case SLI4_FC_ASYNC_RQ_DMA_FAILURE: + /* just get RQ buffer then return to chip */ + rqindex = efct_hw_rqpair_find(hw, rq_id); + if (rqindex < 0) { + efc_log_debug(hw->os, + "status=%#x: lookup fail id=%#x\n", + rq_status, rq_id); + break; + } + + /* get RQ buffer */ + seq = efct_hw_rqpair_get(hw, rqindex, index); + + /* return to chip */ + if (efct_hw_rqpair_sequence_free(hw, seq)) { + efc_log_debug(hw->os, + "status=%#x,fail rtrn buf to RQ\n", + rq_status); + break; + } + break; + case SLI4_FC_ASYNC_RQ_INSUFF_BUF_NEEDED: + case SLI4_FC_ASYNC_RQ_INSUFF_BUF_FRM_DISC: + /* + * since RQ buffers were not consumed, cannot return + * them to chip + */ + efc_log_debug(hw->os, "Warning: RCQE status=%#x,\n", + rq_status); + fallthrough; + default: + break; + } + 
return EFC_FAIL; + } + + rqindex = efct_hw_rqpair_find(hw, rq_id); + if (rqindex < 0) { + efc_log_debug(hw->os, "Error: rq_id lookup failed for id=%#x\n", + rq_id); + return EFC_FAIL; + } + + rq = hw->hw_rq[hw->hw_rq_lookup[rqindex]]; + rq->use_count++; + + seq = efct_hw_rqpair_get(hw, rqindex, index); + if (WARN_ON(!seq)) + return EFC_FAIL; + + seq->hw = hw; + seq->auto_xrdy = 0; + seq->out_of_xris = 0; + + sli_fc_rqe_length(&hw->sli, cqe, &h_len, &p_len); + seq->header->dma.len = h_len; + seq->payload->dma.len = p_len; + seq->fcfi = sli_fc_rqe_fcfi(&hw->sli, cqe); + seq->hw_priv = cq->eq; + + efct_unsolicited_cb(hw->os, seq); + + return EFC_SUCCESS; +} + +static int +efct_hw_rqpair_put(struct efct_hw *hw, struct efc_hw_sequence *seq) +{ + struct sli4_queue *rq_hdr = &hw->rq[seq->header->rqindex]; + struct sli4_queue *rq_payload = &hw->rq[seq->payload->rqindex]; + u32 hw_rq_index = hw->hw_rq_lookup[seq->header->rqindex]; + struct hw_rq *rq = hw->hw_rq[hw_rq_index]; + u32 phys_hdr[2]; + u32 phys_payload[2]; + int qindex_hdr; + int qindex_payload; + unsigned long flags = 0; + + /* Update the RQ verification lookup tables */ + phys_hdr[0] = upper_32_bits(seq->header->dma.phys); + phys_hdr[1] = lower_32_bits(seq->header->dma.phys); + phys_payload[0] = upper_32_bits(seq->payload->dma.phys); + phys_payload[1] = lower_32_bits(seq->payload->dma.phys); + + /* rq_hdr lock also covers payload / header->rqindex+1 queue */ + spin_lock_irqsave(&rq_hdr->lock, flags); + + /* + * Note: The header must be posted last for buffer pair mode because + * posting on the header queue posts the payload queue as well. + * We do not ring the payload queue independently in RQ pair mode. + */ + qindex_payload = sli_rq_write(&hw->sli, rq_payload, + (void *)phys_payload); + qindex_hdr = sli_rq_write(&hw->sli, rq_hdr, (void *)phys_hdr); + if (qindex_hdr < 0 || + qindex_payload < 0) { + efc_log_err(hw->os, "RQ_ID=%#x write failed\n", rq_hdr->id); + spin_unlock_irqrestore(&rq_hdr->lock, flags); + return EFCT_HW_RTN_ERROR; + } + + /* ensure the indexes are the same */ + WARN_ON(qindex_hdr != qindex_payload); + + /* Update the lookup table */ + if (!rq->rq_tracker[qindex_hdr]) { + rq->rq_tracker[qindex_hdr] = seq; + } else { + efc_log_debug(hw->os, + "expected rq_tracker[%d][%d] buffer to be NULL\n", + hw_rq_index, qindex_hdr); + } + + spin_unlock_irqrestore(&rq_hdr->lock, flags); + return EFCT_HW_RTN_SUCCESS; +} + +enum efct_hw_rtn +efct_hw_rqpair_sequence_free(struct efct_hw *hw, struct efc_hw_sequence *seq) +{ + enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS; + + /* + * Post the data buffer first. Because in RQ pair mode, ringing the + * doorbell of the header ring will post the data buffer as well. 
+	 */
+	if (efct_hw_rqpair_put(hw, seq)) {
+		efc_log_err(hw->os, "error writing buffers\n");
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	return rc;
+}
+
+int
+efct_efc_hw_sequence_free(struct efc *efc, struct efc_hw_sequence *seq)
+{
+	struct efct *efct = efc->base;
+
+	return efct_hw_rqpair_sequence_free(&efct->hw, seq);
+}
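For illustration only (not part of the patch): a sketch of the buffer
ownership implied by the RQ pair routines above. efct_hw_rqpair_process_rq()
hands each received sequence to the upper layer via efct_unsolicited_cb();
the consumer (hypothetical here) must eventually repost the header/payload
buffer pair with efct_hw_rqpair_sequence_free() (or, from libefc, through
the efct_efc_hw_sequence_free() wrapper).

/* Hypothetical consumer of an unsolicited frame */
static int example_consume_frame(struct efct_hw *hw,
				 struct efc_hw_sequence *seq)
{
	/* ...parse seq->header->dma / seq->payload->dma here... */

	/* return both buffers to the RQ pair; in RQ pair mode posting
	 * the header also posts the payload queue
	 */
	if (efct_hw_rqpair_sequence_free(hw, seq) != EFCT_HW_RTN_SUCCESS)
		return EFC_FAIL;

	return EFC_SUCCESS;
}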
localhost.localdomain ([192.19.223.252]) by smtp.gmail.com with ESMTPSA id l197sm6881405pfd.97.2021.01.07.14.59.30 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 07 Jan 2021 14:59:30 -0800 (PST) From: James Smart To: linux-scsi@vger.kernel.org Cc: James Smart , Ram Vegesna , Daniel Wagner Subject: [PATCH v7 21/31] elx: efct: Hardware IO and SGL initialization Date: Thu, 7 Jan 2021 14:58:55 -0800 Message-Id: <20210107225905.18186-22-jsmart2021@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210107225905.18186-1-jsmart2021@gmail.com> References: <20210107225905.18186-1-jsmart2021@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org This patch continues the efct driver population. This patch adds driver definitions for: Routines to create IO interfaces (wqs, etc), SGL initialization, and configure hardware features. Co-developed-by: Ram Vegesna Signed-off-by: Ram Vegesna Signed-off-by: James Smart Reviewed-by: Daniel Wagner --- drivers/scsi/elx/efct/efct_hw.c | 590 ++++++++++++++++++++++++++++++++ drivers/scsi/elx/efct/efct_hw.h | 41 +++ 2 files changed, 631 insertions(+) diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c index 451a5729bff4..da4e53407b94 100644 --- a/drivers/scsi/elx/efct/efct_hw.c +++ b/drivers/scsi/elx/efct/efct_hw.c @@ -1565,3 +1565,593 @@ efct_issue_mbox_rqst(void *base, void *cmd, void *cb, void *arg) } return EFC_SUCCESS; } + +static inline struct efct_hw_io * +_efct_hw_io_alloc(struct efct_hw *hw) +{ + struct efct_hw_io *io = NULL; + + if (!list_empty(&hw->io_free)) { + io = list_first_entry(&hw->io_free, struct efct_hw_io, + list_entry); + list_del(&io->list_entry); + } + if (io) { + INIT_LIST_HEAD(&io->list_entry); + list_add_tail(&io->list_entry, &hw->io_inuse); + io->state = EFCT_HW_IO_STATE_INUSE; + io->abort_reqtag = U32_MAX; + io->wq = hw->wq_cpu_array[raw_smp_processor_id()]; + if (!io->wq) { + efc_log_err(hw->os, "WQ not assigned for cpu:%d\n", + raw_smp_processor_id()); + io->wq = hw->hw_wq[0]; + } + kref_init(&io->ref); + io->release = efct_hw_io_free_internal; + } else { + atomic_add_return(1, &hw->io_alloc_failed_count); + } + + return io; +} + +struct efct_hw_io * +efct_hw_io_alloc(struct efct_hw *hw) +{ + struct efct_hw_io *io = NULL; + unsigned long flags = 0; + + spin_lock_irqsave(&hw->io_lock, flags); + io = _efct_hw_io_alloc(hw); + spin_unlock_irqrestore(&hw->io_lock, flags); + + return io; +} + +static void +efct_hw_io_free_move_correct_list(struct efct_hw *hw, + struct efct_hw_io *io) +{ + + /* + * When an IO is freed, depending on the exchange busy flag, + * move it to the correct list. 
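To see these pieces together before the detail that follows — a hedged lifecycle sketch (function names are from this patch; the nbufs/bufs[] variables are illustrative only):

	/* Allocate a HW IO, build its SGL, and release it. The free is a
	 * kref_put(); the IO returns to hw->io_free (or io_wait_free while
	 * the exchange is still busy) only when the last reference drops.
	 */
	struct efct_hw_io *io = efct_hw_io_alloc(hw);

	if (!io)
		return EFCT_HW_RTN_NO_RESOURCES;

	efct_hw_io_init_sges(hw, io, EFCT_HW_IO_TARGET_READ);
	for (i = 0; i < nbufs; i++)
		efct_hw_io_add_sge(hw, io, bufs[i].phys, bufs[i].len);

	/* ... send the IO, then on final completion: */
	efct_hw_io_free(hw, io);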
+ */ + if (io->xbusy) { + /* + * add to wait_free list and wait for XRI_ABORTED CQEs to clean + * up + */ + INIT_LIST_HEAD(&io->list_entry); + list_add_tail(&io->list_entry, &hw->io_wait_free); + io->state = EFCT_HW_IO_STATE_WAIT_FREE; + } else { + /* IO not busy, add to free list */ + INIT_LIST_HEAD(&io->list_entry); + list_add_tail(&io->list_entry, &hw->io_free); + io->state = EFCT_HW_IO_STATE_FREE; + } +} + +static inline void +efct_hw_io_free_common(struct efct_hw *hw, struct efct_hw_io *io) +{ + /* initialize IO fields */ + efct_hw_init_free_io(io); + + /* Restore default SGL */ + efct_hw_io_restore_sgl(hw, io); +} + +void +efct_hw_io_free_internal(struct kref *arg) +{ + unsigned long flags = 0; + struct efct_hw_io *io = container_of(arg, struct efct_hw_io, ref); + struct efct_hw *hw = io->hw; + + /* perform common cleanup */ + efct_hw_io_free_common(hw, io); + + spin_lock_irqsave(&hw->io_lock, flags); + /* remove from in-use list */ + if (!list_empty(&io->list_entry) && !list_empty(&hw->io_inuse)) { + list_del_init(&io->list_entry); + efct_hw_io_free_move_correct_list(hw, io); + } + spin_unlock_irqrestore(&hw->io_lock, flags); +} + +int +efct_hw_io_free(struct efct_hw *hw, struct efct_hw_io *io) +{ + return kref_put(&io->ref, io->release); +} + +struct efct_hw_io * +efct_hw_io_lookup(struct efct_hw *hw, u32 xri) +{ + u32 ioindex; + + ioindex = xri - hw->sli.ext[SLI4_RSRC_XRI].base[0]; + return hw->io[ioindex]; +} + +enum efct_hw_rtn +efct_hw_io_init_sges(struct efct_hw *hw, struct efct_hw_io *io, + enum efct_hw_io_type type) +{ + struct sli4_sge *data = NULL; + u32 i = 0; + u32 skips = 0; + u32 sge_flags = 0; + + if (!io) { + efc_log_err(hw->os, + "bad parameter hw=%p io=%p\n", hw, io); + return EFCT_HW_RTN_ERROR; + } + + /* Clear / reset the scatter-gather list */ + io->sgl = &io->def_sgl; + io->sgl_count = io->def_sgl_count; + io->first_data_sge = 0; + + memset(io->sgl->virt, 0, 2 * sizeof(struct sli4_sge)); + io->n_sge = 0; + io->sge_offset = 0; + + io->type = type; + + data = io->sgl->virt; + + /* + * Some IO types have underlying hardware requirements on the order + * of SGEs. Process all special entries here. + */ + switch (type) { + case EFCT_HW_IO_TARGET_WRITE: + + /* populate host resident XFER_RDY buffer */ + sge_flags = le32_to_cpu(data->dw2_flags); + sge_flags &= (~SLI4_SGE_TYPE_MASK); + sge_flags |= (SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT); + data->buffer_address_high = + cpu_to_le32(upper_32_bits(io->xfer_rdy.phys)); + data->buffer_address_low = + cpu_to_le32(lower_32_bits(io->xfer_rdy.phys)); + data->buffer_length = cpu_to_le32(io->xfer_rdy.size); + data->dw2_flags = cpu_to_le32(sge_flags); + data++; + + skips = EFCT_TARGET_WRITE_SKIPS; + + io->n_sge = 1; + break; + case EFCT_HW_IO_TARGET_READ: + /* + * For FCP_TSEND64, the first 2 entries are SKIP SGE's + */ + skips = EFCT_TARGET_READ_SKIPS; + break; + case EFCT_HW_IO_TARGET_RSP: + /* + * No skips, etc. 
for FCP_TRSP64
+	 */
+		break;
+	default:
+		efc_log_err(hw->os, "unsupported IO type %#x\n", type);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	/* Write skip entries */
+	for (i = 0; i < skips; i++) {
+		sge_flags = le32_to_cpu(data->dw2_flags);
+		sge_flags &= (~SLI4_SGE_TYPE_MASK);
+		sge_flags |= (SLI4_SGE_TYPE_SKIP << SLI4_SGE_TYPE_SHIFT);
+		data->dw2_flags = cpu_to_le32(sge_flags);
+		data++;
+	}
+
+	io->n_sge += skips;
+
+	/* Mark the last SGE */
+	sge_flags = le32_to_cpu(data->dw2_flags);
+	sge_flags |= SLI4_SGE_LAST;
+	data->dw2_flags = cpu_to_le32(sge_flags);
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_io_add_sge(struct efct_hw *hw, struct efct_hw_io *io,
+		   uintptr_t addr, u32 length)
+{
+	struct sli4_sge *data = NULL;
+	u32 sge_flags = 0;
+
+	if (!io || !addr || !length) {
+		efc_log_err(hw->os,
+			    "bad parameter hw=%p io=%p addr=%lx length=%u\n",
+			    hw, io, addr, length);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	if (length > hw->sli.sge_supported_length) {
+		efc_log_err(hw->os,
+			    "SGE length %d exceeds allowed maximum %d\n",
+			    length, hw->sli.sge_supported_length);
+		return EFCT_HW_RTN_ERROR;
+	}
+
+	data = io->sgl->virt;
+	data += io->n_sge;
+
+	sge_flags = le32_to_cpu(data->dw2_flags);
+	sge_flags &= ~SLI4_SGE_TYPE_MASK;
+	sge_flags |= SLI4_SGE_TYPE_DATA << SLI4_SGE_TYPE_SHIFT;
+	sge_flags &= ~SLI4_SGE_DATA_OFFSET_MASK;
+	sge_flags |= SLI4_SGE_DATA_OFFSET_MASK & io->sge_offset;
+
+	data->buffer_address_high = cpu_to_le32(upper_32_bits(addr));
+	data->buffer_address_low = cpu_to_le32(lower_32_bits(addr));
+	data->buffer_length = cpu_to_le32(length);
+
+	/*
+	 * Always assume this is the last entry and mark it as such.
+	 * If this is not the first entry, unset the "last SGE"
+	 * indication on the previous entry.
+	 */
+	sge_flags |= SLI4_SGE_LAST;
+	data->dw2_flags = cpu_to_le32(sge_flags);
+
+	if (io->n_sge) {
+		sge_flags = le32_to_cpu(data[-1].dw2_flags);
+		sge_flags &= ~SLI4_SGE_LAST;
+		data[-1].dw2_flags = cpu_to_le32(sge_flags);
+	}
+
+	/* Set first_data_sge if not previously set */
+	if (io->first_data_sge == 0)
+		io->first_data_sge = io->n_sge;
+
+	io->sge_offset += length;
+	io->n_sge++;
+
+	return EFCT_HW_RTN_SUCCESS;
+}
+
+void
+efct_hw_io_abort_all(struct efct_hw *hw)
+{
+	struct efct_hw_io *io_to_abort = NULL;
+	struct efct_hw_io *next_io = NULL;
+
+	list_for_each_entry_safe(io_to_abort, next_io,
+				 &hw->io_inuse, list_entry) {
+		efct_hw_io_abort(hw, io_to_abort, true, NULL, NULL);
+	}
+}
+
+static void
+efct_hw_wq_process_abort(void *arg, u8 *cqe, int status)
+{
+	struct efct_hw_io *io = arg;
+	struct efct_hw *hw = io->hw;
+	u32 ext = 0;
+	u32 len = 0;
+	struct hw_wq_callback *wqcb;
+	unsigned long flags = 0;
+
+	/*
+	 * For IOs that were aborted internally, we may need to issue the
+	 * callback here depending on whether an XRI_ABORTED CQE is expected
+	 * or not. If the status is Local Reject/No XRI, then
+	 * issue the callback now.
+	 */
+	ext = sli_fc_ext_status(&hw->sli, cqe);
+	if (status == SLI4_FC_WCQE_STATUS_LOCAL_REJECT &&
+	    ext == SLI4_FC_LOCAL_REJECT_NO_XRI && io->done) {
+		efct_hw_done_t done = io->done;
+
+		io->done = NULL;
+
+		/*
+		 * Use the latched status, as it is always saved for an
+		 * internal abort. Note: we won't have both a done and an
+		 * abort_done function, so don't worry about
+		 * clobbering the len, status and ext fields.
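For reference, the latch consumed here is the save side of a simple pattern — a sketch of what the normal WQE completion path is assumed to do (the fields are from this patch; the exact setter is not shown in this excerpt):

	/* Completion latches its results instead of calling io->done
	 * immediately; the abort completion later replays them once.
	 */
	io->saved_status = status;
	io->saved_len = len;
	io->saved_ext = ext;
	io->status_saved = true;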
+ */ + status = io->saved_status; + len = io->saved_len; + ext = io->saved_ext; + io->status_saved = false; + done(io, len, status, ext, io->arg); + } + + if (io->abort_done) { + efct_hw_done_t done = io->abort_done; + + io->abort_done = NULL; + done(io, len, status, ext, io->abort_arg); + } + spin_lock_irqsave(&hw->io_abort_lock, flags); + /* clear abort bit to indicate abort is complete */ + io->abort_in_progress = false; + spin_unlock_irqrestore(&hw->io_abort_lock, flags); + + /* Free the WQ callback */ + if (io->abort_reqtag == U32_MAX) { + efc_log_err(hw->os, "HW IO already freed\n"); + return; + } + + wqcb = efct_hw_reqtag_get_instance(hw, io->abort_reqtag); + efct_hw_reqtag_free(hw, wqcb); + + /* + * Call efct_hw_io_free() because this releases the WQ reservation as + * well as doing the refcount put. Don't duplicate the code here. + */ + (void)efct_hw_io_free(hw, io); +} + +static void +efct_hw_fill_abort_wqe(struct efct_hw *hw, struct efct_hw_wqe *wqe) +{ + struct sli4_abort_wqe *abort = (void *)wqe->wqebuf; + + memset(abort, 0, hw->sli.wqe_size); + + abort->criteria = SLI4_ABORT_CRITERIA_XRI_TAG; + abort->ia_ir_byte |= wqe->send_abts ? 0 : 1; + + /* Suppress ABTS retries */ + abort->ia_ir_byte |= SLI4_ABRT_WQE_IR; + + abort->t_tag = cpu_to_le32(wqe->id); + abort->command = SLI4_WQE_ABORT; + abort->request_tag = cpu_to_le16(wqe->abort_reqtag); + + abort->dw10w0_flags = cpu_to_le16(SLI4_ABRT_WQE_QOSD); + + abort->cq_id = cpu_to_le16(SLI4_CQ_DEFAULT); +} + +enum efct_hw_rtn +efct_hw_io_abort(struct efct_hw *hw, struct efct_hw_io *io_to_abort, + bool send_abts, void *cb, void *arg) +{ + struct hw_wq_callback *wqcb; + unsigned long flags = 0; + + if (!io_to_abort) { + efc_log_err(hw->os, + "bad parameter hw=%p io=%p\n", + hw, io_to_abort); + return EFCT_HW_RTN_ERROR; + } + + if (hw->state != EFCT_HW_STATE_ACTIVE) { + efc_log_err(hw->os, "cannot send IO abort, HW state=%d\n", + hw->state); + return EFCT_HW_RTN_ERROR; + } + + /* take a reference on IO being aborted */ + if (kref_get_unless_zero(&io_to_abort->ref) == 0) { + /* command no longer active */ + efc_log_debug(hw->os, + "io not active xri=0x%x tag=0x%x\n", + io_to_abort->indicator, io_to_abort->reqtag); + return EFCT_HW_RTN_IO_NOT_ACTIVE; + } + + /* Must have a valid WQ reference */ + if (!io_to_abort->wq) { + efc_log_debug(hw->os, "io_to_abort xri=0x%x not active on WQ\n", + io_to_abort->indicator); + /* efct_ref_get(): same function */ + kref_put(&io_to_abort->ref, io_to_abort->release); + return EFCT_HW_RTN_IO_NOT_ACTIVE; + } + + /* + * Validation checks complete; now check to see if already being + * aborted + */ + spin_lock_irqsave(&hw->io_abort_lock, flags); + if (io_to_abort->abort_in_progress) { + spin_unlock_irqrestore(&hw->io_abort_lock, flags); + /* efct_ref_get(): same function */ + kref_put(&io_to_abort->ref, io_to_abort->release); + efc_log_debug(hw->os, + "io already being aborted xri=0x%x tag=0x%x\n", + io_to_abort->indicator, io_to_abort->reqtag); + return EFCT_HW_RTN_IO_ABORT_IN_PROGRESS; + } + + /* + * This IO is not already being aborted. Set flag so we won't try to + * abort it again. After all, we only have one abort_done callback. 
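The liveness check a few lines up follows a standard refcount idiom, shown standalone here as a minimal sketch:

	/* kref_get_unless_zero() succeeds only while another reference
	 * is outstanding, so a racing final kref_put() cannot free the
	 * IO underneath the abort path.
	 */
	if (!kref_get_unless_zero(&io_to_abort->ref))
		return EFCT_HW_RTN_IO_NOT_ACTIVE;	/* already completed */

	/* ... IO is pinned; safe to queue the ABORT_WQE ... */
	kref_put(&io_to_abort->ref, io_to_abort->release);	/* error paths */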
+ */ + io_to_abort->abort_in_progress = true; + spin_unlock_irqrestore(&hw->io_abort_lock, flags); + + /* + * If we got here, the possibilities are: + * - host owned xri + * - io_to_abort->wq_index != U32_MAX + * - submit ABORT_WQE to same WQ + * - port owned xri: + * - rxri: io_to_abort->wq_index == U32_MAX + * - submit ABORT_WQE to any WQ + * - non-rxri + * - io_to_abort->index != U32_MAX + * - submit ABORT_WQE to same WQ + * - io_to_abort->index == U32_MAX + * - submit ABORT_WQE to any WQ + */ + io_to_abort->abort_done = cb; + io_to_abort->abort_arg = arg; + + /* Allocate a request tag for the abort portion of this IO */ + wqcb = efct_hw_reqtag_alloc(hw, efct_hw_wq_process_abort, io_to_abort); + if (!wqcb) { + efc_log_err(hw->os, "can't allocate request tag\n"); + return EFCT_HW_RTN_NO_RESOURCES; + } + + io_to_abort->abort_reqtag = wqcb->instance_index; + io_to_abort->wqe.send_abts = send_abts; + io_to_abort->wqe.id = io_to_abort->indicator; + io_to_abort->wqe.abort_reqtag = io_to_abort->abort_reqtag; + + /* + * If the wqe is on the pending list, then set this wqe to be + * aborted when the IO's wqe is removed from the list. + */ + if (io_to_abort->wq) { + spin_lock_irqsave(&io_to_abort->wq->queue->lock, flags); + if (io_to_abort->wqe.list_entry.next) { + io_to_abort->wqe.abort_wqe_submit_needed = true; + spin_unlock_irqrestore(&io_to_abort->wq->queue->lock, + flags); + return EFC_SUCCESS; + } + spin_unlock_irqrestore(&io_to_abort->wq->queue->lock, flags); + } + + efct_hw_fill_abort_wqe(hw, &io_to_abort->wqe); + + /* ABORT_WQE does not actually utilize an XRI on the Port, + * therefore, keep xbusy as-is to track the exchange's state, + * not the ABORT_WQE's state + */ + if (efct_hw_wq_write(io_to_abort->wq, &io_to_abort->wqe)) { + spin_lock_irqsave(&hw->io_abort_lock, flags); + io_to_abort->abort_in_progress = false; + spin_unlock_irqrestore(&hw->io_abort_lock, flags); + /* efct_ref_get(): same function */ + kref_put(&io_to_abort->ref, io_to_abort->release); + return EFC_FAIL; + } + + return EFC_SUCCESS; +} + +void +efct_hw_reqtag_pool_free(struct efct_hw *hw) +{ + u32 i; + struct reqtag_pool *reqtag_pool = hw->wq_reqtag_pool; + struct hw_wq_callback *wqcb = NULL; + + if (reqtag_pool) { + for (i = 0; i < U16_MAX; i++) { + wqcb = reqtag_pool->tags[i]; + if (!wqcb) + continue; + + kfree(wqcb); + } + kfree(reqtag_pool); + hw->wq_reqtag_pool = NULL; + } +} + +struct reqtag_pool * +efct_hw_reqtag_pool_alloc(struct efct_hw *hw) +{ + u32 i = 0; + struct reqtag_pool *reqtag_pool; + struct hw_wq_callback *wqcb; + + reqtag_pool = kzalloc(sizeof(*reqtag_pool), GFP_KERNEL); + if (!reqtag_pool) + return NULL; + + INIT_LIST_HEAD(&reqtag_pool->freelist); + /* initialize reqtag pool lock */ + spin_lock_init(&reqtag_pool->lock); + for (i = 0; i < U16_MAX; i++) { + wqcb = kmalloc(sizeof(*wqcb), GFP_KERNEL); + if (!wqcb) + break; + + reqtag_pool->tags[i] = wqcb; + wqcb->instance_index = i; + wqcb->callback = NULL; + wqcb->arg = NULL; + INIT_LIST_HEAD(&wqcb->list_entry); + list_add_tail(&wqcb->list_entry, &reqtag_pool->freelist); + } + + return reqtag_pool; +} + +struct hw_wq_callback * +efct_hw_reqtag_alloc(struct efct_hw *hw, + void (*callback)(void *arg, u8 *cqe, int status), + void *arg) +{ + struct hw_wq_callback *wqcb = NULL; + struct reqtag_pool *reqtag_pool = hw->wq_reqtag_pool; + unsigned long flags = 0; + + if (!callback) + return wqcb; + + spin_lock_irqsave(&reqtag_pool->lock, flags); + + if (!list_empty(&reqtag_pool->freelist)) { + wqcb = list_first_entry(&reqtag_pool->freelist, + struct 
hw_wq_callback, list_entry); + } + + if (wqcb) { + list_del_init(&wqcb->list_entry); + spin_unlock_irqrestore(&reqtag_pool->lock, flags); + wqcb->callback = callback; + wqcb->arg = arg; + } else { + spin_unlock_irqrestore(&reqtag_pool->lock, flags); + } + + return wqcb; +} + +void +efct_hw_reqtag_free(struct efct_hw *hw, struct hw_wq_callback *wqcb) +{ + unsigned long flags = 0; + struct reqtag_pool *reqtag_pool = hw->wq_reqtag_pool; + + if (!wqcb->callback) + efc_log_err(hw->os, "WQCB is already freed\n"); + + spin_lock_irqsave(&reqtag_pool->lock, flags); + wqcb->callback = NULL; + wqcb->arg = NULL; + INIT_LIST_HEAD(&wqcb->list_entry); + list_add(&wqcb->list_entry, &hw->wq_reqtag_pool->freelist); + spin_unlock_irqrestore(&reqtag_pool->lock, flags); +} + +struct hw_wq_callback * +efct_hw_reqtag_get_instance(struct efct_hw *hw, u32 instance_index) +{ + struct hw_wq_callback *wqcb; + + wqcb = hw->wq_reqtag_pool->tags[instance_index]; + if (!wqcb) + efc_log_err(hw->os, "wqcb for instance %d is null\n", + instance_index); + + return wqcb; +} diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h index 76de5decfbea..dd00d4df6ada 100644 --- a/drivers/scsi/elx/efct/efct_hw.h +++ b/drivers/scsi/elx/efct/efct_hw.h @@ -628,4 +628,45 @@ efct_hw_command(struct efct_hw *hw, u8 *cmd, u32 opts, void *cb, int efct_issue_mbox_rqst(void *base, void *cmd, void *cb, void *arg); +struct efct_hw_io *efct_hw_io_alloc(struct efct_hw *hw); +int efct_hw_io_free(struct efct_hw *hw, struct efct_hw_io *io); +u8 efct_hw_io_inuse(struct efct_hw *hw, struct efct_hw_io *io); +enum efct_hw_rtn +efct_hw_io_send(struct efct_hw *hw, enum efct_hw_io_type type, + struct efct_hw_io *io, union efct_hw_io_param_u *iparam, + void *cb, void *arg); +enum efct_hw_rtn +efct_hw_io_register_sgl(struct efct_hw *hw, struct efct_hw_io *io, + struct efc_dma *sgl, + u32 sgl_count); +enum efct_hw_rtn +efct_hw_io_init_sges(struct efct_hw *hw, + struct efct_hw_io *io, enum efct_hw_io_type type); + +enum efct_hw_rtn +efct_hw_io_add_sge(struct efct_hw *hw, struct efct_hw_io *io, + uintptr_t addr, u32 length); +enum efct_hw_rtn +efct_hw_io_abort(struct efct_hw *hw, struct efct_hw_io *io_to_abort, + bool send_abts, void *cb, void *arg); +u32 +efct_hw_io_get_count(struct efct_hw *hw, + enum efct_hw_io_count_type io_count_type); +struct efct_hw_io +*efct_hw_io_lookup(struct efct_hw *hw, u32 indicator); +void efct_hw_io_abort_all(struct efct_hw *hw); +void efct_hw_io_free_internal(struct kref *arg); + +/* HW WQ request tag API */ +struct reqtag_pool *efct_hw_reqtag_pool_alloc(struct efct_hw *hw); +void efct_hw_reqtag_pool_free(struct efct_hw *hw); +struct hw_wq_callback +*efct_hw_reqtag_alloc(struct efct_hw *hw, + void (*callback)(void *arg, u8 *cqe, + int status), void *arg); +void +efct_hw_reqtag_free(struct efct_hw *hw, struct hw_wq_callback *wqcb); +struct hw_wq_callback +*efct_hw_reqtag_get_instance(struct efct_hw *hw, u32 instance_index); + #endif /* __EFCT_H__ */ From patchwork Thu Jan 7 22:58:58 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: James Smart X-Patchwork-Id: 358582 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-15.8 required=3.0 tests=BAYES_00,DKIM_SIGNED, DKIM_VALID,DKIM_VALID_AU,FREEMAIL_FORGED_FROMDOMAIN,FREEMAIL_FROM, HEADER_FROM_DIFFERENT_DOMAINS,INCLUDES_CR_TRAILER,INCLUDES_PATCH, MAILING_LIST_MULTI, SPF_HELO_NONE, SPF_PASS, 
URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id E0027C433E9 for ; Thu, 7 Jan 2021 23:01:17 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id A2ECA235FF for ; Thu, 7 Jan 2021 23:01:17 +0000 (UTC) Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728906AbhAGXBR (ORCPT ); Thu, 7 Jan 2021 18:01:17 -0500 Received: from lindbergh.monkeyblade.net ([23.128.96.19]:52026 "EHLO lindbergh.monkeyblade.net" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728471AbhAGXBO (ORCPT ); Thu, 7 Jan 2021 18:01:14 -0500 Received: from mail-pf1-x435.google.com (mail-pf1-x435.google.com [IPv6:2607:f8b0:4864:20::435]) by lindbergh.monkeyblade.net (Postfix) with ESMTPS id 6E113C0612AC for ; Thu, 7 Jan 2021 14:59:36 -0800 (PST) Received: by mail-pf1-x435.google.com with SMTP id 11so4994292pfu.4 for ; Thu, 07 Jan 2021 14:59:36 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=pjxV6JB+4YwDi3u/iYx+LgOmM81MO6obRrCU6zOXe4w=; b=bT9SV484V7aWviM9FSv+0igx9RnlRPfm6zLYF/+ph0Hl9pQUfo32XN39ZMcark3+wA wzo7oPWgsvILL/OcYH4bCh2dn2xwQlXQK4Fl4dJtZ2J2QtBxS+2jcHN2dd0lFpLtxpEy cjMMFv6mE/YciCQKmvyn4R33UoG4Ltl/yKKTFsKge5NUxb10cLNRsdNcAxgPn8bwyueu O5BCTZAi5UY38tHKp432o2+w3+gg082XqGOfJ1y7dmZ9cNCerz8YBCkzX003LGXGUWx1 0MNwkvKc+ELi45tW/oVfNq0z2zLjQtEK7CZA5N2PjhwnfeHzh6CeZT9ruKzd6k34zEEG ydGw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=pjxV6JB+4YwDi3u/iYx+LgOmM81MO6obRrCU6zOXe4w=; b=nn85SvO9tjGLB+YvU/JyMRicHBnGCgK0AGHyH3eyycYMNu5IEESP75faq8DQUCO7RV K4SCJLpQnEIq3XUQQMuC/feHBX7nxa+dbiuRhKKysfUOPIX/1/B2978Hr3aaCWWeDdN6 hnQWbKMm6EosGl0tKmZA7PSQTtgJrlWltJGyKWgdzFd1wU18YewckFklov0vmrnhTH1Y +WHAVPfm9xZhucTct8sF0vPIb6j9co+VQNxylY8QTizG2ikSkiadytB19RXHJxacawSb cZU8LQj8tVHISDbkKfFTjwm8Jc1dyG8dSlrYQibTDgZhIqZgDy2nffTEGPZwTk4QpbKM vymQ== X-Gm-Message-State: AOAM531n2KWCThD/4lHuVsGjGwpTT9BLT5qIUrDmgOOzN67u0ex+zOdB XKEOXr+ChTJU4piWsw5hXfu40OLqUaiDKw== X-Google-Smtp-Source: ABdhPJyS1dGunBYGjjIabdUtZvmjuiuZfyBBAEwk5Yxy03MqUT6G0MWmOCQ093JOI4D7qqKlIBldsQ== X-Received: by 2002:aa7:8506:0:b029:19e:95:f75f with SMTP id v6-20020aa785060000b029019e0095f75fmr807750pfn.7.1610060375165; Thu, 07 Jan 2021 14:59:35 -0800 (PST) Received: from localhost.localdomain ([192.19.223.252]) by smtp.gmail.com with ESMTPSA id l197sm6881405pfd.97.2021.01.07.14.59.34 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Thu, 07 Jan 2021 14:59:34 -0800 (PST) From: James Smart To: linux-scsi@vger.kernel.org Cc: James Smart , Ram Vegesna , Hannes Reinecke , Daniel Wagner Subject: [PATCH v7 24/31] elx: efct: SCSI IO handling routines Date: Thu, 7 Jan 2021 14:58:58 -0800 Message-Id: <20210107225905.18186-25-jsmart2021@gmail.com> X-Mailer: git-send-email 2.26.2 In-Reply-To: <20210107225905.18186-1-jsmart2021@gmail.com> References: <20210107225905.18186-1-jsmart2021@gmail.com> MIME-Version: 1.0 Precedence: bulk List-ID: X-Mailing-List: linux-scsi@vger.kernel.org This patch continues the efct driver population. This patch adds driver definitions for: Routines for SCSI transport IO alloc, build and send IO. 
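For orientation, a typical target read built on these interfaces reduces to the following hedged sketch (backend callback names are hypothetical; signatures are as declared in efct_scsi.h below):

	/* Lifecycle of one target read. */
	struct efct_io *io = efct_scsi_io_alloc(node);

	/* single data phase; EFCT_SCSI_LAST_DATAPHASE may arm an
	 * automatic FCP response when there is no residual
	 */
	efct_scsi_send_rd_data(io, EFCT_SCSI_LAST_DATAPHASE, sgl, sgl_count,
			       xfer_len, backend_data_done, arg);

	/* from backend_data_done(), when auto-response was not used: */
	efct_scsi_send_resp(io, 0, &rsp, backend_rsp_done, arg);

	/* final completion drops the IO reference */
	efct_scsi_io_complete(io);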
Co-developed-by: Ram Vegesna Signed-off-by: Ram Vegesna Signed-off-by: James Smart Reviewed-by: Hannes Reinecke Reviewed-by: Daniel Wagner --- drivers/scsi/elx/efct/efct_scsi.c | 1164 +++++++++++++++++++++++++++++ drivers/scsi/elx/efct/efct_scsi.h | 207 +++++ 2 files changed, 1371 insertions(+) create mode 100644 drivers/scsi/elx/efct/efct_scsi.c create mode 100644 drivers/scsi/elx/efct/efct_scsi.h diff --git a/drivers/scsi/elx/efct/efct_scsi.c b/drivers/scsi/elx/efct/efct_scsi.c new file mode 100644 index 000000000000..20aed62a1bff --- /dev/null +++ b/drivers/scsi/elx/efct/efct_scsi.c @@ -0,0 +1,1164 @@ +// SPDX-License-Identifier: GPL-2.0 +/* + * Copyright (C) 2021 Broadcom. All Rights Reserved. The term + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. + */ + +#include "efct_driver.h" +#include "efct_hw.h" + +#define enable_tsend_auto_resp(efct) 1 +#define enable_treceive_auto_resp(efct) 0 + +#define SCSI_IOFMT "[%04x][i:%04x t:%04x h:%04x]" + +#define scsi_io_printf(io, fmt, ...) \ + efc_log_debug(io->efct, "[%s]" SCSI_IOFMT fmt, \ + io->node->display_name, io->instance_index,\ + io->init_task_tag, io->tgt_task_tag, io->hw_tag, ##__VA_ARGS__) + +#define EFCT_LOG_ENABLE_SCSI_TRACE(efct) \ + (((efct) != NULL) ? (((efct)->logmask & (1U << 2)) != 0) : 0) + +#define scsi_io_trace(io, fmt, ...) \ + do { \ + if (EFCT_LOG_ENABLE_SCSI_TRACE(io->efct)) \ + scsi_io_printf(io, fmt, ##__VA_ARGS__); \ + } while (0) + +struct efct_io * +efct_scsi_io_alloc(struct efct_node *node) +{ + struct efct *efct; + struct efct_xport *xport; + struct efct_io *io; + unsigned long flags = 0; + + efct = node->efct; + + xport = efct->xport; + + spin_lock_irqsave(&node->active_ios_lock, flags); + + io = efct_io_pool_io_alloc(efct->xport->io_pool); + if (!io) { + efc_log_err(efct, "IO alloc Failed\n"); + atomic_add_return(1, &xport->io_alloc_failed_count); + spin_unlock_irqrestore(&node->active_ios_lock, flags); + return NULL; + } + + /* initialize refcount */ + kref_init(&io->ref); + io->release = _efct_scsi_io_free; + + /* set generic fields */ + io->efct = efct; + io->node = node; + kref_get(&node->ref); + + /* set type and name */ + io->io_type = EFCT_IO_TYPE_IO; + io->display_name = "scsi_io"; + + io->cmd_ini = false; + io->cmd_tgt = true; + + /* Add to node's active_ios list */ + INIT_LIST_HEAD(&io->list_entry); + list_add(&io->list_entry, &node->active_ios); + + spin_unlock_irqrestore(&node->active_ios_lock, flags); + + return io; +} + +void +_efct_scsi_io_free(struct kref *arg) +{ + struct efct_io *io = container_of(arg, struct efct_io, ref); + struct efct *efct = io->efct; + struct efct_node *node = io->node; + unsigned long flags = 0; + + scsi_io_trace(io, "freeing io 0x%p %s\n", io, io->display_name); + + if (io->io_free) { + efc_log_err(efct, "IO already freed.\n"); + return; + } + + spin_lock_irqsave(&node->active_ios_lock, flags); + list_del_init(&io->list_entry); + spin_unlock_irqrestore(&node->active_ios_lock, flags); + + kref_put(&node->ref, node->release); + io->node = NULL; + efct_io_pool_io_free(efct->xport->io_pool, io); +} + +void +efct_scsi_io_free(struct efct_io *io) +{ + scsi_io_trace(io, "freeing io 0x%p %s\n", io, io->display_name); + WARN_ON(!refcount_read(&io->ref.refcount)); + kref_put(&io->ref, io->release); +} + +static void +efct_target_io_cb(struct efct_hw_io *hio, u32 length, int status, + u32 ext_status, void *app) +{ + u32 flags = 0; + struct efct_io *io = app; + struct efct *efct; + enum efct_scsi_io_status scsi_stat = EFCT_SCSI_STATUS_GOOD; + efct_scsi_io_cb_t cb; + 
+ if (!io || !io->efct) { + pr_err("%s: IO can not be NULL\n", __func__); + return; + } + + scsi_io_trace(io, "status x%x ext_status x%x\n", status, ext_status); + + efct = io->efct; + + io->transferred += length; + + if (!io->scsi_tgt_cb) { + efct_scsi_check_pending(efct); + return; + } + + /* Call target server completion */ + cb = io->scsi_tgt_cb; + + /* Clear the callback before invoking the callback */ + io->scsi_tgt_cb = NULL; + + /* if status was good, and auto-good-response was set, + * then callback target-server with IO_CMPL_RSP_SENT, + * otherwise send IO_CMPL + */ + if (status == 0 && io->auto_resp) + flags |= EFCT_SCSI_IO_CMPL_RSP_SENT; + else + flags |= EFCT_SCSI_IO_CMPL; + + switch (status) { + case SLI4_FC_WCQE_STATUS_SUCCESS: + scsi_stat = EFCT_SCSI_STATUS_GOOD; + break; + case SLI4_FC_WCQE_STATUS_DI_ERROR: + if (ext_status & SLI4_FC_DI_ERROR_GE) + scsi_stat = EFCT_SCSI_STATUS_DIF_GUARD_ERR; + else if (ext_status & SLI4_FC_DI_ERROR_AE) + scsi_stat = EFCT_SCSI_STATUS_DIF_APP_TAG_ERROR; + else if (ext_status & SLI4_FC_DI_ERROR_RE) + scsi_stat = EFCT_SCSI_STATUS_DIF_REF_TAG_ERROR; + else + scsi_stat = EFCT_SCSI_STATUS_DIF_UNKNOWN_ERROR; + break; + case SLI4_FC_WCQE_STATUS_LOCAL_REJECT: + switch (ext_status) { + case SLI4_FC_LOCAL_REJECT_INVALID_RELOFFSET: + case SLI4_FC_LOCAL_REJECT_ABORT_REQUESTED: + scsi_stat = EFCT_SCSI_STATUS_ABORTED; + break; + case SLI4_FC_LOCAL_REJECT_INVALID_RPI: + scsi_stat = EFCT_SCSI_STATUS_NEXUS_LOST; + break; + case SLI4_FC_LOCAL_REJECT_NO_XRI: + scsi_stat = EFCT_SCSI_STATUS_NO_IO; + break; + default: + /*we have seen 0x0d(TX_DMA_FAILED err)*/ + scsi_stat = EFCT_SCSI_STATUS_ERROR; + break; + } + break; + + case SLI4_FC_WCQE_STATUS_TARGET_WQE_TIMEOUT: + /* target IO timed out */ + scsi_stat = EFCT_SCSI_STATUS_TIMEDOUT_AND_ABORTED; + break; + + case SLI4_FC_WCQE_STATUS_SHUTDOWN: + /* Target IO cancelled by HW */ + scsi_stat = EFCT_SCSI_STATUS_SHUTDOWN; + break; + + default: + scsi_stat = EFCT_SCSI_STATUS_ERROR; + break; + } + + cb(io, scsi_stat, flags, io->scsi_tgt_cb_arg); + + efct_scsi_check_pending(efct); +} + +static int +efct_scsi_build_sgls(struct efct_hw *hw, struct efct_hw_io *hio, + struct efct_scsi_sgl *sgl, u32 sgl_count, + enum efct_hw_io_type type) +{ + int rc; + u32 i; + struct efct *efct = hw->os; + + /* Initialize HW SGL */ + rc = efct_hw_io_init_sges(hw, hio, type); + if (rc) { + efc_log_err(efct, "efct_hw_io_init_sges failed: %d\n", rc); + return EFC_FAIL; + } + + for (i = 0; i < sgl_count; i++) { + + /* Add data SGE */ + rc = efct_hw_io_add_sge(hw, hio, sgl[i].addr, sgl[i].len); + if (rc) { + efc_log_err(efct, + "add sge failed cnt=%d rc=%d\n", + sgl_count, rc); + return rc; + } + } + + return EFC_SUCCESS; +} + +static void efc_log_sgl(struct efct_io *io) +{ + struct efct_hw_io *hio = io->hio; + struct sli4_sge *data = NULL; + u32 *dword = NULL; + u32 i; + u32 n_sge; + + scsi_io_trace(io, "def_sgl at 0x%x 0x%08x\n", + upper_32_bits(hio->def_sgl.phys), + lower_32_bits(hio->def_sgl.phys)); + n_sge = (hio->sgl == &hio->def_sgl) ? 
hio->n_sge : hio->def_sgl_count; + for (i = 0, data = hio->def_sgl.virt; i < n_sge; i++, data++) { + dword = (u32 *)data; + + scsi_io_trace(io, "SGL %2d 0x%08x 0x%08x 0x%08x 0x%08x\n", + i, dword[0], dword[1], dword[2], dword[3]); + + if (dword[2] & (1U << 31)) + break; + } + +} + +static void +efct_scsi_check_pending_async_cb(struct efct_hw *hw, int status, + u8 *mqe, void *arg) +{ + struct efct_io *io = arg; + + if (io) { + efct_hw_done_t cb = io->hw_cb; + + if (!io->hw_cb) + return; + + io->hw_cb = NULL; + (cb)(io->hio, 0, SLI4_FC_WCQE_STATUS_DISPATCH_ERROR, 0, io); + } +} + +static int +efct_scsi_io_dispatch_hw_io(struct efct_io *io, struct efct_hw_io *hio) +{ + int rc = EFC_SUCCESS; + struct efct *efct = io->efct; + + /* Got a HW IO; + * update ini/tgt_task_tag with HW IO info and dispatch + */ + io->hio = hio; + if (io->cmd_tgt) + io->tgt_task_tag = hio->indicator; + else if (io->cmd_ini) + io->init_task_tag = hio->indicator; + io->hw_tag = hio->reqtag; + + hio->eq = io->hw_priv; + + /* Copy WQ steering */ + switch (io->wq_steering) { + case EFCT_SCSI_WQ_STEERING_CLASS >> EFCT_SCSI_WQ_STEERING_SHIFT: + hio->wq_steering = EFCT_HW_WQ_STEERING_CLASS; + break; + case EFCT_SCSI_WQ_STEERING_REQUEST >> EFCT_SCSI_WQ_STEERING_SHIFT: + hio->wq_steering = EFCT_HW_WQ_STEERING_REQUEST; + break; + case EFCT_SCSI_WQ_STEERING_CPU >> EFCT_SCSI_WQ_STEERING_SHIFT: + hio->wq_steering = EFCT_HW_WQ_STEERING_CPU; + break; + } + + switch (io->io_type) { + case EFCT_IO_TYPE_IO: + rc = efct_scsi_build_sgls(&efct->hw, io->hio, + io->sgl, io->sgl_count, io->hio_type); + if (rc) + break; + + if (EFCT_LOG_ENABLE_SCSI_TRACE(efct)) + efc_log_sgl(io); + + if (io->app_id) + io->iparam.fcp_tgt.app_id = io->app_id; + + io->iparam.fcp_tgt.vpi = io->node->vpi; + io->iparam.fcp_tgt.rpi = io->node->rpi; + io->iparam.fcp_tgt.s_id = io->node->port_fc_id; + io->iparam.fcp_tgt.d_id = io->node->node_fc_id; + io->iparam.fcp_tgt.xmit_len = io->wire_len; + + rc = efct_hw_io_send(&io->efct->hw, io->hio_type, io->hio, + &io->iparam, io->hw_cb, io); + break; + default: + scsi_io_printf(io, "Unknown IO type=%d\n", io->io_type); + rc = EFC_FAIL; + break; + } + return rc; +} + +static int +efct_scsi_io_dispatch_no_hw_io(struct efct_io *io) +{ + int rc; + + switch (io->io_type) { + case EFCT_IO_TYPE_ABORT: { + struct efct_hw_io *hio_to_abort = NULL; + + hio_to_abort = io->io_to_abort->hio; + + if (!hio_to_abort) { + /* + * If "IO to abort" does not have an + * associated HW IO, immediately make callback with + * success. The command must have been sent to + * the backend, but the data phase has not yet + * started, so we don't have a HW IO. + * + * Note: since the backend shims should be + * taking a reference on io_to_abort, it should not + * be possible to have been completed and freed by + * the backend before the abort got here. 
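Related to the dispatch paths that follow: when a dispatch fails after the IO has already left the pending list, the completion is deferred rather than invoked inline — a sketch of the pattern used below:

	/* Queue the failure behind a no-op mailbox command; the mailbox
	 * completion context then runs efct_scsi_check_pending_async_cb(),
	 * which fails the IO with SLI4_FC_WCQE_STATUS_DISPATCH_ERROR.
	 */
	if (efct_hw_async_call(&efct->hw,
			       efct_scsi_check_pending_async_cb, io))
		efc_log_debug(efct, "call hw async failed\n");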
+			 */
+			scsi_io_printf(io, "IO: not active\n");
+			((efct_hw_done_t)io->hw_cb)(io->hio, 0,
+					SLI4_FC_WCQE_STATUS_SUCCESS, 0, io);
+			rc = EFC_SUCCESS;
+			break;
+		}
+
+		/* HW IO is valid, abort it */
+		scsi_io_printf(io, "aborting\n");
+		rc = efct_hw_io_abort(&io->efct->hw, hio_to_abort,
+				      io->send_abts, io->hw_cb, io);
+		if (rc) {
+			int status = SLI4_FC_WCQE_STATUS_SUCCESS;
+			efct_hw_done_t cb = io->hw_cb;
+
+			if (rc != EFCT_HW_RTN_IO_NOT_ACTIVE &&
+			    rc != EFCT_HW_RTN_IO_ABORT_IN_PROGRESS) {
+				status = -1;
+				scsi_io_printf(io,
+					       "Failed to abort IO rc=%d\n", rc);
+			}
+			cb(io->hio, 0, status, 0, io);
+			rc = EFC_SUCCESS;
+		}
+
+		break;
+	}
+	default:
+		scsi_io_printf(io, "Unknown IO type=%d\n", io->io_type);
+		rc = EFC_FAIL;
+		break;
+	}
+	return rc;
+}
+
+static struct efct_io *
+efct_scsi_dispatch_pending(struct efct *efct)
+{
+	struct efct_xport *xport = efct->xport;
+	struct efct_io *io = NULL;
+	struct efct_hw_io *hio;
+	unsigned long flags = 0;
+	int status;
+
+	spin_lock_irqsave(&xport->io_pending_lock, flags);
+
+	if (!list_empty(&xport->io_pending_list)) {
+		io = list_first_entry(&xport->io_pending_list, struct efct_io,
+				      io_pending_link);
+		list_del_init(&io->io_pending_link);
+	}
+
+	if (!io) {
+		spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+		return NULL;
+	}
+
+	if (io->io_type == EFCT_IO_TYPE_ABORT) {
+		hio = NULL;
+	} else {
+		hio = efct_hw_io_alloc(&efct->hw);
+		if (!hio) {
+			/*
+			 * No HW IO available. Put the IO back on
+			 * the front of the pending list.
+			 */
+			list_add(&io->io_pending_link,
+				 &xport->io_pending_list);
+			io = NULL;
+		} else {
+			hio->eq = io->hw_priv;
+		}
+	}
+
+	/* Must drop the lock before dispatching the IO */
+	spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+
+	if (!io)
+		return NULL;
+
+	/*
+	 * We pulled an IO off the pending list,
+	 * and either got an HW IO or don't need one.
+	 */
+	atomic_sub_return(1, &xport->io_pending_count);
+	if (!hio)
+		status = efct_scsi_io_dispatch_no_hw_io(io);
+	else
+		status = efct_scsi_io_dispatch_hw_io(io, hio);
+	if (status) {
+		/*
+		 * Invoke the HW callback, but do so in the separate
+		 * execution context provided by the NOP mailbox
+		 * completion processing context, by using
+		 * efct_hw_async_call().
+		 */
+		if (efct_hw_async_call(&efct->hw,
+				       efct_scsi_check_pending_async_cb, io)) {
+			efc_log_debug(efct, "call hw async failed\n");
+		}
+	}
+
+	return io;
+}
+
+void
+efct_scsi_check_pending(struct efct *efct)
+{
+	struct efct_xport *xport = efct->xport;
+	struct efct_io *io = NULL;
+	int count = 0;
+	unsigned long flags = 0;
+	int dispatch = 0;
+
+	/* Guard against recursion */
+	if (atomic_add_return(1, &xport->io_pending_recursing) != 1) {
+		/* This function is already running. Decrement and return. */
+		atomic_sub_return(1, &xport->io_pending_recursing);
+		return;
+	}
+
+	while (efct_scsi_dispatch_pending(efct))
+		count++;
+
+	if (count) {
+		atomic_sub_return(1, &xport->io_pending_recursing);
+		return;
+	}
+
+	/*
+	 * If nothing was removed from the list,
+	 * we might be in a case where we need to abort an
+	 * active IO and the abort is on the pending list.
+	 * Look for an abort we can dispatch.
+	 */
+
+	spin_lock_irqsave(&xport->io_pending_lock, flags);
+
+	list_for_each_entry(io, &xport->io_pending_list, io_pending_link) {
+		if (io->io_type == EFCT_IO_TYPE_ABORT && io->io_to_abort->hio) {
+			/* This IO has a HW IO, so it is
+			 * active. Dispatch the abort.
+			 */
+			dispatch = 1;
+			list_del_init(&io->io_pending_link);
+			atomic_sub_return(1, &xport->io_pending_count);
+			break;
+		}
+	}
+
+	spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+
+	if (dispatch) {
+		if (efct_scsi_io_dispatch_no_hw_io(io)) {
+			if (efct_hw_async_call(&efct->hw,
+					       efct_scsi_check_pending_async_cb, io)) {
+				efc_log_debug(efct, "hw async failed\n");
+			}
+		}
+	}
+
+	atomic_sub_return(1, &xport->io_pending_recursing);
+}
+
+int
+efct_scsi_io_dispatch(struct efct_io *io, void *cb)
+{
+	struct efct_hw_io *hio;
+	struct efct *efct = io->efct;
+	struct efct_xport *xport = efct->xport;
+	unsigned long flags = 0;
+
+	io->hw_cb = cb;
+
+	/*
+	 * If this IO already has a HW IO, then this is not the first
+	 * phase of the IO. Send it to the HW.
+	 */
+	if (io->hio)
+		return efct_scsi_io_dispatch_hw_io(io, io->hio);
+
+	/*
+	 * We don't already have a HW IO associated with the IO. First check
+	 * the pending list. If not empty, add the IO to the tail and process
+	 * the pending list.
+	 */
+	spin_lock_irqsave(&xport->io_pending_lock, flags);
+	if (!list_empty(&xport->io_pending_list)) {
+		/*
+		 * If this is a low-latency request, put it at the front
+		 * of the IO pending queue; otherwise put it at the end.
+		 */
+		if (io->low_latency) {
+			INIT_LIST_HEAD(&io->io_pending_link);
+			list_add(&io->io_pending_link,
+				 &xport->io_pending_list);
+		} else {
+			INIT_LIST_HEAD(&io->io_pending_link);
+			list_add_tail(&io->io_pending_link,
+				      &xport->io_pending_list);
+		}
+		spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+		atomic_add_return(1, &xport->io_pending_count);
+		atomic_add_return(1, &xport->io_total_pending);
+
+		/* process pending list */
+		efct_scsi_check_pending(efct);
+		return EFC_SUCCESS;
+	}
+	spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+
+	/*
+	 * We don't have a HW IO associated with the IO and there's nothing
+	 * on the pending list. Attempt to allocate a HW IO and dispatch it.
+	 */
+	hio = efct_hw_io_alloc(&io->efct->hw);
+	if (!hio) {
+		/* Couldn't get a HW IO. Save this IO on the pending list */
+		spin_lock_irqsave(&xport->io_pending_lock, flags);
+		INIT_LIST_HEAD(&io->io_pending_link);
+		list_add_tail(&io->io_pending_link, &xport->io_pending_list);
+		spin_unlock_irqrestore(&xport->io_pending_lock, flags);
+
+		atomic_add_return(1, &xport->io_total_pending);
+		atomic_add_return(1, &xport->io_pending_count);
+		return EFC_SUCCESS;
+	}
+
+	/* We successfully allocated a HW IO; dispatch to HW */
+	return efct_scsi_io_dispatch_hw_io(io, hio);
+}
+
+int
+efct_scsi_io_dispatch_abort(struct efct_io *io, void *cb)
+{
+	struct efct *efct = io->efct;
+	struct efct_xport *xport = efct->xport;
+	unsigned long flags = 0;
+
+	io->hw_cb = cb;
+
+	/*
+	 * For aborts, we don't need a HW IO, but we still want
+	 * to pass through the pending list to preserve ordering.
+	 * Thus, if the pending list is not empty, add this abort
+	 * to the pending list and process the pending list.
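A sketch of the caller side of this ordering rule, using the names from efct_scsi_tgt_abort_io() later in this patch:

	/* The abort IO rides the same pending list as the command it
	 * targets, so "dispatch command, then abort it" can never be
	 * reordered into "abort, then dispatch".
	 */
	abort_io->io_type = EFCT_IO_TYPE_ABORT;
	abort_io->io_to_abort = cmd_io;	/* cmd_io: the outstanding IO */
	rc = efct_scsi_io_dispatch_abort(abort_io, efct_target_abort_cb);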
+ */ + spin_lock_irqsave(&xport->io_pending_lock, flags); + if (!list_empty(&xport->io_pending_list)) { + INIT_LIST_HEAD(&io->io_pending_link); + list_add_tail(&io->io_pending_link, &xport->io_pending_list); + spin_unlock_irqrestore(&xport->io_pending_lock, flags); + atomic_add_return(1, &xport->io_pending_count); + atomic_add_return(1, &xport->io_total_pending); + + /* process pending list */ + efct_scsi_check_pending(efct); + return EFC_SUCCESS; + } + spin_unlock_irqrestore(&xport->io_pending_lock, flags); + + /* nothing on pending list, dispatch abort */ + return efct_scsi_io_dispatch_no_hw_io(io); +} + +static inline int +efct_scsi_xfer_data(struct efct_io *io, u32 flags, + struct efct_scsi_sgl *sgl, u32 sgl_count, u64 xwire_len, + enum efct_hw_io_type type, int enable_ar, + efct_scsi_io_cb_t cb, void *arg) +{ + struct efct *efct; + size_t residual = 0; + + io->sgl_count = sgl_count; + + efct = io->efct; + + scsi_io_trace(io, "%s wire_len %llu\n", + (type == EFCT_HW_IO_TARGET_READ) ? "send" : "recv", + xwire_len); + + io->hio_type = type; + + io->scsi_tgt_cb = cb; + io->scsi_tgt_cb_arg = arg; + + residual = io->exp_xfer_len - io->transferred; + io->wire_len = (xwire_len < residual) ? xwire_len : residual; + residual = (xwire_len - io->wire_len); + + memset(&io->iparam, 0, sizeof(io->iparam)); + io->iparam.fcp_tgt.ox_id = io->init_task_tag; + io->iparam.fcp_tgt.offset = io->transferred; + io->iparam.fcp_tgt.cs_ctl = io->cs_ctl; + io->iparam.fcp_tgt.timeout = io->timeout; + + /* if this is the last data phase and there is no residual, enable + * auto-good-response + */ + if (enable_ar && (flags & EFCT_SCSI_LAST_DATAPHASE) && residual == 0 && + ((io->transferred + io->wire_len) == io->exp_xfer_len) && + (!(flags & EFCT_SCSI_NO_AUTO_RESPONSE))) { + io->iparam.fcp_tgt.flags |= SLI4_IO_AUTO_GOOD_RESPONSE; + io->auto_resp = true; + } else { + io->auto_resp = false; + } + + /* save this transfer length */ + io->xfer_req = io->wire_len; + + /* Adjust the transferred count to account for overrun + * when the residual is calculated in efct_scsi_send_resp + */ + io->transferred += residual; + + /* Adjust the SGL size if there is overrun */ + + if (residual) { + struct efct_scsi_sgl *sgl_ptr = &io->sgl[sgl_count - 1]; + + while (residual) { + size_t len = sgl_ptr->len; + + if (len > residual) { + sgl_ptr->len = len - residual; + residual = 0; + } else { + sgl_ptr->len = 0; + residual -= len; + io->sgl_count--; + } + sgl_ptr--; + } + } + + /* Set latency and WQ steering */ + io->low_latency = (flags & EFCT_SCSI_LOW_LATENCY) != 0; + io->wq_steering = (flags & EFCT_SCSI_WQ_STEERING_MASK) >> + EFCT_SCSI_WQ_STEERING_SHIFT; + io->wq_class = (flags & EFCT_SCSI_WQ_CLASS_MASK) >> + EFCT_SCSI_WQ_CLASS_SHIFT; + + if (efct->xport) { + struct efct_xport *xport = efct->xport; + + if (type == EFCT_HW_IO_TARGET_READ) { + xport->fcp_stats.input_requests++; + xport->fcp_stats.input_bytes += xwire_len; + } else if (type == EFCT_HW_IO_TARGET_WRITE) { + xport->fcp_stats.output_requests++; + xport->fcp_stats.output_bytes += xwire_len; + } + } + return efct_scsi_io_dispatch(io, efct_target_io_cb); +} + +int +efct_scsi_send_rd_data(struct efct_io *io, u32 flags, + struct efct_scsi_sgl *sgl, u32 sgl_count, u64 len, + efct_scsi_io_cb_t cb, void *arg) +{ + return efct_scsi_xfer_data(io, flags, sgl, sgl_count, + len, EFCT_HW_IO_TARGET_READ, + enable_tsend_auto_resp(io->efct), cb, arg); +} + +int +efct_scsi_recv_wr_data(struct efct_io *io, u32 flags, + struct efct_scsi_sgl *sgl, u32 sgl_count, u64 len, + efct_scsi_io_cb_t cb, 
void *arg) +{ + return efct_scsi_xfer_data(io, flags, sgl, sgl_count, len, + EFCT_HW_IO_TARGET_WRITE, + enable_treceive_auto_resp(io->efct), cb, arg); +} + +int +efct_scsi_send_resp(struct efct_io *io, u32 flags, + struct efct_scsi_cmd_resp *rsp, + efct_scsi_io_cb_t cb, void *arg) +{ + struct efct *efct; + int residual; + /* Always try auto resp */ + bool auto_resp = true; + u8 scsi_status = 0; + u16 scsi_status_qualifier = 0; + u8 *sense_data = NULL; + u32 sense_data_length = 0; + + efct = io->efct; + + if (rsp) { + scsi_status = rsp->scsi_status; + scsi_status_qualifier = rsp->scsi_status_qualifier; + sense_data = rsp->sense_data; + sense_data_length = rsp->sense_data_length; + residual = rsp->residual; + } else { + residual = io->exp_xfer_len - io->transferred; + } + + io->wire_len = 0; + io->hio_type = EFCT_HW_IO_TARGET_RSP; + + io->scsi_tgt_cb = cb; + io->scsi_tgt_cb_arg = arg; + + memset(&io->iparam, 0, sizeof(io->iparam)); + io->iparam.fcp_tgt.ox_id = io->init_task_tag; + io->iparam.fcp_tgt.offset = 0; + io->iparam.fcp_tgt.cs_ctl = io->cs_ctl; + io->iparam.fcp_tgt.timeout = io->timeout; + + /* Set low latency queueing request */ + io->low_latency = (flags & EFCT_SCSI_LOW_LATENCY) != 0; + io->wq_steering = (flags & EFCT_SCSI_WQ_STEERING_MASK) >> + EFCT_SCSI_WQ_STEERING_SHIFT; + io->wq_class = (flags & EFCT_SCSI_WQ_CLASS_MASK) >> + EFCT_SCSI_WQ_CLASS_SHIFT; + + if (scsi_status != 0 || residual || sense_data_length) { + struct fcp_resp_with_ext *fcprsp = io->rspbuf.virt; + u8 *sns_data; + + if (!fcprsp) { + efc_log_err(efct, "NULL response buffer\n"); + return EFC_FAIL; + } + + sns_data = (u8 *)io->rspbuf.virt + sizeof(*fcprsp); + + auto_resp = false; + + memset(fcprsp, 0, sizeof(*fcprsp)); + + io->wire_len += sizeof(*fcprsp); + + fcprsp->resp.fr_status = scsi_status; + fcprsp->resp.fr_retry_delay = + cpu_to_be16(scsi_status_qualifier); + + /* set residual status if necessary */ + if (residual != 0) { + /* FCP: if data transferred is less than the + * amount expected, then this is an underflow. 
+ * If data transferred would have been greater + * than the amount expected this is an overflow + */ + if (residual > 0) { + fcprsp->resp.fr_flags |= FCP_RESID_UNDER; + fcprsp->ext.fr_resid = cpu_to_be32(residual); + } else { + fcprsp->resp.fr_flags |= FCP_RESID_OVER; + fcprsp->ext.fr_resid = cpu_to_be32(-residual); + } + } + + if (EFCT_SCSI_SNS_BUF_VALID(sense_data) && sense_data_length) { + if (sense_data_length > SCSI_SENSE_BUFFERSIZE) { + efc_log_err(efct, "Sense exceeds max size.\n"); + return EFC_FAIL; + } + + fcprsp->resp.fr_flags |= FCP_SNS_LEN_VAL; + memcpy(sns_data, sense_data, sense_data_length); + fcprsp->ext.fr_sns_len = cpu_to_be32(sense_data_length); + io->wire_len += sense_data_length; + } + + io->sgl[0].addr = io->rspbuf.phys; + io->sgl[0].dif_addr = 0; + io->sgl[0].len = io->wire_len; + io->sgl_count = 1; + } + + if (auto_resp) + io->iparam.fcp_tgt.flags |= SLI4_IO_AUTO_GOOD_RESPONSE; + + return efct_scsi_io_dispatch(io, efct_target_io_cb); +} + +static int +efct_target_bls_resp_cb(struct efct_hw_io *hio, u32 length, int status, + u32 ext_status, void *app) +{ + struct efct_io *io = app; + struct efct *efct; + enum efct_scsi_io_status bls_status; + + efct = io->efct; + + /* BLS isn't really a "SCSI" concept, but use SCSI status */ + if (status) { + io_error_log(io, "s=%#x x=%#x\n", status, ext_status); + bls_status = EFCT_SCSI_STATUS_ERROR; + } else { + bls_status = EFCT_SCSI_STATUS_GOOD; + } + + if (io->bls_cb) { + efct_scsi_io_cb_t bls_cb = io->bls_cb; + void *bls_cb_arg = io->bls_cb_arg; + + io->bls_cb = NULL; + io->bls_cb_arg = NULL; + + /* invoke callback */ + bls_cb(io, bls_status, 0, bls_cb_arg); + } + + efct_scsi_check_pending(efct); + return EFC_SUCCESS; +} + +static int +efct_target_send_bls_resp(struct efct_io *io, + efct_scsi_io_cb_t cb, void *arg) +{ + struct efct_node *node = io->node; + struct sli_bls_params *bls = &io->iparam.bls; + struct efct *efct = node->efct; + struct fc_ba_acc *acc; + int rc; + + /* fill out IO structure with everything needed to send BA_ACC */ + memset(&io->iparam, 0, sizeof(io->iparam)); + bls->ox_id = io->init_task_tag; + bls->rx_id = io->abort_rx_id; + bls->vpi = io->node->vpi; + bls->rpi = io->node->rpi; + bls->s_id = U32_MAX; + bls->d_id = io->node->node_fc_id; + bls->rpi_registered = true; + + acc = (void *)bls->payload; + acc->ba_ox_id = cpu_to_be16(bls->ox_id); + acc->ba_rx_id = cpu_to_be16(bls->rx_id); + acc->ba_high_seq_cnt = cpu_to_be16(U16_MAX); + + /* generic io fields have already been populated */ + + /* set type and BLS-specific fields */ + io->io_type = EFCT_IO_TYPE_BLS_RESP; + io->display_name = "bls_rsp"; + io->hio_type = EFCT_HW_BLS_ACC; + io->bls_cb = cb; + io->bls_cb_arg = arg; + + /* dispatch IO */ + rc = efct_hw_bls_send(efct, FC_RCTL_BA_ACC, bls, + efct_target_bls_resp_cb, io); + return rc; +} + +static int efct_bls_send_rjt_cb(struct efct_hw_io *hio, u32 length, int status, + u32 ext_status, void *app) +{ + struct efct_io *io = app; + + efct_scsi_io_free(io); + return 0; +} + +struct efct_io * +efct_bls_send_rjt(struct efct_io *io, struct fc_frame_header *hdr) +{ + struct efct_node *node = io->node; + struct sli_bls_params *bls = &io->iparam.bls; + struct efct *efct = node->efct; + struct fc_ba_rjt *acc; + int rc; + + /* fill out BLS Response-specific fields */ + io->io_type = EFCT_IO_TYPE_BLS_RESP; + io->display_name = "ba_rjt"; + io->hio_type = EFCT_HW_BLS_RJT; + io->init_task_tag = be16_to_cpu(hdr->fh_ox_id); + + /* fill out iparam fields */ + memset(&io->iparam, 0, sizeof(io->iparam)); + bls->ox_id = 
be16_to_cpu(hdr->fh_ox_id); + bls->rx_id = be16_to_cpu(hdr->fh_rx_id); + bls->vpi = io->node->vpi; + bls->rpi = io->node->rpi; + bls->s_id = U32_MAX; + bls->d_id = io->node->node_fc_id; + bls->rpi_registered = true; + + acc = (void *)bls->payload; + acc->br_reason = ELS_RJT_UNAB; + acc->br_explan = ELS_EXPL_NONE; + + rc = efct_hw_bls_send(efct, FC_RCTL_BA_RJT, bls, efct_bls_send_rjt_cb, + io); + if (rc) { + efc_log_err(efct, "efct_scsi_io_dispatch() failed: %d\n", rc); + efct_scsi_io_free(io); + io = NULL; + } + return io; +} + +int +efct_scsi_send_tmf_resp(struct efct_io *io, + enum efct_scsi_tmf_resp rspcode, + u8 addl_rsp_info[3], + efct_scsi_io_cb_t cb, void *arg) +{ + int rc; + struct { + struct fcp_resp_with_ext rsp_ext; + struct fcp_resp_rsp_info info; + } *fcprsp; + u8 fcp_rspcode; + + io->wire_len = 0; + + switch (rspcode) { + case EFCT_SCSI_TMF_FUNCTION_COMPLETE: + fcp_rspcode = FCP_TMF_CMPL; + break; + case EFCT_SCSI_TMF_FUNCTION_SUCCEEDED: + case EFCT_SCSI_TMF_FUNCTION_IO_NOT_FOUND: + fcp_rspcode = FCP_TMF_CMPL; + break; + case EFCT_SCSI_TMF_FUNCTION_REJECTED: + fcp_rspcode = FCP_TMF_REJECTED; + break; + case EFCT_SCSI_TMF_INCORRECT_LOGICAL_UNIT_NUMBER: + fcp_rspcode = FCP_TMF_INVALID_LUN; + break; + case EFCT_SCSI_TMF_SERVICE_DELIVERY: + fcp_rspcode = FCP_TMF_FAILED; + break; + default: + fcp_rspcode = FCP_TMF_REJECTED; + break; + } + + io->hio_type = EFCT_HW_IO_TARGET_RSP; + + io->scsi_tgt_cb = cb; + io->scsi_tgt_cb_arg = arg; + + if (io->tmf_cmd == EFCT_SCSI_TMF_ABORT_TASK) { + rc = efct_target_send_bls_resp(io, cb, arg); + return rc; + } + + /* populate the FCP TMF response */ + fcprsp = io->rspbuf.virt; + memset(fcprsp, 0, sizeof(*fcprsp)); + + fcprsp->rsp_ext.resp.fr_flags |= FCP_SNS_LEN_VAL; + + if (addl_rsp_info) { + memcpy(fcprsp->info._fr_resvd, addl_rsp_info, + sizeof(fcprsp->info._fr_resvd)); + } + fcprsp->info.rsp_code = fcp_rspcode; + + io->wire_len = sizeof(*fcprsp); + + fcprsp->rsp_ext.ext.fr_rsp_len = + cpu_to_be32(sizeof(struct fcp_resp_rsp_info)); + + io->sgl[0].addr = io->rspbuf.phys; + io->sgl[0].dif_addr = 0; + io->sgl[0].len = io->wire_len; + io->sgl_count = 1; + + memset(&io->iparam, 0, sizeof(io->iparam)); + io->iparam.fcp_tgt.ox_id = io->init_task_tag; + io->iparam.fcp_tgt.offset = 0; + io->iparam.fcp_tgt.cs_ctl = io->cs_ctl; + io->iparam.fcp_tgt.timeout = io->timeout; + + rc = efct_scsi_io_dispatch(io, efct_target_io_cb); + + return rc; +} + +static int +efct_target_abort_cb(struct efct_hw_io *hio, u32 length, int status, + u32 ext_status, void *app) +{ + struct efct_io *io = app; + struct efct *efct; + enum efct_scsi_io_status scsi_status; + efct_scsi_io_cb_t abort_cb; + void *abort_cb_arg; + + efct = io->efct; + + if (!io->abort_cb) + goto done; + + abort_cb = io->abort_cb; + abort_cb_arg = io->abort_cb_arg; + + io->abort_cb = NULL; + io->abort_cb_arg = NULL; + + switch (status) { + case SLI4_FC_WCQE_STATUS_SUCCESS: + scsi_status = EFCT_SCSI_STATUS_GOOD; + break; + case SLI4_FC_WCQE_STATUS_LOCAL_REJECT: + switch (ext_status) { + case SLI4_FC_LOCAL_REJECT_NO_XRI: + scsi_status = EFCT_SCSI_STATUS_NO_IO; + break; + case SLI4_FC_LOCAL_REJECT_ABORT_IN_PROGRESS: + scsi_status = EFCT_SCSI_STATUS_ABORT_IN_PROGRESS; + break; + default: + /*we have seen 0x15 (abort in progress)*/ + scsi_status = EFCT_SCSI_STATUS_ERROR; + break; + } + break; + case SLI4_FC_WCQE_STATUS_FCP_RSP_FAILURE: + scsi_status = EFCT_SCSI_STATUS_CHECK_RESPONSE; + break; + default: + scsi_status = EFCT_SCSI_STATUS_ERROR; + break; + } + /* invoke callback */ + abort_cb(io->io_to_abort, 
scsi_status, 0, abort_cb_arg); + +done: + /* done with IO to abort,efct_ref_get(): efct_scsi_tgt_abort_io() */ + kref_put(&io->io_to_abort->ref, io->io_to_abort->release); + + efct_io_pool_io_free(efct->xport->io_pool, io); + + efct_scsi_check_pending(efct); + return EFC_SUCCESS; +} + +int +efct_scsi_tgt_abort_io(struct efct_io *io, efct_scsi_io_cb_t cb, void *arg) +{ + struct efct *efct; + struct efct_xport *xport; + int rc; + struct efct_io *abort_io = NULL; + + efct = io->efct; + xport = efct->xport; + + /* take a reference on IO being aborted */ + if (kref_get_unless_zero(&io->ref) == 0) { + /* command no longer active */ + scsi_io_printf(io, "command no longer active\n"); + return EFC_FAIL; + } + + /* + * allocate a new IO to send the abort request. Use efct_io_alloc() + * directly, as we need an IO object that will not fail allocation + * due to allocations being disabled (in efct_scsi_io_alloc()) + */ + abort_io = efct_io_pool_io_alloc(efct->xport->io_pool); + if (!abort_io) { + atomic_add_return(1, &xport->io_alloc_failed_count); + kref_put(&io->ref, io->release); + return EFC_FAIL; + } + + /* Save the target server callback and argument */ + /* set generic fields */ + abort_io->cmd_tgt = true; + abort_io->node = io->node; + + /* set type and abort-specific fields */ + abort_io->io_type = EFCT_IO_TYPE_ABORT; + abort_io->display_name = "tgt_abort"; + abort_io->io_to_abort = io; + abort_io->send_abts = false; + abort_io->abort_cb = cb; + abort_io->abort_cb_arg = arg; + + /* now dispatch IO */ + rc = efct_scsi_io_dispatch_abort(abort_io, efct_target_abort_cb); + if (rc) + kref_put(&io->ref, io->release); + return rc; +} + +void +efct_scsi_io_complete(struct efct_io *io) +{ + if (io->io_free) { + efc_log_debug(io->efct, "completion for non-busy io tag 0x%x\n", + io->tag); + return; + } + + scsi_io_trace(io, "freeing io 0x%p %s\n", io, io->display_name); + kref_put(&io->ref, io->release); +} diff --git a/drivers/scsi/elx/efct/efct_scsi.h b/drivers/scsi/elx/efct/efct_scsi.h new file mode 100644 index 000000000000..7c4afa1e8c95 --- /dev/null +++ b/drivers/scsi/elx/efct/efct_scsi.h @@ -0,0 +1,207 @@ +/* SPDX-License-Identifier: GPL-2.0 */ +/* + * Copyright (C) 2021 Broadcom. All Rights Reserved. The term + * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries. 
+ */ + +#if !defined(__EFCT_SCSI_H__) +#define __EFCT_SCSI_H__ +#include +#include + +/* efct_scsi_rcv_cmd() efct_scsi_rcv_tmf() flags */ +#define EFCT_SCSI_CMD_DIR_IN (1 << 0) +#define EFCT_SCSI_CMD_DIR_OUT (1 << 1) +#define EFCT_SCSI_CMD_SIMPLE (1 << 2) +#define EFCT_SCSI_CMD_HEAD_OF_QUEUE (1 << 3) +#define EFCT_SCSI_CMD_ORDERED (1 << 4) +#define EFCT_SCSI_CMD_UNTAGGED (1 << 5) +#define EFCT_SCSI_CMD_ACA (1 << 6) +#define EFCT_SCSI_FIRST_BURST_ERR (1 << 7) +#define EFCT_SCSI_FIRST_BURST_ABORTED (1 << 8) + +/* efct_scsi_send_rd_data/recv_wr_data/send_resp flags */ +#define EFCT_SCSI_LAST_DATAPHASE (1 << 0) +#define EFCT_SCSI_NO_AUTO_RESPONSE (1 << 1) +#define EFCT_SCSI_LOW_LATENCY (1 << 2) + +#define EFCT_SCSI_SNS_BUF_VALID(sense) ((sense) && \ + (0x70 == (((const u8 *)(sense))[0] & 0x70))) + +#define EFCT_SCSI_WQ_STEERING_SHIFT 16 +#define EFCT_SCSI_WQ_STEERING_MASK (0xf << EFCT_SCSI_WQ_STEERING_SHIFT) +#define EFCT_SCSI_WQ_STEERING_CLASS (0 << EFCT_SCSI_WQ_STEERING_SHIFT) +#define EFCT_SCSI_WQ_STEERING_REQUEST (1 << EFCT_SCSI_WQ_STEERING_SHIFT) +#define EFCT_SCSI_WQ_STEERING_CPU (2 << EFCT_SCSI_WQ_STEERING_SHIFT) + +#define EFCT_SCSI_WQ_CLASS_SHIFT (20) +#define EFCT_SCSI_WQ_CLASS_MASK (0xf << EFCT_SCSI_WQ_CLASS_SHIFT) +#define EFCT_SCSI_WQ_CLASS(x) ((x & EFCT_SCSI_WQ_CLASS_MASK) << \ + EFCT_SCSI_WQ_CLASS_SHIFT) + +#define EFCT_SCSI_WQ_CLASS_LOW_LATENCY 1 + +struct efct_scsi_cmd_resp { + u8 scsi_status; + u16 scsi_status_qualifier; + u8 *response_data; + u32 response_data_length; + u8 *sense_data; + u32 sense_data_length; + int residual; + u32 response_wire_length; +}; + +struct efct_vport { + struct efct *efct; + bool is_vport; + struct fc_host_statistics fc_host_stats; + struct Scsi_Host *shost; + struct fc_vport *fc_vport; + u64 npiv_wwpn; + u64 npiv_wwnn; +}; + +/* Status values returned by IO callbacks */ +enum efct_scsi_io_status { + EFCT_SCSI_STATUS_GOOD = 0, + EFCT_SCSI_STATUS_ABORTED, + EFCT_SCSI_STATUS_ERROR, + EFCT_SCSI_STATUS_DIF_GUARD_ERR, + EFCT_SCSI_STATUS_DIF_REF_TAG_ERROR, + EFCT_SCSI_STATUS_DIF_APP_TAG_ERROR, + EFCT_SCSI_STATUS_DIF_UNKNOWN_ERROR, + EFCT_SCSI_STATUS_PROTOCOL_CRC_ERROR, + EFCT_SCSI_STATUS_NO_IO, + EFCT_SCSI_STATUS_ABORT_IN_PROGRESS, + EFCT_SCSI_STATUS_CHECK_RESPONSE, + EFCT_SCSI_STATUS_COMMAND_TIMEOUT, + EFCT_SCSI_STATUS_TIMEDOUT_AND_ABORTED, + EFCT_SCSI_STATUS_SHUTDOWN, + EFCT_SCSI_STATUS_NEXUS_LOST, +}; + +struct efct_node; +struct efct_io; +struct efc_node; +struct efc_nport; + +/* Callback used by send_rd_data(), recv_wr_data(), send_resp() */ +typedef int (*efct_scsi_io_cb_t)(struct efct_io *io, + enum efct_scsi_io_status status, + u32 flags, void *arg); + +/* Callback used by send_rd_io(), send_wr_io() */ +typedef int (*efct_scsi_rsp_io_cb_t)(struct efct_io *io, + enum efct_scsi_io_status status, + struct efct_scsi_cmd_resp *rsp, + u32 flags, void *arg); + +/* efct_scsi_cb_t flags */ +#define EFCT_SCSI_IO_CMPL (1 << 0) +/* IO completed, response sent */ +#define EFCT_SCSI_IO_CMPL_RSP_SENT (1 << 1) +#define EFCT_SCSI_IO_ABORTED (1 << 2) + +/* efct_scsi_recv_tmf() request values */ +enum efct_scsi_tmf_cmd { + EFCT_SCSI_TMF_ABORT_TASK = 1, + EFCT_SCSI_TMF_QUERY_TASK_SET, + EFCT_SCSI_TMF_ABORT_TASK_SET, + EFCT_SCSI_TMF_CLEAR_TASK_SET, + EFCT_SCSI_TMF_QUERY_ASYNCHRONOUS_EVENT, + EFCT_SCSI_TMF_LOGICAL_UNIT_RESET, + EFCT_SCSI_TMF_CLEAR_ACA, + EFCT_SCSI_TMF_TARGET_RESET, +}; + +/* efct_scsi_send_tmf_resp() response values */ +enum efct_scsi_tmf_resp { + EFCT_SCSI_TMF_FUNCTION_COMPLETE = 1, + EFCT_SCSI_TMF_FUNCTION_SUCCEEDED, + 
+    EFCT_SCSI_TMF_FUNCTION_IO_NOT_FOUND,
+    EFCT_SCSI_TMF_FUNCTION_REJECTED,
+    EFCT_SCSI_TMF_INCORRECT_LOGICAL_UNIT_NUMBER,
+    EFCT_SCSI_TMF_SERVICE_DELIVERY,
+};
+
+struct efct_scsi_sgl {
+    uintptr_t addr;
+    uintptr_t dif_addr;
+    size_t len;
+};
+
+/* Return values for calls from base driver to libefc */
+#define EFCT_SCSI_CALL_COMPLETE	0 /* All work is done */
+#define EFCT_SCSI_CALL_ASYNC	1 /* Work will be completed asynchronously */
+
+enum efct_scsi_io_role {
+    EFCT_SCSI_IO_ROLE_ORIGINATOR,
+    EFCT_SCSI_IO_ROLE_RESPONDER,
+};
+
+struct efct_io *
+efct_scsi_io_alloc(struct efct_node *node);
+void efct_scsi_io_free(struct efct_io *io);
+struct efct_io *efct_io_get_instance(struct efct *efct, u32 index);
+
+int efct_scsi_tgt_driver_init(void);
+int efct_scsi_tgt_driver_exit(void);
+int efct_scsi_tgt_new_device(struct efct *efct);
+int efct_scsi_tgt_del_device(struct efct *efct);
+int
+efct_scsi_tgt_new_nport(struct efc *efc, struct efc_nport *nport);
+void
+efct_scsi_tgt_del_nport(struct efc *efc, struct efc_nport *nport);
+
+int
+efct_scsi_new_initiator(struct efc *efc, struct efc_node *node);
+
+enum efct_scsi_del_initiator_reason {
+    EFCT_SCSI_INITIATOR_DELETED,
+    EFCT_SCSI_INITIATOR_MISSING,
+};
+
+int
+efct_scsi_del_initiator(struct efc *efc, struct efc_node *node, int reason);
+int
+efct_scsi_recv_cmd(struct efct_io *io, uint64_t lun, u8 *cdb, u32 cdb_len,
+		   u32 flags);
+int
+efct_scsi_recv_tmf(struct efct_io *tmfio, u32 lun, enum efct_scsi_tmf_cmd cmd,
+		   struct efct_io *abortio, u32 flags);
+int
+efct_scsi_send_rd_data(struct efct_io *io, u32 flags, struct efct_scsi_sgl *sgl,
+		       u32 sgl_count, u64 wire_len, efct_scsi_io_cb_t cb, void *arg);
+int
+efct_scsi_recv_wr_data(struct efct_io *io, u32 flags, struct efct_scsi_sgl *sgl,
+		       u32 sgl_count, u64 wire_len, efct_scsi_io_cb_t cb, void *arg);
+int
+efct_scsi_send_resp(struct efct_io *io, u32 flags,
+		    struct efct_scsi_cmd_resp *rsp, efct_scsi_io_cb_t cb, void *arg);
+int
+efct_scsi_send_tmf_resp(struct efct_io *io, enum efct_scsi_tmf_resp rspcode,
+			u8 addl_rsp_info[3], efct_scsi_io_cb_t cb, void *arg);
+int
+efct_scsi_tgt_abort_io(struct efct_io *io, efct_scsi_io_cb_t cb, void *arg);
+
+void efct_scsi_io_complete(struct efct_io *io);
+
+int efct_scsi_reg_fc_transport(void);
+int efct_scsi_release_fc_transport(void);
+int efct_scsi_new_device(struct efct *efct);
+int efct_scsi_del_device(struct efct *efct);
+void _efct_scsi_io_free(struct kref *arg);
+
+int
+efct_scsi_del_vport(struct efct *efct, struct Scsi_Host *shost);
+struct efct_vport *
+efct_scsi_new_vport(struct efct *efct, struct device *dev);
+
+int efct_scsi_io_dispatch(struct efct_io *io, void *cb);
+int efct_scsi_io_dispatch_abort(struct efct_io *io, void *cb);
+void efct_scsi_check_pending(struct efct *efct);
+struct efct_io *
+efct_bls_send_rjt(struct efct_io *io, struct fc_frame_header *hdr);
+
+#endif /* __EFCT_SCSI_H__ */

From patchwork Thu Jan 7 22:59:01 2021
X-Patchwork-Submitter: James Smart
X-Patchwork-Id: 358583
From: James Smart
To: linux-scsi@vger.kernel.org
Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner
Subject: [PATCH v7 27/31] elx: efct: link and host statistics
Date: Thu, 7 Jan 2021 14:59:01 -0800
Message-Id: <20210107225905.18186-28-jsmart2021@gmail.com>
In-Reply-To: <20210107225905.18186-1-jsmart2021@gmail.com>
References: <20210107225905.18186-1-jsmart2021@gmail.com>

This patch continues the efct driver population.

This patch adds driver definitions for:
- routines to retrieve link statistics and host statistics
- firmware update helper routines
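As an aside for reviewers: a consumer of the new statistics interface
might look roughly like the sketch below. The context structure and
helper names (efct_lstat_ctx, efct_xport_lstat_done,
efct_get_link_failures) are illustrative assumptions for this note,
not symbols added by the patch.

    struct efct_lstat_ctx {
        struct completion done;
        u32 link_failures;
    };

    static void
    efct_xport_lstat_done(int status, u32 num_counters,
                          struct efct_hw_link_stat_counts *counters,
                          void *arg)
    {
        struct efct_lstat_ctx *ctx = arg;

        /* counters[] is only valid during the callback; copy what we need */
        if (!status && num_counters > EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT)
            ctx->link_failures =
                counters[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT].counter;
        complete(&ctx->done);
    }

    static int
    efct_get_link_failures(struct efct_hw *hw, u32 *out)
    {
        struct efct_lstat_ctx ctx = { .link_failures = 0 };

        init_completion(&ctx.done);
        /* non-blocking mailbox submission; results arrive via callback */
        if (efct_hw_get_link_stats(hw, 0, 0, 0, efct_xport_lstat_done, &ctx))
            return -EIO;
        wait_for_completion(&ctx.done);
        *out = ctx.link_failures;
        return 0;
    }

The completion-based wait mirrors how the transport code elsewhere in
this series consumes asynchronous mailbox callbacks.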
Co-developed-by: Ram Vegesna
Signed-off-by: Ram Vegesna
Signed-off-by: James Smart
Reviewed-by: Hannes Reinecke
Reviewed-by: Daniel Wagner
---
 drivers/scsi/elx/efct/efct_hw.c | 325 ++++++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h |  31 +++
 2 files changed, 356 insertions(+)

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index 87d18dd50970..fca9bff358e3 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -8,6 +8,23 @@
 #include "efct_hw.h"
 #include "efct_unsol.h"
 
+struct efct_hw_link_stat_cb_arg {
+    void (*cb)(int status, u32 num_counters,
+               struct efct_hw_link_stat_counts *counters, void *arg);
+    void *arg;
+};
+
+struct efct_hw_host_stat_cb_arg {
+    void (*cb)(int status, u32 num_counters,
+               struct efct_hw_host_stat_counts *counters, void *arg);
+    void *arg;
+};
+
+struct efct_hw_fw_wr_cb_arg {
+    void (*cb)(int status, u32 bytes_written, u32 change_status, void *arg);
+    void *arg;
+};
+
 struct efct_mbox_rqst_ctx {
     int (*callback)(struct efc *efc, int status, u8 *mqe, void *arg);
     void *arg;
@@ -3033,3 +3050,311 @@ efct_hw_send_frame(struct efct_hw *hw, struct fc_frame_header *hdr,
 
 	return EFCT_HW_RTN_SUCCESS;
 }
+
+static int
+efct_hw_cb_link_stat(struct efct_hw *hw, int status,
+		     u8 *mqe, void *arg)
+{
+    struct sli4_cmd_read_link_stats *mbox_rsp;
+    struct efct_hw_link_stat_cb_arg *cb_arg = arg;
+    struct efct_hw_link_stat_counts counts[EFCT_HW_LINK_STAT_MAX];
+    u32 num_counters, i;
+    u32 mbox_rsp_flags = 0;
+
+    mbox_rsp = (struct sli4_cmd_read_link_stats *)mqe;
+    mbox_rsp_flags = le32_to_cpu(mbox_rsp->dw1_flags);
+    num_counters = (mbox_rsp_flags & SLI4_READ_LNKSTAT_GEC) ? 20 : 13;
+    memset(counts, 0, sizeof(struct efct_hw_link_stat_counts) *
+           EFCT_HW_LINK_STAT_MAX);
+
+    /* Fill overflow counts; the mask starts from SLI4_READ_LNKSTAT_W02OF */
+    for (i = 0; i < EFCT_HW_LINK_STAT_MAX; i++)
+        counts[i].overflow = (mbox_rsp_flags & (1 << (i + 2)));
+
+    counts[EFCT_HW_LINK_STAT_LINK_FAILURE_COUNT].counter =
+        le32_to_cpu(mbox_rsp->linkfail_errcnt);
+    counts[EFCT_HW_LINK_STAT_LOSS_OF_SYNC_COUNT].counter =
+        le32_to_cpu(mbox_rsp->losssync_errcnt);
+    counts[EFCT_HW_LINK_STAT_LOSS_OF_SIGNAL_COUNT].counter =
+        le32_to_cpu(mbox_rsp->losssignal_errcnt);
+    counts[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_COUNT].counter =
+        le32_to_cpu(mbox_rsp->primseq_errcnt);
+    counts[EFCT_HW_LINK_STAT_INVALID_XMIT_WORD_COUNT].counter =
+        le32_to_cpu(mbox_rsp->inval_txword_errcnt);
+    counts[EFCT_HW_LINK_STAT_CRC_COUNT].counter =
+        le32_to_cpu(mbox_rsp->crc_errcnt);
+    counts[EFCT_HW_LINK_STAT_PRIMITIVE_SEQ_TIMEOUT_COUNT].counter =
+        le32_to_cpu(mbox_rsp->primseq_eventtimeout_cnt);
+    counts[EFCT_HW_LINK_STAT_ELASTIC_BUFFER_OVERRUN_COUNT].counter =
+        le32_to_cpu(mbox_rsp->elastic_bufoverrun_errcnt);
+    counts[EFCT_HW_LINK_STAT_ARB_TIMEOUT_COUNT].counter =
+        le32_to_cpu(mbox_rsp->arbit_fc_al_timeout_cnt);
+    counts[EFCT_HW_LINK_STAT_ADVERTISED_RCV_B2B_CREDIT].counter =
+        le32_to_cpu(mbox_rsp->adv_rx_buftor_to_buf_credit);
+    counts[EFCT_HW_LINK_STAT_CURR_RCV_B2B_CREDIT].counter =
+        le32_to_cpu(mbox_rsp->curr_rx_buf_to_buf_credit);
+    counts[EFCT_HW_LINK_STAT_ADVERTISED_XMIT_B2B_CREDIT].counter =
+        le32_to_cpu(mbox_rsp->adv_tx_buf_to_buf_credit);
+    counts[EFCT_HW_LINK_STAT_CURR_XMIT_B2B_CREDIT].counter =
+        le32_to_cpu(mbox_rsp->curr_tx_buf_to_buf_credit);
+    counts[EFCT_HW_LINK_STAT_RCV_EOFA_COUNT].counter =
+        le32_to_cpu(mbox_rsp->rx_eofa_cnt);
+    counts[EFCT_HW_LINK_STAT_RCV_EOFDTI_COUNT].counter =
+        le32_to_cpu(mbox_rsp->rx_eofdti_cnt);
+    counts[EFCT_HW_LINK_STAT_RCV_EOFNI_COUNT].counter =
+        le32_to_cpu(mbox_rsp->rx_eofni_cnt);
+    counts[EFCT_HW_LINK_STAT_RCV_SOFF_COUNT].counter =
+        le32_to_cpu(mbox_rsp->rx_soff_cnt);
+    counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_AER_COUNT].counter =
+        le32_to_cpu(mbox_rsp->rx_dropped_no_aer_cnt);
+    counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_RPI_COUNT].counter =
+        le32_to_cpu(mbox_rsp->rx_dropped_no_avail_rpi_rescnt);
+    counts[EFCT_HW_LINK_STAT_RCV_DROPPED_NO_XRI_COUNT].counter =
+        le32_to_cpu(mbox_rsp->rx_dropped_no_avail_xri_rescnt);
+
+    if (cb_arg) {
+        if (cb_arg->cb) {
+            if (status == 0 && le16_to_cpu(mbox_rsp->hdr.status))
+                status = le16_to_cpu(mbox_rsp->hdr.status);
+            cb_arg->cb(status, num_counters, counts, cb_arg->arg);
+        }
+
+        kfree(cb_arg);
+    }
+
+    return EFC_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_get_link_stats(struct efct_hw *hw, u8 req_ext_counters,
+		       u8 clear_overflow_flags, u8 clear_all_counters,
+		       void (*cb)(int status, u32 num_counters,
+				  struct efct_hw_link_stat_counts *counters,
+				  void *arg),
+		       void *arg)
+{
+    enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
+    struct efct_hw_link_stat_cb_arg *cb_arg;
+    u8 mbxdata[SLI4_BMBX_SIZE];
+
+    cb_arg = kzalloc(sizeof(*cb_arg), GFP_ATOMIC);
+    if (!cb_arg)
+        return EFCT_HW_RTN_NO_MEMORY;
+
+    cb_arg->cb = cb;
+    cb_arg->arg = arg;
+
+    /* Send the HW command */
+    if (!sli_cmd_read_link_stats(&hw->sli, mbxdata, req_ext_counters,
+                                 clear_overflow_flags, clear_all_counters))
+        rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+                             efct_hw_cb_link_stat, cb_arg);
+
+    if (rc)
+        kfree(cb_arg);
+
+    return rc;
+}
+
+static int
+efct_hw_cb_host_stat(struct efct_hw *hw, int status, u8 *mqe, void *arg)
+{
+    struct sli4_cmd_read_status *mbox_rsp =
+        (struct sli4_cmd_read_status *)mqe;
+    struct efct_hw_host_stat_cb_arg *cb_arg = arg;
+    struct efct_hw_host_stat_counts counts[EFCT_HW_HOST_STAT_MAX];
+    u32 num_counters = EFCT_HW_HOST_STAT_MAX;
+
+    memset(counts, 0, sizeof(struct efct_hw_host_stat_counts) *
+           EFCT_HW_HOST_STAT_MAX);
+
+    counts[EFCT_HW_HOST_STAT_TX_KBYTE_COUNT].counter =
+        le32_to_cpu(mbox_rsp->trans_kbyte_cnt);
+    counts[EFCT_HW_HOST_STAT_RX_KBYTE_COUNT].counter =
+        le32_to_cpu(mbox_rsp->recv_kbyte_cnt);
+    counts[EFCT_HW_HOST_STAT_TX_FRAME_COUNT].counter =
+        le32_to_cpu(mbox_rsp->trans_frame_cnt);
+    counts[EFCT_HW_HOST_STAT_RX_FRAME_COUNT].counter =
+        le32_to_cpu(mbox_rsp->recv_frame_cnt);
+    counts[EFCT_HW_HOST_STAT_TX_SEQ_COUNT].counter =
+        le32_to_cpu(mbox_rsp->trans_seq_cnt);
+    counts[EFCT_HW_HOST_STAT_RX_SEQ_COUNT].counter =
+        le32_to_cpu(mbox_rsp->recv_seq_cnt);
+    counts[EFCT_HW_HOST_STAT_TOTAL_EXCH_ORIG].counter =
+        le32_to_cpu(mbox_rsp->tot_exchanges_orig);
+    counts[EFCT_HW_HOST_STAT_TOTAL_EXCH_RESP].counter =
+        le32_to_cpu(mbox_rsp->tot_exchanges_resp);
+    counts[EFCT_HW_HOSY_STAT_RX_P_BSY_COUNT].counter =
+        le32_to_cpu(mbox_rsp->recv_p_bsy_cnt);
+    counts[EFCT_HW_HOST_STAT_RX_F_BSY_COUNT].counter =
+        le32_to_cpu(mbox_rsp->recv_f_bsy_cnt);
+    counts[EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_RQ_BUF_COUNT].counter =
+        le32_to_cpu(mbox_rsp->no_rq_buf_dropped_frames_cnt);
+    counts[EFCT_HW_HOST_STAT_EMPTY_RQ_TIMEOUT_COUNT].counter =
+        le32_to_cpu(mbox_rsp->empty_rq_timeout_cnt);
+    counts[EFCT_HW_HOST_STAT_DROP_FRM_DUE_TO_NO_XRI_COUNT].counter =
+        le32_to_cpu(mbox_rsp->no_xri_dropped_frames_cnt);
+    counts[EFCT_HW_HOST_STAT_EMPTY_XRI_POOL_COUNT].counter =
+        le32_to_cpu(mbox_rsp->empty_xri_pool_cnt);
+
+    if (cb_arg) {
+        if (cb_arg->cb) {
+            if (status == 0 && le16_to_cpu(mbox_rsp->hdr.status))
+                status = le16_to_cpu(mbox_rsp->hdr.status);
+            cb_arg->cb(status, num_counters, counts, cb_arg->arg);
+        }
+
+        kfree(cb_arg);
+    }
+
+    return EFC_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_get_host_stats(struct efct_hw *hw, u8 cc,
+		       void (*cb)(int status, u32 num_counters,
+				  struct efct_hw_host_stat_counts *counters,
+				  void *arg),
+		       void *arg)
+{
+    enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
+    struct efct_hw_host_stat_cb_arg *cb_arg;
+    u8 mbxdata[SLI4_BMBX_SIZE];
+
+    cb_arg = kmalloc(sizeof(*cb_arg), GFP_ATOMIC);
+    if (!cb_arg)
+        return EFCT_HW_RTN_NO_MEMORY;
+
+    cb_arg->cb = cb;
+    cb_arg->arg = arg;
+
+    /* Send the HW command to get the host stats */
+    if (!sli_cmd_read_status(&hw->sli, mbxdata, cc))
+        rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+                             efct_hw_cb_host_stat, cb_arg);
+
+    if (rc) {
+        efc_log_debug(hw->os, "READ_HOST_STATS failed\n");
+        kfree(cb_arg);
+    }
+
+    return rc;
+}
+
+struct efct_hw_async_call_ctx {
+    efct_hw_async_cb_t callback;
+    void *arg;
+    u8 cmd[SLI4_BMBX_SIZE];
+};
+
+static void
+efct_hw_async_cb(struct efct_hw *hw, int status, u8 *mqe, void *arg)
+{
+    struct efct_hw_async_call_ctx *ctx = arg;
+
+    if (ctx) {
+        if (ctx->callback)
+            (*ctx->callback)(hw, status, mqe, ctx->arg);
+
+        kfree(ctx);
+    }
+}
+
+int
+efct_hw_async_call(struct efct_hw *hw, efct_hw_async_cb_t callback, void *arg)
+{
+    struct efct_hw_async_call_ctx *ctx;
+
+    /*
+     * Allocate a callback context (which includes the mbox cmd buffer);
+     * we need this to be persistent as the mbox cmd submission may be
+     * queued and executed later.
+     */
+    ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);
+    if (!ctx)
+        return EFCT_HW_RTN_NO_MEMORY;
+
+    ctx->callback = callback;
+    ctx->arg = arg;
+
+    /* Build and send a NOP mailbox command */
+    if (sli_cmd_common_nop(&hw->sli, ctx->cmd, 0)) {
+        efc_log_err(hw->os, "COMMON_NOP format failure\n");
+        kfree(ctx);
+        return EFC_FAIL;
+    }
+
+    if (efct_hw_command(hw, ctx->cmd, EFCT_CMD_NOWAIT, efct_hw_async_cb,
+                        ctx)) {
+        efc_log_err(hw->os, "COMMON_NOP command failure\n");
+        kfree(ctx);
+        return EFC_FAIL;
+    }
+    return EFC_SUCCESS;
+}
+
+static int
+efct_hw_cb_fw_write(struct efct_hw *hw, int status, u8 *mqe, void *arg)
+{
+    struct sli4_cmd_sli_config *mbox_rsp =
+        (struct sli4_cmd_sli_config *)mqe;
+    struct sli4_rsp_cmn_write_object *wr_obj_rsp;
+    struct efct_hw_fw_wr_cb_arg *cb_arg = arg;
+    u32 bytes_written;
+    u16 mbox_status;
+    u32 change_status;
+
+    wr_obj_rsp = (struct sli4_rsp_cmn_write_object *)
+                 &mbox_rsp->payload.embed;
+    bytes_written = le32_to_cpu(wr_obj_rsp->actual_write_length);
+    mbox_status = le16_to_cpu(mbox_rsp->hdr.status);
+    change_status = (le32_to_cpu(wr_obj_rsp->change_status_dword) &
+                     RSP_CHANGE_STATUS);
+
+    if (cb_arg) {
+        if (cb_arg->cb) {
+            if (!status && mbox_status)
+                status = mbox_status;
+            cb_arg->cb(status, bytes_written, change_status,
+                       cb_arg->arg);
+        }
+
+        kfree(cb_arg);
+    }
+
+    return EFC_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_firmware_write(struct efct_hw *hw, struct efc_dma *dma, u32 size,
+		       u32 offset, int last,
+		       void (*cb)(int status, u32 bytes_written,
+				  u32 change_status, void *arg),
+		       void *arg)
+{
+    enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
+    u8 mbxdata[SLI4_BMBX_SIZE];
+    struct efct_hw_fw_wr_cb_arg *cb_arg;
+    int noc = 0;
+
+    cb_arg = kzalloc(sizeof(*cb_arg), GFP_KERNEL);
+    if (!cb_arg)
+        return EFCT_HW_RTN_NO_MEMORY;
+
+    cb_arg->cb = cb;
+    cb_arg->arg = arg;
+
+    /* Write a portion of a firmware image to the device */
+    if (!sli_cmd_common_write_object(&hw->sli, mbxdata,
+                                     noc, last, size, offset, "/prg/",
+                                     dma))
+        rc = efct_hw_command(hw, mbxdata, EFCT_CMD_NOWAIT,
+                             efct_hw_cb_fw_write, cb_arg);
+
+    if (rc != EFCT_HW_RTN_SUCCESS) {
+        efc_log_debug(hw->os, "COMMON_WRITE_OBJECT failed\n");
+        kfree(cb_arg);
+    }
+
+    return rc;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index a030fc5ac719..0dc7e210c86a 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -721,4 +721,35 @@ int
 efct_hw_bls_send(struct efct *efct, u32 type,
 		 struct sli_bls_params *bls_params,
 		 void *cb, void *arg);
+/* Function for retrieving link statistics */
+enum efct_hw_rtn
+efct_hw_get_link_stats(struct efct_hw *hw,
+		       u8 req_ext_counters,
+		       u8 clear_overflow_flags,
+		       u8 clear_all_counters,
+		       void (*efct_hw_link_stat_cb_t)(int status,
+						      u32 num_counters,
+						      struct efct_hw_link_stat_counts *counters,
+						      void *arg),
+		       void *arg);
+/* Function for retrieving host statistics */
+enum efct_hw_rtn
+efct_hw_get_host_stats(struct efct_hw *hw,
+		       u8 cc,
+		       void (*efct_hw_host_stat_cb_t)(int status,
+						      u32 num_counters,
+						      struct efct_hw_host_stat_counts *counters,
+						      void *arg),
+		       void *arg);
+enum efct_hw_rtn
+efct_hw_firmware_write(struct efct_hw *hw, struct efc_dma *dma,
+		       u32 size, u32 offset, int last,
+		       void (*cb)(int status, u32 bytes_written,
+				  u32 change_status, void *arg),
+		       void *arg);
+typedef void (*efct_hw_async_cb_t)(struct efct_hw *hw, int status,
+				   u8 *mqe, void *arg);
+int
+efct_hw_async_call(struct efct_hw *hw, efct_hw_async_cb_t callback, void *arg);
+
 #endif /* __EFCT_H__ */
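The firmware helpers above are chunk-oriented: each call writes one
portion of the image and reports progress through the callback. A
download loop built on them might look like the following sketch; the
helper names, the ignored per-chunk status, and the error policy are
assumptions for illustration, not part of this patch.

    static void
    efct_fw_chunk_done(int status, u32 bytes_written, u32 change_status,
                       void *arg)
    {
        struct completion *done = arg;

        /* a real caller would record status/bytes_written before completing */
        complete(done);
    }

    static int
    efct_fw_download_sketch(struct efct_hw *hw, struct efc_dma *chunk,
                            u32 image_len)
    {
        u32 offset = 0;
        DECLARE_COMPLETION_ONSTACK(done);

        while (offset < image_len) {
            u32 len = min_t(u32, chunk->size, image_len - offset);
            int last = (offset + len >= image_len);

            /* the next piece of the image is assumed to be in chunk->virt */
            if (efct_hw_firmware_write(hw, chunk, len, offset, last,
                                       efct_fw_chunk_done, &done))
                return -EIO;
            wait_for_completion(&done);
            reinit_completion(&done);
            offset += len;
        }
        return 0;
    }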
From patchwork Thu Jan 7 22:59:02 2021
X-Patchwork-Submitter: James Smart
X-Patchwork-Id: 358586
From: James Smart
To: linux-scsi@vger.kernel.org
Cc: James Smart, Ram Vegesna, Daniel Wagner
Subject: [PATCH v7 28/31] elx: efct: xport and hardware teardown routines
Date: Thu, 7 Jan 2021 14:59:02 -0800
Message-Id: <20210107225905.18186-29-jsmart2021@gmail.com>
In-Reply-To: <20210107225905.18186-1-jsmart2021@gmail.com>
References: <20210107225905.18186-1-jsmart2021@gmail.com>

This patch continues the efct driver population.

This patch adds driver definitions for:
- routines to detach the xport and hardware objects
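For review context, the intended calling order of the new routines is
roughly the sketch below: take the link down first so no new exchanges
arrive, then release the hardware resources. The wrapper name is
hypothetical; efct_xport_detach() in this patch performs the equivalent
steps for the full transport object.

    static void
    efct_sketch_hw_shutdown(struct efct *efct)
    {
        /* drop the link; errors are logged and teardown proceeds anyway */
        if (efct_hw_port_control(&efct->hw, EFCT_HW_PORT_SHUTDOWN, 0,
                                 NULL, NULL))
            efc_log_err(efct, "port shutdown failed, forcing teardown\n");

        /* flush/cancel mailbox commands, free queues, pools and DMA memory */
        efct_hw_teardown(&efct->hw);
    }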
Co-developed-by: Ram Vegesna
Signed-off-by: Ram Vegesna
Signed-off-by: James Smart
Reviewed-by: Daniel Wagner
---
 drivers/scsi/elx/efct/efct_hw.c    | 277 +++++++++++++++++++++++++++++
 drivers/scsi/elx/efct/efct_hw.h    |  27 +++
 drivers/scsi/elx/efct/efct_xport.c | 261 +++++++++++++++++++++++++++
 3 files changed, 565 insertions(+)

diff --git a/drivers/scsi/elx/efct/efct_hw.c b/drivers/scsi/elx/efct/efct_hw.c
index fca9bff358e3..8486dfaf346f 100644
--- a/drivers/scsi/elx/efct/efct_hw.c
+++ b/drivers/scsi/elx/efct/efct_hw.c
@@ -3358,3 +3358,280 @@ efct_hw_firmware_write(struct efct_hw *hw, struct efc_dma *dma, u32 size,
 
 	return rc;
 }
+
+static int
+efct_hw_cb_port_control(struct efct_hw *hw, int status, u8 *mqe,
+			void *arg)
+{
+    return EFC_SUCCESS;
+}
+
+enum efct_hw_rtn
+efct_hw_port_control(struct efct_hw *hw, enum efct_hw_port ctrl,
+		     uintptr_t value,
+		     void (*cb)(int status, uintptr_t value, void *arg),
+		     void *arg)
+{
+    enum efct_hw_rtn rc = EFCT_HW_RTN_ERROR;
+    u8 link[SLI4_BMBX_SIZE];
+    u32 speed = 0;
+    u8 reset_alpa = 0;
+
+    switch (ctrl) {
+    case EFCT_HW_PORT_INIT:
+        if (!sli_cmd_config_link(&hw->sli, link))
+            rc = efct_hw_command(hw, link, EFCT_CMD_NOWAIT,
+                                 efct_hw_cb_port_control, NULL);
+
+        if (rc != EFCT_HW_RTN_SUCCESS) {
+            efc_log_err(hw->os, "CONFIG_LINK failed\n");
+            break;
+        }
+        speed = hw->config.speed;
+        reset_alpa = (u8)(value & 0xff);
+
+        rc = EFCT_HW_RTN_ERROR;
+        if (!sli_cmd_init_link(&hw->sli, link, speed, reset_alpa))
+            rc = efct_hw_command(hw, link, EFCT_CMD_NOWAIT,
+                                 efct_hw_cb_port_control, NULL);
+        /* Free buffer on error, since no callback is coming */
+        if (rc)
+            efc_log_err(hw->os, "INIT_LINK failed\n");
+        break;
+
+    case EFCT_HW_PORT_SHUTDOWN:
+        if (!sli_cmd_down_link(&hw->sli, link))
+            rc = efct_hw_command(hw, link, EFCT_CMD_NOWAIT,
+                                 efct_hw_cb_port_control, NULL);
+        /* Free buffer on error, since no callback is coming */
+        if (rc)
+            efc_log_err(hw->os, "DOWN_LINK failed\n");
+        break;
+
+    default:
+        efc_log_debug(hw->os, "unhandled control %#x\n", ctrl);
+        break;
+    }
+
+    return rc;
+}
+
+enum efct_hw_rtn
+efct_hw_teardown(struct efct_hw *hw)
+{
+    u32 i = 0;
+    u32 iters = 10;
+    u32 destroy_queues;
+    u32 free_memory;
+    struct efc_dma *dma;
+    struct efct *efct = hw->os;
+
+    destroy_queues = (hw->state == EFCT_HW_STATE_ACTIVE);
+    free_memory = (hw->state != EFCT_HW_STATE_UNINITIALIZED);
+
+    /* Cancel the Sliport Healthcheck */
+    if (hw->sliport_healthcheck) {
+        hw->sliport_healthcheck = 0;
+        efct_hw_config_sli_port_health_check(hw, 0, 0);
+    }
+
+    if (hw->state != EFCT_HW_STATE_QUEUES_ALLOCATED) {
+        hw->state = EFCT_HW_STATE_TEARDOWN_IN_PROGRESS;
+
+        efct_hw_flush(hw);
+
+        /*
+         * If there are outstanding commands, wait for them to complete
+         */
+        while (!list_empty(&hw->cmd_head) && iters) {
+            mdelay(10);
+            efct_hw_flush(hw);
+            iters--;
+        }
+
+        if (list_empty(&hw->cmd_head))
+            efc_log_debug(hw->os,
+                          "All commands completed on MQ queue\n");
+        else
+            efc_log_debug(hw->os,
+                          "Some cmds still pending on MQ queue\n");
+
+        /* Cancel any remaining commands */
+        efct_hw_command_cancel(hw);
+    } else {
+        hw->state = EFCT_HW_STATE_TEARDOWN_IN_PROGRESS;
+    }
+
+    dma_free_coherent(&efct->pci->dev,
+                      hw->rnode_mem.size, hw->rnode_mem.virt,
+                      hw->rnode_mem.phys);
+    memset(&hw->rnode_mem, 0, sizeof(struct efc_dma));
+
+    if (hw->io) {
+        for (i = 0; i < hw->config.n_io; i++) {
+            if (hw->io[i] && hw->io[i]->sgl &&
+                hw->io[i]->sgl->virt) {
+                dma_free_coherent(&efct->pci->dev,
+                                  hw->io[i]->sgl->size,
+                                  hw->io[i]->sgl->virt,
+                                  hw->io[i]->sgl->phys);
+            }
+            kfree(hw->io[i]);
+            hw->io[i] = NULL;
+        }
+        kfree(hw->io);
+        hw->io = NULL;
+        kfree(hw->wqe_buffs);
+        hw->wqe_buffs = NULL;
+    }
+
+    dma = &hw->xfer_rdy;
+    dma_free_coherent(&efct->pci->dev,
+                      dma->size, dma->virt, dma->phys);
+    memset(dma, 0, sizeof(struct efc_dma));
+
+    dma = &hw->loop_map;
+    dma_free_coherent(&efct->pci->dev,
+                      dma->size, dma->virt, dma->phys);
+    memset(dma, 0, sizeof(struct efc_dma));
+
+    for (i = 0; i < hw->wq_count; i++)
+        sli_queue_free(&hw->sli, &hw->wq[i], destroy_queues,
+                       free_memory);
+
+    for (i = 0; i < hw->rq_count; i++)
+        sli_queue_free(&hw->sli, &hw->rq[i], destroy_queues,
+                       free_memory);
+
+    for (i = 0; i < hw->mq_count; i++)
+        sli_queue_free(&hw->sli, &hw->mq[i], destroy_queues,
+                       free_memory);
+
+    for (i = 0; i < hw->cq_count; i++)
+        sli_queue_free(&hw->sli, &hw->cq[i], destroy_queues,
+                       free_memory);
+
+    for (i = 0; i < hw->eq_count; i++)
+        sli_queue_free(&hw->sli, &hw->eq[i], destroy_queues,
+                       free_memory);
+
+    /* Free the RQ buffers */
+    efct_hw_rx_free(hw);
+
+    efct_hw_queue_teardown(hw);
+
+    kfree(hw->wq_cpu_array);
+
+    sli_teardown(&hw->sli);
+
+    /* record the fact that the queues are non-functional */
+    hw->state = EFCT_HW_STATE_UNINITIALIZED;
+
+    /* free the sequence free pool */
+    kfree(hw->seq_pool);
+    hw->seq_pool = NULL;
+
+    /* free the hw_wq_callback pool */
+    efct_hw_reqtag_pool_free(hw);
+
+    mempool_destroy(hw->cmd_ctx_pool);
+    mempool_destroy(hw->mbox_rqst_pool);
+
+    /* Mark HW setup as not having been called */
+    hw->hw_setup_called = false;
+
+    return EFCT_HW_RTN_SUCCESS;
+}
+
+static enum efct_hw_rtn
+efct_hw_sli_reset(struct efct_hw *hw, enum efct_hw_reset reset,
+		  enum efct_hw_state prev_state)
+{
+    enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+
+    switch (reset) {
+    case EFCT_HW_RESET_FUNCTION:
+        efc_log_debug(hw->os, "issuing function level reset\n");
+        if (sli_reset(&hw->sli)) {
+            efc_log_err(hw->os, "sli_reset failed\n");
+            rc = EFCT_HW_RTN_ERROR;
+        }
+        break;
+    case EFCT_HW_RESET_FIRMWARE:
+        efc_log_debug(hw->os, "issuing firmware reset\n");
+        if (sli_fw_reset(&hw->sli)) {
+            efc_log_err(hw->os, "sli_soft_reset failed\n");
+            rc = EFCT_HW_RTN_ERROR;
+        }
+        /*
+         * Because the FW reset leaves the FW in a non-running state,
+         * follow that with a regular reset.
+         */
+        efc_log_debug(hw->os, "issuing function level reset\n");
+        if (sli_reset(&hw->sli)) {
+            efc_log_err(hw->os, "sli_reset failed\n");
+            rc = EFCT_HW_RTN_ERROR;
+        }
+        break;
+    default:
+        efc_log_err(hw->os,
+                    "unknown reset type - no reset performed\n");
+        hw->state = prev_state;
+        rc = EFCT_HW_RTN_INVALID_ARG;
+        break;
+    }
+
+    return rc;
+}
+
+enum efct_hw_rtn
+efct_hw_reset(struct efct_hw *hw, enum efct_hw_reset reset)
+{
+    enum efct_hw_rtn rc = EFCT_HW_RTN_SUCCESS;
+    u32 iters;
+    enum efct_hw_state prev_state = hw->state;
+
+    if (hw->state != EFCT_HW_STATE_ACTIVE)
+        efc_log_debug(hw->os,
+                      "HW state %d is not active\n", hw->state);
+
+    hw->state = EFCT_HW_STATE_RESET_IN_PROGRESS;
+
+    /*
+     * If the prev_state is already reset/teardown in progress,
+     * don't continue further
+     */
+    if (prev_state == EFCT_HW_STATE_RESET_IN_PROGRESS ||
+        prev_state == EFCT_HW_STATE_TEARDOWN_IN_PROGRESS)
+        return efct_hw_sli_reset(hw, reset, prev_state);
+
+    if (prev_state != EFCT_HW_STATE_UNINITIALIZED) {
+        efct_hw_flush(hw);
+
+        /*
+         * If a mailbox command requiring a DMA is outstanding
+         * (SFP/DDM), then the FW will UE when the reset is issued.
+         * So attempt to complete all mailbox commands.
+         */
+        iters = 10;
+        while (!list_empty(&hw->cmd_head) && iters) {
+            mdelay(10);
+            efct_hw_flush(hw);
+            iters--;
+        }
+
+        if (list_empty(&hw->cmd_head))
+            efc_log_debug(hw->os,
+                          "All commands completed on MQ queue\n");
+        else
+            efc_log_debug(hw->os,
+                          "Some commands still pending on MQ queue\n");
+    }
+
+    /* Reset the chip */
+    rc = efct_hw_sli_reset(hw, reset, prev_state);
+    if (rc == EFCT_HW_RTN_INVALID_ARG)
+        return EFCT_HW_RTN_ERROR;
+
+    return rc;
+}
diff --git a/drivers/scsi/elx/efct/efct_hw.h b/drivers/scsi/elx/efct/efct_hw.h
index 0dc7e210c86a..a87c78d01e18 100644
--- a/drivers/scsi/elx/efct/efct_hw.h
+++ b/drivers/scsi/elx/efct/efct_hw.h
@@ -752,4 +752,31 @@ typedef void (*efct_hw_async_cb_t)(struct efct_hw *hw, int status,
 int
 efct_hw_async_call(struct efct_hw *hw, efct_hw_async_cb_t callback, void *arg);
 
+struct hw_eq *efct_hw_new_eq(struct efct_hw *hw, u32 entry_count);
+struct hw_cq *efct_hw_new_cq(struct hw_eq *eq, u32 entry_count);
+u32
+efct_hw_new_cq_set(struct hw_eq *eqs[], struct hw_cq *cqs[],
+		   u32 num_cqs, u32 entry_count);
+struct hw_mq *efct_hw_new_mq(struct hw_cq *cq, u32 entry_count);
+struct hw_wq *efct_hw_new_wq(struct hw_cq *cq, u32 entry_count);
+u32
+efct_hw_new_rq_set(struct hw_cq *cqs[], struct hw_rq *rqs[],
+		   u32 num_rq_pairs, u32 entry_count);
+void efct_hw_del_eq(struct hw_eq *eq);
+void efct_hw_del_cq(struct hw_cq *cq);
+void efct_hw_del_mq(struct hw_mq *mq);
+void efct_hw_del_wq(struct hw_wq *wq);
+void efct_hw_del_rq(struct hw_rq *rq);
+void efct_hw_queue_teardown(struct efct_hw *hw);
+enum efct_hw_rtn efct_hw_teardown(struct efct_hw *hw);
+enum efct_hw_rtn
+efct_hw_reset(struct efct_hw *hw, enum efct_hw_reset reset);
+
+enum efct_hw_rtn
+efct_hw_port_control(struct efct_hw *hw, enum efct_hw_port ctrl,
+		     uintptr_t value,
+		     void (*cb)(int status, uintptr_t value, void *arg),
+		     void *arg);
+
 #endif /* __EFCT_H__ */
diff --git a/drivers/scsi/elx/efct/efct_xport.c b/drivers/scsi/elx/efct/efct_xport.c
index b19304d6febb..0cf558c03547 100644
--- a/drivers/scsi/elx/efct/efct_xport.c
+++ b/drivers/scsi/elx/efct/efct_xport.c
@@ -517,3 +517,264 @@ efct_scsi_release_fc_transport(void)
 
 	return EFC_SUCCESS;
 }
+
+int
+efct_xport_detach(struct efct_xport *xport)
+{
+    struct efct *efct = xport->efct;
+
+    /* free resources associated with target-server and initiator-client */
+    efct_scsi_tgt_del_device(efct);
+
+    efct_scsi_del_device(efct);
+
+    /* Shutdown the FC statistics timer */
+    if (timer_pending(&xport->stats_timer))
+        del_timer(&xport->stats_timer);
+
+    efct_hw_teardown(&efct->hw);
+
+    efct_xport_delete_debugfs(efct);
+
+    return EFC_SUCCESS;
+}
+
+static void
+efct_xport_domain_free_cb(struct efc *efc, void *arg)
+{
+    struct completion *done = arg;
+
+    complete(done);
+}
+
+static void
+efct_xport_post_node_event_cb(struct efct_hw *hw, int status,
+			      u8 *mqe, void *arg)
+{
+    struct efct_xport_post_node_event *payload = arg;
+
+    if (payload) {
+        efc_node_post_shutdown(payload->node, payload->evt,
+                               payload->context);
+        complete(&payload->done);
+        if (atomic_sub_and_test(1, &payload->refcnt))
+            kfree(payload);
+    }
+}
+
+int
+efct_xport_control(struct efct_xport *xport, enum efct_xport_ctrl cmd, ...)
+{
+    u32 rc = 0;
+    struct efct *efct = NULL;
+    va_list argp;
+
+    efct = xport->efct;
+
+    switch (cmd) {
+    case EFCT_XPORT_PORT_ONLINE: {
+        /* Bring the port on-line */
+        rc = efct_hw_port_control(&efct->hw, EFCT_HW_PORT_INIT, 0,
+                                  NULL, NULL);
+        if (rc)
+            efc_log_err(efct,
+                        "%s: Can't init port\n", efct->desc);
+        else
+            xport->configured_link_state = cmd;
+        break;
+    }
+    case EFCT_XPORT_PORT_OFFLINE: {
+        if (efct_hw_port_control(&efct->hw, EFCT_HW_PORT_SHUTDOWN, 0,
+                                 NULL, NULL))
+            efc_log_err(efct, "port shutdown failed\n");
+        else
+            xport->configured_link_state = cmd;
+        break;
+    }
+
+    case EFCT_XPORT_SHUTDOWN: {
+        struct completion done;
+        unsigned long timeout;
+
+        /* if a PHYSDEV reset was performed (e.g. hw dump), it will
+         * affect all PCI functions; an orderly shutdown won't work,
+         * so just force free
+         */
+        if (sli_reset_required(&efct->hw.sli)) {
+            struct efc_domain *domain = efct->efcport->domain;
+
+            if (domain)
+                efc_domain_cb(efct->efcport, EFC_HW_DOMAIN_LOST,
+                              domain);
+        } else {
+            efct_hw_port_control(&efct->hw, EFCT_HW_PORT_SHUTDOWN,
+                                 0, NULL, NULL);
+        }
+
+        init_completion(&done);
+
+        efc_register_domain_free_cb(efct->efcport,
+                                    efct_xport_domain_free_cb, &done);
+
+        efc_log_debug(efct, "Waiting %d seconds for domain shutdown\n",
+                      (EFC_SHUTDOWN_TIMEOUT_USEC / 1000000));
+
+        timeout = usecs_to_jiffies(EFC_SHUTDOWN_TIMEOUT_USEC);
+        if (!wait_for_completion_timeout(&done, timeout)) {
+            efc_log_err(efct, "Domain shutdown timed out!!\n");
+            WARN_ON(1);
+        }
+
+        efc_register_domain_free_cb(efct->efcport, NULL, NULL);
+
+        /* Free up any saved virtual ports */
+        efc_vport_del_all(efct->efcport);
+        break;
+    }
+
+    /*
+     * POST_NODE_EVENT: post an event to a node object
+     *
+     * This transport function is used to post an event to a node object.
+     * It does this by submitting a NOP mailbox command to defer execution
+     * to the interrupt context (thereby enforcing the serialized execution
+     * of event posting to the node state machine instances).
+     */
+    case EFCT_XPORT_POST_NODE_EVENT: {
+        struct efc_node *node;
+        u32 evt;
+        void *context;
+        struct efct_xport_post_node_event *payload = NULL;
+        struct efct_hw *hw;
+
+        /* Retrieve arguments */
+        va_start(argp, cmd);
+        node = va_arg(argp, struct efc_node *);
+        evt = va_arg(argp, u32);
+        context = va_arg(argp, void *);
+        va_end(argp);
+
+        payload = kzalloc(sizeof(*payload), GFP_KERNEL);
+        if (!payload)
+            return EFC_FAIL;
+
+        hw = &efct->hw;
+
+        /* if the node's state machine is disabled,
+         * don't bother continuing
+         */
+        if (!node->sm.current_state) {
+            efc_log_debug(efct, "node %p state machine disabled\n",
+                          node);
+            kfree(payload);
+            rc = -1;
+            break;
+        }
+
+        /* Setup payload */
+        init_completion(&payload->done);
+
+        /* one reference for self and one for the callback */
+        atomic_set(&payload->refcnt, 2);
+        payload->node = node;
+        payload->evt = evt;
+        payload->context = context;
+
+        if (efct_hw_async_call(hw, efct_xport_post_node_event_cb,
+                               payload)) {
+            efc_log_debug(efct, "efct_hw_async_call failed\n");
+            kfree(payload);
+            rc = -1;
+            break;
+        }
+
+        /* Wait for completion */
+        if (wait_for_completion_interruptible(&payload->done)) {
+            efc_log_debug(efct,
+                          "POST_NODE_EVENT: completion failed\n");
+            rc = -1;
+        }
+        if (atomic_sub_and_test(1, &payload->refcnt))
+            kfree(payload);
+
+        break;
+    }
+    /*
+     * Set wwnn for the port. This will be used instead of the default
+     * provided by FW.
+     */
+    case EFCT_XPORT_WWNN_SET: {
+        u64 wwnn;
+
+        /* Retrieve arguments */
+        va_start(argp, cmd);
+        wwnn = va_arg(argp, uint64_t);
+        va_end(argp);
+
+        efc_log_debug(efct, " WWNN %016llx\n", wwnn);
+        xport->req_wwnn = wwnn;
+
+        break;
+    }
+    /*
+     * Set wwpn for the port. This will be used instead of the default
+     * provided by FW.
+     */
+    case EFCT_XPORT_WWPN_SET: {
+        u64 wwpn;
+
+        /* Retrieve arguments */
+        va_start(argp, cmd);
+        wwpn = va_arg(argp, uint64_t);
+        va_end(argp);
+
+        efc_log_debug(efct, " WWPN %016llx\n", wwpn);
+        xport->req_wwpn = wwpn;
+
+        break;
+    }
+
+    default:
+        break;
+    }
+    return rc;
+}
+
+void
+efct_xport_free(struct efct_xport *xport)
+{
+    if (xport) {
+        efct_io_pool_free(xport->io_pool);
+
+        kfree(xport);
+    }
+}
+
+void
+efct_release_fc_transport(struct scsi_transport_template *transport_template)
+{
+    if (transport_template) {
+        pr_err("releasing transport layer\n");
+
+        /* Releasing FC transport */
+        fc_release_transport(transport_template);
+    }
+}
+
+static void
+efct_xport_remove_host(struct Scsi_Host *shost)
+{
+    fc_remove_host(shost);
+}
+
+int efct_scsi_del_device(struct efct *efct)
+{
+    if (efct->shost) {
+        efc_log_debug(efct, "Unregistering with Transport Layer\n");
+        efct_xport_remove_host(efct->shost);
+        efc_log_debug(efct, "Unregistering with SCSI Midlayer\n");
+        scsi_remove_host(efct->shost);
+        scsi_host_put(efct->shost);
+        efct->shost = NULL;
+    }
+    return EFC_SUCCESS;
+}
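A note on the varargs control interface just added: callers pass the
command-specific arguments positionally after the command code, as in
the sketch below. The wrapper name and the WWN values are purely
illustrative assumptions, not part of the patch.

    static void
    efct_sketch_configure(struct efct_xport *xport)
    {
        /* override the factory WWNN/WWPN before bringing the link up */
        efct_xport_control(xport, EFCT_XPORT_WWNN_SET,
                           (u64)0x20000090fa000001ULL);
        efct_xport_control(xport, EFCT_XPORT_WWPN_SET,
                           (u64)0x10000090fa000001ULL);
        efct_xport_control(xport, EFCT_XPORT_PORT_ONLINE);
    }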
From patchwork Thu Jan 7 22:59:04 2021
X-Patchwork-Submitter: James Smart
X-Patchwork-Id: 358585
From: James Smart
To: linux-scsi@vger.kernel.org
Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner
Subject: [PATCH v7 30/31] elx: efct: Add Makefile and Kconfig for efct driver
Date: Thu, 7 Jan 2021 14:59:04 -0800
Message-Id: <20210107225905.18186-31-jsmart2021@gmail.com>
In-Reply-To: <20210107225905.18186-1-jsmart2021@gmail.com>
References: <20210107225905.18186-1-jsmart2021@gmail.com>

This patch completes the efct driver population.

This patch adds the efct driver Kconfig and Makefiles.

Co-developed-by: Ram Vegesna
Signed-off-by: Ram Vegesna
Signed-off-by: James Smart
Reviewed-by: Hannes Reinecke
Reviewed-by: Daniel Wagner
---
 drivers/scsi/elx/Kconfig  |  9 +++++++++
 drivers/scsi/elx/Makefile | 18 ++++++++++++++++++
 2 files changed, 27 insertions(+)
 create mode 100644 drivers/scsi/elx/Kconfig
 create mode 100644 drivers/scsi/elx/Makefile

diff --git a/drivers/scsi/elx/Kconfig b/drivers/scsi/elx/Kconfig
new file mode 100644
index 000000000000..831daea7a951
--- /dev/null
+++ b/drivers/scsi/elx/Kconfig
@@ -0,0 +1,9 @@
+config SCSI_EFCT
+	tristate "Emulex Fibre Channel Target"
+	depends on PCI && SCSI
+	depends on TARGET_CORE
+	depends on SCSI_FC_ATTRS
+	select CRC_T10DIF
+	help
+		The efct driver provides enhanced SCSI Target Mode
+		support for specific SLI-4 adapters.
diff --git a/drivers/scsi/elx/Makefile b/drivers/scsi/elx/Makefile
new file mode 100644
index 000000000000..a8537d7a2a6e
--- /dev/null
+++ b/drivers/scsi/elx/Makefile
@@ -0,0 +1,18 @@
+#// SPDX-License-Identifier: GPL-2.0
+#/*
+# * Copyright (C) 2021 Broadcom. All Rights Reserved. The term
+# * “Broadcom” refers to Broadcom Inc. and/or its subsidiaries.
+# */
+
+
+obj-$(CONFIG_SCSI_EFCT) := efct.o
+
+efct-objs := efct/efct_driver.o efct/efct_io.o efct/efct_scsi.o \
+	     efct/efct_xport.o efct/efct_hw.o efct/efct_hw_queues.o \
+	     efct/efct_lio.o efct/efct_unsol.o
+
+efct-objs += libefc/efc_cmds.o libefc/efc_domain.o libefc/efc_fabric.o \
+	     libefc/efc_node.o libefc/efc_nport.o libefc/efc_device.o \
+	     libefc/efclib.o libefc/efc_sm.o libefc/efc_els.o
+
+efct-objs += libefc_sli/sli4.o

From patchwork Thu Jan 7 22:59:05 2021
X-Patchwork-Submitter: James Smart
X-Patchwork-Id: 358581
From: James Smart
To: linux-scsi@vger.kernel.org
Cc: James Smart, Ram Vegesna, Hannes Reinecke, Daniel Wagner
Subject: [PATCH v7 31/31] elx: efct: Tie into kernel Kconfig and build process
Date: Thu, 7 Jan 2021 14:59:05 -0800
Message-Id: <20210107225905.18186-32-jsmart2021@gmail.com>
In-Reply-To: <20210107225905.18186-1-jsmart2021@gmail.com>
References: <20210107225905.18186-1-jsmart2021@gmail.com>

This final patch ties the efct driver into the kernel Kconfig and
build linkages in the drivers/scsi directory.

Co-developed-by: Ram Vegesna
Signed-off-by: Ram Vegesna
Signed-off-by: James Smart
Reviewed-by: Hannes Reinecke
Reviewed-by: Daniel Wagner
---
 drivers/scsi/Kconfig  | 2 ++
 drivers/scsi/Makefile | 1 +
 2 files changed, 3 insertions(+)

diff --git a/drivers/scsi/Kconfig b/drivers/scsi/Kconfig
index 701b61ec76ee..f2d47bf55f97 100644
--- a/drivers/scsi/Kconfig
+++ b/drivers/scsi/Kconfig
@@ -1170,6 +1170,8 @@ config SCSI_LPFC_DEBUG_FS
 	  This makes debugging information from the lpfc driver
 	  available via the debugfs filesystem.
 
+source "drivers/scsi/elx/Kconfig"
+
 config SCSI_SIM710
 	tristate "Simple 53c710 SCSI support (Compaq, NCR machines)"
 	depends on EISA && SCSI
diff --git a/drivers/scsi/Makefile b/drivers/scsi/Makefile
index c00e3dd57990..844db573283c 100644
--- a/drivers/scsi/Makefile
+++ b/drivers/scsi/Makefile
@@ -86,6 +86,7 @@ obj-$(CONFIG_SCSI_QLOGIC_1280)	+= qla1280.o
 obj-$(CONFIG_SCSI_QLA_FC)	+= qla2xxx/
 obj-$(CONFIG_SCSI_QLA_ISCSI)	+= libiscsi.o qla4xxx/
 obj-$(CONFIG_SCSI_LPFC)		+= lpfc/
+obj-$(CONFIG_SCSI_EFCT)		+= elx/
 obj-$(CONFIG_SCSI_BFA_FC)	+= bfa/
 obj-$(CONFIG_SCSI_CHELSIO_FCOE)	+= csiostor/
 obj-$(CONFIG_SCSI_DMX3191D)	+= dmx3191d.o