From patchwork Wed Jan 10 10:46:31 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Hemant Agrawal <hemant.agrawal@nxp.com>
X-Patchwork-Id: 124067
Delivered-To: patch@linaro.org
From: Hemant Agrawal <hemant.agrawal@nxp.com>
To: dev@dpdk.org
Date: Wed, 10 Jan 2018 16:16:31 +0530
Message-ID: <1515581201-29784-10-git-send-email-hemant.agrawal@nxp.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1515581201-29784-1-git-send-email-hemant.agrawal@nxp.com>
References: <1515504186-13587-1-git-send-email-hemant.agrawal@nxp.com>
 <1515581201-29784-1-git-send-email-hemant.agrawal@nxp.com>
Subject: [dpdk-dev] [PATCH v3 09/19] bus/dpaa: add support to create dynamic HW portal
X-BeenThere: dev@dpdk.org
List-Id: DPDK patches and discussions
Sender: "dev" <dev-bounces@dpdk.org>

A HW portal is a processing context in DPAA. This patch allows the creation of a queue-specific HW portal context.
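As background, the patch tracks dynamically created portals in a fixed-size static pool (see qman_alloc_global_portal()/qman_free_global_portal() below). The following is a minimal standalone sketch of that allocation pattern, not driver code; the names (slot_alloc, slot_free, struct portal) are invented for illustration:

```c
#include <stddef.h>

/* Illustrative model of the fixed-pool allocator: a static array of
 * objects plus a parallel "used" flag array, as in
 * qman_alloc_global_portal()/qman_free_global_portal().
 */
#define MAX_SLOTS 8

struct portal { int id; };

static struct portal slots[MAX_SLOTS];
static int slots_used[MAX_SLOTS];

/* Return the first unused slot, or NULL when the pool is exhausted. */
static struct portal *slot_alloc(void)
{
	unsigned int i;

	for (i = 0; i < MAX_SLOTS; i++) {
		if (!slots_used[i]) {
			slots_used[i] = 1;
			return &slots[i];
		}
	}
	return NULL;
}

/* Release a slot back to the pool; -1 if the pointer is not ours. */
static int slot_free(struct portal *p)
{
	unsigned int i;

	for (i = 0; i < MAX_SLOTS; i++) {
		if (&slots[i] == p) {
			slots_used[i] = 0;
			return 0;
		}
	}
	return -1;
}
```

Since the pool is a plain static array, freeing is a pointer-identity search rather than a heap free, which keeps the portal objects at stable addresses for their whole lifetime.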
Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/bus/dpaa/base/qbman/qman.c        |  69 ++++++++++++--
 drivers/bus/dpaa/base/qbman/qman_driver.c | 153 +++++++++++++++++++++++++-----
 drivers/bus/dpaa/base/qbman/qman_priv.h   |   6 +-
 drivers/bus/dpaa/dpaa_bus.c               |  31 +++++-
 drivers/bus/dpaa/include/fsl_qman.h       |  25 ++---
 drivers/bus/dpaa/include/fsl_usd.h        |   4 +
 drivers/bus/dpaa/include/process.h        |  11 ++-
 drivers/bus/dpaa/rte_bus_dpaa_version.map |   2 +
 drivers/bus/dpaa/rte_dpaa_bus.h           |   4 +
 9 files changed, 252 insertions(+), 53 deletions(-)

-- 
2.7.4

diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index b6fd40b..d8fb25a 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -621,11 +621,52 @@ struct qman_portal *qman_create_portal(
 	return NULL;
 }
 
+#define MAX_GLOBAL_PORTALS 8
+static struct qman_portal global_portals[MAX_GLOBAL_PORTALS];
+static int global_portals_used[MAX_GLOBAL_PORTALS];
+
+static struct qman_portal *
+qman_alloc_global_portal(void)
+{
+	unsigned int i;
+
+	for (i = 0; i < MAX_GLOBAL_PORTALS; i++) {
+		if (global_portals_used[i] == 0) {
+			global_portals_used[i] = 1;
+			return &global_portals[i];
+		}
+	}
+	pr_err("No portal available (%x)\n", MAX_GLOBAL_PORTALS);
+
+	return NULL;
+}
+
+static int
+qman_free_global_portal(struct qman_portal *portal)
+{
+	unsigned int i;
+
+	for (i = 0; i < MAX_GLOBAL_PORTALS; i++) {
+		if (&global_portals[i] == portal) {
+			global_portals_used[i] = 0;
+			return 0;
+		}
+	}
+	return -1;
+}
+
 struct qman_portal *qman_create_affine_portal(const struct qm_portal_config *c,
-					      const struct qman_cgrs *cgrs)
+					      const struct qman_cgrs *cgrs,
+					      int alloc)
 {
 	struct qman_portal *res;
-	struct qman_portal *portal = get_affine_portal();
+	struct qman_portal *portal;
+
+	if (alloc)
+		portal = qman_alloc_global_portal();
+	else
+		portal = get_affine_portal();
+
 	/* A criteria for calling this function (from qman_driver.c) is that
 	 * we're already affine to the cpu and won't schedule onto
 	 * another cpu.
 	 */
@@ -675,13 +716,18 @@ void qman_destroy_portal(struct qman_portal *qm)
 	spin_lock_destroy(&qm->cgr_lock);
 }
 
-const struct qm_portal_config *qman_destroy_affine_portal(void)
+const struct qm_portal_config *
+qman_destroy_affine_portal(struct qman_portal *qp)
 {
 	/* We don't want to redirect if we're a slave, use "raw" */
-	struct qman_portal *qm = get_affine_portal();
+	struct qman_portal *qm;
 	const struct qm_portal_config *pcfg;
 	int cpu;
 
+	if (qp == NULL)
+		qm = get_affine_portal();
+	else
+		qm = qp;
+
 	pcfg = qm->config;
 	cpu = pcfg->cpu;
@@ -690,6 +736,9 @@ const struct qm_portal_config *qman_destroy_affine_portal(void)
 	spin_lock(&affine_mask_lock);
 	CPU_CLR(cpu, &affine_mask);
 	spin_unlock(&affine_mask_lock);
+
+	qman_free_global_portal(qm);
+
 	return pcfg;
 }
@@ -1096,27 +1145,27 @@ void qman_start_dequeues(void)
 	qm_dqrr_set_maxfill(&p->p, DQRR_MAXFILL);
 }
 
-void qman_static_dequeue_add(u32 pools)
+void qman_static_dequeue_add(u32 pools, struct qman_portal *qp)
 {
-	struct qman_portal *p = get_affine_portal();
+	struct qman_portal *p = qp ? qp : get_affine_portal();
 
 	pools &= p->config->pools;
 	p->sdqcr |= pools;
 	qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
 }
 
-void qman_static_dequeue_del(u32 pools)
+void qman_static_dequeue_del(u32 pools, struct qman_portal *qp)
 {
-	struct qman_portal *p = get_affine_portal();
+	struct qman_portal *p = qp ? qp : get_affine_portal();
 
 	pools &= p->config->pools;
 	p->sdqcr &= ~pools;
 	qm_dqrr_sdqcr_set(&p->p, p->sdqcr);
 }
 
-u32 qman_static_dequeue_get(void)
+u32 qman_static_dequeue_get(struct qman_portal *qp)
 {
-	struct qman_portal *p = get_affine_portal();
+	struct qman_portal *p = qp ? qp : get_affine_portal();
 
 	return p->sdqcr;
 }
diff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c
index c17d15f..7cfa8ee 100644
--- a/drivers/bus/dpaa/base/qbman/qman_driver.c
+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c
@@ -24,8 +24,8 @@ void *qman_ccsr_map;
 /* The qman clock frequency */
 u32 qman_clk;
 
-static __thread int fd = -1;
-static __thread struct qm_portal_config pcfg;
+static __thread int qmfd = -1;
+static __thread struct qm_portal_config qpcfg;
 static __thread struct dpaa_ioctl_portal_map map = {
 	.type = dpaa_portal_qman
 };
@@ -44,16 +44,16 @@ static int fsl_qman_portal_init(uint32_t index, int is_shared)
 		error(0, ret, "pthread_getaffinity_np()");
 		return ret;
 	}
-	pcfg.cpu = -1;
+	qpcfg.cpu = -1;
 	for (loop = 0; loop < CPU_SETSIZE; loop++)
 		if (CPU_ISSET(loop, &cpuset)) {
-			if (pcfg.cpu != -1) {
+			if (qpcfg.cpu != -1) {
 				pr_err("Thread is not affine to 1 cpu\n");
 				return -EINVAL;
 			}
-			pcfg.cpu = loop;
+			qpcfg.cpu = loop;
 		}
-	if (pcfg.cpu == -1) {
+	if (qpcfg.cpu == -1) {
 		pr_err("Bug in getaffinity handling!\n");
 		return -EINVAL;
 	}
@@ -65,36 +65,36 @@ static int fsl_qman_portal_init(uint32_t index, int is_shared)
 		error(0, ret, "process_portal_map()");
 		return ret;
 	}
-	pcfg.channel = map.channel;
-	pcfg.pools = map.pools;
-	pcfg.index = map.index;
+	qpcfg.channel = map.channel;
+	qpcfg.pools = map.pools;
+	qpcfg.index = map.index;
 
 	/* Make the portal's cache-[enabled|inhibited] regions */
-	pcfg.addr_virt[DPAA_PORTAL_CE] = map.addr.cena;
-	pcfg.addr_virt[DPAA_PORTAL_CI] = map.addr.cinh;
+	qpcfg.addr_virt[DPAA_PORTAL_CE] = map.addr.cena;
+	qpcfg.addr_virt[DPAA_PORTAL_CI] = map.addr.cinh;
 
-	fd = open(QMAN_PORTAL_IRQ_PATH, O_RDONLY);
-	if (fd == -1) {
+	qmfd = open(QMAN_PORTAL_IRQ_PATH, O_RDONLY);
+	if (qmfd == -1) {
 		pr_err("QMan irq init failed\n");
 		process_portal_unmap(&map.addr);
 		return -EBUSY;
 	}
 
-	pcfg.is_shared = is_shared;
-	pcfg.node = NULL;
-	pcfg.irq = fd;
+	qpcfg.is_shared = is_shared;
+	qpcfg.node = NULL;
+	qpcfg.irq = qmfd;
 
-	portal = qman_create_affine_portal(&pcfg, NULL);
+	portal = qman_create_affine_portal(&qpcfg, NULL, 0);
 	if (!portal) {
 		pr_err("Qman portal initialisation failed (%d)\n",
-		       pcfg.cpu);
+		       qpcfg.cpu);
 		process_portal_unmap(&map.addr);
 		return -EBUSY;
 	}
 
 	irq_map.type = dpaa_portal_qman;
 	irq_map.portal_cinh = map.addr.cinh;
-	process_portal_irq_map(fd, &irq_map);
+	process_portal_irq_map(qmfd, &irq_map);
 	return 0;
 }
@@ -103,10 +103,10 @@ static int fsl_qman_portal_finish(void)
 	__maybe_unused const struct qm_portal_config *cfg;
 	int ret;
 
-	process_portal_irq_unmap(fd);
+	process_portal_irq_unmap(qmfd);
 
-	cfg = qman_destroy_affine_portal();
-	DPAA_BUG_ON(cfg != &pcfg);
+	cfg = qman_destroy_affine_portal(NULL);
+	DPAA_BUG_ON(cfg != &qpcfg);
 	ret = process_portal_unmap(&map.addr);
 	if (ret)
 		error(0, ret, "process_portal_unmap()");
@@ -128,14 +128,119 @@ int qman_thread_finish(void)
 
 void qman_thread_irq(void)
 {
-	qbman_invoke_irq(pcfg.irq);
+	qbman_invoke_irq(qpcfg.irq);
 
 	/* Now we need to uninhibit interrupts. This is the only code outside
 	 * the regular portal driver that manipulates any portal register, so
 	 * rather than breaking that encapsulation I am simply hard-coding the
 	 * offset to the inhibit register here.
 	 */
-	out_be32(pcfg.addr_virt[DPAA_PORTAL_CI] + 0xe0c, 0);
+	out_be32(qpcfg.addr_virt[DPAA_PORTAL_CI] + 0xe0c, 0);
+}
+
+struct qman_portal *fsl_qman_portal_create(void)
+{
+	cpu_set_t cpuset;
+	struct qman_portal *res;
+
+	struct qm_portal_config *q_pcfg;
+	int loop, ret;
+	struct dpaa_ioctl_irq_map irq_map;
+	struct dpaa_ioctl_portal_map q_map = {0};
+	int q_fd;
+
+	q_pcfg = kzalloc((sizeof(struct qm_portal_config)), 0);
+	if (!q_pcfg) {
+		error(0, -1, "q_pcfg kzalloc failed");
+		return NULL;
+	}
+
+	/* Verify the thread's cpu-affinity */
+	ret = pthread_getaffinity_np(pthread_self(), sizeof(cpu_set_t),
+				     &cpuset);
+	if (ret) {
+		error(0, ret, "pthread_getaffinity_np()");
+		return NULL;
+	}
+
+	q_pcfg->cpu = -1;
+	for (loop = 0; loop < CPU_SETSIZE; loop++)
+		if (CPU_ISSET(loop, &cpuset)) {
+			if (q_pcfg->cpu != -1) {
+				pr_err("Thread is not affine to 1 cpu\n");
+				return NULL;
+			}
+			q_pcfg->cpu = loop;
+		}
+	if (q_pcfg->cpu == -1) {
+		pr_err("Bug in getaffinity handling!\n");
+		return NULL;
+	}
+
+	/* Allocate and map a qman portal */
+	q_map.type = dpaa_portal_qman;
+	q_map.index = QBMAN_ANY_PORTAL_IDX;
+	ret = process_portal_map(&q_map);
+	if (ret) {
+		error(0, ret, "process_portal_map()");
+		return NULL;
+	}
+	q_pcfg->channel = q_map.channel;
+	q_pcfg->pools = q_map.pools;
+	q_pcfg->index = q_map.index;
+
+	/* Make the portal's cache-[enabled|inhibited] regions */
+	q_pcfg->addr_virt[DPAA_PORTAL_CE] = q_map.addr.cena;
+	q_pcfg->addr_virt[DPAA_PORTAL_CI] = q_map.addr.cinh;
+
+	q_fd = open(QMAN_PORTAL_IRQ_PATH, O_RDONLY);
+	if (q_fd == -1) {
+		pr_err("QMan irq init failed\n");
+		goto err1;
+	}
+
+	q_pcfg->irq = q_fd;
+
+	res = qman_create_affine_portal(q_pcfg, NULL, true);
+	if (!res) {
+		pr_err("Qman portal initialisation failed (%d)\n",
+		       q_pcfg->cpu);
+		goto err2;
+	}
+
+	irq_map.type = dpaa_portal_qman;
+	irq_map.portal_cinh = q_map.addr.cinh;
+	process_portal_irq_map(q_fd, &irq_map);
+
+	return res;
+err2:
+	close(q_fd);
+err1:
+	process_portal_unmap(&q_map.addr);
+	return NULL;
+}
+
+int fsl_qman_portal_destroy(struct qman_portal *qp)
+{
+	const struct qm_portal_config *cfg;
+	struct dpaa_portal_map addr;
+	int ret;
+
+	cfg = qman_destroy_affine_portal(qp);
+	kfree(qp);
+
+	process_portal_irq_unmap(cfg->irq);
+
+	addr.cena = cfg->addr_virt[DPAA_PORTAL_CE];
+	addr.cinh = cfg->addr_virt[DPAA_PORTAL_CI];
+
+	ret = process_portal_unmap(&addr);
+	if (ret)
+		pr_err("process_portal_unmap() (%d)\n", ret);
+
+	kfree((void *)cfg);
+
+	return ret;
 }
 
 int qman_global_init(void)
diff --git a/drivers/bus/dpaa/base/qbman/qman_priv.h b/drivers/bus/dpaa/base/qbman/qman_priv.h
index db0b310..9e4471e 100644
--- a/drivers/bus/dpaa/base/qbman/qman_priv.h
+++ b/drivers/bus/dpaa/base/qbman/qman_priv.h
@@ -146,8 +146,10 @@ int qm_get_wpm(int *wpm);
 
 struct qman_portal *qman_create_affine_portal(
 			const struct qm_portal_config *config,
-			const struct qman_cgrs *cgrs);
-const struct qm_portal_config *qman_destroy_affine_portal(void);
+			const struct qman_cgrs *cgrs,
+			int alloc);
+const struct qm_portal_config *
+qman_destroy_affine_portal(struct qman_portal *q);
 
 struct qm_portal_config *qm_get_unused_portal(void);
 struct qm_portal_config *qm_get_unused_portal_idx(uint32_t idx);
diff --git a/drivers/bus/dpaa/dpaa_bus.c b/drivers/bus/dpaa/dpaa_bus.c
index a7c05b3..329a125 100644
--- a/drivers/bus/dpaa/dpaa_bus.c
+++ b/drivers/bus/dpaa/dpaa_bus.c
@@ -264,8 +264,7 @@ _dpaa_portal_init(void *arg)
  * rte_dpaa_portal_init - Wrapper over _dpaa_portal_init with thread level check
  * XXX Complete this
  */
-int
-rte_dpaa_portal_init(void *arg)
+int rte_dpaa_portal_init(void *arg)
 {
 	if (unlikely(!RTE_PER_LCORE(_dpaa_io)))
 		return _dpaa_portal_init(arg);
@@ -273,6 +272,34 @@ rte_dpaa_portal_init(void *arg)
 	return 0;
 }
 
+int
+rte_dpaa_portal_fq_init(void *arg, struct qman_fq *fq)
+{
+	/* Affine above created portal with channel*/
+	u32 sdqcr;
+	struct qman_portal *qp;
+
+	if (unlikely(!RTE_PER_LCORE(_dpaa_io)))
+		_dpaa_portal_init(arg);
+
+	/* Initialise qman specific portals */
+	qp = fsl_qman_portal_create();
+	if (!qp) {
+		DPAA_BUS_LOG(ERR, "Unable to alloc fq portal");
+		return -1;
+	}
+	fq->qp = qp;
+	sdqcr = QM_SDQCR_CHANNELS_POOL_CONV(fq->ch_id);
+	qman_static_dequeue_add(sdqcr, qp);
+
+	return 0;
+}
+
+int rte_dpaa_portal_fq_close(struct qman_fq *fq)
+{
+	return fsl_qman_portal_destroy(fq->qp);
+}
+
 void
 dpaa_portal_finish(void *arg)
 {
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index 5027230..fc00d8d 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1190,21 +1190,24 @@ struct qman_fq_cb {
 struct qman_fq {
 	/* Caller of qman_create_fq() provides these demux callbacks */
 	struct qman_fq_cb cb;
-	/*
-	 * These are internal to the driver, don't touch. In particular, they
-	 * may change, be removed, or extended (so you shouldn't rely on
-	 * sizeof(qman_fq) being a constant).
-	 */
-	spinlock_t fqlock;
-	u32 fqid;
+	u32 fqid_le;
+	u16 ch_id;
+	u8 cgr_groupid;
+	u8 is_static;
 
 	/* DPDK Interface */
 	void *dpaa_intf;
+	/* affined portal in case of static queue */
+	struct qman_portal *qp;
+
 	volatile unsigned long flags;
+
 	enum qman_fq_state state;
-	int cgr_groupid;
+	u32 fqid;
+	spinlock_t fqlock;
+
 	struct rb_node node;
 #ifdef CONFIG_FSL_QMAN_FQ_LOOKUP
 	u32 key;
@@ -1383,7 +1386,7 @@ void qman_start_dequeues(void);
 * (SDQCR). The requested pools are limited to those the portal has dequeue
 * access to.
 */
-void qman_static_dequeue_add(u32 pools);
+void qman_static_dequeue_add(u32 pools, struct qman_portal *qm);
 
 /**
 * qman_static_dequeue_del - Remove pool channels from the portal SDQCR
@@ -1393,7 +1396,7 @@ void qman_static_dequeue_add(u32 pools);
 * register (SDQCR). The requested pools are limited to those the portal has
 * dequeue access to.
 */
-void qman_static_dequeue_del(u32 pools);
+void qman_static_dequeue_del(u32 pools, struct qman_portal *qp);
 
 /**
 * qman_static_dequeue_get - return the portal's current SDQCR
@@ -1402,7 +1405,7 @@ void qman_static_dequeue_del(u32 pools);
 * entire register is returned, so if only the currently-enabled pool channels
 * are desired, mask the return value with QM_SDQCR_CHANNELS_POOL_MASK.
 */
-u32 qman_static_dequeue_get(void);
+u32 qman_static_dequeue_get(struct qman_portal *qp);
 
 /**
 * qman_dca - Perform a Discrete Consumption Acknowledgment
diff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h
index ada92f2..e183617 100644
--- a/drivers/bus/dpaa/include/fsl_usd.h
+++ b/drivers/bus/dpaa/include/fsl_usd.h
@@ -67,6 +67,10 @@ void bman_thread_irq(void);
 int qman_global_init(void);
 int bman_global_init(void);
 
+/* Direct portal create and destroy */
+struct qman_portal *fsl_qman_portal_create(void);
+int fsl_qman_portal_destroy(struct qman_portal *qp);
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/drivers/bus/dpaa/include/process.h b/drivers/bus/dpaa/include/process.h
index 079849b..d9ec94e 100644
--- a/drivers/bus/dpaa/include/process.h
+++ b/drivers/bus/dpaa/include/process.h
@@ -39,6 +39,11 @@ enum dpaa_portal_type {
 	dpaa_portal_bman,
 };
 
+struct dpaa_portal_map {
+	void *cinh;
+	void *cena;
+};
+
 struct dpaa_ioctl_portal_map {
 	/* Input parameter, is a qman or bman portal required. */
 	enum dpaa_portal_type type;
@@ -50,10 +55,8 @@ struct dpaa_ioctl_portal_map {
 	/* Return value if the map succeeds, this gives the mapped
 	 * cache-inhibited (cinh) and cache-enabled (cena) addresses.
 	 */
-	struct dpaa_portal_map {
-		void *cinh;
-		void *cena;
-	} addr;
+	struct dpaa_portal_map addr;
+
 	/* Qman-specific return values */
 	u16 channel;
 	uint32_t pools;
diff --git a/drivers/bus/dpaa/rte_bus_dpaa_version.map b/drivers/bus/dpaa/rte_bus_dpaa_version.map
index f412362..4e3afda 100644
--- a/drivers/bus/dpaa/rte_bus_dpaa_version.map
+++ b/drivers/bus/dpaa/rte_bus_dpaa_version.map
@@ -74,6 +74,8 @@ DPDK_18.02 {
 	qman_delete_cgr;
 	qman_modify_cgr;
 	qman_release_cgrid_range;
+	rte_dpaa_portal_fq_close;
+	rte_dpaa_portal_fq_init;
 
 	local: *;
 } DPDK_17.11;
diff --git a/drivers/bus/dpaa/rte_dpaa_bus.h b/drivers/bus/dpaa/rte_dpaa_bus.h
index d9e8c84..f626066 100644
--- a/drivers/bus/dpaa/rte_dpaa_bus.h
+++ b/drivers/bus/dpaa/rte_dpaa_bus.h
@@ -136,6 +136,10 @@ void rte_dpaa_driver_unregister(struct rte_dpaa_driver *driver);
 */
 int rte_dpaa_portal_init(void *arg);
 
+int rte_dpaa_portal_fq_init(void *arg, struct qman_fq *fq);
+
+int rte_dpaa_portal_fq_close(struct qman_fq *fq);
+
 /**
 * Cleanup a DPAA Portal
 */
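A note on the API change above: qman_static_dequeue_add/del/get gain an optional portal argument and fall back to the thread-affine portal when the caller passes NULL, so existing per-lcore callers keep working unchanged. The following is a small self-contained sketch of that fallback pattern under invented names (struct portal, dequeue_add, dequeue_get); it is not the driver code itself:

```c
/* Sketch of extending an API with an optional handle: NULL selects the
 * thread-local default, mirroring "qp ? qp : get_affine_portal()".
 */
struct portal { unsigned int sdqcr; };

/* Per-thread default portal, standing in for the affine portal. */
static __thread struct portal affine_portal;

static struct portal *get_affine(void)
{
	return &affine_portal;
}

/* Enable pool channels in the chosen portal's SDQCR. */
static void dequeue_add(unsigned int pools, struct portal *qp)
{
	struct portal *p = qp ? qp : get_affine();

	p->sdqcr |= pools;
}

/* Read back the chosen portal's SDQCR. */
static unsigned int dequeue_get(struct portal *qp)
{
	struct portal *p = qp ? qp : get_affine();

	return p->sdqcr;
}
```

The design choice keeps the new dynamic portals and the legacy affine portal behind one entry point, so rte_dpaa_portal_fq_init() can simply pass the freshly created portal while old callers pass NULL.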