From patchwork Thu Nov 28 10:59:32 2019
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 180405
From: Peter Ujfalusi
Subject: [PATCH v6 04/17] dmaengine: Add metadata_ops for dma_async_tx_descriptor
Date: Thu, 28 Nov 2019 12:59:32 +0200
Message-ID: <20191128105945.13071-5-peter.ujfalusi@ti.com>
In-Reply-To: <20191128105945.13071-1-peter.ujfalusi@ti.com>
References: <20191128105945.13071-1-peter.ujfalusi@ti.com>
X-Mailing-List: devicetree@vger.kernel.org

The metadata is best described as side-band data or parameters traveling alongside the data DMAed by the DMA engine. It is data which is understood only by the peripheral and the peripheral driver; the DMA engine sees it merely as a data block and does not interpret it in any way. The metadata can be different per descriptor, as it is a parameter for the data being transferred. If the DMA driver supports per-descriptor metadata it can implement the attach and get_ptr/set_len callbacks.
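As an illustration (not part of the patch), a minimal provider-side sketch of wiring this up could look as follows; only the dmaengine symbols added by this patch are real, the my_* names and the empty callback bodies are hypothetical:

  /* Hypothetical DMA provider driver advertising per-descriptor metadata. */
  static int my_attach_metadata(struct dma_async_tx_descriptor *desc,
                                void *data, size_t len)
  {
          /* remember/copy the client-provided buffer for this descriptor */
          return 0;
  }

  static void *my_get_metadata_ptr(struct dma_async_tx_descriptor *desc,
                                   size_t *payload_len, size_t *max_len)
  {
          /* return the engine-owned metadata area of this descriptor */
          return NULL;
  }

  static int my_set_metadata_len(struct dma_async_tx_descriptor *desc,
                                 size_t payload_len)
  {
          /* record how much metadata the client has written */
          return 0;
  }

  static struct dma_descriptor_metadata_ops my_metadata_ops = {
          .attach  = my_attach_metadata,
          .get_ptr = my_get_metadata_ptr,
          .set_len = my_set_metadata_len,
  };

  /* at probe time: advertise the supported modes on the dma_device */
  dma_dev->desc_metadata_modes = DESC_METADATA_CLIENT | DESC_METADATA_ENGINE;

  /* in the prep_* callbacks: hook the ops onto each prepared descriptor */
  desc->metadata_ops = &my_metadata_ops;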
Client drivers must only use either attach or get_ptr/set_len to avoid misconfiguration. Client driver can check if a given metadata mode is supported by the channel during probe time with dmaengine_is_metadata_mode_supported(chan, DESC_METADATA_CLIENT); dmaengine_is_metadata_mode_supported(chan, DESC_METADATA_ENGINE); and based on this information can use either mode. Wrappers are also added for the metadata_ops. To be used in DESC_METADATA_CLIENT mode: dmaengine_desc_attach_metadata() To be used in DESC_METADATA_ENGINE mode: dmaengine_desc_get_metadata_ptr() dmaengine_desc_set_metadata_len() Signed-off-by: Peter Ujfalusi Reviewed-by: Tero Kristo --- drivers/dma/dmaengine.c | 73 ++++++++++++++++++++++++++ include/linux/dmaengine.h | 108 ++++++++++++++++++++++++++++++++++++++ 2 files changed, 181 insertions(+) -- Peter Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c index 03ac4b96117c..6baddf7dcbfd 100644 --- a/drivers/dma/dmaengine.c +++ b/drivers/dma/dmaengine.c @@ -1302,6 +1302,79 @@ void dma_async_tx_descriptor_init(struct dma_async_tx_descriptor *tx, } EXPORT_SYMBOL(dma_async_tx_descriptor_init); +static inline int desc_check_and_set_metadata_mode( + struct dma_async_tx_descriptor *desc, enum dma_desc_metadata_mode mode) +{ + /* Make sure that the metadata mode is not mixed */ + if (!desc->desc_metadata_mode) { + if (dmaengine_is_metadata_mode_supported(desc->chan, mode)) + desc->desc_metadata_mode = mode; + else + return -ENOTSUPP; + } else if (desc->desc_metadata_mode != mode) { + return -EINVAL; + } + + return 0; +} + +int dmaengine_desc_attach_metadata(struct dma_async_tx_descriptor *desc, + void *data, size_t len) +{ + int ret; + + if (!desc) + return -EINVAL; + + ret = desc_check_and_set_metadata_mode(desc, DESC_METADATA_CLIENT); + if (ret) + return ret; + + if (!desc->metadata_ops || !desc->metadata_ops->attach) + return -ENOTSUPP; + + return desc->metadata_ops->attach(desc, data, len); +} +EXPORT_SYMBOL_GPL(dmaengine_desc_attach_metadata); + +void *dmaengine_desc_get_metadata_ptr(struct dma_async_tx_descriptor *desc, + size_t *payload_len, size_t *max_len) +{ + int ret; + + if (!desc) + return ERR_PTR(-EINVAL); + + ret = desc_check_and_set_metadata_mode(desc, DESC_METADATA_ENGINE); + if (ret) + return ERR_PTR(ret); + + if (!desc->metadata_ops || !desc->metadata_ops->get_ptr) + return ERR_PTR(-ENOTSUPP); + + return desc->metadata_ops->get_ptr(desc, payload_len, max_len); +} +EXPORT_SYMBOL_GPL(dmaengine_desc_get_metadata_ptr); + +int dmaengine_desc_set_metadata_len(struct dma_async_tx_descriptor *desc, + size_t payload_len) +{ + int ret; + + if (!desc) + return -EINVAL; + + ret = desc_check_and_set_metadata_mode(desc, DESC_METADATA_ENGINE); + if (ret) + return ret; + + if (!desc->metadata_ops || !desc->metadata_ops->set_len) + return -ENOTSUPP; + + return desc->metadata_ops->set_len(desc, payload_len); +} +EXPORT_SYMBOL_GPL(dmaengine_desc_set_metadata_len); + /* dma_wait_for_async_tx - spin wait for a transaction to complete * @tx: in-flight transaction to wait on */ diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h index 8fcdee1c0cf9..0e8b426bbde9 100644 --- a/include/linux/dmaengine.h +++ b/include/linux/dmaengine.h @@ -219,6 +219,58 @@ typedef struct { DECLARE_BITMAP(bits, DMA_TX_TYPE_END); } dma_cap_mask_t; * @bytes_transferred: byte counter */ +/** + * enum dma_desc_metadata_mode - per descriptor metadata mode types 
supported
+ * @DESC_METADATA_CLIENT - the metadata buffer is allocated/provided by the
+ *  client driver and it is attached (via the dmaengine_desc_attach_metadata()
+ *  helper) to the descriptor.
+ *
+ * Client drivers interested in using this mode can follow:
+ * - DMA_MEM_TO_DEV / DMA_MEM_TO_MEM:
+ *   1. prepare the descriptor (dmaengine_prep_*)
+ *      construct the metadata in the client's buffer
+ *   2. use dmaengine_desc_attach_metadata() to attach the buffer to the
+ *      descriptor
+ *   3. submit the transfer
+ * - DMA_DEV_TO_MEM:
+ *   1. prepare the descriptor (dmaengine_prep_*)
+ *   2. use dmaengine_desc_attach_metadata() to attach the buffer to the
+ *      descriptor
+ *   3. submit the transfer
+ *   4. when the transfer is completed, the metadata should be available in the
+ *      attached buffer
+ *
+ * @DESC_METADATA_ENGINE - the metadata buffer is allocated/managed by the DMA
+ *  driver. The client driver can ask for the pointer, maximum size and the
+ *  currently used size of the metadata and can directly update or read it.
+ *  dmaengine_desc_get_metadata_ptr() and dmaengine_desc_set_metadata_len() are
+ *  provided as helper functions.
+ *
+ * Client drivers interested in using this mode can follow:
+ * - DMA_MEM_TO_DEV / DMA_MEM_TO_MEM:
+ *   1. prepare the descriptor (dmaengine_prep_*)
+ *   2. use dmaengine_desc_get_metadata_ptr() to get the pointer to the
+ *      engine's metadata area
+ *   3. update the metadata at the pointer
+ *   4. use dmaengine_desc_set_metadata_len() to tell the DMA engine the amount
+ *      of data the client has placed into the metadata buffer
+ *   5. submit the transfer
+ * - DMA_DEV_TO_MEM:
+ *   1. prepare the descriptor (dmaengine_prep_*)
+ *   2. submit the transfer
+ *   3. on transfer completion, use dmaengine_desc_get_metadata_ptr() to get
+ *      the pointer to the engine's metadata area
+ *   4. read out the metadata from the pointer
+ *
+ * Note: the two modes are not compatible and clients must use one mode for a
+ * descriptor.
+ */
+enum dma_desc_metadata_mode {
+	DESC_METADATA_NONE = 0,
+	DESC_METADATA_CLIENT = BIT(0),
+	DESC_METADATA_ENGINE = BIT(1),
+};
+
 struct dma_chan_percpu {
 	/* stats */
 	unsigned long memcpy_count;
@@ -475,6 +527,18 @@ struct dmaengine_unmap_data {
 	dma_addr_t addr[0];
 };
 
+struct dma_async_tx_descriptor;
+
+struct dma_descriptor_metadata_ops {
+	int (*attach)(struct dma_async_tx_descriptor *desc, void *data,
+		      size_t len);
+
+	void *(*get_ptr)(struct dma_async_tx_descriptor *desc,
+			 size_t *payload_len, size_t *max_len);
+	int (*set_len)(struct dma_async_tx_descriptor *desc,
+		       size_t payload_len);
+};
+
 /**
  * struct dma_async_tx_descriptor - async transaction descriptor
  * ---dma generic offload fields---
@@ -488,6 +552,11 @@ struct dmaengine_unmap_data {
  *	descriptor pending. To be pushed on .issue_pending() call
  * @callback: routine to call after this operation is complete
  * @callback_param: general parameter to pass to the callback routine
+ * @desc_metadata_mode: core managed metadata mode to protect mixed use of
+ *	DESC_METADATA_CLIENT or DESC_METADATA_ENGINE. Otherwise
+ *	DESC_METADATA_NONE
Otherwise + * DESC_METADATA_NONE + * @metadata_ops: DMA driver provided metadata mode ops, need to be set by the + * DMA driver if metadata mode is supported with the descriptor * ---async_tx api specific fields--- * @next: at completion submit this descriptor * @parent: pointer to the next level up in the dependency chain @@ -504,6 +573,8 @@ struct dma_async_tx_descriptor { dma_async_tx_callback_result callback_result; void *callback_param; struct dmaengine_unmap_data *unmap; + enum dma_desc_metadata_mode desc_metadata_mode; + struct dma_descriptor_metadata_ops *metadata_ops; #ifdef CONFIG_ASYNC_TX_ENABLE_CHANNEL_SWITCH struct dma_async_tx_descriptor *next; struct dma_async_tx_descriptor *parent; @@ -666,6 +737,7 @@ struct dma_filter { * @global_node: list_head for global dma_device_list * @filter: information for device/slave to filter function/param mapping * @cap_mask: one or more dma_capability flags + * @desc_metadata_modes: supported metadata modes by the DMA device * @max_xor: maximum number of xor sources, 0 if no capability * @max_pq: maximum number of PQ sources and PQ-continue capability * @copy_align: alignment shift for memcpy operations @@ -727,6 +799,7 @@ struct dma_device { struct list_head global_node; struct dma_filter filter; dma_cap_mask_t cap_mask; + enum dma_desc_metadata_mode desc_metadata_modes; unsigned short max_xor; unsigned short max_pq; enum dmaengine_alignment copy_align; @@ -902,6 +975,41 @@ static inline struct dma_async_tx_descriptor *dmaengine_prep_dma_memcpy( len, flags); } +static inline bool dmaengine_is_metadata_mode_supported(struct dma_chan *chan, + enum dma_desc_metadata_mode mode) +{ + if (!chan) + return false; + + return !!(chan->device->desc_metadata_modes & mode); +} + +#ifdef CONFIG_DMA_ENGINE +int dmaengine_desc_attach_metadata(struct dma_async_tx_descriptor *desc, + void *data, size_t len); +void *dmaengine_desc_get_metadata_ptr(struct dma_async_tx_descriptor *desc, + size_t *payload_len, size_t *max_len); +int dmaengine_desc_set_metadata_len(struct dma_async_tx_descriptor *desc, + size_t payload_len); +#else /* CONFIG_DMA_ENGINE */ +static inline int dmaengine_desc_attach_metadata( + struct dma_async_tx_descriptor *desc, void *data, size_t len) +{ + return -EINVAL; +} +static inline void *dmaengine_desc_get_metadata_ptr( + struct dma_async_tx_descriptor *desc, size_t *payload_len, + size_t *max_len) +{ + return NULL; +} +static inline int dmaengine_desc_set_metadata_len( + struct dma_async_tx_descriptor *desc, size_t payload_len) +{ + return -EINVAL; +} +#endif /* CONFIG_DMA_ENGINE */ + /** * dmaengine_terminate_all() - Terminate all active DMA transfers * @chan: The channel for which to terminate the transfers From patchwork Thu Nov 28 10:59:39 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Peter Ujfalusi X-Patchwork-Id: 180418 Delivered-To: patch@linaro.org Received: by 2002:a92:38d5:0:0:0:0:0 with SMTP id g82csp7358360ilf; Thu, 28 Nov 2019 03:01:07 -0800 (PST) X-Google-Smtp-Source: APXvYqxOjHsWJZmw/VmJlo4Wgl2eRNmHpzzt2cN3U0NZV/Jz44TzjUc5+pHG1uz6Y418gNrcIH0d X-Received: by 2002:aa7:d904:: with SMTP id a4mr33005607edr.44.1574938866291; Thu, 28 Nov 2019 03:01:06 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1574938866; cv=none; d=google.com; s=arc-20160816; b=fv1f0vkCXUYIhGwyGIe/GaOTZS0Wy2WEcoSzmFViUftXoqhAd/pWG0NRnUEV7AbHm9 arTbmgFvCFtBgxMhCY4QhLu5k9g8aHNdEA89oIGx1owXLhGpXqhONTKEthD3W7DOvxgQ tmcVGGENwMDemUyOqdp+SzzcFozqM7gZeDd1O9FEUNv2e3Ia0MdrKBZQuoTzPPB03C3v 
From patchwork Thu Nov 28 10:59:39 2019
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 180418
From: Peter Ujfalusi
Subject: [PATCH v6 11/17] dmaengine: ti: New driver for K3 UDMA - split#3: alloc/free chan_resources
Date: Thu, 28 Nov 2019 12:59:39 +0200
Message-ID: <20191128105945.13071-12-peter.ujfalusi@ti.com>
In-Reply-To: <20191128105945.13071-1-peter.ujfalusi@ti.com>
References: <20191128105945.13071-1-peter.ujfalusi@ti.com>
X-Mailing-List: devicetree@vger.kernel.org

Split patch for review containing: channel resource allocation and free functions.

DMA driver for Texas Instruments K3 NAVSS Unified DMA – Peripheral Root Complex (UDMA-P)

The UDMA-P is intended to perform functions similar to (but significantly upgraded from) the packet-oriented DMA used on previous SoC devices. The UDMA-P module supports the transmission and reception of various packet types. The UDMA-P is architected to facilitate the segmentation and reassembly of SoC DMA data structure compliant packets to/from smaller data blocks that are natively compatible with the specific requirements of each connected peripheral. Multiple Tx and Rx channels are provided within the DMA which allow multiple segmentation or reassembly operations to be ongoing. The DMA controller maintains state information for each of the channels, which allows packet segmentation and reassembly operations to be time-division multiplexed between channels in order to share the underlying DMA hardware. An external DMA scheduler is used to control the ordering and rate at which this multiplexing occurs for Transmit operations. The ordering and rate of Receive operations is indirectly controlled by the order in which blocks are pushed into the DMA on the Rx PSI-L interface.

The UDMA-P also supports acting as both a UTC and UDMA-C for its internal channels. Channels in the UDMA-P can be configured to be either Packet-Based or Third-Party channels on a channel-by-channel basis.

The initial driver supports:
- MEM_TO_MEM (TR mode)
- DEV_TO_MEM (Packet / TR mode)
- MEM_TO_DEV (Packet / TR mode)
- Cyclic (Packet / TR mode)
- Metadata for descriptors

Signed-off-by: Peter Ujfalusi
---
 drivers/dma/ti/k3-udma.c | 771 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 771 insertions(+)

--
Peter

Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki

diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
index abd51242e396..0b5d709f6dad 100644
--- a/drivers/dma/ti/k3-udma.c
+++ b/drivers/dma/ti/k3-udma.c
@@ -1048,6 +1048,777 @@ static irqreturn_t udma_udma_irq_handler(int irq, void *data)
 	return IRQ_HANDLED;
 }
 
+static struct udma_rflow *__udma_get_rflow(struct udma_dev *ud, int id)
+{
+	/*
+	 * Attempt to request rflow by ID can be made for any rflow
+	 * if not in use with assumption that caller knows what it's doing.
+ * TI-SCI FW will perform additional permission check ant way, it's + * safe + */ + + if (id < 0 || id >= ud->rflow_cnt) + return ERR_PTR(-ENOENT); + + if (test_bit(id, ud->rflow_in_use)) + return ERR_PTR(-ENOENT); + + /* GP rflow has to be allocated first */ + if (!test_bit(id, ud->rflow_gp_map) && + !test_bit(id, ud->rflow_gp_map_allocated)) + return ERR_PTR(-EINVAL); + + dev_dbg(ud->dev, "get rflow%d\n", id); + set_bit(id, ud->rflow_in_use); + return &ud->rflows[id]; +} + +static void __udma_put_rflow(struct udma_dev *ud, struct udma_rflow *rflow) +{ + if (!test_bit(rflow->id, ud->rflow_in_use)) { + dev_err(ud->dev, "attempt to put unused rflow%d\n", rflow->id); + return; + } + + dev_dbg(ud->dev, "put rflow%d\n", rflow->id); + clear_bit(rflow->id, ud->rflow_in_use); +} + +#define UDMA_RESERVE_RESOURCE(res) \ +static struct udma_##res *__udma_reserve_##res(struct udma_dev *ud, \ + enum udma_tp_level tpl, \ + int id) \ +{ \ + if (id >= 0) { \ + if (test_bit(id, ud->res##_map)) { \ + dev_err(ud->dev, "res##%d is in use\n", id); \ + return ERR_PTR(-ENOENT); \ + } \ + } else { \ + int start; \ + \ + if (tpl >= ud->match_data->tpl_levels) \ + tpl = ud->match_data->tpl_levels - 1; \ + \ + start = ud->match_data->level_start_idx[tpl]; \ + \ + id = find_next_zero_bit(ud->res##_map, ud->res##_cnt, \ + start); \ + if (id == ud->res##_cnt) { \ + return ERR_PTR(-ENOENT); \ + } \ + } \ + \ + set_bit(id, ud->res##_map); \ + return &ud->res##s[id]; \ +} + +UDMA_RESERVE_RESOURCE(tchan); +UDMA_RESERVE_RESOURCE(rchan); + +static int udma_get_tchan(struct udma_chan *uc) +{ + struct udma_dev *ud = uc->ud; + + if (uc->tchan) { + dev_dbg(ud->dev, "chan%d: already have tchan%d allocated\n", + uc->id, uc->tchan->id); + return 0; + } + + uc->tchan = __udma_reserve_tchan(ud, uc->channel_tpl, -1); + if (IS_ERR(uc->tchan)) + return PTR_ERR(uc->tchan); + + return 0; +} + +static int udma_get_rchan(struct udma_chan *uc) +{ + struct udma_dev *ud = uc->ud; + + if (uc->rchan) { + dev_dbg(ud->dev, "chan%d: already have rchan%d allocated\n", + uc->id, uc->rchan->id); + return 0; + } + + uc->rchan = __udma_reserve_rchan(ud, uc->channel_tpl, -1); + if (IS_ERR(uc->rchan)) + return PTR_ERR(uc->rchan); + + return 0; +} + +static int udma_get_chan_pair(struct udma_chan *uc) +{ + struct udma_dev *ud = uc->ud; + const struct udma_match_data *match_data = ud->match_data; + int chan_id, end; + + if ((uc->tchan && uc->rchan) && uc->tchan->id == uc->rchan->id) { + dev_info(ud->dev, "chan%d: already have %d pair allocated\n", + uc->id, uc->tchan->id); + return 0; + } + + if (uc->tchan) { + dev_err(ud->dev, "chan%d: already have tchan%d allocated\n", + uc->id, uc->tchan->id); + return -EBUSY; + } else if (uc->rchan) { + dev_err(ud->dev, "chan%d: already have rchan%d allocated\n", + uc->id, uc->rchan->id); + return -EBUSY; + } + + /* Can be optimized, but let's have it like this for now */ + end = min(ud->tchan_cnt, ud->rchan_cnt); + /* Try to use the highest TPL channel pair for MEM_TO_MEM channels */ + chan_id = match_data->level_start_idx[match_data->tpl_levels - 1]; + for (; chan_id < end; chan_id++) { + if (!test_bit(chan_id, ud->tchan_map) && + !test_bit(chan_id, ud->rchan_map)) + break; + } + + if (chan_id == end) + return -ENOENT; + + set_bit(chan_id, ud->tchan_map); + set_bit(chan_id, ud->rchan_map); + uc->tchan = &ud->tchans[chan_id]; + uc->rchan = &ud->rchans[chan_id]; + + return 0; +} + +static int udma_get_rflow(struct udma_chan *uc, int flow_id) +{ + struct udma_dev *ud = uc->ud; + + if (!uc->rchan) { + dev_err(ud->dev, 
"chan%d: does not have rchan??\n", uc->id); + return -EINVAL; + } + + if (uc->rflow) { + dev_dbg(ud->dev, "chan%d: already have rflow%d allocated\n", + uc->id, uc->rflow->id); + return 0; + } + + uc->rflow = __udma_get_rflow(ud, flow_id); + if (IS_ERR(uc->rflow)) + return PTR_ERR(uc->rflow); + + return 0; +} + +static void udma_put_rchan(struct udma_chan *uc) +{ + struct udma_dev *ud = uc->ud; + + if (uc->rchan) { + dev_dbg(ud->dev, "chan%d: put rchan%d\n", uc->id, + uc->rchan->id); + clear_bit(uc->rchan->id, ud->rchan_map); + uc->rchan = NULL; + } +} + +static void udma_put_tchan(struct udma_chan *uc) +{ + struct udma_dev *ud = uc->ud; + + if (uc->tchan) { + dev_dbg(ud->dev, "chan%d: put tchan%d\n", uc->id, + uc->tchan->id); + clear_bit(uc->tchan->id, ud->tchan_map); + uc->tchan = NULL; + } +} + +static void udma_put_rflow(struct udma_chan *uc) +{ + struct udma_dev *ud = uc->ud; + + if (uc->rflow) { + dev_dbg(ud->dev, "chan%d: put rflow%d\n", uc->id, + uc->rflow->id); + __udma_put_rflow(ud, uc->rflow); + uc->rflow = NULL; + } +} + +static void udma_free_tx_resources(struct udma_chan *uc) +{ + if (!uc->tchan) + return; + + k3_ringacc_ring_free(uc->tchan->t_ring); + k3_ringacc_ring_free(uc->tchan->tc_ring); + uc->tchan->t_ring = NULL; + uc->tchan->tc_ring = NULL; + + udma_put_tchan(uc); +} + +static int udma_alloc_tx_resources(struct udma_chan *uc) +{ + struct k3_ring_cfg ring_cfg; + struct udma_dev *ud = uc->ud; + int ret; + + ret = udma_get_tchan(uc); + if (ret) + return ret; + + uc->tchan->t_ring = k3_ringacc_request_ring(ud->ringacc, + uc->tchan->id, 0); + if (!uc->tchan->t_ring) { + ret = -EBUSY; + goto err_tx_ring; + } + + uc->tchan->tc_ring = k3_ringacc_request_ring(ud->ringacc, -1, 0); + if (!uc->tchan->tc_ring) { + ret = -EBUSY; + goto err_txc_ring; + } + + memset(&ring_cfg, 0, sizeof(ring_cfg)); + ring_cfg.size = K3_UDMA_DEFAULT_RING_SIZE; + ring_cfg.elm_size = K3_RINGACC_RING_ELSIZE_8; + ring_cfg.mode = K3_RINGACC_RING_MODE_MESSAGE; + + ret = k3_ringacc_ring_cfg(uc->tchan->t_ring, &ring_cfg); + ret |= k3_ringacc_ring_cfg(uc->tchan->tc_ring, &ring_cfg); + + if (ret) + goto err_ringcfg; + + return 0; + +err_ringcfg: + k3_ringacc_ring_free(uc->tchan->tc_ring); + uc->tchan->tc_ring = NULL; +err_txc_ring: + k3_ringacc_ring_free(uc->tchan->t_ring); + uc->tchan->t_ring = NULL; +err_tx_ring: + udma_put_tchan(uc); + + return ret; +} + +static void udma_free_rx_resources(struct udma_chan *uc) +{ + if (!uc->rchan) + return; + + if (uc->rflow) { + struct udma_rflow *rflow = uc->rflow; + + k3_ringacc_ring_free(rflow->fd_ring); + k3_ringacc_ring_free(rflow->r_ring); + rflow->fd_ring = NULL; + rflow->r_ring = NULL; + + udma_put_rflow(uc); + } + + udma_put_rchan(uc); +} + +static int udma_alloc_rx_resources(struct udma_chan *uc) +{ + struct udma_dev *ud = uc->ud; + struct k3_ring_cfg ring_cfg; + struct udma_rflow *rflow; + int fd_ring_id; + int ret; + + ret = udma_get_rchan(uc); + if (ret) + return ret; + + /* For MEM_TO_MEM we don't need rflow or rings */ + if (uc->dir == DMA_MEM_TO_MEM) + return 0; + + ret = udma_get_rflow(uc, uc->rchan->id); + if (ret) { + ret = -EBUSY; + goto err_rflow; + } + + rflow = uc->rflow; + fd_ring_id = ud->tchan_cnt + ud->echan_cnt + uc->rchan->id; + rflow->fd_ring = k3_ringacc_request_ring(ud->ringacc, fd_ring_id, 0); + if (!rflow->fd_ring) { + ret = -EBUSY; + goto err_rx_ring; + } + + rflow->r_ring = k3_ringacc_request_ring(ud->ringacc, -1, 0); + if (!rflow->r_ring) { + ret = -EBUSY; + goto err_rxc_ring; + } + + memset(&ring_cfg, 0, sizeof(ring_cfg)); + + if 
(uc->pkt_mode) + ring_cfg.size = SG_MAX_SEGMENTS; + else + ring_cfg.size = K3_UDMA_DEFAULT_RING_SIZE; + + ring_cfg.elm_size = K3_RINGACC_RING_ELSIZE_8; + ring_cfg.mode = K3_RINGACC_RING_MODE_MESSAGE; + + ret = k3_ringacc_ring_cfg(rflow->fd_ring, &ring_cfg); + ring_cfg.size = K3_UDMA_DEFAULT_RING_SIZE; + ret |= k3_ringacc_ring_cfg(rflow->r_ring, &ring_cfg); + + if (ret) + goto err_ringcfg; + + return 0; + +err_ringcfg: + k3_ringacc_ring_free(rflow->r_ring); + rflow->r_ring = NULL; +err_rxc_ring: + k3_ringacc_ring_free(rflow->fd_ring); + rflow->fd_ring = NULL; +err_rx_ring: + udma_put_rflow(uc); +err_rflow: + udma_put_rchan(uc); + + return ret; +} + +#define TISCI_TCHAN_VALID_PARAMS ( \ + TI_SCI_MSG_VALUE_RM_UDMAP_CH_PAUSE_ON_ERR_VALID | \ + TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_FILT_EINFO_VALID | \ + TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_FILT_PSWORDS_VALID | \ + TI_SCI_MSG_VALUE_RM_UDMAP_CH_CHAN_TYPE_VALID | \ + TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_SUPR_TDPKT_VALID | \ + TI_SCI_MSG_VALUE_RM_UDMAP_CH_FETCH_SIZE_VALID | \ + TI_SCI_MSG_VALUE_RM_UDMAP_CH_CQ_QNUM_VALID) + +#define TISCI_RCHAN_VALID_PARAMS ( \ + TI_SCI_MSG_VALUE_RM_UDMAP_CH_PAUSE_ON_ERR_VALID | \ + TI_SCI_MSG_VALUE_RM_UDMAP_CH_FETCH_SIZE_VALID | \ + TI_SCI_MSG_VALUE_RM_UDMAP_CH_CQ_QNUM_VALID | \ + TI_SCI_MSG_VALUE_RM_UDMAP_CH_CHAN_TYPE_VALID | \ + TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_IGNORE_SHORT_VALID | \ + TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_IGNORE_LONG_VALID | \ + TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_FLOWID_START_VALID | \ + TI_SCI_MSG_VALUE_RM_UDMAP_CH_RX_FLOWID_CNT_VALID) + +static int udma_tisci_m2m_channel_config(struct udma_chan *uc) +{ + struct udma_dev *ud = uc->ud; + struct udma_tisci_rm *tisci_rm = &ud->tisci_rm; + const struct ti_sci_rm_udmap_ops *tisci_ops = tisci_rm->tisci_udmap_ops; + struct udma_tchan *tchan = uc->tchan; + struct udma_rchan *rchan = uc->rchan; + int ret = 0; + + /* Non synchronized - mem to mem type of transfer */ + int tc_ring = k3_ringacc_get_ring_id(tchan->tc_ring); + struct ti_sci_msg_rm_udmap_tx_ch_cfg req_tx = { 0 }; + struct ti_sci_msg_rm_udmap_rx_ch_cfg req_rx = { 0 }; + + req_tx.valid_params = TISCI_TCHAN_VALID_PARAMS; + req_tx.nav_id = tisci_rm->tisci_dev_id; + req_tx.index = tchan->id; + req_tx.tx_chan_type = TI_SCI_RM_UDMAP_CHAN_TYPE_3RDP_BCOPY_PBRR; + req_tx.tx_fetch_size = sizeof(struct cppi5_desc_hdr_t) >> 2; + req_tx.txcq_qnum = tc_ring; + + ret = tisci_ops->tx_ch_cfg(tisci_rm->tisci, &req_tx); + if (ret) { + dev_err(ud->dev, "tchan%d cfg failed %d\n", tchan->id, ret); + return ret; + } + + req_rx.valid_params = TISCI_RCHAN_VALID_PARAMS; + req_rx.nav_id = tisci_rm->tisci_dev_id; + req_rx.index = rchan->id; + req_rx.rx_fetch_size = sizeof(struct cppi5_desc_hdr_t) >> 2; + req_rx.rxcq_qnum = tc_ring; + req_rx.rx_chan_type = TI_SCI_RM_UDMAP_CHAN_TYPE_3RDP_BCOPY_PBRR; + + ret = tisci_ops->rx_ch_cfg(tisci_rm->tisci, &req_rx); + if (ret) + dev_err(ud->dev, "rchan%d alloc failed %d\n", rchan->id, ret); + + return ret; +} + +static int udma_tisci_tx_channel_config(struct udma_chan *uc) +{ + struct udma_dev *ud = uc->ud; + struct udma_tisci_rm *tisci_rm = &ud->tisci_rm; + const struct ti_sci_rm_udmap_ops *tisci_ops = tisci_rm->tisci_udmap_ops; + struct udma_tchan *tchan = uc->tchan; + int tc_ring = k3_ringacc_get_ring_id(tchan->tc_ring); + struct ti_sci_msg_rm_udmap_tx_ch_cfg req_tx = { 0 }; + u32 mode, fetch_size; + int ret = 0; + + if (uc->pkt_mode) { + mode = TI_SCI_RM_UDMAP_CHAN_TYPE_PKT_PBRR; + fetch_size = cppi5_hdesc_calc_size(uc->needs_epib, uc->psd_size, + 0); + } else { + mode = 
TI_SCI_RM_UDMAP_CHAN_TYPE_3RDP_PBRR; + fetch_size = sizeof(struct cppi5_desc_hdr_t); + } + + req_tx.valid_params = TISCI_TCHAN_VALID_PARAMS; + req_tx.nav_id = tisci_rm->tisci_dev_id; + req_tx.index = tchan->id; + req_tx.tx_chan_type = mode; + req_tx.tx_supr_tdpkt = uc->notdpkt; + req_tx.tx_fetch_size = fetch_size >> 2; + req_tx.txcq_qnum = tc_ring; + if (uc->ep_type == PSIL_EP_PDMA_XY) { + /* wait for peer to complete the teardown for PDMAs */ + req_tx.valid_params |= + TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_TDTYPE_VALID; + req_tx.tx_tdtype = 1; + } + + ret = tisci_ops->tx_ch_cfg(tisci_rm->tisci, &req_tx); + if (ret) + dev_err(ud->dev, "tchan%d cfg failed %d\n", tchan->id, ret); + + return ret; +} + +static int udma_tisci_rx_channel_config(struct udma_chan *uc) +{ + struct udma_dev *ud = uc->ud; + struct udma_tisci_rm *tisci_rm = &ud->tisci_rm; + const struct ti_sci_rm_udmap_ops *tisci_ops = tisci_rm->tisci_udmap_ops; + struct udma_rchan *rchan = uc->rchan; + int fd_ring = k3_ringacc_get_ring_id(uc->rflow->fd_ring); + int rx_ring = k3_ringacc_get_ring_id(uc->rflow->r_ring); + struct ti_sci_msg_rm_udmap_rx_ch_cfg req_rx = { 0 }; + struct ti_sci_msg_rm_udmap_flow_cfg flow_req = { 0 }; + u32 mode, fetch_size; + int ret = 0; + + if (uc->pkt_mode) { + mode = TI_SCI_RM_UDMAP_CHAN_TYPE_PKT_PBRR; + fetch_size = cppi5_hdesc_calc_size(uc->needs_epib, + uc->psd_size, 0); + } else { + mode = TI_SCI_RM_UDMAP_CHAN_TYPE_3RDP_PBRR; + fetch_size = sizeof(struct cppi5_desc_hdr_t); + } + + req_rx.valid_params = TISCI_RCHAN_VALID_PARAMS; + req_rx.nav_id = tisci_rm->tisci_dev_id; + req_rx.index = rchan->id; + req_rx.rx_fetch_size = fetch_size >> 2; + req_rx.rxcq_qnum = rx_ring; + req_rx.rx_chan_type = mode; + + ret = tisci_ops->rx_ch_cfg(tisci_rm->tisci, &req_rx); + if (ret) { + dev_err(ud->dev, "rchan%d cfg failed %d\n", rchan->id, ret); + return ret; + } + + flow_req.valid_params = + TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_EINFO_PRESENT_VALID | + TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_PSINFO_PRESENT_VALID | + TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_ERROR_HANDLING_VALID | + TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DESC_TYPE_VALID | + TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DEST_QNUM_VALID | + TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_SRC_TAG_HI_SEL_VALID | + TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_SRC_TAG_LO_SEL_VALID | + TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DEST_TAG_HI_SEL_VALID | + TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_DEST_TAG_LO_SEL_VALID | + TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_FDQ0_SZ0_QNUM_VALID | + TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_FDQ1_QNUM_VALID | + TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_FDQ2_QNUM_VALID | + TI_SCI_MSG_VALUE_RM_UDMAP_FLOW_FDQ3_QNUM_VALID; + + flow_req.nav_id = tisci_rm->tisci_dev_id; + flow_req.flow_index = rchan->id; + + if (uc->needs_epib) + flow_req.rx_einfo_present = 1; + else + flow_req.rx_einfo_present = 0; + if (uc->psd_size) + flow_req.rx_psinfo_present = 1; + else + flow_req.rx_psinfo_present = 0; + flow_req.rx_error_handling = 1; + flow_req.rx_dest_qnum = rx_ring; + flow_req.rx_src_tag_hi_sel = UDMA_RFLOW_SRCTAG_NONE; + flow_req.rx_src_tag_lo_sel = UDMA_RFLOW_SRCTAG_SRC_TAG; + flow_req.rx_dest_tag_hi_sel = UDMA_RFLOW_DSTTAG_DST_TAG_HI; + flow_req.rx_dest_tag_lo_sel = UDMA_RFLOW_DSTTAG_DST_TAG_LO; + flow_req.rx_fdq0_sz0_qnum = fd_ring; + flow_req.rx_fdq1_qnum = fd_ring; + flow_req.rx_fdq2_qnum = fd_ring; + flow_req.rx_fdq3_qnum = fd_ring; + + ret = tisci_ops->rx_flow_cfg(tisci_rm->tisci, &flow_req); + + if (ret) + dev_err(ud->dev, "flow%d config failed: %d\n", rchan->id, ret); + + return 0; +} + +static int udma_alloc_chan_resources(struct dma_chan *chan) +{ + struct 
udma_chan *uc = to_udma_chan(chan); + struct udma_dev *ud = to_udma_dev(chan->device); + const struct udma_match_data *match_data = ud->match_data; + struct k3_ring *irq_ring; + u32 irq_udma_idx; + int ret; + + if (uc->pkt_mode || uc->dir == DMA_MEM_TO_MEM) { + uc->use_dma_pool = true; + /* in case of MEM_TO_MEM we have maximum of two TRs */ + if (uc->dir == DMA_MEM_TO_MEM) { + uc->hdesc_size = cppi5_trdesc_calc_size( + sizeof(struct cppi5_tr_type15_t), 2); + uc->pkt_mode = false; + } + } + + if (uc->use_dma_pool) { + uc->hdesc_pool = dma_pool_create(uc->name, ud->ddev.dev, + uc->hdesc_size, ud->desc_align, + 0); + if (!uc->hdesc_pool) { + dev_err(ud->ddev.dev, + "Descriptor pool allocation failed\n"); + uc->use_dma_pool = false; + return -ENOMEM; + } + } + + /* + * Make sure that the completion is in a known state: + * No teardown, the channel is idle + */ + reinit_completion(&uc->teardown_completed); + complete_all(&uc->teardown_completed); + uc->state = UDMA_CHAN_IS_IDLE; + + switch (uc->dir) { + case DMA_MEM_TO_MEM: + /* Non synchronized - mem to mem type of transfer */ + dev_dbg(uc->ud->dev, "%s: chan%d as MEM-to-MEM\n", __func__, + uc->id); + + ret = udma_get_chan_pair(uc); + if (ret) + return ret; + + ret = udma_alloc_tx_resources(uc); + if (ret) + return ret; + + ret = udma_alloc_rx_resources(uc); + if (ret) { + udma_free_tx_resources(uc); + return ret; + } + + uc->src_thread = ud->psil_base + uc->tchan->id; + uc->dst_thread = (ud->psil_base + uc->rchan->id) | + K3_PSIL_DST_THREAD_ID_OFFSET; + + irq_ring = uc->tchan->tc_ring; + irq_udma_idx = uc->tchan->id; + + ret = udma_tisci_m2m_channel_config(uc); + break; + case DMA_MEM_TO_DEV: + /* Slave transfer synchronized - mem to dev (TX) trasnfer */ + dev_dbg(uc->ud->dev, "%s: chan%d as MEM-to-DEV\n", __func__, + uc->id); + + ret = udma_alloc_tx_resources(uc); + if (ret) { + uc->remote_thread_id = -1; + return ret; + } + + uc->src_thread = ud->psil_base + uc->tchan->id; + uc->dst_thread = uc->remote_thread_id; + uc->dst_thread |= K3_PSIL_DST_THREAD_ID_OFFSET; + + irq_ring = uc->tchan->tc_ring; + irq_udma_idx = uc->tchan->id; + + ret = udma_tisci_tx_channel_config(uc); + break; + case DMA_DEV_TO_MEM: + /* Slave transfer synchronized - dev to mem (RX) trasnfer */ + dev_dbg(uc->ud->dev, "%s: chan%d as DEV-to-MEM\n", __func__, + uc->id); + + ret = udma_alloc_rx_resources(uc); + if (ret) { + uc->remote_thread_id = -1; + return ret; + } + + uc->src_thread = uc->remote_thread_id; + uc->dst_thread = (ud->psil_base + uc->rchan->id) | + K3_PSIL_DST_THREAD_ID_OFFSET; + + irq_ring = uc->rflow->r_ring; + irq_udma_idx = match_data->rchan_oes_offset + uc->rchan->id; + + ret = udma_tisci_rx_channel_config(uc); + break; + default: + /* Can not happen */ + dev_err(uc->ud->dev, "%s: chan%d invalid direction (%u)\n", + __func__, uc->id, uc->dir); + return -EINVAL; + } + + /* check if the channel configuration was successful */ + if (ret) + goto err_res_free; + + if (udma_is_chan_running(uc)) { + dev_warn(ud->dev, "chan%d: is running!\n", uc->id); + udma_stop(uc); + if (udma_is_chan_running(uc)) { + dev_err(ud->dev, "chan%d: won't stop!\n", uc->id); + goto err_res_free; + } + } + + /* PSI-L pairing */ + ret = navss_psil_pair(ud, uc->src_thread, uc->dst_thread); + if (ret) { + dev_err(ud->dev, "PSI-L pairing failed: 0x%04x -> 0x%04x\n", + uc->src_thread, uc->dst_thread); + goto err_res_free; + } + + uc->psil_paired = true; + + uc->irq_num_ring = k3_ringacc_get_ring_irq_num(irq_ring); + if (uc->irq_num_ring <= 0) { + dev_err(ud->dev, "Failed to get ring irq 
(index: %u)\n", + k3_ringacc_get_ring_id(irq_ring)); + ret = -EINVAL; + goto err_psi_free; + } + + ret = request_irq(uc->irq_num_ring, udma_ring_irq_handler, + IRQF_TRIGGER_HIGH, uc->name, uc); + if (ret) { + dev_err(ud->dev, "chan%d: ring irq request failed\n", uc->id); + goto err_irq_free; + } + + /* Event from UDMA (TR events) only needed for slave TR mode channels */ + if (is_slave_direction(uc->dir) && !uc->pkt_mode) { + uc->irq_num_udma = ti_sci_inta_msi_get_virq(ud->dev, + irq_udma_idx); + if (uc->irq_num_udma <= 0) { + dev_err(ud->dev, "Failed to get udma irq (index: %u)\n", + irq_udma_idx); + free_irq(uc->irq_num_ring, uc); + ret = -EINVAL; + goto err_irq_free; + } + + ret = request_irq(uc->irq_num_udma, udma_udma_irq_handler, 0, + uc->name, uc); + if (ret) { + dev_err(ud->dev, "chan%d: UDMA irq request failed\n", + uc->id); + free_irq(uc->irq_num_ring, uc); + goto err_irq_free; + } + } else { + uc->irq_num_udma = 0; + } + + udma_reset_rings(uc); + + return 0; + +err_irq_free: + uc->irq_num_ring = 0; + uc->irq_num_udma = 0; +err_psi_free: + navss_psil_unpair(ud, uc->src_thread, uc->dst_thread); + uc->psil_paired = false; +err_res_free: + udma_free_tx_resources(uc); + udma_free_rx_resources(uc); + + udma_reset_uchan(uc); + + if (uc->use_dma_pool) { + dma_pool_destroy(uc->hdesc_pool); + uc->use_dma_pool = false; + } + + return ret; +} + +static void udma_free_chan_resources(struct dma_chan *chan) +{ + struct udma_chan *uc = to_udma_chan(chan); + struct udma_dev *ud = to_udma_dev(chan->device); + + udma_terminate_all(chan); + + if (uc->irq_num_ring > 0) { + free_irq(uc->irq_num_ring, uc); + + uc->irq_num_ring = 0; + } + if (uc->irq_num_udma > 0) { + free_irq(uc->irq_num_udma, uc); + + uc->irq_num_udma = 0; + } + + /* Release PSI-L pairing */ + if (uc->psil_paired) { + navss_psil_unpair(ud, uc->src_thread, uc->dst_thread); + uc->psil_paired = false; + } + + vchan_free_chan_resources(&uc->vc); + tasklet_kill(&uc->vc.task); + + udma_free_tx_resources(uc); + udma_free_rx_resources(uc); + udma_reset_uchan(uc); + + if (uc->use_dma_pool) { + dma_pool_destroy(uc->hdesc_pool); + uc->use_dma_pool = false; + } +} + static struct platform_driver udma_driver; static bool udma_dma_filter_fn(struct dma_chan *chan, void *param) From patchwork Thu Nov 28 10:59:41 2019 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Peter Ujfalusi X-Patchwork-Id: 180414 Delivered-To: patch@linaro.org Received: by 2002:a92:38d5:0:0:0:0:0 with SMTP id g82csp7357927ilf; Thu, 28 Nov 2019 03:00:48 -0800 (PST) X-Google-Smtp-Source: APXvYqyObuSFNBxSr2e6Hau+yAkitHsWRCR3qLb86pCkD47OHlBAGbg3h+CNRIh76zZrTOlUU1F+ X-Received: by 2002:a17:906:6dcf:: with SMTP id j15mr54829008ejt.104.1574938847491; Thu, 28 Nov 2019 03:00:47 -0800 (PST) ARC-Seal: i=1; a=rsa-sha256; t=1574938847; cv=none; d=google.com; s=arc-20160816; b=izELs5GWcMQh6eVy2Y6Uuu25rRoVND/1g3LicK60gTwP7RrhqOYPAQ/4n9Z92SFOYi 2CQGCtRvxi2ZZunY6UTiYboB8Ne20aEO5rhYlzJJxYqlqwixa5YZCPn/38epA9DP3y2v RaQgX8ASUX2/yGtLoItXFUyLficKpWeAHUx3kjXEAIHwlRX+eDXN6SWu4eNNjLr0bi0H AHDnnpWBxCb1WWIJOkFkyZC/zfWfIigyFYwTdqWTySoms0Z6ZR6kk1Rd6z1m9DOx6Be2 F3/ag9ThIwVId20Us7Zh9tOYvhhcWsKZZQGMlG81pgT/bJ96mUGpygAQFMBioplks4iR 7t1g== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=google.com; s=arc-20160816; h=list-id:precedence:sender:content-transfer-encoding:mime-version :references:in-reply-to:message-id:date:subject:cc:to:from :dkim-signature; bh=gKuEiUsU4DVYNllS/pQWCqSbYp6qHW80p/cttaU5HZE=; 
From: Peter Ujfalusi
Subject: [PATCH v6 13/17] dmaengine: ti: New driver for K3 UDMA - split#5: dma_device callbacks 2
Date: Thu, 28 Nov 2019 12:59:41 +0200
Message-ID: <20191128105945.13071-14-peter.ujfalusi@ti.com>
In-Reply-To: <20191128105945.13071-1-peter.ujfalusi@ti.com>
References: <20191128105945.13071-1-peter.ujfalusi@ti.com>
X-Mailing-List: devicetree@vger.kernel.org
Split patch for review containing: device_prep_slave_sg and device_prep_dma_cyclic implementation supporting packet and TR channels.

DMA driver for Texas Instruments K3 NAVSS Unified DMA – Peripheral Root Complex (UDMA-P)

The UDMA-P is intended to perform functions similar to (but significantly upgraded from) the packet-oriented DMA used on previous SoC devices. The UDMA-P module supports the transmission and reception of various packet types. The UDMA-P is architected to facilitate the segmentation and reassembly of SoC DMA data structure compliant packets to/from smaller data blocks that are natively compatible with the specific requirements of each connected peripheral. Multiple Tx and Rx channels are provided within the DMA which allow multiple segmentation or reassembly operations to be ongoing. The DMA controller maintains state information for each of the channels, which allows packet segmentation and reassembly operations to be time-division multiplexed between channels in order to share the underlying DMA hardware. An external DMA scheduler is used to control the ordering and rate at which this multiplexing occurs for Transmit operations. The ordering and rate of Receive operations is indirectly controlled by the order in which blocks are pushed into the DMA on the Rx PSI-L interface.

The UDMA-P also supports acting as both a UTC and UDMA-C for its internal channels. Channels in the UDMA-P can be configured to be either Packet-Based or Third-Party channels on a channel-by-channel basis.

The initial driver supports:
- MEM_TO_MEM (TR mode)
- DEV_TO_MEM (Packet / TR mode)
- MEM_TO_DEV (Packet / TR mode)
- Cyclic (Packet / TR mode)
- Metadata for descriptors

Signed-off-by: Peter Ujfalusi
---
 drivers/dma/ti/k3-udma.c | 702 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 702 insertions(+)

--
Peter

Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki. Y-tunnus/Business ID: 0615521-4.
Kotipaikka/Domicile: Helsinki diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c index 9118db431438..0de79eacb278 100644 --- a/drivers/dma/ti/k3-udma.c +++ b/drivers/dma/ti/k3-udma.c @@ -1792,6 +1792,708 @@ static int udma_slave_config(struct dma_chan *chan, return 0; } +static struct udma_desc *udma_alloc_tr_desc(struct udma_chan *uc, + size_t tr_size, int tr_count, + enum dma_transfer_direction dir) +{ + struct udma_hwdesc *hwdesc; + struct cppi5_desc_hdr_t *tr_desc; + struct udma_desc *d; + u32 reload_count = 0; + u32 ring_id; + + switch (tr_size) { + case 16: + case 32: + case 64: + case 128: + break; + default: + dev_err(uc->ud->dev, "Unsupported TR size of %zu\n", tr_size); + return NULL; + } + + /* We have only one descriptor containing multiple TRs */ + d = kzalloc(sizeof(*d) + sizeof(d->hwdesc[0]), GFP_ATOMIC); + if (!d) + return NULL; + + d->sglen = tr_count; + + d->hwdesc_count = 1; + hwdesc = &d->hwdesc[0]; + + /* Allocate memory for DMA ring descriptor */ + if (uc->use_dma_pool) { + hwdesc->cppi5_desc_size = uc->hdesc_size; + hwdesc->cppi5_desc_vaddr = dma_pool_zalloc(uc->hdesc_pool, + GFP_ATOMIC, + &hwdesc->cppi5_desc_paddr); + } else { + hwdesc->cppi5_desc_size = cppi5_trdesc_calc_size(tr_size, + tr_count); + hwdesc->cppi5_desc_size = ALIGN(hwdesc->cppi5_desc_size, + uc->ud->desc_align); + hwdesc->cppi5_desc_vaddr = dma_alloc_coherent(uc->ud->dev, + hwdesc->cppi5_desc_size, + &hwdesc->cppi5_desc_paddr, + GFP_ATOMIC); + } + + if (!hwdesc->cppi5_desc_vaddr) { + kfree(d); + return NULL; + } + + /* Start of the TR req records */ + hwdesc->tr_req_base = hwdesc->cppi5_desc_vaddr + tr_size; + /* Start address of the TR response array */ + hwdesc->tr_resp_base = hwdesc->tr_req_base + tr_size * tr_count; + + tr_desc = hwdesc->cppi5_desc_vaddr; + + if (uc->cyclic) + reload_count = CPPI5_INFO0_TRDESC_RLDCNT_INFINITE; + + if (dir == DMA_DEV_TO_MEM) + ring_id = k3_ringacc_get_ring_id(uc->rflow->r_ring); + else + ring_id = k3_ringacc_get_ring_id(uc->tchan->tc_ring); + + cppi5_trdesc_init(tr_desc, tr_count, tr_size, 0, reload_count); + cppi5_desc_set_pktids(tr_desc, uc->id, 0x3fff); + cppi5_desc_set_retpolicy(tr_desc, 0, ring_id); + + return d; +} + +static struct udma_desc * +udma_prep_slave_sg_tr(struct udma_chan *uc, struct scatterlist *sgl, + unsigned int sglen, enum dma_transfer_direction dir, + unsigned long tx_flags, void *context) +{ + enum dma_slave_buswidth dev_width; + struct scatterlist *sgent; + struct udma_desc *d; + size_t tr_size; + struct cppi5_tr_type1_t *tr_req = NULL; + unsigned int i; + u32 burst; + + if (dir == DMA_DEV_TO_MEM) { + dev_width = uc->cfg.src_addr_width; + burst = uc->cfg.src_maxburst; + } else if (dir == DMA_MEM_TO_DEV) { + dev_width = uc->cfg.dst_addr_width; + burst = uc->cfg.dst_maxburst; + } else { + dev_err(uc->ud->dev, "%s: bad direction?\n", __func__); + return NULL; + } + + if (!burst) + burst = 1; + + /* Now allocate and setup the descriptor. 
*/ + tr_size = sizeof(struct cppi5_tr_type1_t); + d = udma_alloc_tr_desc(uc, tr_size, sglen, dir); + if (!d) + return NULL; + + d->sglen = sglen; + + tr_req = (struct cppi5_tr_type1_t *)d->hwdesc[0].tr_req_base; + for_each_sg(sgl, sgent, sglen, i) { + d->residue += sg_dma_len(sgent); + + cppi5_tr_init(&tr_req[i].flags, CPPI5_TR_TYPE1, false, false, + CPPI5_TR_EVENT_SIZE_COMPLETION, 0); + cppi5_tr_csf_set(&tr_req[i].flags, CPPI5_TR_CSF_SUPR_EVT); + + tr_req[i].addr = sg_dma_address(sgent); + tr_req[i].icnt0 = burst * dev_width; + tr_req[i].dim1 = burst * dev_width; + tr_req[i].icnt1 = sg_dma_len(sgent) / tr_req[i].icnt0; + } + + cppi5_tr_csf_set(&tr_req[i - 1].flags, CPPI5_TR_CSF_EOP); + + return d; +} + +static int udma_configure_statictr(struct udma_chan *uc, struct udma_desc *d, + enum dma_slave_buswidth dev_width, + u16 elcnt) +{ + if (uc->ep_type != PSIL_EP_PDMA_XY) + return 0; + + /* Bus width translates to the element size (ES) */ + switch (dev_width) { + case DMA_SLAVE_BUSWIDTH_1_BYTE: + d->static_tr.elsize = 0; + break; + case DMA_SLAVE_BUSWIDTH_2_BYTES: + d->static_tr.elsize = 1; + break; + case DMA_SLAVE_BUSWIDTH_3_BYTES: + d->static_tr.elsize = 2; + break; + case DMA_SLAVE_BUSWIDTH_4_BYTES: + d->static_tr.elsize = 3; + break; + case DMA_SLAVE_BUSWIDTH_8_BYTES: + d->static_tr.elsize = 4; + break; + default: /* not reached */ + return -EINVAL; + } + + d->static_tr.elcnt = elcnt; + + /* + * PDMA must to close the packet when the channel is in packet mode. + * For TR mode when the channel is not cyclic we also need PDMA to close + * the packet otherwise the transfer will stall because PDMA holds on + * the data it has received from the peripheral. + */ + if (uc->pkt_mode || !uc->cyclic) { + unsigned int div = dev_width * elcnt; + + if (uc->cyclic) + d->static_tr.bstcnt = d->residue / d->sglen / div; + else + d->static_tr.bstcnt = d->residue / div; + + if (uc->dir == DMA_DEV_TO_MEM && + d->static_tr.bstcnt > uc->ud->match_data->statictr_z_mask) + return -EINVAL; + } else { + d->static_tr.bstcnt = 0; + } + + return 0; +} + +static struct udma_desc * +udma_prep_slave_sg_pkt(struct udma_chan *uc, struct scatterlist *sgl, + unsigned int sglen, enum dma_transfer_direction dir, + unsigned long tx_flags, void *context) +{ + struct scatterlist *sgent; + struct cppi5_host_desc_t *h_desc = NULL; + struct udma_desc *d; + u32 ring_id; + unsigned int i; + + d = kzalloc(sizeof(*d) + sglen * sizeof(d->hwdesc[0]), GFP_ATOMIC); + if (!d) + return NULL; + + d->sglen = sglen; + d->hwdesc_count = sglen; + + if (dir == DMA_DEV_TO_MEM) + ring_id = k3_ringacc_get_ring_id(uc->rflow->r_ring); + else + ring_id = k3_ringacc_get_ring_id(uc->tchan->tc_ring); + + for_each_sg(sgl, sgent, sglen, i) { + struct udma_hwdesc *hwdesc = &d->hwdesc[i]; + dma_addr_t sg_addr = sg_dma_address(sgent); + struct cppi5_host_desc_t *desc; + size_t sg_len = sg_dma_len(sgent); + + hwdesc->cppi5_desc_vaddr = dma_pool_zalloc(uc->hdesc_pool, + GFP_ATOMIC, + &hwdesc->cppi5_desc_paddr); + if (!hwdesc->cppi5_desc_vaddr) { + dev_err(uc->ud->dev, + "descriptor%d allocation failed\n", i); + + udma_free_hwdesc(uc, d); + kfree(d); + return NULL; + } + + d->residue += sg_len; + hwdesc->cppi5_desc_size = uc->hdesc_size; + desc = hwdesc->cppi5_desc_vaddr; + + if (i == 0) { + cppi5_hdesc_init(desc, 0, 0); + /* Flow and Packed ID */ + cppi5_desc_set_pktids(&desc->hdr, uc->id, 0x3fff); + cppi5_desc_set_retpolicy(&desc->hdr, 0, ring_id); + } else { + cppi5_hdesc_reset_hbdesc(desc); + cppi5_desc_set_retpolicy(&desc->hdr, 0, 0xffff); + } + + /* attach 
the sg buffer to the descriptor */ + cppi5_hdesc_attach_buf(desc, sg_addr, sg_len, sg_addr, sg_len); + + /* Attach link as host buffer descriptor */ + if (h_desc) + cppi5_hdesc_link_hbdesc(h_desc, + hwdesc->cppi5_desc_paddr); + + if (dir == DMA_MEM_TO_DEV) + h_desc = desc; + } + + if (d->residue >= SZ_4M) { + dev_err(uc->ud->dev, + "%s: Transfer size %u is over the supported 4M range\n", + __func__, d->residue); + udma_free_hwdesc(uc, d); + kfree(d); + return NULL; + } + + h_desc = d->hwdesc[0].cppi5_desc_vaddr; + cppi5_hdesc_set_pktlen(h_desc, d->residue); + + return d; +} + +static int udma_attach_metadata(struct dma_async_tx_descriptor *desc, + void *data, size_t len) +{ + struct udma_desc *d = to_udma_desc(desc); + struct udma_chan *uc = to_udma_chan(desc->chan); + struct cppi5_host_desc_t *h_desc; + u32 psd_size = len; + u32 flags = 0; + + if (!uc->pkt_mode || !uc->metadata_size) + return -ENOTSUPP; + + if (!data || len > uc->metadata_size) + return -EINVAL; + + if (uc->needs_epib && len < CPPI5_INFO0_HDESC_EPIB_SIZE) + return -EINVAL; + + h_desc = d->hwdesc[0].cppi5_desc_vaddr; + if (d->dir == DMA_MEM_TO_DEV) + memcpy(h_desc->epib, data, len); + + if (uc->needs_epib) + psd_size -= CPPI5_INFO0_HDESC_EPIB_SIZE; + + d->metadata = data; + d->metadata_size = len; + if (uc->needs_epib) + flags |= CPPI5_INFO0_HDESC_EPIB_PRESENT; + + cppi5_hdesc_update_flags(h_desc, flags); + cppi5_hdesc_update_psdata_size(h_desc, psd_size); + + return 0; +} + +static void *udma_get_metadata_ptr(struct dma_async_tx_descriptor *desc, + size_t *payload_len, size_t *max_len) +{ + struct udma_desc *d = to_udma_desc(desc); + struct udma_chan *uc = to_udma_chan(desc->chan); + struct cppi5_host_desc_t *h_desc; + + if (!uc->pkt_mode || !uc->metadata_size) + return ERR_PTR(-ENOTSUPP); + + h_desc = d->hwdesc[0].cppi5_desc_vaddr; + + *max_len = uc->metadata_size; + + *payload_len = cppi5_hdesc_epib_present(&h_desc->hdr) ? 
+		       CPPI5_INFO0_HDESC_EPIB_SIZE : 0;
+	*payload_len += cppi5_hdesc_get_psdata_size(h_desc);
+
+	return h_desc->epib;
+}
+
+static int udma_set_metadata_len(struct dma_async_tx_descriptor *desc,
+				 size_t payload_len)
+{
+	struct udma_desc *d = to_udma_desc(desc);
+	struct udma_chan *uc = to_udma_chan(desc->chan);
+	struct cppi5_host_desc_t *h_desc;
+	u32 psd_size = payload_len;
+	u32 flags = 0;
+
+	if (!uc->pkt_mode || !uc->metadata_size)
+		return -ENOTSUPP;
+
+	if (payload_len > uc->metadata_size)
+		return -EINVAL;
+
+	if (uc->needs_epib && payload_len < CPPI5_INFO0_HDESC_EPIB_SIZE)
+		return -EINVAL;
+
+	h_desc = d->hwdesc[0].cppi5_desc_vaddr;
+
+	if (uc->needs_epib) {
+		psd_size -= CPPI5_INFO0_HDESC_EPIB_SIZE;
+		flags |= CPPI5_INFO0_HDESC_EPIB_PRESENT;
+	}
+
+	cppi5_hdesc_update_flags(h_desc, flags);
+	cppi5_hdesc_update_psdata_size(h_desc, psd_size);
+
+	return 0;
+}
+
+static struct dma_descriptor_metadata_ops metadata_ops = {
+	.attach = udma_attach_metadata,
+	.get_ptr = udma_get_metadata_ptr,
+	.set_len = udma_set_metadata_len,
+};
+
+static struct dma_async_tx_descriptor *
+udma_prep_slave_sg(struct dma_chan *chan, struct scatterlist *sgl,
+		   unsigned int sglen, enum dma_transfer_direction dir,
+		   unsigned long tx_flags, void *context)
+{
+	struct udma_chan *uc = to_udma_chan(chan);
+	enum dma_slave_buswidth dev_width;
+	struct udma_desc *d;
+	u32 burst;
+
+	if (dir != uc->dir) {
+		dev_err(chan->device->dev,
+			"%s: chan%d is for %s, not supporting %s\n",
+			__func__, uc->id, udma_get_dir_text(uc->dir),
+			udma_get_dir_text(dir));
+		return NULL;
+	}
+
+	if (dir == DMA_DEV_TO_MEM) {
+		dev_width = uc->cfg.src_addr_width;
+		burst = uc->cfg.src_maxburst;
+	} else if (dir == DMA_MEM_TO_DEV) {
+		dev_width = uc->cfg.dst_addr_width;
+		burst = uc->cfg.dst_maxburst;
+	} else {
+		dev_err(chan->device->dev, "%s: bad direction?\n", __func__);
+		return NULL;
+	}
+
+	if (!burst)
+		burst = 1;
+
+	if (uc->pkt_mode)
+		d = udma_prep_slave_sg_pkt(uc, sgl, sglen, dir, tx_flags,
+					   context);
+	else
+		d = udma_prep_slave_sg_tr(uc, sgl, sglen, dir, tx_flags,
+					  context);
+
+	if (!d)
+		return NULL;
+
+	d->dir = dir;
+	d->desc_idx = 0;
+	d->tr_idx = 0;
+
+	/* static TR for remote PDMA */
+	if (udma_configure_statictr(uc, d, dev_width, burst)) {
+		dev_err(uc->ud->dev,
+			"%s: StaticTR Z is limited to maximum 4095 (%u)\n",
+			__func__, d->static_tr.bstcnt);
+
+		udma_free_hwdesc(uc, d);
+		kfree(d);
+		return NULL;
+	}
+
+	if (uc->metadata_size)
+		d->vd.tx.metadata_ops = &metadata_ops;
+
+	return vchan_tx_prep(&uc->vc, &d->vd, tx_flags);
+}
+
+static struct udma_desc *
+udma_prep_dma_cyclic_tr(struct udma_chan *uc, dma_addr_t buf_addr,
+			size_t buf_len, size_t period_len,
+			enum dma_transfer_direction dir, unsigned long flags)
+{
+	enum dma_slave_buswidth dev_width;
+	struct udma_desc *d;
+	size_t tr_size;
+	struct cppi5_tr_type1_t *tr_req;
+	unsigned int i;
+	unsigned int periods = buf_len / period_len;
+	u32 burst;
+
+	if (dir == DMA_DEV_TO_MEM) {
+		dev_width = uc->cfg.src_addr_width;
+		burst = uc->cfg.src_maxburst;
+	} else if (dir == DMA_MEM_TO_DEV) {
+		dev_width = uc->cfg.dst_addr_width;
+		burst = uc->cfg.dst_maxburst;
+	} else {
+		dev_err(uc->ud->dev, "%s: bad direction?\n", __func__);
+		return NULL;
+	}
+
+	if (!burst)
+		burst = 1;
+
+	/* Now allocate and setup the descriptor. */
+	tr_size = sizeof(struct cppi5_tr_type1_t);
+	d = udma_alloc_tr_desc(uc, tr_size, periods, dir);
+	if (!d)
+		return NULL;
+
+	tr_req = (struct cppi5_tr_type1_t *)d->hwdesc[0].tr_req_base;
+	for (i = 0; i < periods; i++) {
+		cppi5_tr_init(&tr_req[i].flags, CPPI5_TR_TYPE1, false, false,
+			      CPPI5_TR_EVENT_SIZE_COMPLETION, 0);
+
+		tr_req[i].addr = buf_addr + period_len * i;
+		tr_req[i].icnt0 = dev_width;
+		tr_req[i].icnt1 = period_len / dev_width;
+		tr_req[i].dim1 = dev_width;
+
+		if (!(flags & DMA_PREP_INTERRUPT))
+			cppi5_tr_csf_set(&tr_req[i].flags,
+					 CPPI5_TR_CSF_SUPR_EVT);
+	}
+
+	return d;
+}
+
+static struct udma_desc *
+udma_prep_dma_cyclic_pkt(struct udma_chan *uc, dma_addr_t buf_addr,
+			 size_t buf_len, size_t period_len,
+			 enum dma_transfer_direction dir, unsigned long flags)
+{
+	struct udma_desc *d;
+	u32 ring_id;
+	int i;
+	int periods = buf_len / period_len;
+
+	if (periods > (K3_UDMA_DEFAULT_RING_SIZE - 1))
+		return NULL;
+
+	if (period_len > 0x3FFFFF)
+		return NULL;
+
+	d = kzalloc(sizeof(*d) + periods * sizeof(d->hwdesc[0]), GFP_ATOMIC);
+	if (!d)
+		return NULL;
+
+	d->hwdesc_count = periods;
+
+	/* TODO: re-check this... */
+	if (dir == DMA_DEV_TO_MEM)
+		ring_id = k3_ringacc_get_ring_id(uc->rflow->r_ring);
+	else
+		ring_id = k3_ringacc_get_ring_id(uc->tchan->tc_ring);
+
+	for (i = 0; i < periods; i++) {
+		struct udma_hwdesc *hwdesc = &d->hwdesc[i];
+		dma_addr_t period_addr = buf_addr + (period_len * i);
+		struct cppi5_host_desc_t *h_desc;
+
+		hwdesc->cppi5_desc_vaddr = dma_pool_zalloc(uc->hdesc_pool,
+						GFP_ATOMIC,
+						&hwdesc->cppi5_desc_paddr);
+		if (!hwdesc->cppi5_desc_vaddr) {
+			dev_err(uc->ud->dev,
+				"descriptor%d allocation failed\n", i);
+
+			udma_free_hwdesc(uc, d);
+			kfree(d);
+			return NULL;
+		}
+
+		hwdesc->cppi5_desc_size = uc->hdesc_size;
+		h_desc = hwdesc->cppi5_desc_vaddr;
+
+		cppi5_hdesc_init(h_desc, 0, 0);
+		cppi5_hdesc_set_pktlen(h_desc, period_len);
+
+		/* Flow and Packet ID */
+		cppi5_desc_set_pktids(&h_desc->hdr, uc->id, 0x3fff);
+		cppi5_desc_set_retpolicy(&h_desc->hdr, 0, ring_id);
+
+		/* attach each period to a new descriptor */
+		cppi5_hdesc_attach_buf(h_desc,
+				       period_addr, period_len,
+				       period_addr, period_len);
+	}
+
+	return d;
+}
+
+static struct dma_async_tx_descriptor *
+udma_prep_dma_cyclic(struct dma_chan *chan, dma_addr_t buf_addr, size_t buf_len,
+		     size_t period_len, enum dma_transfer_direction dir,
+		     unsigned long flags)
+{
+	struct udma_chan *uc = to_udma_chan(chan);
+	enum dma_slave_buswidth dev_width;
+	struct udma_desc *d;
+	u32 burst;
+
+	if (dir != uc->dir) {
+		dev_err(chan->device->dev,
+			"%s: chan%d is for %s, not supporting %s\n",
+			__func__, uc->id, udma_get_dir_text(uc->dir),
+			udma_get_dir_text(dir));
+		return NULL;
+	}
+
+	uc->cyclic = true;
+
+	if (dir == DMA_DEV_TO_MEM) {
+		dev_width = uc->cfg.src_addr_width;
+		burst = uc->cfg.src_maxburst;
+	} else if (dir == DMA_MEM_TO_DEV) {
+		dev_width = uc->cfg.dst_addr_width;
+		burst = uc->cfg.dst_maxburst;
+	} else {
+		dev_err(uc->ud->dev, "%s: bad direction?\n", __func__);
+		return NULL;
+	}
+
+	if (!burst)
+		burst = 1;
+
+	if (uc->pkt_mode)
+		d = udma_prep_dma_cyclic_pkt(uc, buf_addr, buf_len, period_len,
+					     dir, flags);
+	else
+		d = udma_prep_dma_cyclic_tr(uc, buf_addr, buf_len, period_len,
+					    dir, flags);
+
+	if (!d)
+		return NULL;
+
+	d->sglen = buf_len / period_len;
+
+	d->dir = dir;
+	d->residue = buf_len;
+
+	/* static TR for remote PDMA */
+	if (udma_configure_statictr(uc, d, dev_width, burst)) {
+		dev_err(uc->ud->dev,
+			"%s: StaticTR Z is limited to maximum 4095 (%u)\n",
+			__func__, d->static_tr.bstcnt);
+
+		udma_free_hwdesc(uc, d);
+		kfree(d);
+		return NULL;
+	}
+
+	if (uc->metadata_size)
+		d->vd.tx.metadata_ops = &metadata_ops;
+
+	return vchan_tx_prep(&uc->vc, &d->vd, flags);
+}
+
+static struct dma_async_tx_descriptor *
+udma_prep_dma_memcpy(struct dma_chan *chan, dma_addr_t dest, dma_addr_t src,
+		     size_t len, unsigned long tx_flags)
+{
+	struct udma_chan *uc = to_udma_chan(chan);
+	struct udma_desc *d;
+	struct cppi5_tr_type15_t *tr_req;
+	int num_tr;
+	size_t tr_size = sizeof(struct cppi5_tr_type15_t);
+	u16 tr0_cnt0, tr0_cnt1, tr1_cnt0;
+
+	if (uc->dir != DMA_MEM_TO_MEM) {
+		dev_err(chan->device->dev,
+			"%s: chan%d is for %s, not supporting %s\n",
+			__func__, uc->id, udma_get_dir_text(uc->dir),
+			udma_get_dir_text(DMA_MEM_TO_MEM));
+		return NULL;
+	}
+
+	if (len < SZ_64K) {
+		num_tr = 1;
+		tr0_cnt0 = len;
+		tr0_cnt1 = 1;
+	} else {
+		unsigned long align_to = __ffs(src | dest);
+
+		if (align_to > 3)
+			align_to = 3;
+		/*
+		 * Keep simple: tr0: SZ_64K-alignment blocks,
+		 * tr1: the remaining
+		 */
+		num_tr = 2;
+		tr0_cnt0 = (SZ_64K - BIT(align_to));
+		if (len / tr0_cnt0 >= SZ_64K) {
+			dev_err(uc->ud->dev, "size %zu is not supported\n",
+				len);
+			return NULL;
+		}
+
+		tr0_cnt1 = len / tr0_cnt0;
+		tr1_cnt0 = len % tr0_cnt0;
+	}
+
+	d = udma_alloc_tr_desc(uc, tr_size, num_tr, DMA_MEM_TO_MEM);
+	if (!d)
+		return NULL;
+
+	d->dir = DMA_MEM_TO_MEM;
+	d->desc_idx = 0;
+	d->tr_idx = 0;
+	d->residue = len;
+
+	tr_req = (struct cppi5_tr_type15_t *)d->hwdesc[0].tr_req_base;
+
+	cppi5_tr_init(&tr_req[0].flags, CPPI5_TR_TYPE15, false, true,
+		      CPPI5_TR_EVENT_SIZE_COMPLETION, 0);
+	cppi5_tr_csf_set(&tr_req[0].flags, CPPI5_TR_CSF_SUPR_EVT);
+
+	tr_req[0].addr = src;
+	tr_req[0].icnt0 = tr0_cnt0;
+	tr_req[0].icnt1 = tr0_cnt1;
+	tr_req[0].icnt2 = 1;
+	tr_req[0].icnt3 = 1;
+	tr_req[0].dim1 = tr0_cnt0;
+
+	tr_req[0].daddr = dest;
+	tr_req[0].dicnt0 = tr0_cnt0;
+	tr_req[0].dicnt1 = tr0_cnt1;
+	tr_req[0].dicnt2 = 1;
+	tr_req[0].dicnt3 = 1;
+	tr_req[0].ddim1 = tr0_cnt0;
+
+	if (num_tr == 2) {
+		cppi5_tr_init(&tr_req[1].flags, CPPI5_TR_TYPE15, false, true,
+			      CPPI5_TR_EVENT_SIZE_COMPLETION, 0);
+		cppi5_tr_csf_set(&tr_req[1].flags, CPPI5_TR_CSF_SUPR_EVT);
+
+		tr_req[1].addr = src + tr0_cnt1 * tr0_cnt0;
+		tr_req[1].icnt0 = tr1_cnt0;
+		tr_req[1].icnt1 = 1;
+		tr_req[1].icnt2 = 1;
+		tr_req[1].icnt3 = 1;
+
+		tr_req[1].daddr = dest + tr0_cnt1 * tr0_cnt0;
+		tr_req[1].dicnt0 = tr1_cnt0;
+		tr_req[1].dicnt1 = 1;
+		tr_req[1].dicnt2 = 1;
+		tr_req[1].dicnt3 = 1;
+	}
+
+	cppi5_tr_csf_set(&tr_req[num_tr - 1].flags, CPPI5_TR_CSF_EOP);
+
+	if (uc->metadata_size)
+		d->vd.tx.metadata_ops = &metadata_ops;
+
+	return vchan_tx_prep(&uc->vc, &d->vd, tx_flags);
+}
+
 static void udma_issue_pending(struct dma_chan *chan)
 {
 	struct udma_chan *uc = to_udma_chan(chan);
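[Editor's note, not part of the patch] The udma_attach_metadata(), udma_get_metadata_ptr() and udma_set_metadata_len() callbacks above plug into the dma_descriptor_metadata_ops interface described in patch 04/17 of this series. Below is a minimal sketch of the client side, assuming a packet-mode MEM_TO_DEV channel and the client-side helpers that patch adds (dmaengine_desc_attach_metadata() and friends); the function name, "my_meta" and "my_meta_len" are made-up example values, not anything defined by this series.

static int example_tx_with_metadata(struct dma_chan *chan, dma_addr_t buf,
				    size_t len, void *my_meta,
				    size_t my_meta_len)
{
	struct dma_async_tx_descriptor *tx;
	int ret;

	tx = dmaengine_prep_slave_single(chan, buf, len, DMA_MEM_TO_DEV,
					 DMA_PREP_INTERRUPT);
	if (!tx)
		return -EIO;

	/* Forwards to udma_attach_metadata() above, which copies my_meta
	 * into the descriptor's EPIB/psdata words for MEM_TO_DEV. */
	ret = dmaengine_desc_attach_metadata(tx, my_meta, my_meta_len);
	if (ret)
		return ret;

	dmaengine_submit(tx);
	dma_async_issue_pending(chan);
	return 0;
}

For the pointer (zero-copy) variant a client would instead call dmaengine_desc_get_metadata_ptr() to obtain the EPIB area, fill it in place and report the final length with dmaengine_desc_set_metadata_len(); those map onto udma_get_metadata_ptr() and udma_set_metadata_len() above.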
From patchwork Thu Nov 28 10:59:42 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 180413
From: Peter Ujfalusi
Subject: [PATCH v6 14/17] dmaengine: ti: New driver for K3 UDMA - split#6: Kconfig and Makefile
Date: Thu, 28 Nov 2019 12:59:42 +0200
Message-ID: <20191128105945.13071-15-peter.ujfalusi@ti.com>
X-Mailer: git-send-email 2.24.0
In-Reply-To: <20191128105945.13071-1-peter.ujfalusi@ti.com>
References: <20191128105945.13071-1-peter.ujfalusi@ti.com>
Sender: devicetree-owner@vger.kernel.org
X-Mailing-List: devicetree@vger.kernel.org

Split patch for review containing: Kconfig and Makefile changes

DMA driver for Texas Instruments K3 NAVSS Unified DMA – Peripheral Root
Complex (UDMA-P)

The UDMA-P is intended to perform similar (but significantly upgraded)
functions as the packet-oriented DMA used on previous SoC devices. The
UDMA-P module supports the transmission and reception of various packet
types. The UDMA-P is architected to facilitate the segmentation and
reassembly of SoC DMA data structure compliant packets to/from smaller
data blocks that are natively compatible with the specific requirements
of each connected peripheral. Multiple Tx and Rx channels are provided
within the DMA which allow multiple segmentation or reassembly operations
to be ongoing. The DMA controller maintains state information for each of
the channels which allows packet segmentation and reassembly operations
to be time division multiplexed between channels in order to share the
underlying DMA hardware. An external DMA scheduler is used to control the
ordering and rate at which this multiplexing occurs for Transmit
operations. The ordering and rate of Receive operations is indirectly
controlled by the order in which blocks are pushed into the DMA on the Rx
PSI-L interface.

The UDMA-P also supports acting as both a UTC and UDMA-C for its internal
channels. Channels in the UDMA-P can be configured to be either
Packet-Based or Third-Party channels on a channel by channel basis.

The initial driver supports:
- MEM_TO_MEM (TR mode)
- DEV_TO_MEM (Packet / TR mode)
- MEM_TO_DEV (Packet / TR mode)
- Cyclic (Packet / TR mode)
- Metadata for descriptors

Signed-off-by: Peter Ujfalusi
---
 drivers/dma/ti/Kconfig  | 13 +++++++++++++
 drivers/dma/ti/Makefile |  1 +
 2 files changed, 14 insertions(+)

--
Peter

Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki

diff --git a/drivers/dma/ti/Kconfig b/drivers/dma/ti/Kconfig
index 72f3d2728178..a096f0ec3998 100644
--- a/drivers/dma/ti/Kconfig
+++ b/drivers/dma/ti/Kconfig
@@ -34,6 +34,19 @@ config DMA_OMAP
 	  Enable support for the TI sDMA (System DMA or DMA4) controller. This
 	  DMA engine is found on OMAP and DRA7xx parts.
 
+config TI_K3_UDMA
+	tristate "Texas Instruments UDMA support"
+	depends on ARCH_K3 || COMPILE_TEST
+	depends on TI_SCI_PROTOCOL
+	depends on TI_SCI_INTA_IRQCHIP
+	select DMA_ENGINE
+	select DMA_VIRTUAL_CHANNELS
+	select TI_K3_RINGACC
+	select TI_K3_PSIL
+	help
+	  Enable support for the TI UDMA (Unified DMA) controller. This
+	  DMA engine is used in AM65x.
+
 config TI_K3_PSIL
 	bool
 
diff --git a/drivers/dma/ti/Makefile b/drivers/dma/ti/Makefile
index f8d912ad7eaf..9d787f009195 100644
--- a/drivers/dma/ti/Makefile
+++ b/drivers/dma/ti/Makefile
@@ -2,5 +2,6 @@
 obj-$(CONFIG_TI_CPPI41) += cppi41.o
 obj-$(CONFIG_TI_EDMA) += edma.o
 obj-$(CONFIG_DMA_OMAP) += omap-dma.o
+obj-$(CONFIG_TI_K3_UDMA) += k3-udma.o
 obj-$(CONFIG_TI_K3_PSIL) += k3-psil.o k3-psil-am654.o k3-psil-j721e.o
 obj-$(CONFIG_TI_DMA_CROSSBAR) += dma-crossbar.o
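[Editor's note, not part of the patch] With CONFIG_TI_K3_UDMA enabled, the MEM_TO_MEM (TR mode) path listed above is reachable through the ordinary dmaengine client API; nothing UDMA specific is needed on the consumer side. The sketch below is illustrative only: it requests any DMA_MEMCPY-capable channel (on a K3 SoC this may or may not end up on UDMA), and the blocking dma_sync_wait() is used only to keep the example short.

static int example_memcpy(dma_addr_t dst, dma_addr_t src, size_t len)
{
	struct dma_async_tx_descriptor *tx;
	struct dma_chan *chan;
	dma_cookie_t cookie;
	dma_cap_mask_t mask;
	int ret = 0;

	dma_cap_zero(mask);
	dma_cap_set(DMA_MEMCPY, mask);

	chan = dma_request_chan_by_mask(&mask);
	if (IS_ERR(chan))
		return PTR_ERR(chan);

	tx = dmaengine_prep_dma_memcpy(chan, dst, src, len,
				       DMA_PREP_INTERRUPT);
	if (!tx) {
		ret = -EIO;
		goto out;
	}

	cookie = dmaengine_submit(tx);
	dma_async_issue_pending(chan);

	/* Poll for completion; real users would set a callback instead. */
	if (dma_sync_wait(chan, cookie) != DMA_COMPLETE)
		ret = -ETIMEDOUT;
out:
	dma_release_channel(chan);
	return ret;
}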
From patchwork Thu Nov 28 10:59:45 2019
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Peter Ujfalusi
X-Patchwork-Id: 180417
From: Peter Ujfalusi
Subject: [PATCH v6 17/17] dmaengine: ti: k3-udma: Wait for peer teardown completion if supported
Date: Thu, 28 Nov 2019 12:59:45 +0200
Message-ID: <20191128105945.13071-18-peter.ujfalusi@ti.com>
X-Mailer: git-send-email 2.24.0
In-Reply-To: <20191128105945.13071-1-peter.ujfalusi@ti.com>
References: <20191128105945.13071-1-peter.ujfalusi@ti.com>
Sender: devicetree-owner@vger.kernel.org
X-Mailing-List: devicetree@vger.kernel.org

Set the TDTYPE if it is supported on the platform (j721e); this makes
UDMAP wait for the remote peer to finish its teardown before returning
the teardown completed message.

Signed-off-by: Peter Ujfalusi
---
 drivers/dma/ti/k3-udma.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)

--
Peter

Texas Instruments Finland Oy, Porkkalankatu 22, 00180 Helsinki.
Y-tunnus/Business ID: 0615521-4. Kotipaikka/Domicile: Helsinki
diff --git a/drivers/dma/ti/k3-udma.c b/drivers/dma/ti/k3-udma.c
index c1450b0a8224..5073e44caaa0 100644
--- a/drivers/dma/ti/k3-udma.c
+++ b/drivers/dma/ti/k3-udma.c
@@ -85,6 +85,7 @@ struct udma_rchan {
 
 #define UDMA_FLAG_PDMA_ACC32	BIT(0)
 #define UDMA_FLAG_PDMA_BURST	BIT(1)
+#define UDMA_FLAG_TDTYPE	BIT(2)
 
 struct udma_match_data {
 	u32 psil_base;
@@ -1545,7 +1546,8 @@ static int udma_tisci_tx_channel_config(struct udma_chan *uc)
 	req_tx.tx_supr_tdpkt = uc->notdpkt;
 	req_tx.tx_fetch_size = fetch_size >> 2;
 	req_tx.txcq_qnum = tc_ring;
-	if (uc->ep_type == PSIL_EP_PDMA_XY) {
+	if (uc->ep_type == PSIL_EP_PDMA_XY &&
+	    ud->match_data->flags & UDMA_FLAG_TDTYPE) {
 		/* wait for peer to complete the teardown for PDMAs */
 		req_tx.valid_params |=
 				TI_SCI_MSG_VALUE_RM_UDMAP_CH_TX_TDTYPE_VALID;
@@ -2979,7 +2981,7 @@ static struct udma_match_data am654_mcu_data = {
 static struct udma_match_data j721e_main_data = {
 	.psil_base = 0x1000,
 	.enable_memcpy_support = true,
-	.flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST,
+	.flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST | UDMA_FLAG_TDTYPE,
 	.statictr_z_mask = GENMASK(23, 0),
 	.rchan_oes_offset = 0x400,
 	.tpl_levels = 3,
@@ -2993,7 +2995,7 @@ static struct udma_match_data j721e_main_data = {
 static struct udma_match_data j721e_mcu_data = {
 	.psil_base = 0x6000,
 	.enable_memcpy_support = false, /* MEM_TO_MEM is slow via MCU UDMA */
-	.flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST,
+	.flags = UDMA_FLAG_PDMA_ACC32 | UDMA_FLAG_PDMA_BURST | UDMA_FLAG_TDTYPE,
 	.statictr_z_mask = GENMASK(23, 0),
 	.rchan_oes_offset = 0x400,
 	.tpl_levels = 2,
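[Editor's note, not part of the patch] The TDTYPE change only affects when UDMAP reports teardown completion; the client-facing API does not change. A typical stop sequence that benefits from it might look like the illustrative sketch below (the function name is made up, the two calls are standard dmaengine client API):

static void example_stop_channel(struct dma_chan *chan)
{
	/* Request channel teardown; with UDMA_FLAG_TDTYPE set on the
	 * platform, completion is only signalled once the remote PDMA
	 * peer has also finished its teardown. */
	dmaengine_terminate_async(chan);

	/* Wait until the teardown has completed and no descriptor
	 * callbacks are still running before reusing the buffers. */
	dmaengine_synchronize(chan);
}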