From patchwork Tue Sep 11 08:35:30 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vinod Koul <vkoul@kernel.org>
X-Patchwork-Id: 146963
From: Vinod Koul <vkoul@kernel.org>
To: dmaengine@vger.kernel.org
Cc: Vinod Koul <vkoul@kernel.org>
Subject: [PATCH 06/12] dmaengine: fsl-edma: remove dma_slave_config direction usage
Date: Tue, 11 Sep 2018 14:05:30 +0530
Message-Id: <20180911083536.16482-7-vkoul@kernel.org>
X-Mailer: git-send-email 2.14.4
In-Reply-To: <20180911083536.16482-1-vkoul@kernel.org>
References: <20180911083536.16482-1-vkoul@kernel.org>
X-Mailing-List: dmaengine@vger.kernel.org

dma_slave_config direction was marked as deprecated quite some time
back; remove its usage from this driver so that the field itself can be
removed.

Signed-off-by: Vinod Koul <vkoul@kernel.org>
---
 drivers/dma/fsl-edma.c | 87 ++++++++++++++++++++++++++------------------------
 1 file changed, 46 insertions(+), 41 deletions(-)

-- 
2.14.4

diff --git a/drivers/dma/fsl-edma.c b/drivers/dma/fsl-edma.c
index c7568869284e..f8b4408aedb8 100644
--- a/drivers/dma/fsl-edma.c
+++ b/drivers/dma/fsl-edma.c
@@ -140,14 +140,6 @@ struct fsl_edma_sw_tcd {
 	struct fsl_edma_hw_tcd		*vtcd;
 };
 
-struct fsl_edma_slave_config {
-	enum dma_transfer_direction	dir;
-	enum dma_slave_buswidth		addr_width;
-	u32				dev_addr;
-	u32				burst;
-	u32				attr;
-};
-
 struct fsl_edma_chan {
 	struct virt_dma_chan		vchan;
 	enum dma_status			status;
@@ -156,7 +148,8 @@ struct fsl_edma_chan {
 	u32				slave_id;
 	struct fsl_edma_engine		*edma;
 	struct fsl_edma_desc		*edesc;
-	struct fsl_edma_slave_config	fsc;
+	struct dma_slave_config		cfg;
+	u32				attr;
 	struct dma_pool			*tcd_pool;
 };
 
@@ -164,6 +157,7 @@ struct fsl_edma_desc {
 	struct virt_dma_desc		vdesc;
 	struct fsl_edma_chan		*echan;
 	bool				iscyclic;
+	enum dma_transfer_direction	dirn;
 	unsigned int			n_tcds;
 	struct fsl_edma_sw_tcd		tcd[];
 };
@@ -347,20 +341,8 @@ static int fsl_edma_slave_config(struct dma_chan *chan,
 {
 	struct fsl_edma_chan *fsl_chan = to_fsl_edma_chan(chan);
 
-	fsl_chan->fsc.dir = cfg->direction;
-	if (cfg->direction == DMA_DEV_TO_MEM) {
-		fsl_chan->fsc.dev_addr = cfg->src_addr;
-		fsl_chan->fsc.addr_width = cfg->src_addr_width;
-		fsl_chan->fsc.burst = cfg->src_maxburst;
-		fsl_chan->fsc.attr = fsl_edma_get_tcd_attr(cfg->src_addr_width);
-	} else if (cfg->direction == DMA_MEM_TO_DEV) {
-		fsl_chan->fsc.dev_addr = cfg->dst_addr;
-		fsl_chan->fsc.addr_width = cfg->dst_addr_width;
-		fsl_chan->fsc.burst = cfg->dst_maxburst;
-		fsl_chan->fsc.attr = fsl_edma_get_tcd_attr(cfg->dst_addr_width);
-	} else {
-		return -EINVAL;
-	}
+	memcpy(&fsl_chan->cfg, cfg, sizeof(*cfg));
+
 	return 0;
 }
 
@@ -370,7 +352,7 @@ static size_t fsl_edma_desc_residue(struct fsl_edma_chan *fsl_chan,
 	struct fsl_edma_desc *edesc = fsl_chan->edesc;
 	void __iomem *addr = fsl_chan->edma->membase;
 	u32 ch = fsl_chan->vchan.chan.chan_id;
-	enum dma_transfer_direction dir = fsl_chan->fsc.dir;
+	enum dma_transfer_direction dir = edesc->dirn;
 	dma_addr_t cur_addr, dma_addr;
 	size_t len, size;
 	int i;
@@ -550,7 +532,7 @@ static struct dma_async_tx_descriptor *fsl_edma_prep_dma_cyclic(
 	u32 src_addr, dst_addr, last_sg, nbytes;
 	u16 soff, doff, iter;
 
-	if (!is_slave_direction(fsl_chan->fsc.dir))
+	if (!is_slave_direction(direction))
 		return NULL;
 
 	sg_len = buf_len / period_len;
@@ -558,9 +540,21 @@ static struct dma_async_tx_descriptor *fsl_edma_prep_dma_cyclic(
 	if (!fsl_desc)
 		return NULL;
 	fsl_desc->iscyclic = true;
+	fsl_desc->dirn = direction;
 
 	dma_buf_next = dma_addr;
-	nbytes = fsl_chan->fsc.addr_width * fsl_chan->fsc.burst;
+	if (direction == DMA_MEM_TO_DEV) {
+		fsl_chan->attr =
+			fsl_edma_get_tcd_attr(fsl_chan->cfg.dst_addr_width);
+		nbytes = fsl_chan->cfg.dst_addr_width *
+			fsl_chan->cfg.dst_maxburst;
+	} else {
+		fsl_chan->attr =
+			fsl_edma_get_tcd_attr(fsl_chan->cfg.src_addr_width);
+		nbytes = fsl_chan->cfg.src_addr_width *
+			fsl_chan->cfg.src_maxburst;
+	}
+
 	iter = period_len / nbytes;
 
 	for (i = 0; i < sg_len; i++) {
@@ -570,20 +564,20 @@ static struct dma_async_tx_descriptor *fsl_edma_prep_dma_cyclic(
 		/* get next sg's physical address */
 		last_sg = fsl_desc->tcd[(i + 1) % sg_len].ptcd;
 
-		if (fsl_chan->fsc.dir == DMA_MEM_TO_DEV) {
+		if (direction == DMA_MEM_TO_DEV) {
 			src_addr = dma_buf_next;
-			dst_addr = fsl_chan->fsc.dev_addr;
-			soff = fsl_chan->fsc.addr_width;
+			dst_addr = fsl_chan->cfg.dst_addr;
+			soff = fsl_chan->cfg.dst_addr_width;
 			doff = 0;
 		} else {
-			src_addr = fsl_chan->fsc.dev_addr;
+			src_addr = fsl_chan->cfg.src_addr;
 			dst_addr = dma_buf_next;
 			soff = 0;
-			doff = fsl_chan->fsc.addr_width;
+			doff = fsl_chan->cfg.src_addr_width;
 		}
 
 		fsl_edma_fill_tcd(fsl_desc->tcd[i].vtcd, src_addr, dst_addr,
-				  fsl_chan->fsc.attr, soff, nbytes, 0, iter,
+				  fsl_chan->attr, soff, nbytes, 0, iter,
 				  iter, doff, last_sg, true, false, true);
 		dma_buf_next += period_len;
 	}
@@ -603,42 +597,53 @@ static struct dma_async_tx_descriptor *fsl_edma_prep_slave_sg(
 	u16 soff, doff, iter;
 	int i;
 
-	if (!is_slave_direction(fsl_chan->fsc.dir))
+	if (!is_slave_direction(direction))
 		return NULL;
 
 	fsl_desc = fsl_edma_alloc_desc(fsl_chan, sg_len);
 	if (!fsl_desc)
 		return NULL;
 	fsl_desc->iscyclic = false;
+	fsl_desc->dirn = direction;
 
-	nbytes = fsl_chan->fsc.addr_width * fsl_chan->fsc.burst;
+	if (direction == DMA_MEM_TO_DEV) {
+		fsl_chan->attr =
+			fsl_edma_get_tcd_attr(fsl_chan->cfg.dst_addr_width);
+		nbytes = fsl_chan->cfg.dst_addr_width *
+			fsl_chan->cfg.dst_maxburst;
+	} else {
+		fsl_chan->attr =
+			fsl_edma_get_tcd_attr(fsl_chan->cfg.src_addr_width);
+		nbytes = fsl_chan->cfg.src_addr_width *
+			fsl_chan->cfg.src_maxburst;
+	}
 	for_each_sg(sgl, sg, sg_len, i) {
 		/* get next sg's physical address */
 		last_sg = fsl_desc->tcd[(i + 1) % sg_len].ptcd;
 
-		if (fsl_chan->fsc.dir == DMA_MEM_TO_DEV) {
+		if (direction == DMA_MEM_TO_DEV) {
 			src_addr = sg_dma_address(sg);
-			dst_addr = fsl_chan->fsc.dev_addr;
-			soff = fsl_chan->fsc.addr_width;
+			dst_addr = fsl_chan->cfg.dst_addr;
+			soff = fsl_chan->cfg.dst_addr_width;
 			doff = 0;
 		} else {
-			src_addr = fsl_chan->fsc.dev_addr;
+			src_addr = fsl_chan->cfg.src_addr;
 			dst_addr = sg_dma_address(sg);
 			soff = 0;
-			doff = fsl_chan->fsc.addr_width;
+			doff = fsl_chan->cfg.src_addr_width;
 		}
 
 		iter = sg_dma_len(sg) / nbytes;
 		if (i < sg_len - 1) {
 			last_sg = fsl_desc->tcd[(i + 1)].ptcd;
 			fsl_edma_fill_tcd(fsl_desc->tcd[i].vtcd, src_addr,
-					  dst_addr, fsl_chan->fsc.attr, soff,
+					  dst_addr, fsl_chan->attr, soff,
 					  nbytes, 0, iter, iter, doff, last_sg,
 					  false, false, true);
 		} else {
 			last_sg = 0;
 			fsl_edma_fill_tcd(fsl_desc->tcd[i].vtcd, src_addr,
-					  dst_addr, fsl_chan->fsc.attr, soff,
+					  dst_addr, fsl_chan->attr, soff,
 					  nbytes, 0, iter, iter, doff, last_sg,
 					  true, true, false);
 		}
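
For background on the convention this series moves drivers to: the slave
configuration is cached whole in the .device_config callback, and each prep
callback then picks the src_* or dst_* fields based on the direction argument
it receives, instead of reading the deprecated cfg->direction. A minimal
sketch of that pattern is below; the xdma_* names are hypothetical and only
illustrate the idea; they are not part of fsl-edma.

/* Illustrative sketch only: xdma_* names are hypothetical, not fsl-edma's. */
#include <linux/dmaengine.h>
#include <linux/string.h>

struct xdma_chan {
	struct dma_slave_config	cfg;	/* cached verbatim by .device_config */
};

/* .device_config: cache everything, ignore the deprecated cfg->direction */
static int xdma_slave_config(struct xdma_chan *xchan,
			     struct dma_slave_config *cfg)
{
	memcpy(&xchan->cfg, cfg, sizeof(*cfg));
	return 0;
}

/*
 * Called from a prep callback: the direction argument supplied by the client
 * (not cfg->direction) decides whether the dst_* or src_* fields apply.
 */
static u32 xdma_burst_bytes(struct xdma_chan *xchan,
			    enum dma_transfer_direction dir)
{
	if (dir == DMA_MEM_TO_DEV)
		return xchan->cfg.dst_addr_width * xchan->cfg.dst_maxburst;

	return xchan->cfg.src_addr_width * xchan->cfg.src_maxburst;
}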