From patchwork Fri Sep 11 14:14:27 2015
X-Patchwork-Submitter: Peter Griffin <peter.griffin@linaro.org>
X-Patchwork-Id: 53476
From: Peter Griffin <peter.griffin@linaro.org>
To: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
	srinivas.kandagatla@gmail.com, maxime.coquelin@st.com,
	patrice.chotard@st.com, vinod.koul@intel.com
Cc: peter.griffin@linaro.org, lee.jones@linaro.org, robh+dt@kernel.org,
	dmaengine@vger.kernel.org, devicetree@vger.kernel.org
Subject: [PATCH v2 5/9] dmaengine: st_fdma: Add xp70 firmware loading mechanism.
Date: Fri, 11 Sep 2015 15:14:27 +0100
Message-Id: <1441980871-24475-6-git-send-email-peter.griffin@linaro.org>
X-Mailer: git-send-email 1.9.1
In-Reply-To: <1441980871-24475-1-git-send-email-peter.griffin@linaro.org>
References: <1441980871-24475-1-git-send-email-peter.griffin@linaro.org>
List-ID: <devicetree.vger.kernel.org>

This patch adds the code to load the xp70 FDMA firmware using the
asynchronous request_firmware_nowait() call, so that probe does not
delay boot when the driver is built in.
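For context, the blocking alternative would look roughly like the sketch
below (illustrative only, not part of this patch): request_firmware()
sleeps until user space supplies the image, which is exactly the
probe-time stall the asynchronous variant avoids for built-in code.

	/* Illustrative sketch: the synchronous load this patch avoids. */
	const struct firmware *fw;
	int ret;

	/*
	 * request_firmware() blocks until the firmware is available (or the
	 * firmware loader gives up), stalling probe -- and therefore boot --
	 * for built-in drivers.
	 */
	ret = request_firmware(&fw, fdev->pdata->fw_name, fdev->dev);
	if (!ret) {
		/* ...sanity check, load segments, enable the FDMA... */
		release_firmware(fw);
	}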
Signed-off-by: Peter Griffin <peter.griffin@linaro.org>
---
 drivers/dma/st_fdma.c | 199 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 199 insertions(+)

diff --git a/drivers/dma/st_fdma.c b/drivers/dma/st_fdma.c
index 24ebd0b..4288e79 100644
--- a/drivers/dma/st_fdma.c
+++ b/drivers/dma/st_fdma.c
@@ -83,6 +83,162 @@ static struct st_fdma_desc *to_st_fdma_desc(struct virt_dma_desc *vd)
 	return container_of(vd, struct st_fdma_desc, vdesc);
 }
 
+static void *st_fdma_seg_to_mem(struct st_fdma_dev *fdev, u64 da, int len)
+{
+	int i;
+	resource_size_t base = fdev->io_res->start;
+	const struct st_fdma_ram *fdma_mem = fdev->drvdata->fdma_mem;
+	void *ptr = NULL;
+
+	for (i = 0; i < fdev->drvdata->num_mem; i++) {
+		int mem_off = da - (base + fdma_mem[i].offset);
+
+		/* next mem if da is too small */
+		if (mem_off < 0)
+			continue;
+
+		/* next mem if da is too large */
+		if (mem_off + len > fdma_mem[i].size)
+			continue;
+
+		ptr = fdev->io_base + fdma_mem[i].offset + mem_off;
+		break;
+	}
+
+	return ptr;
+}
+
+static int
+st_fdma_elf_sanity_check(struct st_fdma_dev *fdev, const struct firmware *fw)
+{
+	const char *fw_name = fdev->pdata->fw_name;
+	struct elf32_hdr *ehdr;
+	char class;
+
+	if (!fw) {
+		dev_err(fdev->dev, "failed to load %s\n", fw_name);
+		return -EINVAL;
+	}
+
+	if (fw->size < sizeof(*ehdr)) {
+		dev_err(fdev->dev, "Image is too small\n");
+		return -EINVAL;
+	}
+
+	ehdr = (struct elf32_hdr *)fw->data;
+
+	/* We only support ELF32 at this point */
+	class = ehdr->e_ident[EI_CLASS];
+	if (class != ELFCLASS32) {
+		dev_err(fdev->dev, "Unsupported class: %d\n", class);
+		return -EINVAL;
+	}
+
+	if (ehdr->e_ident[EI_DATA] != ELFDATA2LSB) {
+		dev_err(fdev->dev, "Unsupported firmware endianness (%d), expected (%d)\n",
+			ehdr->e_ident[EI_DATA], ELFDATA2LSB);
+		return -EINVAL;
+	}
+
+	if (fw->size < ehdr->e_shoff + sizeof(struct elf32_shdr)) {
+		dev_err(fdev->dev, "Image is too small (%zu)\n", fw->size);
+		return -EINVAL;
+	}
+
+	if (memcmp(ehdr->e_ident, ELFMAG, SELFMAG)) {
+		dev_err(fdev->dev, "Image is corrupted (bad magic)\n");
+		return -EINVAL;
+	}
+
+	if (ehdr->e_phnum != fdev->drvdata->num_mem) {
+		dev_err(fdev->dev, "Unexpected number of segments (%d), expected (%d)\n",
+			ehdr->e_phnum, fdev->drvdata->num_mem);
+		return -EINVAL;
+	}
+
+	if (ehdr->e_type != ET_EXEC) {
+		dev_err(fdev->dev, "Unsupported ELF header type (%d), expected (%d)\n",
+			ehdr->e_type, ET_EXEC);
+		return -EINVAL;
+	}
+
+	if (ehdr->e_machine != EM_SLIM) {
+		dev_err(fdev->dev, "Unsupported ELF header machine (%d), expected (%d)\n",
+			ehdr->e_machine, EM_SLIM);
+		return -EINVAL;
+	}
+
+	if (ehdr->e_phoff > fw->size) {
+		dev_err(fdev->dev, "Firmware size is too small\n");
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static int
+st_fdma_elf_load_segments(struct st_fdma_dev *fdev, const struct firmware *fw)
+{
+	struct device *dev = fdev->dev;
+	struct elf32_hdr *ehdr;
+	struct elf32_phdr *phdr;
+	int i, mem_loaded = 0;
+	const u8 *elf_data = fw->data;
+
+	ehdr = (struct elf32_hdr *)elf_data;
+	phdr = (struct elf32_phdr *)(elf_data + ehdr->e_phoff);
+
+	/*
+	 * Go through the available ELF segments. We expect the program
+	 * header's paddr member to contain device addresses. We then go
+	 * through the physically contiguous memory regions which we
+	 * allocated (and mapped) earlier during probe, and "translate"
+	 * the device addresses to kernel addresses, so we can copy the
+	 * segments where they are expected.
+	 */
+	for (i = 0; i < ehdr->e_phnum; i++, phdr++) {
+		u32 da = phdr->p_paddr;
+		u32 memsz = phdr->p_memsz;
+		u32 filesz = phdr->p_filesz;
+		u32 offset = phdr->p_offset;
+		void *dst;
+
+		if (phdr->p_type != PT_LOAD)
+			continue;
+
+		dev_dbg(dev, "phdr: type %d da %#x ofst:%#x memsz %#x filesz %#x\n",
+			phdr->p_type, da, offset, memsz, filesz);
+
+		if (filesz > memsz) {
+			dev_err(dev, "bad phdr filesz 0x%x memsz 0x%x\n",
+				filesz, memsz);
+			break;
+		}
+
+		if (offset + filesz > fw->size) {
+			dev_err(dev, "truncated fw: need 0x%x avail 0x%zx\n",
				offset + filesz, fw->size);
+			break;
+		}
+
+		dst = st_fdma_seg_to_mem(fdev, da, memsz);
+		if (!dst) {
+			dev_err(dev, "bad phdr da 0x%x mem 0x%x\n", da, memsz);
+			break;
+		}
+
+		if (phdr->p_filesz)
+			memcpy(dst, elf_data + phdr->p_offset, filesz);
+
+		if (memsz > filesz)
+			memset(dst + filesz, 0, memsz - filesz);
+
+		mem_loaded++;
+	}
+
+	return (mem_loaded != fdev->drvdata->num_mem) ? -EIO : 0;
+}
+
 static void st_fdma_enable(struct st_fdma_dev *fdev)
 {
 	unsigned long hw_id, hw_ver, fw_rev;
@@ -125,6 +281,45 @@ static int st_fdma_disable(struct st_fdma_dev *fdev)
 	return readl(fdev->io_base + FDMA_EN_OFST);
 }
 
+static void st_fdma_fw_cb(const struct firmware *fw, void *context)
+{
+	struct st_fdma_dev *fdev = context;
+	int ret;
+
+	ret = st_fdma_elf_sanity_check(fdev, fw);
+	if (ret)
+		goto out;
+
+	st_fdma_disable(fdev);
+	ret = st_fdma_elf_load_segments(fdev, fw);
+	if (ret)
+		goto out;
+
+	st_fdma_enable(fdev);
+	atomic_set(&fdev->fw_loaded, 1);
+out:
+	release_firmware(fw);
+	complete_all(&fdev->fw_ack);
+}
+
+static int st_fdma_get_fw(struct st_fdma_dev *fdev)
+{
+	int ret;
+
+	init_completion(&fdev->fw_ack);
+	atomic_set(&fdev->fw_loaded, 0);
+
+	ret = request_firmware_nowait(THIS_MODULE, FW_ACTION_HOTPLUG,
+				      fdev->pdata->fw_name, fdev->dev,
+				      GFP_KERNEL, fdev, st_fdma_fw_cb);
+	if (ret) {
+		dev_err(fdev->dev, "request_firmware_nowait err: %d\n", ret);
+		complete_all(&fdev->fw_ack);
+	}
+
+	return ret;
+}
+
 static int st_fdma_dreq_get(struct st_fdma_chan *fchan)
 {
 	struct st_fdma_dev *fdev = fchan->fdev;
@@ -868,6 +1063,10 @@ static int st_fdma_probe(struct platform_device *pdev)
 		vchan_init(&fchan->vchan, &fdev->dma_device);
 	}
 
+	ret = st_fdma_get_fw(fdev);
+	if (ret)
+		goto err_clk;
+
 	/* Initialise the FDMA dreq (reserve 0 & 31 for FDMA use) */
 	fdev->dreq_mask = BIT(0) | BIT(31);
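Reviewer note: because the firmware is fetched asynchronously, any path that
issues transfers before st_fdma_fw_cb() has run needs to synchronize with it.
A minimal sketch of such a gate is shown below; the helper name is
hypothetical and not part of this hunk, but it only relies on the fw_ack
completion and fw_loaded flag initialised in st_fdma_get_fw().

	/* Hypothetical helper: block until the xp70 firmware has been loaded. */
	static int st_fdma_wait_fw(struct st_fdma_dev *fdev)
	{
		/* Completed by st_fdma_fw_cb() on both success and failure. */
		wait_for_completion(&fdev->fw_ack);

		return atomic_read(&fdev->fw_loaded) ? 0 : -ENODEV;
	}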