From patchwork Tue Mar 25 08:20:00 2014
X-Patchwork-Submitter: Lee Jones
X-Patchwork-Id: 26989
From: Lee Jones <lee.jones@linaro.org>
To: linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org
Cc: lee.jones@linaro.org, kernel@stlinux.com, computersforpeace@gmail.com,
	linux-mtd@lists.infradead.org, dwmw2@infradead.org, angus.clark@st.com,
	pekon@ti.com
Subject: [RFC 43/47] mtd: nand: stm_nand_bch: read and write functions (BCH)
Date: Tue, 25 Mar 2014 08:20:00 +0000
Message-Id: <1395735604-26706-44-git-send-email-lee.jones@linaro.org>
X-Mailer: git-send-email 1.8.3.2
In-Reply-To: <1395735604-26706-1-git-send-email-lee.jones@linaro.org>
References: <1395735604-26706-1-git-send-email-lee.jones@linaro.org>

Add helper functions for bch_mtd_read() and bch_mtd_write() to handle
multi-page or non-aligned reads and writes respectively.
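For illustration, the MTD-level entry points are expected to delegate to
these helpers roughly as follows (a minimal sketch only: bch_mtd_read()
is introduced elsewhere in this series, so the container_of() derivation
and the return-value handling shown here are assumptions, not part of
this patch):

	static int bch_mtd_read(struct mtd_info *mtd, loff_t from, size_t len,
				size_t *retlen, u_char *buf)
	{
		/* Assumed layout: mtd is embedded in nandi->info, as the
		 * helpers' use of nandi->info.mtd suggests. */
		struct nandi_controller *nandi =
			container_of(mtd, struct nandi_controller, info.mtd);

		/* bch_read() accumulates into *retlen, so it must start
		 * at zero. */
		if (retlen)
			*retlen = 0;

		/* Returns the worst-case number of bitflips corrected on
		 * any one page, or -EBADMSG if a page was uncorrectable. */
		return bch_read(nandi, from, len, retlen, buf);
	}

bch_mtd_write() would wrap bch_write() in the same fashion.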
Signed-off-by: Lee Jones <lee.jones@linaro.org>
---
 drivers/mtd/nand/stm_nand_bch.c | 143 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 143 insertions(+)

diff --git a/drivers/mtd/nand/stm_nand_bch.c b/drivers/mtd/nand/stm_nand_bch.c
index 389ccee..bcaed32 100644
--- a/drivers/mtd/nand/stm_nand_bch.c
+++ b/drivers/mtd/nand/stm_nand_bch.c
@@ -507,6 +507,149 @@ static uint8_t bch_write_page(struct nandi_controller *nandi,
 	return status;
 }
 
+/* Helper function for bch_mtd_read to handle multi-page or non-aligned reads */
+static int bch_read(struct nandi_controller *nandi,
+		    loff_t from, size_t len,
+		    size_t *retlen, u_char *buf)
+{
+	struct mtd_ecc_stats stats;
+	uint32_t page_size = nandi->info.mtd.writesize;
+	uint32_t col_offs;
+	loff_t page_mask;
+	loff_t page_offs;
+	int ecc_errs, max_ecc_errs = 0;
+	int page_num;
+	size_t bytes;
+	uint8_t *p;
+	bool bounce = false;
+
+	dev_dbg(nandi->dev, "%s: %llu @ 0x%012llx\n", __func__,
+		(unsigned long long)len, from);
+
+	stats = nandi->info.mtd.ecc_stats;
+	page_mask = (loff_t)page_size - 1;
+	col_offs = (uint32_t)(from & page_mask);
+	page_offs = from & ~page_mask;
+	page_num = (int)(page_offs >> nandi->page_shift);
+
+	while (len > 0) {
+		bytes = min((page_size - col_offs), len);
+
+		if ((bytes != page_size) ||
+		    ((unsigned long)buf & (NANDI_BCH_DMA_ALIGNMENT - 1)) ||
+		    (!virt_addr_valid(buf))) /* vmalloc'd buffer! */
+			bounce = true;
+
+		if (page_num == nandi->cached_page) {
+			memcpy(buf, nandi->page_buf + col_offs, bytes);
+			goto done;
+		}
+
+		p = bounce ? nandi->page_buf : buf;
+
+		ecc_errs = bch_read_page(nandi, page_offs, p);
+		if (bounce)
+			memcpy(buf, p + col_offs, bytes);
+
+		if (ecc_errs < 0) {
+			dev_err(nandi->dev,
+				"%s: uncorrectable error at 0x%012llx\n",
+				__func__, page_offs);
+			nandi->info.mtd.ecc_stats.failed++;
+
+			/* Do not cache uncorrectable pages */
+			if (bounce)
+				nandi->cached_page = -1;
+
+			goto done;
+		}
+
+		if (ecc_errs) {
+			dev_info(nandi->dev,
+				 "%s: corrected %u error(s) at 0x%012llx\n",
+				 __func__, ecc_errs, page_offs);
+
+			nandi->info.mtd.ecc_stats.corrected += ecc_errs;
+
+			if (ecc_errs > max_ecc_errs)
+				max_ecc_errs = ecc_errs;
+		}
+
+		if (bounce)
+			nandi->cached_page = page_num;
+
+done:
+		buf += bytes;
+		len -= bytes;
+
+		if (retlen)
+			*retlen += bytes;
+
+		/* We are now page-aligned */
+		page_offs += page_size;
+		page_num++;
+		col_offs = 0;
+	}
+
+	/* Return '-EBADMSG' on uncorrectable errors */
+	if (nandi->info.mtd.ecc_stats.failed - stats.failed)
+		return -EBADMSG;
+
+	return max_ecc_errs;
+}
+
+/* Helper function for mtd_write, to handle multi-page and non-aligned writes */
+static int bch_write(struct nandi_controller *nandi,
+		     loff_t to, size_t len,
+		     size_t *retlen, const uint8_t *buf)
+{
+	uint32_t page_size = nandi->info.mtd.writesize;
+	int page_num;
+	bool bounce = false;
+	const uint8_t *p = NULL;
+	uint8_t ret;
+
+	dev_dbg(nandi->dev, "%s: %llu @ 0x%012llx\n", __func__,
+		(unsigned long long)len, to);
+
+	BUG_ON(len & (page_size - 1));
+	BUG_ON(to & (page_size - 1));
+
+	if (((unsigned long)buf & (NANDI_BCH_DMA_ALIGNMENT - 1)) ||
+	    !virt_addr_valid(buf)) { /* vmalloc'd buffer! */
+		bounce = true;
+	}
+
+	page_num = (int)(to >> nandi->page_shift);
+
+	while (len > 0) {
+		if (bounce) {
+			memcpy(nandi->page_buf, buf, page_size);
+			p = nandi->page_buf;
+			nandi->cached_page = -1;
+		} else {
+			p = buf;
+		}
+
+		if (nandi->cached_page == page_num)
+			nandi->cached_page = -1;
+
+		ret = bch_write_page(nandi, to, p);
+		if (ret & NAND_STATUS_FAIL)
+			return -EIO;
+
+		to += page_size;
+		page_num++;
+		buf += page_size;
+		len -= page_size;
+
+		if (retlen)
+			*retlen += page_size;
+	}
+
+	return 0;
+}
+
 /*
  * Hamming-FLEX operations
  */
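
A note on the bounce-buffer decision shared by both helpers: a caller's
buffer is handed directly to the BCH DMA engine only when it meets the
controller's alignment constraint and is lowmem-backed; anything else
(including vmalloc'd buffers, which virt_addr_valid() rejects) is staged
through nandi->page_buf. The predicate, restated in isolation (an
illustrative helper, not part of the driver):

	/* Illustrative only: true if 'buf' can be DMA'd directly, i.e.
	 * aligned to NANDI_BCH_DMA_ALIGNMENT and not a vmalloc() mapping. */
	static inline bool bch_buf_dma_ok(const void *buf)
	{
		return !((unsigned long)buf & (NANDI_BCH_DMA_ALIGNMENT - 1)) &&
		       virt_addr_valid(buf);
	}

bch_read() additionally bounces any partial-page request, since
bch_read_page() always transfers a whole page.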