From patchwork Thu Apr 16 13:23:51 2020
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 227747
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Frieder Schrempf,
    Boris Brezillon, Miquel Raynal
Subject: [PATCH 5.6 142/254] mtd: spinand: Stop using spinand->oobbuf for
    buffering bad block markers
Date: Thu, 16 Apr 2020 15:23:51 +0200
Message-Id: <20200416131344.257402592@linuxfoundation.org>
In-Reply-To: <20200416131325.804095985@linuxfoundation.org>
References: <20200416131325.804095985@linuxfoundation.org>
User-Agent: quilt/0.66
X-Mailing-List: stable@vger.kernel.org

From: Frieder Schrempf

commit 2148937501ee3d663e0010e519a553fea67ad103 upstream.

For reading and writing the bad block markers, spinand->oobbuf is
currently used as a buffer for the marker bytes. During the underlying
read and write operations to actually get/set the content of the OOB
area, the content of spinand->oobbuf is reused and changed by accessing
it through spinand->oobbuf and/or spinand->databuf.
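[Editor's note: the following is a minimal standalone C sketch, not part of
the patch and not the actual driver code, illustrating the buffer-aliasing
behaviour just described. All names here are illustrative only: the caller
stages its bad block marker in a shared scratch buffer, and the lower layer
re-initializes that same buffer before the data is "written".]

	#include <stdio.h>
	#include <string.h>

	static unsigned char shared_oobbuf[64];	/* stands in for spinand->oobbuf */

	/* Lower layer: clears its scratch buffer, then "writes" the caller's OOB bytes. */
	static void write_page_oob(const unsigned char *oob)
	{
		memset(shared_oobbuf, 0xff, sizeof(shared_oobbuf));	/* reset to erased state */
		printf("marker reaching flash: %02x %02x\n", oob[0], oob[1]);
	}

	int main(void)
	{
		unsigned char marker[2] = { 0, 0 };

		/* Buggy pattern: marker staged in the shared buffer, aliased below. */
		memset(shared_oobbuf, 0, 2);
		write_page_oob(shared_oobbuf);	/* prints "ff ff": marker lost */

		/* Fixed pattern: marker kept on the stack, passed separately. */
		write_page_oob(marker);		/* prints "00 00": marker preserved */
		return 0;
	}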
This is a flaw in the original design of the SPI NAND core and, at the
latest since 13c15e07eedf ("mtd: spinand: Handle the case where PROGRAM
LOAD does not reset the cache"), it results in the bad block marker not
being written at all, as spinand->oobbuf is cleared to 0xff after the
marker bytes have been set to zero.

To fix it, we now store the two marker bytes on the stack and let the
read/write operations copy them from/to the page buffer later.

Fixes: 7529df465248 ("mtd: nand: Add core infrastructure to support SPI NANDs")
Cc: stable@vger.kernel.org
Signed-off-by: Frieder Schrempf
Reviewed-by: Boris Brezillon
Signed-off-by: Miquel Raynal
Link: https://lore.kernel.org/linux-mtd/20200218100432.32433-2-frieder.schrempf@kontron.de
Signed-off-by: Greg Kroah-Hartman
---
 drivers/mtd/nand/spi/core.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

--- a/drivers/mtd/nand/spi/core.c
+++ b/drivers/mtd/nand/spi/core.c
@@ -568,18 +568,18 @@ static int spinand_mtd_write(struct mtd_
 static bool spinand_isbad(struct nand_device *nand, const struct nand_pos *pos)
 {
 	struct spinand_device *spinand = nand_to_spinand(nand);
+	u8 marker[2] = { };
 	struct nand_page_io_req req = {
 		.pos = *pos,
-		.ooblen = 2,
+		.ooblen = sizeof(marker),
 		.ooboffs = 0,
-		.oobbuf.in = spinand->oobbuf,
+		.oobbuf.in = marker,
 		.mode = MTD_OPS_RAW,
 	};
 
-	memset(spinand->oobbuf, 0, 2);
 	spinand_select_target(spinand, pos->target);
 	spinand_read_page(spinand, &req, false);
-	if (spinand->oobbuf[0] != 0xff || spinand->oobbuf[1] != 0xff)
+	if (marker[0] != 0xff || marker[1] != 0xff)
 		return true;
 
 	return false;
@@ -603,11 +603,12 @@ static int spinand_mtd_block_isbad(struc
 static int spinand_markbad(struct nand_device *nand, const struct nand_pos *pos)
 {
 	struct spinand_device *spinand = nand_to_spinand(nand);
+	u8 marker[2] = { };
 	struct nand_page_io_req req = {
 		.pos = *pos,
 		.ooboffs = 0,
-		.ooblen = 2,
-		.oobbuf.out = spinand->oobbuf,
+		.ooblen = sizeof(marker),
+		.oobbuf.out = marker,
 	};
 	int ret;
 
@@ -622,7 +623,6 @@ static int spinand_markbad(struct nand_d
 
 	spinand_erase_op(spinand, pos);
 
-	memset(spinand->oobbuf, 0, 2);
 	return spinand_write_page(spinand, &req);
 }
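
[Editor's note: for context on the check the fixed spinand_isbad() performs,
here is a small standalone sketch, not kernel code and with illustrative names
and data only, of the usual bad block marker convention: a block is treated as
bad when either of the first two OOB bytes of its first page differs from the
erased value 0xff.]

	#include <stdbool.h>
	#include <stdio.h>
	#include <string.h>

	static bool block_is_bad(const unsigned char *oob)
	{
		/* Mirrors the patch's check: marker[0] != 0xff || marker[1] != 0xff */
		return oob[0] != 0xff || oob[1] != 0xff;
	}

	int main(void)
	{
		unsigned char good[2], bad[2];

		memset(good, 0xff, sizeof(good));	/* erased marker: block is good */
		memset(bad, 0x00, sizeof(bad));		/* zeroed marker: block is bad  */

		printf("erased block: %s\n", block_is_bad(good) ? "bad" : "ok");
		printf("marked block: %s\n", block_is_bad(bad) ? "bad" : "ok");
		return 0;
	}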