From patchwork Thu Apr 16 13:23:48 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Greg Kroah-Hartman
X-Patchwork-Id: 227930
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Frieder Schrempf,
 Boris Brezillon, Miquel Raynal
Subject: [PATCH 4.19 087/146] mtd: spinand: Stop using spinand->oobbuf for
 buffering bad block markers
Date: Thu, 16 Apr 2020 15:23:48 +0200
Message-Id: <20200416131254.718491858@linuxfoundation.org>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200416131242.353444678@linuxfoundation.org>
References: <20200416131242.353444678@linuxfoundation.org>
User-Agent: quilt/0.66
Sender: stable-owner@vger.kernel.org
Precedence: bulk
List-ID:
X-Mailing-List: stable@vger.kernel.org

From: Frieder Schrempf

commit 2148937501ee3d663e0010e519a553fea67ad103 upstream.

For reading and writing the bad block markers, spinand->oobbuf is
currently used as a buffer for the marker bytes. During the underlying
read and write operations that actually get/set the content of the OOB
area, the content of spinand->oobbuf is reused and changed by accessing
it through spinand->oobbuf and/or spinand->databuf.
This is a flaw in the original design of the SPI NAND core and, at the
latest since 13c15e07eedf ("mtd: spinand: Handle the case where PROGRAM
LOAD does not reset the cache"), it results in the bad block marker not
being written at all, because spinand->oobbuf is cleared to 0xff after
the marker bytes have been set to zero.

To fix this, we now simply store the two marker bytes on the stack and
let the read/write operations copy them from/to the page buffer later.

Fixes: 7529df465248 ("mtd: nand: Add core infrastructure to support SPI NANDs")
Cc: stable@vger.kernel.org
Signed-off-by: Frieder Schrempf
Reviewed-by: Boris Brezillon
Signed-off-by: Miquel Raynal
Link: https://lore.kernel.org/linux-mtd/20200218100432.32433-2-frieder.schrempf@kontron.de
Signed-off-by: Greg Kroah-Hartman
---
 drivers/mtd/nand/spi/core.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

--- a/drivers/mtd/nand/spi/core.c
+++ b/drivers/mtd/nand/spi/core.c
@@ -629,18 +629,18 @@ static int spinand_mtd_write(struct mtd_
 static bool spinand_isbad(struct nand_device *nand, const struct nand_pos *pos)
 {
 	struct spinand_device *spinand = nand_to_spinand(nand);
+	u8 marker[2] = { };
 	struct nand_page_io_req req = {
 		.pos = *pos,
-		.ooblen = 2,
+		.ooblen = sizeof(marker),
 		.ooboffs = 0,
-		.oobbuf.in = spinand->oobbuf,
+		.oobbuf.in = marker,
 		.mode = MTD_OPS_RAW,
 	};
 
-	memset(spinand->oobbuf, 0, 2);
 	spinand_select_target(spinand, pos->target);
 	spinand_read_page(spinand, &req, false);
-	if (spinand->oobbuf[0] != 0xff || spinand->oobbuf[1] != 0xff)
+	if (marker[0] != 0xff || marker[1] != 0xff)
 		return true;
 
 	return false;
@@ -664,11 +664,12 @@ static int spinand_mtd_block_isbad(struc
 static int spinand_markbad(struct nand_device *nand, const struct nand_pos *pos)
 {
 	struct spinand_device *spinand = nand_to_spinand(nand);
+	u8 marker[2] = { };
 	struct nand_page_io_req req = {
 		.pos = *pos,
 		.ooboffs = 0,
-		.ooblen = 2,
-		.oobbuf.out = spinand->oobbuf,
+		.ooblen = sizeof(marker),
+		.oobbuf.out = marker,
 	};
 	int ret;
 
@@ -683,7 +684,6 @@ static int spinand_markbad(struct nand_d
 
 	spinand_erase_op(spinand, pos);
 
-	memset(spinand->oobbuf, 0, 2);
 	return spinand_write_page(spinand, &req);
 }
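
For reference, the sketch below shows roughly how spinand_isbad() reads
once this patch is applied: the two marker bytes now live on the stack
and the underlying read operation copies the OOB bytes into them, so
nothing the core does to spinand->oobbuf or spinand->databuf during the
page read can corrupt the check. It is assembled from the first hunk
above plus surrounding context from the 4.19 driver, so treat it as an
illustration rather than a verbatim copy of the tree; the comments are
editorial.

/*
 * Sketch of spinand_isbad() with this patch applied (reconstructed
 * from the hunk above; not copied verbatim from the kernel tree).
 */
static bool spinand_isbad(struct nand_device *nand, const struct nand_pos *pos)
{
	struct spinand_device *spinand = nand_to_spinand(nand);
	u8 marker[2] = { };		/* on-stack buffer for the marker bytes */
	struct nand_page_io_req req = {
		.pos = *pos,
		.ooblen = sizeof(marker),
		.ooboffs = 0,
		.oobbuf.in = marker,	/* OOB bytes are copied here by the read */
		.mode = MTD_OPS_RAW,
	};

	spinand_select_target(spinand, pos->target);
	spinand_read_page(spinand, &req, false);

	/* Any marker byte other than 0xff means the block is marked bad. */
	if (marker[0] != 0xff || marker[1] != 0xff)
		return true;

	return false;
}

spinand_markbad() follows the same pattern in the second and third
hunks: the marker bytes to be written are kept in a local array and
handed to spinand_write_page() through req.oobbuf.out, which removes
the need to memset spinand->oobbuf before the write.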