From patchwork Mon May 20 12:14:07 2019
X-Patchwork-Submitter: Greg KH
X-Patchwork-Id: 164621
From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Gilad Ben-Yossef, Herbert Xu
Subject: [PATCH 5.1 060/128] crypto: ccree - remove special handling of chained sg
Date: Mon, 20 May 2019 14:14:07 +0200
Message-Id: <20190520115253.837081510@linuxfoundation.org>
In-Reply-To: <20190520115249.449077487@linuxfoundation.org>
References: <20190520115249.449077487@linuxfoundation.org>

From: Gilad Ben-Yossef

commit c4b22bf51b815fb61a35a27fc847a88bc28ebb63 upstream.

We were handling chained scattergather lists with specialized code
needlessly as the regular sg APIs handle them just fine. The code
handling this also had an (unused) code path with a use-before-init
error, flagged by Coverity.

Remove all special handling of chained sg and leave their handling
to the regular sg APIs.

Signed-off-by: Gilad Ben-Yossef
Cc: stable@vger.kernel.org # v4.19+
Signed-off-by: Herbert Xu
Signed-off-by: Greg Kroah-Hartman
---
 drivers/crypto/ccree/cc_buffer_mgr.c |   98 +++++++----------------------------
 1 file changed, 22 insertions(+), 76 deletions(-)

--- a/drivers/crypto/ccree/cc_buffer_mgr.c
+++ b/drivers/crypto/ccree/cc_buffer_mgr.c
@@ -83,24 +83,17 @@ static void cc_copy_mac(struct device *d
  */
 static unsigned int cc_get_sgl_nents(struct device *dev,
				      struct scatterlist *sg_list,
-				      unsigned int nbytes, u32 *lbytes,
-				      bool *is_chained)
+				      unsigned int nbytes, u32 *lbytes)
 {
	unsigned int nents = 0;

	while (nbytes && sg_list) {
-		if (sg_list->length) {
-			nents++;
-			/* get the number of bytes in the last entry */
-			*lbytes = nbytes;
-			nbytes -= (sg_list->length > nbytes) ?
-					nbytes : sg_list->length;
-			sg_list = sg_next(sg_list);
-		} else {
-			sg_list = (struct scatterlist *)sg_page(sg_list);
-			if (is_chained)
-				*is_chained = true;
-		}
+		nents++;
+		/* get the number of bytes in the last entry */
+		*lbytes = nbytes;
+		nbytes -= (sg_list->length > nbytes) ?
+				nbytes : sg_list->length;
+		sg_list = sg_next(sg_list);
	}
	dev_dbg(dev, "nents %d last bytes %d\n", nents, *lbytes);
	return nents;
@@ -142,7 +135,7 @@ void cc_copy_sg_portion(struct device *d
 {
	u32 nents, lbytes;

-	nents = cc_get_sgl_nents(dev, sg, end, &lbytes, NULL);
+	nents = cc_get_sgl_nents(dev, sg, end, &lbytes);
	sg_copy_buffer(sg, nents, (void *)dest, (end - to_skip + 1), to_skip,
		       (direct == CC_SG_TO_BUF));
 }
@@ -314,40 +307,10 @@ static void cc_add_sg_entry(struct devic
	sgl_data->num_of_buffers++;
 }

-static int cc_dma_map_sg(struct device *dev, struct scatterlist *sg, u32 nents,
-			 enum dma_data_direction direction)
-{
-	u32 i, j;
-	struct scatterlist *l_sg = sg;
-
-	for (i = 0; i < nents; i++) {
-		if (!l_sg)
-			break;
-		if (dma_map_sg(dev, l_sg, 1, direction) != 1) {
-			dev_err(dev, "dma_map_page() sg buffer failed\n");
-			goto err;
-		}
-		l_sg = sg_next(l_sg);
-	}
-	return nents;
-
-err:
-	/* Restore mapped parts */
-	for (j = 0; j < i; j++) {
-		if (!sg)
-			break;
-		dma_unmap_sg(dev, sg, 1, direction);
-		sg = sg_next(sg);
-	}
-	return 0;
-}
-
 static int cc_map_sg(struct device *dev, struct scatterlist *sg,
		     unsigned int nbytes, int direction, u32 *nents,
		     u32 max_sg_nents, u32 *lbytes, u32 *mapped_nents)
 {
-	bool is_chained = false;
-
	if (sg_is_last(sg)) {
		/* One entry only case -set to DLLI */
		if (dma_map_sg(dev, sg, 1, direction) != 1) {
@@ -361,35 +324,21 @@ static int cc_map_sg(struct device *dev,
		*nents = 1;
		*mapped_nents = 1;
	} else {  /*sg_is_last*/
-		*nents = cc_get_sgl_nents(dev, sg, nbytes, lbytes,
-					  &is_chained);
+		*nents = cc_get_sgl_nents(dev, sg, nbytes, lbytes);
		if (*nents > max_sg_nents) {
			*nents = 0;
			dev_err(dev, "Too many fragments. current %d max %d\n",
				*nents, max_sg_nents);
			return -ENOMEM;
		}
-		if (!is_chained) {
-			/* In case of mmu the number of mapped nents might
-			 * be changed from the original sgl nents
-			 */
-			*mapped_nents = dma_map_sg(dev, sg, *nents, direction);
-			if (*mapped_nents == 0) {
-				*nents = 0;
-				dev_err(dev, "dma_map_sg() sg buffer failed\n");
-				return -ENOMEM;
-			}
-		} else {
-			/*In this case the driver maps entry by entry so it
-			 * must have the same nents before and after map
-			 */
-			*mapped_nents = cc_dma_map_sg(dev, sg, *nents,
-						      direction);
-			if (*mapped_nents != *nents) {
-				*nents = *mapped_nents;
-				dev_err(dev, "dma_map_sg() sg buffer failed\n");
-				return -ENOMEM;
-			}
+		/* In case of mmu the number of mapped nents might
+		 * be changed from the original sgl nents
+		 */
+		*mapped_nents = dma_map_sg(dev, sg, *nents, direction);
+		if (*mapped_nents == 0) {
+			*nents = 0;
+			dev_err(dev, "dma_map_sg() sg buffer failed\n");
+			return -ENOMEM;
		}
	}

@@ -571,7 +520,6 @@ void cc_unmap_aead_request(struct device
	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
	struct cc_drvdata *drvdata = dev_get_drvdata(dev);
	u32 dummy;
-	bool chained;
	u32 size_to_unmap = 0;

	if (areq_ctx->mac_buf_dma_addr) {
@@ -636,15 +584,14 @@ void cc_unmap_aead_request(struct device
		size_to_unmap += crypto_aead_ivsize(tfm);

	dma_unmap_sg(dev, req->src,
-		     cc_get_sgl_nents(dev, req->src, size_to_unmap,
-				      &dummy, &chained),
+		     cc_get_sgl_nents(dev, req->src, size_to_unmap, &dummy),
		     DMA_BIDIRECTIONAL);
	if (req->src != req->dst) {
		dev_dbg(dev, "Unmapping dst sgl: req->dst=%pK\n",
			sg_virt(req->dst));
		dma_unmap_sg(dev, req->dst,
			     cc_get_sgl_nents(dev, req->dst, size_to_unmap,
-					      &dummy, &chained),
+					      &dummy),
			     DMA_BIDIRECTIONAL);
	}
	if (drvdata->coherent &&
@@ -1022,7 +969,6 @@ static int cc_aead_chain_data(struct cc_
	unsigned int size_for_map = req->assoclen + req->cryptlen;
	struct crypto_aead *tfm = crypto_aead_reqtfm(req);
	u32 sg_index = 0;
-	bool chained = false;
	bool is_gcm4543 = areq_ctx->is_gcm4543;
	u32 size_to_skip = req->assoclen;
@@ -1043,7 +989,7 @@ static int cc_aead_chain_data(struct cc_
	size_for_map += (direct == DRV_CRYPTO_DIRECTION_ENCRYPT) ?
			authsize : 0;
	src_mapped_nents = cc_get_sgl_nents(dev, req->src, size_for_map,
-					    &src_last_bytes, &chained);
+					    &src_last_bytes);
	sg_index = areq_ctx->src_sgl->length;
	//check where the data starts
	while (sg_index <= size_to_skip) {
@@ -1083,7 +1029,7 @@ static int cc_aead_chain_data(struct cc_
	}

	dst_mapped_nents = cc_get_sgl_nents(dev, req->dst, size_for_map,
-					    &dst_last_bytes, &chained);
+					    &dst_last_bytes);
	sg_index = areq_ctx->dst_sgl->length;
	offset = size_to_skip;
@@ -1484,7 +1430,7 @@ int cc_map_hash_request_update(struct cc
		dev_dbg(dev, " less than one block: curr_buff=%pK *curr_buff_cnt=0x%X copy_to=%pK\n",
			curr_buff, *curr_buff_cnt, &curr_buff[*curr_buff_cnt]);
		areq_ctx->in_nents =
-			cc_get_sgl_nents(dev, src, nbytes, &dummy, NULL);
+			cc_get_sgl_nents(dev, src, nbytes, &dummy);
		sg_copy_to_buffer(src, areq_ctx->in_nents,
				  &curr_buff[*curr_buff_cnt], nbytes);
		*curr_buff_cnt += nbytes;
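
For readers wondering why the special case could simply be dropped: sg_next()
already recognizes a chain entry and hops to the next table, which is exactly
what the removed else branch open-coded with its sg_page() cast. Below is a
minimal sketch (not part of the patch) of the same walk the reworked
cc_get_sgl_nents() relies on, applied to a list built with sg_chain(); the
helper name, buffers and sizes are made up for illustration and are not taken
from the driver.

#include <linux/scatterlist.h>

/*
 * Illustration only: same loop shape as the reworked cc_get_sgl_nents().
 * No sg_page() casts and no is_chained flag, because sg_next() transparently
 * skips chain entries and follows them into the next table.
 */
static unsigned int count_nents_for_len(struct scatterlist *sg_list,
					unsigned int nbytes)
{
	unsigned int nents = 0;

	while (nbytes && sg_list) {
		nents++;
		nbytes -= (sg_list->length > nbytes) ?
				nbytes : sg_list->length;
		sg_list = sg_next(sg_list);
	}
	return nents;
}

/* Hypothetical caller: two tables linked into one logical list.
 * buf_a/buf_b/buf_c are assumed to be ordinary kmalloc'ed buffers.
 */
static void chained_sg_example(void *buf_a, void *buf_b, void *buf_c)
{
	struct scatterlist first[3], second[1];

	sg_init_table(first, 3);	/* last slot reserved for the chain link */
	sg_set_buf(&first[0], buf_a, 512);
	sg_set_buf(&first[1], buf_b, 512);

	sg_init_table(second, 1);
	sg_set_buf(&second[0], buf_c, 1024);

	/* first[2] becomes a chain entry pointing at 'second' */
	sg_chain(first, 3, second);

	/* Walks 3 data entries (512 + 512 + 1024 bytes); the link is skipped. */
	count_nents_for_len(first, 2048);
}

Staying on the generic path also lets cc_map_sg() map the whole list with a
single dma_map_sg() call instead of the per-entry loop that the removed
cc_dma_map_sg() helper used to perform.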