From patchwork Wed May 25 06:12:29 2016
X-Patchwork-Submitter: "(Exiting) Baolin Wang"
X-Patchwork-Id: 68560
From: Baolin Wang
To: axboe@kernel.dk, agk@redhat.com, snitzer@redhat.com, dm-devel@redhat.com,
	herbert@gondor.apana.org.au, davem@davemloft.net
Cc: ebiggers3@gmail.com, js1304@gmail.com, tadeusz.struk@intel.com,
	smueller@chronox.de, standby24x7@gmail.com, shli@kernel.org,
	dan.j.williams@intel.com, martin.petersen@oracle.com, sagig@mellanox.com,
	kent.overstreet@gmail.com, keith.busch@intel.com, tj@kernel.org,
	ming.lei@canonical.com, broonie@kernel.org, arnd@arndb.de,
	linux-crypto@vger.kernel.org,
	linux-block@vger.kernel.org, linux-raid@vger.kernel.org,
	linux-kernel@vger.kernel.org, baolin.wang@linaro.org
Subject: [RFC 1/3] block: Introduce blk_bio_map_sg() to map one bio
Date: Wed, 25 May 2016 14:12:29 +0800
X-Mailer: git-send-email 1.7.9.5
X-Mailing-List: linux-crypto@vger.kernel.org

In dm-crypt, we need to map one whole bio to a scatterlist so that it can be
handed to the hardware crypto engine in one go, which improves encryption
efficiency. This patch therefore introduces blk_bio_map_sg() to map a single
bio to a scatterlist.

Signed-off-by: Baolin Wang <baolin.wang@linaro.org>
---
 block/blk-merge.c      |   45 +++++++++++++++++++++++++++++++++++++++++++++
 include/linux/blkdev.h |    3 +++
 2 files changed, 48 insertions(+)

-- 
1.7.9.5

diff --git a/block/blk-merge.c b/block/blk-merge.c
index 2613531..9b92af4 100644
--- a/block/blk-merge.c
+++ b/block/blk-merge.c
@@ -417,6 +417,51 @@ single_segment:
 }
 
 /*
+ * map a bio to scatterlist, return number of sg entries setup.
+ */
+int blk_bio_map_sg(struct request_queue *q, struct bio *bio,
+		   struct scatterlist *sglist,
+		   struct scatterlist **sg)
+{
+	struct bio_vec bvec, bvprv = { NULL };
+	struct bvec_iter iter;
+	int nsegs, cluster;
+
+	nsegs = 0;
+	cluster = blk_queue_cluster(q);
+
+	if (bio->bi_rw & REQ_DISCARD) {
+		/*
+		 * This is a hack - drivers should be neither modifying the
+		 * biovec, nor relying on bi_vcnt - but because of
+		 * blk_add_request_payload(), a discard bio may or may not have
+		 * a payload we need to set up here (thank you Christoph) and
+		 * bi_vcnt is really the only way of telling if we need to.
+		 */
+
+		if (bio->bi_vcnt)
+			goto single_segment;
+
+		return 0;
+	}
+
+	if (bio->bi_rw & REQ_WRITE_SAME) {
+single_segment:
+		*sg = sglist;
+		bvec = bio_iovec(bio);
+		sg_set_page(*sg, bvec.bv_page, bvec.bv_len, bvec.bv_offset);
+		return 1;
+	}
+
+	bio_for_each_segment(bvec, bio, iter)
+		__blk_segment_map_sg(q, &bvec, sglist, &bvprv, sg,
+				     &nsegs, &cluster);
+
+	return nsegs;
+}
+EXPORT_SYMBOL(blk_bio_map_sg);
+
+/*
  * map a request to scatterlist, return number of sg entries setup. Caller
  * must make sure sg can hold rq->nr_phys_segments entries
  */
diff --git a/include/linux/blkdev.h b/include/linux/blkdev.h
index 1fd8fdf..e5de4f8 100644
--- a/include/linux/blkdev.h
+++ b/include/linux/blkdev.h
@@ -1013,6 +1013,9 @@ extern void blk_queue_write_cache(struct request_queue *q, bool enabled, bool fu
 extern struct backing_dev_info *blk_get_backing_dev_info(struct block_device *bdev);
 extern int blk_rq_map_sg(struct request_queue *, struct request *,
			 struct scatterlist *);
+extern int blk_bio_map_sg(struct request_queue *q, struct bio *bio,
+			  struct scatterlist *sglist,
+			  struct scatterlist **sg);
 extern void blk_dump_rq_flags(struct request *, char *);
 extern long nr_blockdev_pages(void);
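
For illustration, a caller such as dm-crypt could use the new helper roughly
as below. This sketch is not part of the patch; the wrapper name, the sg_table
allocation and the GFP flags are assumptions made for the example:

/*
 * Illustrative sketch only: map all segments of one bio into a freshly
 * allocated scatterlist table using the new blk_bio_map_sg() helper.
 */
static int example_bio_to_sg(struct request_queue *q, struct bio *bio,
			     struct sg_table *sgt)
{
	struct scatterlist *last_sg = NULL;
	int nsegs;

	/* bio_segments() gives an upper bound on the entries we may need */
	if (sg_alloc_table(sgt, bio_segments(bio), GFP_KERNEL))
		return -ENOMEM;

	/* blk_bio_map_sg() fills sgt->sgl and returns the entries used */
	nsegs = blk_bio_map_sg(q, bio, sgt->sgl, &last_sg);
	if (last_sg)
		sg_mark_end(last_sg);

	return nsegs;
}

The resulting scatterlist can then be passed to the crypto API (for example as
the src/dst of an skcipher request), so the engine processes the whole bio in
one request instead of one small request per sector.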