From patchwork Fri Apr 24 16:44:26 2020
X-Patchwork-Submitter: Tero Kristo
X-Patchwork-Id: 197827
From: Tero Kristo
CC: Keerthy
Subject: [PATCHv2 3/7] crypto: sa2ul: add sha1/sha256/sha512 support
Date: Fri, 24 Apr 2020 19:44:26 +0300
Message-ID: <20200424164430.3288-4-t-kristo@ti.com>
In-Reply-To: <20200424164430.3288-1-t-kristo@ti.com>
References: <20200424164430.3288-1-t-kristo@ti.com>
X-Mailing-List: linux-crypto@vger.kernel.org

From: Keerthy

Add support for sha1/sha256/sha512 sa2ul based hardware authentication.
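
For illustration only (not part of this patch): a minimal kernel-space sketch of how one of the ahash algorithms registered below could be driven through the generic crypto API. The function name and error handling are hypothetical, and the input buffer must be DMA-able (not on the stack) for a hardware provider such as sa2ul.

#include <crypto/hash.h>
#include <linux/scatterlist.h>

static int example_sha256_digest(const u8 *buf, unsigned int len, u8 *out)
{
	struct crypto_ahash *tfm;
	struct ahash_request *req;
	struct scatterlist sg;
	DECLARE_CRYPTO_WAIT(wait);
	int ret;

	/* Picks the highest-priority "sha256" provider, e.g. "sha256-sa2ul" */
	tfm = crypto_alloc_ahash("sha256", 0, 0);
	if (IS_ERR(tfm))
		return PTR_ERR(tfm);

	req = ahash_request_alloc(tfm, GFP_KERNEL);
	if (!req) {
		crypto_free_ahash(tfm);
		return -ENOMEM;
	}

	sg_init_one(&sg, buf, len);
	ahash_request_set_callback(req, CRYPTO_TFM_REQ_MAY_BACKLOG,
				   crypto_req_done, &wait);
	ahash_request_set_crypt(req, &sg, out, len);

	/* Wait synchronously for the asynchronous hardware completion */
	ret = crypto_wait_req(crypto_ahash_digest(req), &wait);

	ahash_request_free(req);
	crypto_free_ahash(tfm);
	return ret;
}
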
Signed-off-by: Keerthy [t-kristo@ti.com: various bug fixes, major cleanups and refactoring of code] Signed-off-by: Tero Kristo --- drivers/crypto/sa2ul.c | 582 ++++++++++++++++++++++++++++++++++++++++- drivers/crypto/sa2ul.h | 31 ++- 2 files changed, 600 insertions(+), 13 deletions(-) diff --git a/drivers/crypto/sa2ul.c b/drivers/crypto/sa2ul.c index 67f3189d8e2c..80e7a2a3366b 100644 --- a/drivers/crypto/sa2ul.c +++ b/drivers/crypto/sa2ul.c @@ -18,8 +18,10 @@ #include #include +#include #include #include +#include #include "sa2ul.h" @@ -69,20 +71,32 @@ static struct device *sa_k3_dev; /** * struct sa_cmdl_cfg - Command label configuration descriptor + * @aalg: authentication algorithm ID * @enc_eng_id: Encryption Engine ID supported by the SA hardware + * @auth_eng_id: Authentication Engine ID * @iv_size: Initialization Vector size + * @akey: Authentication key + * @akey_len: Authentication key length */ struct sa_cmdl_cfg { + int aalg; u8 enc_eng_id; + u8 auth_eng_id; u8 iv_size; + const u8 *akey; + u16 akey_len; }; /** * struct algo_data - Crypto algorithm specific data * @enc_eng: Encryption engine info structure + * @auth_eng: Authentication engine info structure + * @auth_ctrl: Authentication control word + * @hash_size: Size of digest * @iv_idx: iv index in psdata * @iv_out_size: iv out size * @ealg_id: Encryption Algorithm ID + * @aalg_id: Authentication algorithm ID * @mci_enc: Mode Control Instruction for Encryption algorithm * @mci_dec: Mode Control Instruction for Decryption * @inv_key: Whether the encryption algorithm demands key inversion @@ -90,9 +104,13 @@ struct sa_cmdl_cfg { */ struct algo_data { struct sa_eng_info enc_eng; + struct sa_eng_info auth_eng; + u8 auth_ctrl; + u8 hash_size; u8 iv_idx; u8 iv_out_size; u8 ealg_id; + u8 aalg_id; u8 *mci_enc; u8 *mci_dec; bool inv_key; @@ -109,6 +127,7 @@ struct sa_alg_tmpl { u32 type; /* CRYPTO_ALG_TYPE from */ union { struct skcipher_alg skcipher; + struct ahash_alg ahash; } alg; bool registered; }; @@ -166,6 +185,9 @@ struct sa_req { u8 enc_offset; u16 enc_size; u8 *enc_iv; + u8 auth_offset; + u16 auth_size; + u8 *auth_iv; u32 type; u32 *cmdl; struct crypto_async_request *base; @@ -354,6 +376,20 @@ static int sa_set_sc_enc(struct algo_data *ad, const u8 *key, u16 key_sz, return 0; } +/* Set Security context for the authentication engine */ +static void sa_set_sc_auth(struct algo_data *ad, const u8 *key, u16 key_sz, + u8 *sc_buf) +{ + /* Set Authentication mode selector to hash processing */ + sc_buf[0] = SA_HASH_PROCESSING; + /* Auth SW ctrl word: bit[6]=1 (upload computed hash to TLR section) */ + sc_buf[1] = SA_UPLOAD_HASH_TO_TLR; + sc_buf[1] |= ad->auth_ctrl; + + /* basic hash */ + sc_buf[1] |= SA_BASIC_HASH; +} + static inline void sa_copy_iv(u32 *out, const u8 *iv, bool size16) { int j; @@ -369,8 +405,9 @@ static inline void sa_copy_iv(u32 *out, const u8 *iv, bool size16) static int sa_format_cmdl_gen(struct sa_cmdl_cfg *cfg, u8 *cmdl, struct sa_cmdl_upd_info *upd_info) { - u8 enc_offset = 0, total = 0; + u8 enc_offset = 0, auth_offset = 0, total = 0; u8 enc_next_eng = SA_ENG_ID_OUTPORT2; + u8 auth_next_eng = SA_ENG_ID_OUTPORT2; u32 *word_ptr = (u32 *)cmdl; int i; @@ -380,7 +417,10 @@ static int sa_format_cmdl_gen(struct sa_cmdl_cfg *cfg, u8 *cmdl, /* Iniialize the command update structure */ memzero_explicit(upd_info, sizeof(*upd_info)); - if (cfg->enc_eng_id != SA_ENG_ID_NONE) + if (cfg->enc_eng_id) + total = SA_CMDL_HEADER_SIZE_BYTES; + + if (cfg->auth_eng_id) total = SA_CMDL_HEADER_SIZE_BYTES; if (cfg->iv_size) @@ 
-388,7 +428,7 @@ static int sa_format_cmdl_gen(struct sa_cmdl_cfg *cfg, u8 *cmdl, enc_next_eng = SA_ENG_ID_OUTPORT2; - if (cfg->enc_eng_id != SA_ENG_ID_NONE) { + if (cfg->enc_eng_id) { upd_info->flags |= SA_CMDL_UPD_ENC; upd_info->enc_size.index = enc_offset >> 2; upd_info->enc_offset.index = upd_info->enc_size.index + 1; @@ -415,6 +455,16 @@ static int sa_format_cmdl_gen(struct sa_cmdl_cfg *cfg, u8 *cmdl, } } + if (cfg->auth_eng_id) { + upd_info->flags |= SA_CMDL_UPD_AUTH; + upd_info->auth_size.index = auth_offset >> 2; + upd_info->auth_offset.index = upd_info->auth_size.index + 1; + cmdl[auth_offset + SA_CMDL_OFFSET_NESC] = auth_next_eng; + cmdl[auth_offset + SA_CMDL_OFFSET_LABEL_LEN] = + SA_CMDL_HEADER_SIZE_BYTES; + total += SA_CMDL_HEADER_SIZE_BYTES; + } + total = roundup(total, 8); for (i = 0; i < total / 4; i++) @@ -448,6 +498,27 @@ static inline void sa_update_cmdl(struct sa_req *req, u32 *cmdl, } } } + + if (likely(upd_info->flags & SA_CMDL_UPD_AUTH)) { + cmdl[upd_info->auth_size.index] &= ~SA_CMDL_PAYLOAD_LENGTH_MASK; + cmdl[upd_info->auth_size.index] |= req->auth_size; + cmdl[upd_info->auth_offset.index] &= + ~SA_CMDL_SOP_BYPASS_LEN_MASK; + cmdl[upd_info->auth_offset.index] |= + ((u32)req->auth_offset << + __ffs(SA_CMDL_SOP_BYPASS_LEN_MASK)); + if (upd_info->flags & SA_CMDL_UPD_AUTH_IV) { + sa_copy_iv(&cmdl[upd_info->auth_iv.index], + req->auth_iv, + (upd_info->auth_iv.size > 8)); + } + if (upd_info->flags & SA_CMDL_UPD_AUX_KEY) { + int offset = (req->auth_size & 0xF) ? 4 : 0; + + memcpy(&cmdl[upd_info->aux_key_info.index], + &upd_info->aux_key[offset], 16); + } + } } /* Format SWINFO words to be sent to SA */ @@ -481,21 +552,34 @@ static void sa_dump_sc(u8 *buf, dma_addr_t dma_addr) static int sa_init_sc(struct sa_ctx_info *ctx, const u8 *enc_key, - u16 enc_key_sz, struct algo_data *ad, u8 enc, u32 *swinfo) + u16 enc_key_sz, const u8 *auth_key, u16 auth_key_sz, + struct algo_data *ad, u8 enc, u32 *swinfo) { int enc_sc_offset = 0; + int auth_sc_offset = 0; u8 *sc_buf = ctx->sc; u16 sc_id = ctx->sc_id; u8 first_engine; memzero_explicit(sc_buf, SA_CTX_MAX_SZ); - enc_sc_offset = SA_CTX_PHP_PE_CTX_SZ; + if (ad->enc_eng.eng_id) { + enc_sc_offset = SA_CTX_PHP_PE_CTX_SZ; + first_engine = ad->enc_eng.eng_id; + sc_buf[1] = SA_SCCTL_FE_ENC; + ad->hash_size = ad->iv_out_size; + } else { + enc_sc_offset = SA_CTX_PHP_PE_CTX_SZ; + auth_sc_offset = enc_sc_offset + ad->enc_eng.sc_size; + first_engine = ad->auth_eng.eng_id; + sc_buf[1] = SA_SCCTL_FE_AUTH_ENC; + if (!ad->hash_size) + return -EINVAL; + ad->hash_size = roundup(ad->hash_size, 8); + } /* SCCTL Owner info: 0=host, 1=CP_ACE */ sc_buf[SA_CTX_SCCTL_OWNER_OFFSET] = 0; - /* SCCTL F/E control */ - sc_buf[1] = SA_SCCTL_FE_ENC; memcpy(&sc_buf[2], &sc_id, 2); sc_buf[4] = 0x0; sc_buf[5] = PRIV_ID; @@ -509,16 +593,19 @@ int sa_init_sc(struct sa_ctx_info *ctx, const u8 *enc_key, return -EINVAL; } + /* Prepare context for authentication engine */ + if (ad->auth_eng.sc_size) + sa_set_sc_auth(ad, auth_key, auth_key_sz, + &sc_buf[auth_sc_offset]); + /* Set the ownership of context to CP_ACE */ sc_buf[SA_CTX_SCCTL_OWNER_OFFSET] = 0x80; /* swizzle the security context */ sa_swiz_128(sc_buf, SA_CTX_MAX_SZ); - /* Setup SWINFO */ - first_engine = ad->enc_eng.eng_id; sa_set_swinfo(first_engine, ctx->sc_id, ctx->sc_phys, 1, 0, - SA_SW_INFO_FLAG_EVICT, ad->iv_out_size, swinfo); + SA_SW_INFO_FLAG_EVICT, ad->hash_size, swinfo); sa_dump_sc(sc_buf, ctx->sc_phys); @@ -656,7 +743,8 @@ static int sa_cipher_setkey(struct crypto_skcipher *tfm, const u8 *key, return 
ret; /* Setup Encryption Security Context & Command label template */ - if (sa_init_sc(&ctx->enc, key, keylen, ad, 1, &ctx->enc.epib[1])) + if (sa_init_sc(&ctx->enc, key, keylen, NULL, 0, ad, 1, + &ctx->enc.epib[1])) goto badkey; cmdl_len = sa_format_cmdl_gen(&cfg, @@ -668,7 +756,8 @@ static int sa_cipher_setkey(struct crypto_skcipher *tfm, const u8 *key, ctx->enc.cmdl_size = cmdl_len; /* Setup Decryption Security Context & Command label template */ - if (sa_init_sc(&ctx->dec, key, keylen, ad, 0, &ctx->dec.epib[1])) + if (sa_init_sc(&ctx->dec, key, keylen, NULL, 0, ad, 0, + &ctx->dec.epib[1])) goto badkey; cfg.enc_eng_id = ad->enc_eng.eng_id; @@ -1061,6 +1150,386 @@ static int sa_decrypt(struct skcipher_request *req) return sa_cipher_run(req, req->iv, 0); } +static void sa_sha_cleanup_cache_data(struct sa_sha_req_ctx *ctx) +{ + struct scatterlist *sg; + + if (!ctx->sg_next) + return; + + while (ctx->src) { + sg = ctx->src; + ctx->src = sg_next(ctx->src); + free_pages((u64)sg_virt(sg), get_order(sg->length)); + kfree(sg); + } +} + +static void sa_sha_dma_in_callback(void *data) +{ + struct sa_rx_data *rxd = (struct sa_rx_data *)data; + struct ahash_request *req; + struct crypto_ahash *tfm; + unsigned int authsize; + struct sa_sha_req_ctx *rctx; + int i, sg_nents; + size_t ml, pl; + u32 *mdptr, *result; + + req = container_of(rxd->req, struct ahash_request, base); + tfm = crypto_ahash_reqtfm(req); + authsize = crypto_ahash_digestsize(tfm); + rctx = ahash_request_ctx(req); + + mdptr = (u32 *)dmaengine_desc_get_metadata_ptr(rxd->tx_in, &pl, &ml); + result = (u32 *)req->result; + + for (i = 0; i < (authsize / 4); i++) + result[i] = htonl(mdptr[i + 4]); + + sg_nents = sg_nents_for_len(req->src, req->nbytes); + dma_unmap_sg(rxd->ddev, req->src, sg_nents, DMA_FROM_DEVICE); + + kfree(rxd->split_src_sg); + + kfree(rxd); + + sa_sha_cleanup_cache_data(rctx); + + ahash_request_complete(req, 0); +} + +static int zero_message_process(struct ahash_request *req) +{ + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + int sa_digest_size = crypto_ahash_digestsize(tfm); + + switch (sa_digest_size) { + case SHA1_DIGEST_SIZE: + memcpy(req->result, sha1_zero_message_hash, sa_digest_size); + break; + case SHA256_DIGEST_SIZE: + memcpy(req->result, sha256_zero_message_hash, sa_digest_size); + break; + case SHA512_DIGEST_SIZE: + memcpy(req->result, sha512_zero_message_hash, sa_digest_size); + break; + default: + return -EINVAL; + } + + return 0; +} + +static int sa_sha_run(struct ahash_request *req) +{ + struct sa_tfm_ctx *ctx = crypto_ahash_ctx(crypto_ahash_reqtfm(req)); + struct sa_sha_req_ctx *rctx = ahash_request_ctx(req); + int ret; + struct sa_req sa_req = { 0 }; + size_t auth_len; + + if (!rctx->src) { + rctx->src = req->src; + rctx->len = req->nbytes; + } + + auth_len = rctx->len; + + if (!auth_len) + return zero_message_process(req); + + if (auth_len > SA_MAX_DATA_SZ || + (auth_len >= SA_UNSAFE_DATA_SZ_MIN && + auth_len <= SA_UNSAFE_DATA_SZ_MAX)) { + struct crypto_wait wait; + struct ahash_request *subreq; + + crypto_init_wait(&wait); + + subreq = ahash_request_alloc(ctx->fallback.ahash, GFP_KERNEL); + ahash_request_set_tfm(subreq, ctx->fallback.ahash); + subreq->base.flags = req->base.flags & CRYPTO_TFM_REQ_MAY_SLEEP; + subreq->nbytes = auth_len; + subreq->src = rctx->src; + subreq->result = req->result; + + ahash_request_set_callback(subreq, CRYPTO_TFM_REQ_MAY_BACKLOG, + crypto_req_done, &wait); + + ret = crypto_ahash_digest(subreq); + ret |= crypto_wait_req(ret, &wait); + + 
ahash_request_free(subreq); + + sa_sha_cleanup_cache_data(rctx); + + ahash_request_complete(req, ret); + + return ret; + } + + sa_req.size = auth_len; + sa_req.auth_size = auth_len; + sa_req.src = rctx->src; + sa_req.dst = rctx->src; + sa_req.enc = true; + sa_req.type = CRYPTO_ALG_TYPE_AHASH; + sa_req.callback = sa_sha_dma_in_callback; + sa_req.mdata_size = 28; + sa_req.ctx = ctx; + sa_req.base = &req->base; + + return sa_run(&sa_req); +} + +static int sa_sha_setup(struct sa_tfm_ctx *ctx, struct algo_data *ad) +{ + int bs = crypto_shash_blocksize(ctx->shash); + int cmdl_len; + struct sa_cmdl_cfg cfg; + + ad->enc_eng.sc_size = SA_CTX_ENC_TYPE1_SZ; + ad->auth_eng.eng_id = SA_ENG_ID_AM1; + ad->auth_eng.sc_size = SA_CTX_AUTH_TYPE2_SZ; + + memset(ctx->authkey, 0, bs); + memset(&cfg, 0, sizeof(cfg)); + cfg.aalg = ad->aalg_id; + cfg.enc_eng_id = ad->enc_eng.eng_id; + cfg.auth_eng_id = ad->auth_eng.eng_id; + cfg.iv_size = 0; + cfg.akey = NULL; + cfg.akey_len = 0; + + /* Setup Encryption Security Context & Command label template */ + if (sa_init_sc(&ctx->enc, NULL, 0, NULL, 0, ad, 0, + &ctx->enc.epib[1])) + goto badkey; + + cmdl_len = sa_format_cmdl_gen(&cfg, + (u8 *)ctx->enc.cmdl, + &ctx->enc.cmdl_upd_info); + if (cmdl_len <= 0 || (cmdl_len > SA_MAX_CMDL_WORDS * sizeof(u32))) + goto badkey; + + ctx->enc.cmdl_size = cmdl_len; + + return 0; + +badkey: + dev_err(sa_k3_dev, "%s: badkey\n", __func__); + return -EINVAL; +} + +static int sa_sha_cra_init_alg(struct crypto_tfm *tfm, const char *alg_base) +{ + struct sa_tfm_ctx *ctx = crypto_tfm_ctx(tfm); + struct sa_crypto_data *data = dev_get_drvdata(sa_k3_dev); + int ret; + + memset(ctx, 0, sizeof(*ctx)); + ctx->dev_data = data; + ret = sa_init_ctx_info(&ctx->enc, data); + if (ret) + return ret; + + if (alg_base) { + ctx->shash = crypto_alloc_shash(alg_base, 0, + CRYPTO_ALG_NEED_FALLBACK); + if (IS_ERR(ctx->shash)) { + dev_err(sa_k3_dev, "base driver %s couldn't be loaded\n", + alg_base); + return PTR_ERR(ctx->shash); + } + /* for fallback */ + ctx->fallback.ahash = + crypto_alloc_ahash(alg_base, 0, + CRYPTO_ALG_NEED_FALLBACK); + if (IS_ERR(ctx->fallback.ahash)) { + dev_err(ctx->dev_data->dev, + "Could not load fallback driver\n"); + return PTR_ERR(ctx->fallback.ahash); + } + } + + dev_dbg(sa_k3_dev, "%s(0x%p) sc-ids(0x%x(0x%pad), 0x%x(0x%pad))\n", + __func__, tfm, ctx->enc.sc_id, &ctx->enc.sc_phys, + ctx->dec.sc_id, &ctx->dec.sc_phys); + + crypto_ahash_set_reqsize(__crypto_ahash_cast(tfm), + sizeof(struct sa_sha_req_ctx)); + + return 0; +} + +static int sa_sha_digest(struct ahash_request *req) +{ + struct sa_sha_req_ctx *rctx = ahash_request_ctx(req); + + memzero_explicit(rctx, sizeof(*rctx)); + return sa_sha_run(req); +} + +static int sa_sha_init(struct ahash_request *req) +{ + struct crypto_ahash *tfm = crypto_ahash_reqtfm(req); + struct sa_sha_req_ctx *rctx = ahash_request_ctx(req); + + dev_dbg(sa_k3_dev, "init: digest size: %d, rctx=%llx\n", + crypto_ahash_digestsize(tfm), (u64)rctx); + + memzero_explicit(rctx, sizeof(*rctx)); + + return 0; +} + +static int sa_sha_update(struct ahash_request *req) +{ + struct sa_sha_req_ctx *rctx = ahash_request_ctx(req); + struct scatterlist *sg; + void *buf; + int pages; + struct page *pg; + + if (!req->nbytes) + return 0; + + if (rctx->buf_free >= req->nbytes) { + pg = sg_page(rctx->sg_next); + buf = kmap_atomic(pg); + scatterwalk_map_and_copy(buf + rctx->offset, req->src, 0, + req->nbytes, 0); + kunmap_atomic(buf); + rctx->buf_free -= req->nbytes; + rctx->sg_next->length += req->nbytes; + rctx->offset += 
req->nbytes; + } else { + pages = get_order(req->nbytes); + buf = (void *)__get_free_pages(GFP_ATOMIC, pages); + if (!buf) + return -ENOMEM; + + sg = kzalloc(sizeof(*sg) * 2, GFP_KERNEL); + if (!sg) + return -ENOMEM; + + sg_init_table(sg, 1); + sg_set_buf(sg, buf, req->nbytes); + scatterwalk_map_and_copy(buf, req->src, 0, req->nbytes, 0); + + rctx->buf_free = (PAGE_SIZE << pages) - req->nbytes; + + if (rctx->sg_next) { + sg_unmark_end(rctx->sg_next); + sg_chain(rctx->sg_next, 2, sg); + } else { + rctx->src = sg; + } + + rctx->sg_next = sg; + rctx->src_nents++; + + rctx->offset = req->nbytes; + } + + rctx->len += req->nbytes; + + return 0; +} + +static int sa_sha_final(struct ahash_request *req) +{ + return sa_sha_run(req); +} + +static int sa_sha_finup(struct ahash_request *req) +{ + sa_sha_update(req); + + return sa_sha_run(req); +} + +static int sa_sha_import(struct ahash_request *req, const void *in) +{ + struct sa_sha_req_ctx *rctx = ahash_request_ctx(req); + + memcpy(rctx, in, sizeof(*rctx)); + return 0; +} + +static int sa_sha_export(struct ahash_request *req, void *out) +{ + struct sa_sha_req_ctx *rctx = ahash_request_ctx(req); + + memcpy(out, rctx, sizeof(*rctx)); + return 0; +} + +static int sa_sha1_cra_init(struct crypto_tfm *tfm) +{ + struct algo_data ad = { 0 }; + struct sa_tfm_ctx *ctx = crypto_tfm_ctx(tfm); + + sa_sha_cra_init_alg(tfm, "sha1"); + + ad.aalg_id = SA_AALG_ID_SHA1; + ad.hash_size = SHA1_DIGEST_SIZE; + ad.auth_ctrl = SA_AUTH_SW_CTRL_SHA1; + + sa_sha_setup(ctx, &ad); + + return 0; +} + +static int sa_sha256_cra_init(struct crypto_tfm *tfm) +{ + struct algo_data ad = { 0 }; + struct sa_tfm_ctx *ctx = crypto_tfm_ctx(tfm); + + sa_sha_cra_init_alg(tfm, "sha256"); + + ad.aalg_id = SA_AALG_ID_SHA2_256; + ad.hash_size = SHA256_DIGEST_SIZE; + ad.auth_ctrl = SA_AUTH_SW_CTRL_SHA256; + + sa_sha_setup(ctx, &ad); + + return 0; +} + +static int sa_sha512_cra_init(struct crypto_tfm *tfm) +{ + struct algo_data ad = { 0 }; + struct sa_tfm_ctx *ctx = crypto_tfm_ctx(tfm); + + sa_sha_cra_init_alg(tfm, "sha512"); + + ad.aalg_id = SA_AALG_ID_SHA2_512; + ad.hash_size = SHA512_DIGEST_SIZE; + ad.auth_ctrl = SA_AUTH_SW_CTRL_SHA512; + + sa_sha_setup(ctx, &ad); + + return 0; +} + +static void sa_sha_cra_exit(struct crypto_tfm *tfm) +{ + struct sa_tfm_ctx *ctx = crypto_tfm_ctx(tfm); + struct sa_crypto_data *data = dev_get_drvdata(sa_k3_dev); + + dev_dbg(sa_k3_dev, "%s(0x%p) sc-ids(0x%x(0x%pad), 0x%x(0x%pad))\n", + __func__, tfm, ctx->enc.sc_id, &ctx->enc.sc_phys, + ctx->dec.sc_id, &ctx->dec.sc_phys); + + if (crypto_tfm_alg_type(tfm) == CRYPTO_ALG_TYPE_AHASH) + sa_free_ctx_info(&ctx->enc, data); + + crypto_free_shash(ctx->shash); + crypto_free_ahash(ctx->fallback.ahash); +} + static struct sa_alg_tmpl sa_algs[] = { { .type = CRYPTO_ALG_TYPE_SKCIPHER, @@ -1152,6 +1621,90 @@ static struct sa_alg_tmpl sa_algs[] = { .decrypt = sa_decrypt, } }, + { + .type = CRYPTO_ALG_TYPE_AHASH, + .alg.ahash = { + .halg.base = { + .cra_name = "sha1", + .cra_driver_name = "sha1-sa2ul", + .cra_priority = 400, + .cra_flags = CRYPTO_ALG_TYPE_AHASH | + CRYPTO_ALG_ASYNC | + CRYPTO_ALG_KERN_DRIVER_ONLY | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize = SHA1_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct sa_tfm_ctx), + .cra_module = THIS_MODULE, + .cra_init = sa_sha1_cra_init, + .cra_exit = sa_sha_cra_exit, + }, + .halg.digestsize = SHA1_DIGEST_SIZE, + .halg.statesize = sizeof(struct sa_sha_req_ctx), + .init = sa_sha_init, + .update = sa_sha_update, + .final = sa_sha_final, + .finup = sa_sha_finup, + .digest = sa_sha_digest, + 
.export = sa_sha_export, + .import = sa_sha_import, + }, + }, + { + .type = CRYPTO_ALG_TYPE_AHASH, + .alg.ahash = { + .halg.base = { + .cra_name = "sha256", + .cra_driver_name = "sha256-sa2ul", + .cra_priority = 400, + .cra_flags = CRYPTO_ALG_TYPE_AHASH | + CRYPTO_ALG_ASYNC | + CRYPTO_ALG_KERN_DRIVER_ONLY | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize = SHA256_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct sa_tfm_ctx), + .cra_module = THIS_MODULE, + .cra_init = sa_sha256_cra_init, + .cra_exit = sa_sha_cra_exit, + }, + .halg.digestsize = SHA256_DIGEST_SIZE, + .halg.statesize = sizeof(struct sa_sha_req_ctx), + .init = sa_sha_init, + .update = sa_sha_update, + .final = sa_sha_final, + .finup = sa_sha_finup, + .digest = sa_sha_digest, + .export = sa_sha_export, + .import = sa_sha_import, + }, + }, + { + .type = CRYPTO_ALG_TYPE_AHASH, + .alg.ahash = { + .halg.base = { + .cra_name = "sha512", + .cra_driver_name = "sha512-sa2ul", + .cra_priority = 400, + .cra_flags = CRYPTO_ALG_TYPE_AHASH | + CRYPTO_ALG_ASYNC | + CRYPTO_ALG_KERN_DRIVER_ONLY | + CRYPTO_ALG_NEED_FALLBACK, + .cra_blocksize = SHA512_BLOCK_SIZE, + .cra_ctxsize = sizeof(struct sa_tfm_ctx), + .cra_module = THIS_MODULE, + .cra_init = sa_sha512_cra_init, + .cra_exit = sa_sha_cra_exit, + }, + .halg.digestsize = SHA512_DIGEST_SIZE, + .halg.statesize = sizeof(struct sa_sha_req_ctx), + .init = sa_sha_init, + .update = sa_sha_update, + .final = sa_sha_final, + .finup = sa_sha_finup, + .digest = sa_sha_digest, + .export = sa_sha_export, + .import = sa_sha_import, + }, + }, }; /* Register the algorithms in crypto framework */ @@ -1166,6 +1719,9 @@ void sa_register_algos(const struct device *dev) if (type == CRYPTO_ALG_TYPE_SKCIPHER) { alg_name = sa_algs[i].alg.skcipher.base.cra_name; err = crypto_register_skcipher(&sa_algs[i].alg.skcipher); + } else if (type == CRYPTO_ALG_TYPE_AHASH) { + alg_name = sa_algs[i].alg.ahash.halg.base.cra_name; + err = crypto_register_ahash(&sa_algs[i].alg.ahash); } else { dev_err(dev, "un-supported crypto algorithm (%d)", @@ -1192,6 +1748,8 @@ void sa_unregister_algos(const struct device *dev) continue; if (type == CRYPTO_ALG_TYPE_SKCIPHER) crypto_unregister_skcipher(&sa_algs[i].alg.skcipher); + else if (type == CRYPTO_ALG_TYPE_AHASH) + crypto_unregister_ahash(&sa_algs[i].alg.ahash); sa_algs[i].registered = false; } diff --git a/drivers/crypto/sa2ul.h b/drivers/crypto/sa2ul.h index 45ba86cb5d11..733b00bc6e0f 100644 --- a/drivers/crypto/sa2ul.h +++ b/drivers/crypto/sa2ul.h @@ -73,7 +73,6 @@ struct sa_tfm_ctx; #define SA_ENG_ID_AM1 4 /* Auth. 
engine with SHA1/MD5/SHA2 core */ #define SA_ENG_ID_AM2 5 /* Authentication engine for pass 2 */ #define SA_ENG_ID_OUTPORT2 20 /* Egress module 2 */ -#define SA_ENG_ID_NONE 0xff /* * Command Label Definitions @@ -156,6 +155,13 @@ struct sa_tfm_ctx; #define SA_ALIGN_MASK (sizeof(u32) - 1) #define SA_ALIGNED __aligned(32) +#define SA_AUTH_SW_CTRL_MD5 1 +#define SA_AUTH_SW_CTRL_SHA1 2 +#define SA_AUTH_SW_CTRL_SHA224 3 +#define SA_AUTH_SW_CTRL_SHA256 4 +#define SA_AUTH_SW_CTRL_SHA384 5 +#define SA_AUTH_SW_CTRL_SHA512 6 + /* SA2UL can only handle maximum data size of 64KB */ #define SA_MAX_DATA_SZ U16_MAX @@ -297,15 +303,38 @@ struct sa_tfm_ctx { struct sa_crypto_data *dev_data; struct sa_ctx_info enc; struct sa_ctx_info dec; + struct sa_ctx_info auth; int keylen; int iv_idx; u32 key[AES_KEYSIZE_256 / sizeof(u32)]; + u8 authkey[SHA512_BLOCK_SIZE]; + struct crypto_shash *shash; /* for fallback */ union { struct crypto_sync_skcipher *skcipher; + struct crypto_ahash *ahash; } fallback; }; +/** + * struct sa_sha_req_ctx: Structure used for sha request + * @dev_data: struct sa_crypto_data pointer + * @cmdl: Complete command label with psdata and epib included + * @src: source payload scatterlist pointer + * @src_nents: Number of nodes in source scatterlist + */ +struct sa_sha_req_ctx { + struct sa_crypto_data *dev_data; + u32 cmdl[SA_MAX_CMDL_WORDS + SA_PSDATA_CTX_WORDS]; + struct scatterlist *src; + unsigned int src_nents; + u32 mode; + struct scatterlist *sg_next; + int len; + int buf_free; + int offset; +}; + enum sa_submode { SA_MODE_GEN = 0, SA_MODE_CCM, From patchwork Fri Apr 24 16:44:27 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Tero Kristo X-Patchwork-Id: 197829 Return-Path: X-Spam-Checker-Version: SpamAssassin 3.4.0 (2014-02-07) on aws-us-west-2-korg-lkml-1.web.codeaurora.org X-Spam-Level: X-Spam-Status: No, score=-9.8 required=3.0 tests=DKIMWL_WL_HIGH, DKIM_SIGNED, DKIM_VALID, DKIM_VALID_AU, HEADER_FROM_DIFFERENT_DOMAINS, INCLUDES_PATCH, MAILING_LIST_MULTI, SIGNED_OFF_BY, SPF_HELO_NONE, SPF_PASS, URIBL_BLOCKED, USER_AGENT_GIT autolearn=ham autolearn_force=no version=3.4.0 Received: from mail.kernel.org (mail.kernel.org [198.145.29.99]) by smtp.lore.kernel.org (Postfix) with ESMTP id 0D601C54FD0 for ; Fri, 24 Apr 2020 16:45:06 +0000 (UTC) Received: from vger.kernel.org (vger.kernel.org [23.128.96.18]) by mail.kernel.org (Postfix) with ESMTP id C767220781 for ; Fri, 24 Apr 2020 16:45:05 +0000 (UTC) Authentication-Results: mail.kernel.org; dkim=pass (1024-bit key) header.d=ti.com header.i=@ti.com header.b="vrUbeVQn" Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand id S1728140AbgDXQpF (ORCPT ); Fri, 24 Apr 2020 12:45:05 -0400 Received: from lelv0142.ext.ti.com ([198.47.23.249]:52018 "EHLO lelv0142.ext.ti.com" rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP id S1728019AbgDXQpE (ORCPT ); Fri, 24 Apr 2020 12:45:04 -0400 Received: from fllv0035.itg.ti.com ([10.64.41.0]) by lelv0142.ext.ti.com (8.15.2/8.15.2) with ESMTP id 03OGiwuE067865; Fri, 24 Apr 2020 11:44:58 -0500 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=ti.com; s=ti-com-17Q1; t=1587746698; bh=weFJOqNTDWPkcp7YzoTL7bykkwvYfFXwYc8zSMRdiI4=; h=From:To:CC:Subject:Date:In-Reply-To:References; b=vrUbeVQn7pm1Bs4g8pz4NXr8IAeYD2TZBo7pxcWMGFjqka+n4q+ZqMNtzLaxdoQb0 KMAdExU2W5Q93cPvJ/Qoa1737vbCp/15aU1eYCSzbF2Vonj2lGXf4oHuRBsNXXsNIB SvbcEeTkzV68K9dNqIxAmCGfKSd1SSJbykSPC1DM= Received: from DFLE104.ent.ti.com (dfle104.ent.ti.com 
[10.64.6.25]) by fllv0035.itg.ti.com (8.15.2/8.15.2) with ESMTP id 03OGiw1v044812; Fri, 24 Apr 2020 11:44:58 -0500 Received: from DFLE113.ent.ti.com (10.64.6.34) by DFLE104.ent.ti.com (10.64.6.25) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3; Fri, 24 Apr 2020 11:44:57 -0500 Received: from fllv0039.itg.ti.com (10.64.41.19) by DFLE113.ent.ti.com (10.64.6.34) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1979.3 via Frontend Transport; Fri, 24 Apr 2020 11:44:57 -0500 Received: from sokoban.ti.com (ileax41-snat.itg.ti.com [10.172.224.153]) by fllv0039.itg.ti.com (8.15.2/8.15.2) with ESMTP id 03OGinT9033554; Fri, 24 Apr 2020 11:44:56 -0500 From: Tero Kristo To: , , CC: Keerthy Subject: [PATCHv2 4/7] crypto: sa2ul: Add AEAD algorithm support Date: Fri, 24 Apr 2020 19:44:27 +0300 Message-ID: <20200424164430.3288-5-t-kristo@ti.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200424164430.3288-1-t-kristo@ti.com> References: <20200424164430.3288-1-t-kristo@ti.com> MIME-Version: 1.0 X-EXCLAIMER-MD-CONFIG: e1e8a2fd-e40a-4ac6-ac9b-f7e9cc9ee180 Sender: linux-crypto-owner@vger.kernel.org Precedence: bulk List-ID: X-Mailing-List: linux-crypto@vger.kernel.org From: Keerthy Add support for sa2ul hardware AEAD for hmac(sha256),cbc(aes) and hmac(sha1),cbc(aes) algorithms. Signed-off-by: Keerthy [t-kristo@ti.com: number of bug fixes, major refactoring and cleanup of code] Signed-off-by: Tero Kristo --- drivers/crypto/sa2ul.c | 539 +++++++++++++++++++++++++++++++++++++++-- drivers/crypto/sa2ul.h | 1 + 2 files changed, 519 insertions(+), 21 deletions(-) diff --git a/drivers/crypto/sa2ul.c b/drivers/crypto/sa2ul.c index 80e7a2a3366b..899ef90a3d76 100644 --- a/drivers/crypto/sa2ul.c +++ b/drivers/crypto/sa2ul.c @@ -9,6 +9,7 @@ * Tero Kristo */ #include +#include #include #include #include @@ -17,7 +18,9 @@ #include #include +#include #include +#include #include #include #include @@ -77,6 +80,7 @@ static struct device *sa_k3_dev; * @iv_size: Initialization Vector size * @akey: Authentication key * @akey_len: Authentication key length + * @enc: True, if this is an encode request */ struct sa_cmdl_cfg { int aalg; @@ -85,6 +89,7 @@ struct sa_cmdl_cfg { u8 iv_size; const u8 *akey; u16 akey_len; + bool enc; }; /** @@ -101,6 +106,8 @@ struct sa_cmdl_cfg { * @mci_dec: Mode Control Instruction for Decryption * @inv_key: Whether the encryption algorithm demands key inversion * @ctx: Pointer to the algorithm context + * @keyed_mac: Whether the authentication algorithm has key + * @prep_iopad: Function pointer to generate intermediate ipad/opad */ struct algo_data { struct sa_eng_info enc_eng; @@ -115,6 +122,9 @@ struct algo_data { u8 *mci_dec; bool inv_key; struct sa_tfm_ctx *ctx; + bool keyed_mac; + void (*prep_iopad)(struct algo_data *algo, const u8 *key, + u16 key_sz, u32 *ipad, u32 *opad); }; /** @@ -128,6 +138,7 @@ struct sa_alg_tmpl { union { struct skcipher_alg skcipher; struct ahash_alg ahash; + struct aead_alg aead; } alg; bool registered; }; @@ -231,6 +242,38 @@ static u8 mci_cbc_dec_array[3][MODE_CONTROL_BYTES] = { 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, }; +/* + * Mode Control Instructions for various Key lengths 128, 192, 256 + * For CBC (Cipher Block Chaining) mode for encryption + */ +static u8 mci_cbc_enc_no_iv_array[3][MODE_CONTROL_BYTES] = { + { 0x21, 0x00, 0x00, 0x18, 0x88, 0x0a, 0xaa, 0x4b, 0x7e, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 
0x00, 0x00, 0x00, 0x00, 0x00 }, + { 0x21, 0x00, 0x00, 0x18, 0x88, 0x4a, 0xaa, 0x4b, 0x7e, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + { 0x21, 0x00, 0x00, 0x18, 0x88, 0x8a, 0xaa, 0x4b, 0x7e, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, +}; + +/* + * Mode Control Instructions for various Key lengths 128, 192, 256 + * For CBC (Cipher Block Chaining) mode for decryption + */ +static u8 mci_cbc_dec_no_iv_array[3][MODE_CONTROL_BYTES] = { + { 0x31, 0x00, 0x00, 0x80, 0x8a, 0xca, 0x98, 0xf4, 0x40, 0xc0, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + { 0x31, 0x00, 0x00, 0x84, 0x8a, 0xca, 0x98, 0xf4, 0x40, 0xc0, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, + { 0x31, 0x00, 0x00, 0x88, 0x8a, 0xca, 0x98, 0xf4, 0x40, 0xc0, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, + 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00 }, +}; + /* * Mode Control Instructions for various Key lengths 128, 192, 256 * For ECB (Electronic Code Book) mode for encryption @@ -310,6 +353,82 @@ static void sa_swiz_128(u8 *in, u16 len) } } +/* Prepare the ipad and opad from key as per SHA algorithm step 1*/ +static void prepare_kiopad(u8 *k_ipad, u8 *k_opad, const u8 *key, u16 key_sz) +{ + int i; + + for (i = 0; i < key_sz; i++) { + k_ipad[i] = key[i] ^ 0x36; + k_opad[i] = key[i] ^ 0x5c; + } + + /* Instead of XOR with 0 */ + for (; i < SHA_MESSAGE_BYTES; i++) { + k_ipad[i] = 0x36; + k_opad[i] = 0x5c; + } +} + +static void sa_export_shash(struct shash_desc *hash, int block_size, + int digest_size, u32 *out) +{ + union { + struct sha1_state sha1; + struct sha256_state sha256; + struct sha512_state sha512; + } sha; + void *state; + u32 *result; + int i; + + switch (digest_size) { + case SHA1_DIGEST_SIZE: + state = &sha.sha1; + result = sha.sha1.state; + break; + case SHA256_DIGEST_SIZE: + state = &sha.sha256; + result = sha.sha256.state; + break; + default: + dev_err(sa_k3_dev, "%s: bad digest_size=%d\n", __func__, + digest_size); + return; + } + + crypto_shash_export(hash, state); + + for (i = 0; i < digest_size >> 2; i++) + out[i] = cpu_to_be32(result[i]); +} + +static void sa_prepare_iopads(struct algo_data *data, const u8 *key, + u16 key_sz, u32 *ipad, u32 *opad) +{ + SHASH_DESC_ON_STACK(shash, data->ctx->shash); + int block_size = crypto_shash_blocksize(data->ctx->shash); + int digest_size = crypto_shash_digestsize(data->ctx->shash); + u8 k_ipad[SHA_MESSAGE_BYTES]; + u8 k_opad[SHA_MESSAGE_BYTES]; + + shash->tfm = data->ctx->shash; + + prepare_kiopad(k_ipad, k_opad, key, key_sz); + + memzero_explicit(ipad, block_size); + memzero_explicit(opad, block_size); + + crypto_shash_init(shash); + crypto_shash_update(shash, k_ipad, block_size); + sa_export_shash(shash, block_size, digest_size, ipad); + + crypto_shash_init(shash); + crypto_shash_update(shash, k_opad, block_size); + + sa_export_shash(shash, block_size, digest_size, opad); +} + /* Derive the inverse key used in AES-CBC decryption operation */ static inline int sa_aes_inv_key(u8 *inv_key, const u8 *key, u16 key_sz) { @@ -380,14 +499,26 @@ static int sa_set_sc_enc(struct algo_data *ad, const u8 *key, u16 key_sz, static void sa_set_sc_auth(struct algo_data *ad, const u8 *key, u16 key_sz, u8 *sc_buf) { + u32 ipad[64], opad[64]; + /* Set Authentication mode selector to hash processing */ sc_buf[0] = 
SA_HASH_PROCESSING; /* Auth SW ctrl word: bit[6]=1 (upload computed hash to TLR section) */ sc_buf[1] = SA_UPLOAD_HASH_TO_TLR; sc_buf[1] |= ad->auth_ctrl; - /* basic hash */ - sc_buf[1] |= SA_BASIC_HASH; + /* Copy the keys or ipad/opad */ + if (ad->keyed_mac) { + ad->prep_iopad(ad, key, key_sz, ipad, opad); + + /* Copy ipad to AuthKey */ + memcpy(&sc_buf[32], ipad, ad->hash_size); + /* Copy opad to Aux-1 */ + memcpy(&sc_buf[64], opad, ad->hash_size); + } else { + /* basic hash */ + sc_buf[1] |= SA_BASIC_HASH; + } } static inline void sa_copy_iv(u32 *out, const u8 *iv, bool size16) @@ -417,16 +548,18 @@ static int sa_format_cmdl_gen(struct sa_cmdl_cfg *cfg, u8 *cmdl, /* Iniialize the command update structure */ memzero_explicit(upd_info, sizeof(*upd_info)); - if (cfg->enc_eng_id) - total = SA_CMDL_HEADER_SIZE_BYTES; - - if (cfg->auth_eng_id) - total = SA_CMDL_HEADER_SIZE_BYTES; + if (cfg->enc_eng_id && cfg->auth_eng_id) { + if (cfg->enc) { + auth_offset = SA_CMDL_HEADER_SIZE_BYTES; + enc_next_eng = cfg->auth_eng_id; - if (cfg->iv_size) - total += cfg->iv_size; - - enc_next_eng = SA_ENG_ID_OUTPORT2; + if (cfg->iv_size) + auth_offset += cfg->iv_size; + } else { + enc_offset = SA_CMDL_HEADER_SIZE_BYTES; + auth_next_eng = cfg->enc_eng_id; + } + } if (cfg->enc_eng_id) { upd_info->flags |= SA_CMDL_UPD_ENC; @@ -447,11 +580,11 @@ static int sa_format_cmdl_gen(struct sa_cmdl_cfg *cfg, u8 *cmdl, cmdl[enc_offset + SA_CMDL_OFFSET_OPTION_CTRL1] = (SA_CTX_ENC_AUX2_OFFSET | (cfg->iv_size >> 3)); - enc_offset += SA_CMDL_HEADER_SIZE_BYTES + cfg->iv_size; + total += SA_CMDL_HEADER_SIZE_BYTES + cfg->iv_size; } else { cmdl[enc_offset + SA_CMDL_OFFSET_LABEL_LEN] = SA_CMDL_HEADER_SIZE_BYTES; - enc_offset += SA_CMDL_HEADER_SIZE_BYTES; + total += SA_CMDL_HEADER_SIZE_BYTES; } } @@ -559,23 +692,28 @@ int sa_init_sc(struct sa_ctx_info *ctx, const u8 *enc_key, int auth_sc_offset = 0; u8 *sc_buf = ctx->sc; u16 sc_id = ctx->sc_id; - u8 first_engine; + u8 first_engine = 0; memzero_explicit(sc_buf, SA_CTX_MAX_SZ); - if (ad->enc_eng.eng_id) { - enc_sc_offset = SA_CTX_PHP_PE_CTX_SZ; - first_engine = ad->enc_eng.eng_id; - sc_buf[1] = SA_SCCTL_FE_ENC; - ad->hash_size = ad->iv_out_size; - } else { + if (ad->auth_eng.eng_id) { + if (enc) + first_engine = ad->enc_eng.eng_id; + else + first_engine = ad->auth_eng.eng_id; + enc_sc_offset = SA_CTX_PHP_PE_CTX_SZ; auth_sc_offset = enc_sc_offset + ad->enc_eng.sc_size; - first_engine = ad->auth_eng.eng_id; sc_buf[1] = SA_SCCTL_FE_AUTH_ENC; if (!ad->hash_size) return -EINVAL; ad->hash_size = roundup(ad->hash_size, 8); + + } else if (ad->enc_eng.eng_id && !ad->auth_eng.eng_id) { + enc_sc_offset = SA_CTX_PHP_PE_CTX_SZ; + first_engine = ad->enc_eng.eng_id; + sc_buf[1] = SA_SCCTL_FE_ENC; + ad->hash_size = ad->iv_out_size; } /* SCCTL Owner info: 0=host, 1=CP_ACE */ @@ -1530,6 +1668,305 @@ static void sa_sha_cra_exit(struct crypto_tfm *tfm) crypto_free_ahash(ctx->fallback.ahash); } +static void sa_aead_dma_in_callback(void *data) +{ + struct sa_rx_data *rxd = (struct sa_rx_data *)data; + struct aead_request *req; + struct crypto_aead *tfm; + unsigned int start; + unsigned int authsize; + u8 auth_tag[SA_MAX_AUTH_TAG_SZ]; + size_t pl, ml; + int i, sglen; + int err = 0; + u16 auth_len; + u32 *mdptr; + bool diff_dst; + enum dma_data_direction dir_src; + + req = container_of(rxd->req, struct aead_request, base); + tfm = crypto_aead_reqtfm(req); + start = req->assoclen + req->cryptlen; + authsize = crypto_aead_authsize(tfm); + + diff_dst = (req->src != req->dst) ? 
true : false; + dir_src = diff_dst ? DMA_TO_DEVICE : DMA_BIDIRECTIONAL; + + mdptr = (u32 *)dmaengine_desc_get_metadata_ptr(rxd->tx_in, &pl, &ml); + for (i = 0; i < (authsize / 4); i++) + mdptr[i + 4] = htonl(mdptr[i + 4]); + + auth_len = req->assoclen + req->cryptlen; + if (!rxd->enc) + auth_len -= authsize; + + sglen = sg_nents_for_len(rxd->src, auth_len); + dma_unmap_sg(rxd->ddev, rxd->src, sglen, dir_src); + kfree(rxd->split_src_sg); + + if (diff_dst) { + sglen = sg_nents_for_len(rxd->dst, auth_len); + dma_unmap_sg(rxd->ddev, rxd->dst, sglen, DMA_FROM_DEVICE); + kfree(rxd->split_dst_sg); + } + + if (rxd->enc) { + scatterwalk_map_and_copy(&mdptr[4], req->dst, start, authsize, + 1); + } else { + start -= authsize; + scatterwalk_map_and_copy(auth_tag, req->src, start, authsize, + 0); + + err = memcmp(&mdptr[4], auth_tag, authsize) ? -EBADMSG : 0; + } + + kfree(rxd); + + aead_request_complete(req, err); +} + +static int sa_cra_init_aead(struct crypto_aead *tfm, const char *hash, + const char *fallback) +{ + struct sa_tfm_ctx *ctx = crypto_aead_ctx(tfm); + struct sa_crypto_data *data = dev_get_drvdata(sa_k3_dev); + int ret; + + memzero_explicit(ctx, sizeof(*ctx)); + + ctx->shash = crypto_alloc_shash(hash, 0, CRYPTO_ALG_NEED_FALLBACK); + if (IS_ERR(ctx->shash)) { + dev_err(sa_k3_dev, "base driver %s couldn't be loaded\n", hash); + return PTR_ERR(ctx->shash); + } + + ctx->fallback.aead = crypto_alloc_aead(fallback, 0, + CRYPTO_ALG_NEED_FALLBACK); + + if (IS_ERR(ctx->fallback.aead)) { + dev_err(sa_k3_dev, "fallback driver %s couldn't be loaded\n", + fallback); + return PTR_ERR(ctx->fallback.aead); + } + + crypto_aead_set_reqsize(tfm, sizeof(struct aead_request) + + crypto_aead_reqsize(ctx->fallback.aead)); + + ret = sa_init_ctx_info(&ctx->enc, data); + if (ret) + return ret; + + ret = sa_init_ctx_info(&ctx->dec, data); + if (ret) { + sa_free_ctx_info(&ctx->enc, data); + return ret; + } + + dev_dbg(sa_k3_dev, "%s(0x%p) sc-ids(0x%x(0x%pad), 0x%x(0x%pad))\n", + __func__, tfm, ctx->enc.sc_id, &ctx->enc.sc_phys, + ctx->dec.sc_id, &ctx->dec.sc_phys); + + return ret; +} + +static int sa_cra_init_aead_sha1(struct crypto_aead *tfm) +{ + return sa_cra_init_aead(tfm, "sha1", + "authenc(hmac(sha1-ce),cbc(aes-ce))"); +} + +static int sa_cra_init_aead_sha256(struct crypto_aead *tfm) +{ + return sa_cra_init_aead(tfm, "sha256", + "authenc(hmac(sha256-ce),cbc(aes-ce))"); +} + +static void sa_exit_tfm_aead(struct crypto_aead *tfm) +{ + struct sa_tfm_ctx *ctx = crypto_aead_ctx(tfm); + struct sa_crypto_data *data = dev_get_drvdata(sa_k3_dev); + + crypto_free_shash(ctx->shash); + crypto_free_aead(ctx->fallback.aead); + + sa_free_ctx_info(&ctx->enc, data); + sa_free_ctx_info(&ctx->dec, data); +} + +/* AEAD algorithm configuration interface function */ +static int sa_aead_setkey(struct crypto_aead *authenc, + const u8 *key, unsigned int keylen, + struct algo_data *ad) +{ + struct sa_tfm_ctx *ctx = crypto_aead_ctx(authenc); + struct crypto_authenc_keys keys; + int cmdl_len; + struct sa_cmdl_cfg cfg; + int key_idx; + + if (crypto_authenc_extractkeys(&keys, key, keylen) != 0) + return -EINVAL; + + /* Convert the key size (16/24/32) to the key size index (0/1/2) */ + key_idx = (keys.enckeylen >> 3) - 2; + if (key_idx >= 3) + return -EINVAL; + + ad->ctx = ctx; + ad->enc_eng.eng_id = SA_ENG_ID_EM1; + ad->enc_eng.sc_size = SA_CTX_ENC_TYPE1_SZ; + ad->auth_eng.eng_id = SA_ENG_ID_AM1; + ad->auth_eng.sc_size = SA_CTX_AUTH_TYPE2_SZ; + ad->mci_enc = mci_cbc_enc_no_iv_array[key_idx]; + ad->mci_dec = 
mci_cbc_dec_no_iv_array[key_idx]; + ad->inv_key = true; + ad->keyed_mac = true; + ad->ealg_id = SA_EALG_ID_AES_CBC; + ad->prep_iopad = sa_prepare_iopads; + + memset(&cfg, 0, sizeof(cfg)); + cfg.enc = true; + cfg.aalg = ad->aalg_id; + cfg.enc_eng_id = ad->enc_eng.eng_id; + cfg.auth_eng_id = ad->auth_eng.eng_id; + cfg.iv_size = crypto_aead_ivsize(authenc); + cfg.akey = keys.authkey; + cfg.akey_len = keys.authkeylen; + + /* Setup Encryption Security Context & Command label template */ + if (sa_init_sc(&ctx->enc, keys.enckey, keys.enckeylen, + keys.authkey, keys.authkeylen, + ad, 1, &ctx->enc.epib[1])) + return -EINVAL; + + cmdl_len = sa_format_cmdl_gen(&cfg, + (u8 *)ctx->enc.cmdl, + &ctx->enc.cmdl_upd_info); + if (cmdl_len <= 0 || (cmdl_len > SA_MAX_CMDL_WORDS * sizeof(u32))) + return -EINVAL; + + ctx->enc.cmdl_size = cmdl_len; + + /* Setup Decryption Security Context & Command label template */ + if (sa_init_sc(&ctx->dec, keys.enckey, keys.enckeylen, + keys.authkey, keys.authkeylen, + ad, 0, &ctx->dec.epib[1])) + return -EINVAL; + + cfg.enc = false; + cmdl_len = sa_format_cmdl_gen(&cfg, (u8 *)ctx->dec.cmdl, + &ctx->dec.cmdl_upd_info); + + if (cmdl_len <= 0 || (cmdl_len > SA_MAX_CMDL_WORDS * sizeof(u32))) + return -EINVAL; + + ctx->dec.cmdl_size = cmdl_len; + + crypto_aead_clear_flags(ctx->fallback.aead, CRYPTO_TFM_REQ_MASK); + crypto_aead_set_flags(ctx->fallback.aead, + crypto_aead_get_flags(authenc) & + CRYPTO_TFM_REQ_MASK); + crypto_aead_setkey(ctx->fallback.aead, key, keylen); + + return 0; +} + +static int sa_aead_setauthsize(struct crypto_aead *tfm, unsigned int authsize) +{ + struct sa_tfm_ctx *ctx = crypto_tfm_ctx(crypto_aead_tfm(tfm)); + + return crypto_aead_setauthsize(ctx->fallback.aead, authsize); +} + +static int sa_aead_cbc_sha1_setkey(struct crypto_aead *authenc, + const u8 *key, unsigned int keylen) +{ + struct algo_data ad = { 0 }; + + ad.ealg_id = SA_EALG_ID_AES_CBC; + ad.aalg_id = SA_AALG_ID_HMAC_SHA1; + ad.hash_size = SHA1_DIGEST_SIZE; + ad.auth_ctrl = SA_AUTH_SW_CTRL_SHA1; + + return sa_aead_setkey(authenc, key, keylen, &ad); +} + +static int sa_aead_cbc_sha256_setkey(struct crypto_aead *authenc, + const u8 *key, unsigned int keylen) +{ + struct algo_data ad = { 0 }; + + ad.ealg_id = SA_EALG_ID_AES_CBC; + ad.aalg_id = SA_AALG_ID_HMAC_SHA2_256; + ad.hash_size = SHA256_DIGEST_SIZE; + ad.auth_ctrl = SA_AUTH_SW_CTRL_SHA256; + + return sa_aead_setkey(authenc, key, keylen, &ad); +} + +static int sa_aead_run(struct aead_request *req, u8 *iv, int enc) +{ + struct crypto_aead *tfm = crypto_aead_reqtfm(req); + struct sa_tfm_ctx *ctx = crypto_aead_ctx(tfm); + struct sa_req sa_req = { 0 }; + size_t auth_size, enc_size; + + enc_size = req->cryptlen; + auth_size = req->assoclen + req->cryptlen; + + if (!enc) { + enc_size -= crypto_aead_authsize(tfm); + auth_size -= crypto_aead_authsize(tfm); + } + + if (auth_size > SA_MAX_DATA_SZ || + (auth_size >= SA_UNSAFE_DATA_SZ_MIN && + auth_size <= SA_UNSAFE_DATA_SZ_MAX)) { + struct aead_request *subreq = aead_request_ctx(req); + int ret; + + aead_request_set_tfm(subreq, ctx->fallback.aead); + aead_request_set_callback(subreq, req->base.flags, + req->base.complete, req->base.data); + aead_request_set_crypt(subreq, req->src, req->dst, + req->cryptlen, req->iv); + aead_request_set_ad(subreq, req->assoclen); + + ret = enc ? 
crypto_aead_encrypt(subreq) : + crypto_aead_decrypt(subreq); + return ret; + } + + sa_req.enc_offset = req->assoclen; + sa_req.enc_size = enc_size; + sa_req.auth_size = auth_size; + sa_req.size = auth_size; + sa_req.enc_iv = iv; + sa_req.type = CRYPTO_ALG_TYPE_AEAD; + sa_req.enc = enc; + sa_req.callback = sa_aead_dma_in_callback; + sa_req.mdata_size = 52; + sa_req.base = &req->base; + sa_req.ctx = ctx; + sa_req.src = req->src; + sa_req.dst = req->dst; + + return sa_run(&sa_req); +} + +/* AEAD algorithm encrypt interface function */ +static int sa_aead_encrypt(struct aead_request *req) +{ + return sa_aead_run(req, req->iv, 1); +} + +/* AEAD algorithm decrypt interface function */ +static int sa_aead_decrypt(struct aead_request *req) +{ + return sa_aead_run(req, req->iv, 0); +} + static struct sa_alg_tmpl sa_algs[] = { { .type = CRYPTO_ALG_TYPE_SKCIPHER, @@ -1705,6 +2142,61 @@ static struct sa_alg_tmpl sa_algs[] = { .import = sa_sha_import, }, }, + { + .type = CRYPTO_ALG_TYPE_AEAD, + .alg.aead = { + .base = { + .cra_name = "authenc(hmac(sha1),cbc(aes))", + .cra_driver_name = + "authenc(hmac(sha1),cbc(aes))-sa2ul", + .cra_blocksize = AES_BLOCK_SIZE, + .cra_flags = CRYPTO_ALG_TYPE_AEAD | + CRYPTO_ALG_KERN_DRIVER_ONLY | + CRYPTO_ALG_ASYNC | + CRYPTO_ALG_NEED_FALLBACK, + .cra_ctxsize = sizeof(struct sa_tfm_ctx), + .cra_module = THIS_MODULE, + .cra_priority = 3000, + }, + .ivsize = AES_BLOCK_SIZE, + .maxauthsize = SHA1_DIGEST_SIZE, + + .init = sa_cra_init_aead_sha1, + .exit = sa_exit_tfm_aead, + .setkey = sa_aead_cbc_sha1_setkey, + .setauthsize = sa_aead_setauthsize, + .encrypt = sa_aead_encrypt, + .decrypt = sa_aead_decrypt, + }, + }, + { + .type = CRYPTO_ALG_TYPE_AEAD, + .alg.aead = { + .base = { + .cra_name = "authenc(hmac(sha256),cbc(aes))", + .cra_driver_name = + "authenc(hmac(sha256),cbc(aes))-sa2ul", + .cra_blocksize = AES_BLOCK_SIZE, + .cra_flags = CRYPTO_ALG_TYPE_AEAD | + CRYPTO_ALG_KERN_DRIVER_ONLY | + CRYPTO_ALG_ASYNC | + CRYPTO_ALG_NEED_FALLBACK, + .cra_ctxsize = sizeof(struct sa_tfm_ctx), + .cra_module = THIS_MODULE, + .cra_alignmask = 0, + .cra_priority = 3000, + }, + .ivsize = AES_BLOCK_SIZE, + .maxauthsize = SHA256_DIGEST_SIZE, + + .init = sa_cra_init_aead_sha256, + .exit = sa_exit_tfm_aead, + .setkey = sa_aead_cbc_sha256_setkey, + .setauthsize = sa_aead_setauthsize, + .encrypt = sa_aead_encrypt, + .decrypt = sa_aead_decrypt, + }, + }, }; /* Register the algorithms in crypto framework */ @@ -1722,6 +2214,9 @@ void sa_register_algos(const struct device *dev) } else if (type == CRYPTO_ALG_TYPE_AHASH) { alg_name = sa_algs[i].alg.ahash.halg.base.cra_name; err = crypto_register_ahash(&sa_algs[i].alg.ahash); + } else if (type == CRYPTO_ALG_TYPE_AEAD) { + alg_name = sa_algs[i].alg.aead.base.cra_name; + err = crypto_register_aead(&sa_algs[i].alg.aead); } else { dev_err(dev, "un-supported crypto algorithm (%d)", @@ -1750,6 +2245,8 @@ void sa_unregister_algos(const struct device *dev) crypto_unregister_skcipher(&sa_algs[i].alg.skcipher); else if (type == CRYPTO_ALG_TYPE_AHASH) crypto_unregister_ahash(&sa_algs[i].alg.ahash); + else if (type == CRYPTO_ALG_TYPE_AEAD) + crypto_unregister_aead(&sa_algs[i].alg.aead); sa_algs[i].registered = false; } diff --git a/drivers/crypto/sa2ul.h b/drivers/crypto/sa2ul.h index 733b00bc6e0f..952faad06ec6 100644 --- a/drivers/crypto/sa2ul.h +++ b/drivers/crypto/sa2ul.h @@ -313,6 +313,7 @@ struct sa_tfm_ctx { union { struct crypto_sync_skcipher *skcipher; struct crypto_ahash *ahash; + struct crypto_aead *aead; } fallback; }; From patchwork Fri Apr 24 16:44:29 
2020
X-Patchwork-Submitter: Tero Kristo
X-Patchwork-Id: 197828
From: Tero Kristo
CC: Keerthy
Subject: [PATCHv2 6/7] arm64: dts: ti: k3-am6: Add crypto accelarator node
Date: Fri, 24 Apr 2020 19:44:29 +0300
Message-ID: <20200424164430.3288-7-t-kristo@ti.com>
In-Reply-To: <20200424164430.3288-1-t-kristo@ti.com>
References: <20200424164430.3288-1-t-kristo@ti.com>
X-Mailing-List: linux-crypto@vger.kernel.org

From: Keerthy

Add a crypto accelerator node for supporting hardware crypto algorithms, including SHA1, SHA256, SHA512, AES, 3DES, and AEAD suites.
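
As an aside (not part of this patch), the "ti,am654-sa2ul" compatible string used by this node is what the sa2ul driver added earlier in this series binds against. A minimal sketch of such an of_device_id table is shown below for illustration only; the table name and contents may differ from the actual driver.

#include <linux/mod_devicetable.h>
#include <linux/module.h>

/* Hypothetical match table; the real one lives in drivers/crypto/sa2ul.c */
static const struct of_device_id sa2ul_of_match[] = {
	{ .compatible = "ti,am654-sa2ul", },
	{ /* sentinel */ },
};
MODULE_DEVICE_TABLE(of, sa2ul_of_match);
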
Signed-off-by: Keerthy [t-kristo@ti.com: Modifications based on introduction of yaml binding] Signed-off-by: Tero Kristo --- arch/arm64/boot/dts/ti/k3-am65-main.dtsi | 22 ++++++++++++++++++++++ 1 file changed, 22 insertions(+) diff --git a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi index 11887c72f23a..4602ca8e2392 100644 --- a/arch/arm64/boot/dts/ti/k3-am65-main.dtsi +++ b/arch/arm64/boot/dts/ti/k3-am65-main.dtsi @@ -112,6 +112,28 @@ power-domains = <&k3_pds 148 TI_SCI_PD_EXCLUSIVE>; }; + crypto: crypto@4E00000 { + compatible = "ti,am654-sa2ul"; + reg = <0x0 0x4E00000 0x0 0x1200>; + power-domains = <&k3_pds 136 TI_SCI_PD_EXCLUSIVE>; + #address-cells = <2>; + #size-cells = <2>; + ranges = <0x0 0x04E00000 0x00 0x04E00000 0x0 0x30000>; + status = "okay"; + + dmas = <&main_udmap 0xc000>, <&main_udmap 0x4000>, + <&main_udmap 0x4001>; + dma-names = "tx", "rx1", "rx2"; + dma-coherent; + + rng: rng@4e10000 { + compatible = "inside-secure,safexcel-eip76"; + reg = <0x0 0x4e10000 0x0 0x7d>; + interrupts = ; + clocks = <&k3_clks 136 1>; + }; + }; + main_pmx0: pinmux@11c000 { compatible = "pinctrl-single"; reg = <0x0 0x11c000 0x0 0x2e4>;