Kernel panic - encryption/decryption failed when opening a file on Arm64

Message ID CAKv+Gu8w+BuwxQjOtpnFPHnJNUzq7m0K+KJ8=FG2wHigaB54ng@mail.gmail.com

Commit Message

Ard Biesheuvel Sept. 9, 2016, 10:56 a.m.
On 9 September 2016 at 11:31, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> On 9 September 2016 at 11:19, xiakaixu <xiakaixu@huawei.com> wrote:

>> Hi,
>>
>> After digging into this crash, it seems to be a bug that only exists
>> on armv8 boards. It occurs in this function in
>> arch/arm64/crypto/aes-glue.c:
>>
>> static int ctr_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
>>                        struct scatterlist *src, unsigned int nbytes)
>> {
>>         ...
>>         desc->flags &= ~CRYPTO_TFM_REQ_MAY_SLEEP;
>>         blkcipher_walk_init(&walk, dst, src, nbytes);
>>         err = blkcipher_walk_virt_block(desc, &walk, AES_BLOCK_SIZE);
>>                                 ----> page allocation failed
>>         ...
>>         while ((blocks = (walk.nbytes / AES_BLOCK_SIZE))) {
>>                                 ----> walk.nbytes == 0, so this loop is skipped
>>                 aes_ctr_encrypt(walk.dst.virt.addr, walk.src.virt.addr,
>>                                 (u8 *)ctx->key_enc, rounds, blocks, walk.iv,
>>                                 first);
>>         ...
>>                 err = blkcipher_walk_done(desc, &walk,
>>                                           walk.nbytes % AES_BLOCK_SIZE);
>>         }
>>         if (nbytes) {
>>                                 ----> this if () statement is entered
>>                 u8 *tdst = walk.dst.virt.addr + blocks * AES_BLOCK_SIZE;
>>                 u8 *tsrc = walk.src.virt.addr + blocks * AES_BLOCK_SIZE;
>>         ...
>>                 aes_ctr_encrypt(tail, tsrc, (u8 *)ctx->key_enc, rounds,
>>                                 blocks, walk.iv, first);
>>                                 ----> the second input parameter is NULL, so crash...
>>         ...
>>         }
>>         ...
>> }
>>
>>
>> If the page allocation fails in blkcipher_walk_virt_block(), the
>> variable walk.nbytes is 0, so the while () loop is skipped and the
>> if (nbytes) statement is entered. But there the variable tsrc is NULL,
>> and it is also the second input parameter of aes_ctr_encrypt()...
>> kernel panic...
>>
>> I have also looked at the equivalent functions in other architectures,
>> and they use if (walk.nbytes), not the if (nbytes) statement used on
>> armv8, so I think this armv8 ctr_encrypt() function should handle the
>> case where the page allocation fails.
>>


Does this solve your problem?

                u8 __aligned(8) tail[AES_BLOCK_SIZE];
--
To unsubscribe from this list: send the line "unsubscribe linux-crypto" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

Comments

Ard Biesheuvel Sept. 12, 2016, 5:40 p.m. | #1
On 12 September 2016 at 03:16, liushuoran <liushuoran@huawei.com> wrote:
> Hi Ard,
>
> Thanks for the prompt reply. With the patch, there is no panic anymore.
> But it seems that the encryption/decryption is not successful anyway.
>
> As Herbert points out, "If the page allocation fails in
> blkcipher_walk_next it'll simply switch over to processing it block by
> block". So does that mean the encryption/decryption should be
> successful even if the page allocation fails? Please correct me if I
> misunderstand anything. Thanks in advance.
>

Perhaps Herbert can explain: I don't see how the 'n = 0' assignment
results in the correct path being taken; this chunk (blkcipher.c:252)

if (unlikely(n < bsize)) {
    err = blkcipher_next_slow(desc, walk, bsize, walk->alignmask);
    goto set_phys_lowmem;
}

is skipped, because n == 0 and therefore bsize == 0, so the condition
is always false for n == 0.

Therefore we end up here (blkcipher.c:257)

walk->nbytes = n;
if (walk->flags & BLKCIPHER_WALK_COPY) {
    err = blkcipher_next_copy(walk);
    goto set_phys_lowmem;
}

where blkcipher_next_copy() unconditionally calls memcpy() with
walk->page as the destination (even though we ended up on this path
precisely because walk->page == NULL).

So to me, it seems like we should be taking the blkcipher_next_slow()
path, which does a kmalloc() and bails with -ENOMEM if that fails.

Patch

diff --git a/arch/arm64/crypto/aes-glue.c b/arch/arm64/crypto/aes-glue.c
index 5c888049d061..6b2aa0fd6cd0 100644
--- a/arch/arm64/crypto/aes-glue.c
+++ b/arch/arm64/crypto/aes-glue.c
@@ -216,7 +216,7 @@ static int ctr_encrypt(struct blkcipher_desc *desc, struct scatterlist *dst,
                err = blkcipher_walk_done(desc, &walk,
                                          walk.nbytes % AES_BLOCK_SIZE);
        }
-       if (nbytes) {
+       if (walk.nbytes % AES_BLOCK_SIZE) {
                u8 *tdst = walk.dst.virt.addr + blocks * AES_BLOCK_SIZE;
                u8 *tsrc = walk.src.virt.addr + blocks * AES_BLOCK_SIZE;