[v2,00/19] crypto: arm64 - play nice with CONFIG_PREEMPT

Message ID 20171204122645.31535-1-ard.biesheuvel@linaro.org

Message

Ard Biesheuvel Dec. 4, 2017, 12:26 p.m. UTC
This is a followup to 'crypto: arm64 - disable NEON across scatterwalk API
calls', sent out last Friday.

As reported by Sebastian, the way the arm64 NEON crypto code currently
keeps kernel mode NEON enabled across calls into skcipher_walk_xxx() is
causing problems with RT builds: the skcipher walk API may allocate and
free the temporary buffers it uses to present the input and output
arrays to the crypto algorithm in blocksize sized chunks (where
blocksize is the natural blocksize of the crypto algorithm), and doing
so with NEON enabled means we are allocating and freeing memory with
preemption disabled.

This was deliberate: when this code was introduced, each kernel_neon_begin()
and kernel_neon_end() call incurred a fixed penalty of storing or loading,
respectively, the contents of all NEON registers to/from memory, so doing
it less often had an obvious performance benefit. However, in the meantime,
the core kernel mode NEON code has been refactored: kernel_neon_begin() now
only incurs this penalty the first time it is called after entering the
kernel, and the NEON register restore is deferred until returning to
userland. This means pulling those calls into the loops that iterate over
the input/output of the crypto algorithm is no longer a big deal (although
there are some places in the code where we relied on the NEON registers
retaining their values between calls).

So let's clean this up for arm64: update the NEON based skcipher drivers to
no longer keep the NEON enabled when calling into the skcipher walk API.
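
In glue code terms, the change amounts to the pattern below (a simplified
sketch, with the hypothetical aes_do_blocks() standing in for the actual
NEON transform, and error handling omitted):

    /* before: NEON kept enabled across the entire walk, so
     * skcipher_walk_done() may alloc/free with preemption disabled */
    err = skcipher_walk_virt(&walk, req, true);
    kernel_neon_begin();
    while (walk.nbytes) {
            aes_do_blocks(walk.dst.virt.addr, walk.src.virt.addr,
                          walk.nbytes / AES_BLOCK_SIZE);
            err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
    }
    kernel_neon_end();

    /* after: NEON only held while processing blocks; the walk API now
     * runs with preemption enabled and no longer needs GFP_ATOMIC */
    err = skcipher_walk_virt(&walk, req, false);
    while (walk.nbytes) {
            kernel_neon_begin();
            aes_do_blocks(walk.dst.virt.addr, walk.src.virt.addr,
                          walk.nbytes / AES_BLOCK_SIZE);
            kernel_neon_end();
            err = skcipher_walk_done(&walk, walk.nbytes % AES_BLOCK_SIZE);
    }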

As pointed out by Peter, this only solves part of the problem. So let's
tackle it more thoroughly, and update the algorithms to test the
NEED_RESCHED flag after each fixed chunk of input has been processed. An
attempt was made to align the different algorithms with regard to how much
work such a fixed chunk entails: yielding after every block for an
algorithm that operates on 16 byte blocks at < 1 cycle per byte, for
instance, seems rather pointless.
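
In C terms, the check performed for each chunk is roughly equivalent to
the following (a conceptual sketch only; the actual test is implemented
in assembler, as discussed in the thread below):

    if (IS_ENABLED(CONFIG_PREEMPT) &&
        preempt_count() == 1 && test_thread_flag(TIF_NEED_RESCHED)) {
            kernel_neon_end();      /* preemption point */
            kernel_neon_begin();
    }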

Changes since v1:
- add CRC-T10DIF test vector (#1)
- stop using GFP_ATOMIC in scatterwalk API calls, now that they are executed
  with preemption enabled (#2 - #6)
- do some preparatory refactoring on the AES block mode code (#7 - #9)
- add yield patches (#10 - #18)
- add test patch (#19) - DO NOT MERGE

Cc: Dave Martin <Dave.Martin@arm.com>
Cc: Russell King - ARM Linux <linux@armlinux.org.uk>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Mark Rutland <mark.rutland@arm.com>
Cc: linux-rt-users@vger.kernel.org
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>

Ard Biesheuvel (19):
  crypto: testmgr - add a new test case for CRC-T10DIF
  crypto: arm64/aes-ce-ccm - move kernel mode neon en/disable into loop
  crypto: arm64/aes-blk - move kernel mode neon en/disable into loop
  crypto: arm64/aes-bs - move kernel mode neon en/disable into loop
  crypto: arm64/chacha20 - move kernel mode neon en/disable into loop
  crypto: arm64/ghash - move kernel mode neon en/disable into loop
  crypto: arm64/aes-blk - remove configurable interleave
  crypto: arm64/aes-blk - add 4 way interleave to CBC encrypt path
  crypto: arm64/aes-blk - add 4 way interleave to CBC-MAC encrypt path
  crypto: arm64/sha256-neon - play nice with CONFIG_PREEMPT kernels
  arm64: assembler: add macro to conditionally yield the NEON under
    PREEMPT
  crypto: arm64/sha1-ce - yield every 8 blocks of input
  crypto: arm64/sha2-ce - yield every 8 blocks of input
  crypto: arm64/aes-blk - yield after processing each 64 bytes of input
  crypto: arm64/aes-bs - yield after processing each 128 bytes of input
  crypto: arm64/aes-ghash - yield after processing fixed number of
    blocks
  crypto: arm64/crc32-ce - yield NEON every 16 blocks of input
  crypto: arm64/crct10dif-ce - yield NEON every 8 blocks of input
  DO NOT MERGE

 arch/arm64/crypto/Makefile             |   3 -
 arch/arm64/crypto/aes-ce-ccm-glue.c    |  47 +-
 arch/arm64/crypto/aes-ce.S             |  17 +-
 arch/arm64/crypto/aes-glue.c           |  95 ++-
 arch/arm64/crypto/aes-modes.S          | 624 ++++++++++----------
 arch/arm64/crypto/aes-neon.S           |   2 +
 arch/arm64/crypto/aes-neonbs-core.S    | 317 ++++++----
 arch/arm64/crypto/aes-neonbs-glue.c    |  48 +-
 arch/arm64/crypto/chacha20-neon-glue.c |  12 +-
 arch/arm64/crypto/crc32-ce-core.S      |  55 +-
 arch/arm64/crypto/crct10dif-ce-core.S  |  39 +-
 arch/arm64/crypto/ghash-ce-core.S      | 128 ++--
 arch/arm64/crypto/ghash-ce-glue.c      |  17 +-
 arch/arm64/crypto/sha1-ce-core.S       |  45 +-
 arch/arm64/crypto/sha2-ce-core.S       |  40 +-
 arch/arm64/crypto/sha256-glue.c        |  36 +-
 arch/arm64/include/asm/assembler.h     |  83 +++
 crypto/testmgr.h                       | 259 ++++++++
 18 files changed, 1231 insertions(+), 636 deletions(-)

-- 
2.11.0

Comments

Ard Biesheuvel Dec. 6, 2017, 12:25 p.m. UTC | #1
On 6 December 2017 at 12:12, Dave P Martin <Dave.Martin@arm.com> wrote:
> On Wed, Dec 06, 2017 at 11:57:12AM +0000, Ard Biesheuvel wrote:
>> On 6 December 2017 at 11:51, Dave Martin <Dave.Martin@arm.com> wrote:
>> > On Tue, Dec 05, 2017 at 06:04:34PM +0000, Ard Biesheuvel wrote:
>> >> On 5 December 2017 at 12:45, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>> >> >
>> >> >
>> >> >> On 5 Dec 2017, at 12:28, Dave Martin <Dave.Martin@arm.com> wrote:
>> >> >>
>> >> >>> On Mon, Dec 04, 2017 at 12:26:37PM +0000, Ard Biesheuvel wrote:
>> >> >>> Add a support macro to conditionally yield the NEON (and thus the CPU)
>> >> >>> that may be called from the assembler code. Given that especially the
>> >> >>> instruction based accelerated crypto code may use very tight loops, add
>> >> >>> some parametrization so that the TIF_NEED_RESCHED flag test is only
>> >> >>> executed every so many loop iterations.
>> >> >>>
>> >> >>> In some cases, yielding the NEON involves saving and restoring a non
>> >> >>> trivial amount of context (especially in the CRC folding algorithms),
>> >> >>> and so the macro is split into two, and the code in between is only
>> >> >>> executed when the yield path is taken, allowing the context to be preserved.
>> >> >>> The second macro takes a label argument that marks the resume-from-yield
>> >> >>> path, which should restore the preserved context again.
>> >> >>>
>> >> >>> Signed-off-by: Ard Biesheuvel <ard.biesheuvel@linaro.org>
>> >> >>> ---
>> >> >>> arch/arm64/include/asm/assembler.h | 50 ++++++++++++++++++++
>> >> >>> 1 file changed, 50 insertions(+)
>> >> >>>
>> >> >>> diff --git a/arch/arm64/include/asm/assembler.h b/arch/arm64/include/asm/assembler.h
>> >> >>> index aef72d886677..917b026d3e00 100644
>> >> >>> --- a/arch/arm64/include/asm/assembler.h
>> >> >>> +++ b/arch/arm64/include/asm/assembler.h
>> >> >>> @@ -512,4 +512,54 @@ alternative_else_nop_endif
>> >> >>> #endif
>> >> >>>    .endm
>> >> >>>
>> >> >>> +/*
>> >> >>> + * yield_neon - check whether to yield to another runnable task from
>> >> >>> + *        kernel mode NEON code (running with preemption disabled)
>> >> >>> + *
>> >> >>> + * - Check whether the preempt count is exactly 1, in which case disabling
>> >> >>> + *   preemption once will make the task preemptible. If this is not the case,
>> >> >>> + *   yielding is pointless.
>> >> >>> + * - Check whether TIF_NEED_RESCHED is set, and if so, disable and re-enable
>> >> >>> + *   kernel mode NEON (which will trigger a reschedule), and branch to the
>> >> >>> + *   yield fixup code at @lbl.
>> >> >>> + */
>> >> >>> +    .macro        yield_neon, lbl:req, ctr, order, stride, loop
>> >> >>> +    yield_neon_pre    \ctr, \order, \stride, \loop
>> >> >>> +    yield_neon_post    \lbl
>> >> >>> +    .endm
>> >> >>> +
>> >> >>> +    .macro        yield_neon_pre, ctr, order=0, stride, loop=4444f
>> >> >>> +#ifdef CONFIG_PREEMPT
>> >> >>> +    /*
>> >> >>> +     * With some algorithms, it makes little sense to poll the
>> >> >>> +     * TIF_NEED_RESCHED flag after every iteration, so only perform
>> >> >>> +     * the check every 2^order strides.
>> >> >>> +     */
>> >> >>> +    .if        \order > 1
>> >> >>> +    .if        (\stride & (\stride - 1)) != 0
>> >> >>> +    .error        "stride should be a power of 2"
>> >> >>> +    .endif
>> >> >>> +    tst        \ctr, #((1 << \order) * \stride - 1) & ~(\stride - 1)
>> >> >>> +    b.ne        \loop
>> >> >>> +    .endif
>> >> >>
>> >> >> I'm not sure what baking in this check gives us, and this seems to
>> >> >> conflate two rather different things: yielding and defining a
>> >> >> "heartbeat" frequency for the calling code.
>> >> >>
>> >> >> Can we separate out the crypto-loop-helper semantics from the yield-
>> >> >> neon stuff?
>> >> >>
>> >> >
>> >> > Fair enough. I incorporated the check here so it disappears from the code entirely when !CONFIG_PREEMPT, because otherwise, you end up with a sequence that is mispredicted every # iterations without any benefit.
>> >> > I guess I could macroise that separately though.
>> >> >
>> >> >> If we had
>> >> >>
>> >> >>    if_cond_yield_neon
>> >> >>    // patchup code
>> >> >>    endif_yield_neon
>> >> >>
>> >>
>> >> I like this, but ...
>> >>
>> >> >> then the caller is free to conditionally branch over that as appropriate
>> >> >> like
>> >> >>
>> >> >> loop:
>> >> >>    // crypto stuff
>> >> >>    tst x0, #0xf
>> >> >>    b.ne    loop
>> >> >>
>> >> >>    if_cond_yield_neon
>> >> >>    // patchup code
>> >> >>    endif_cond_yield_neon
>> >> >>
>> >>
>> >> I need to put the post patchup code somewhere too. Please refer to the
>> >> CRC patches for the best examples of this.
>> >
>> > I'm not sure I follow.  If you need to do something different after
>> > patchup, can't you either pull that code into the if...endif too, or
>> > otherwise put a branch before the endif?
>> >
>>
>> No, not really.
>>
>> What I currently have is
>>
>> >    if_cond_yield_neon
>> >    // patchup code
>> >    endif_cond_yield_neon
>>
>> which is being turned into
>>
>>     get_thread_info x0
>>     ldr w1, [x0, #TSK_TI_PREEMPT]
>>     ldr x0, [x0, #TSK_TI_FLAGS]
>>     cmp w1, #1 // == PREEMPT_OFFSET
>>     csel x0, x0, xzr, eq
>>     tbnz x0, #TIF_NEED_RESCHED, 5555f // needs rescheduling?
>> 4444:
>>
>>     .subsection 1
>> 5555:
>>     // patchup code
>>     bl kernel_neon_end
>>     bl kernel_neon_begin
>>     // post patchup code goes here
>>     b 4444b
>>     .subsection
>>
>> so what I basically need is a place to put the post patchup code,
>> unless I open code it and branch to a different label right after
>> kernel_neon_begin
>
> Ah, I get you.  I had convinced myself that there was only post-
> patchup code.
>
> So you conceptually want something like
>
> if_will_cond_yield_neon
>         // pre-yield patchup code
> do_cond_yield_neon
>         // post-yield patchup code
> endif_yield_neon
>
> .macro if_will_cond_yield_neon
>         // do preempt_count == 1 && TIF_NEED_RESCHED check
>         b.wont_yield 3432f
> .endm
>
> .macro do_cond_yield_neon
>         bl      kernel_neon_end
>         bl      kernel_neon_begin
> .endm
>
> .macro endif_yield_neon
> 3432:
> .endm
>
> The main difference I see is that this doesn't get the slow-case
> code out of line.
>
> Or have I still missed something?
>

No, that seems accurate. I think the idiom looks much better, but I'd
still like to get the code out of line, so that everything disappears
completely for !CONFIG_PREEMPT.

I'll try to rework my stuff using this as a starting point.
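
Something like the below, perhaps (a completely untested sketch, and
CONFIG_PREEMPT case only; the !CONFIG_PREEMPT case would need all three
macros to collapse to nothing):

        .macro          if_will_cond_yield_neon
        get_thread_info x0
        ldr             w1, [x0, #TSK_TI_PREEMPT]
        ldr             x0, [x0, #TSK_TI_FLAGS]
        cmp             w1, #1                  // == PREEMPT_OFFSET
        csel            x0, x0, xzr, eq
        tbnz            x0, #TIF_NEED_RESCHED, 5555f    // needs rescheduling?
        .subsection     1                       // slow path out of line
5555:
        .endm

        .macro          do_cond_yield_neon
        bl              kernel_neon_end
        bl              kernel_neon_begin
        .endm

        .macro          endif_yield_neon
        b               4444f                   // resume fast path
        .previous
4444:
        .endm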