[RFC,v1,0/2] crypto: tcrypt: small changes to run under KUnit

Message ID 20210715213138.1363079-1-dlatypov@google.com

Message

Daniel Latypov July 15, 2021, 9:31 p.m. UTC
tcrypt.c calls itself a "[q]uick & dirty crypto testing module."
One that "will only exist until we have a better testing mechanism."

This RFC seeks to start a discussion about whether KUnit can fill the
role of that "better testing mechanism."

As-is, these example changes don't make the test code much cleaner.
But they do provide a new way of running the test that's hopefully more
accessible, namely
$ ./tools/testing/kunit/kunit.py run --kunitconfig=crypto
...
[16:53:16] Starting KUnit Kernel ...
[16:53:19] ============================================================
[16:53:19] ======== [PASSED] tcrypt ========
[16:53:19] [PASSED] tcrypt
[16:53:19] ============================================================
[16:53:19] Testing complete. 1 tests run. 0 failed. 0 crashed. 0 skipped.
[16:53:19] Elapsed time: 8.806s total, 0.001s configuring, 5.764s building, 0.000s running

This series contains:
* an initial patch with the boilerplate needed to run under KUnit and
  track pass/fail status in the `test` context object instead of
  passing around an int (see the sketch below)
* another change to plumb the `test` context object to other test cases
  that could previously fail without affecting the overall pass/fail
  status
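
For reference, the KUnit boilerplate amounts to roughly the following.
This is a minimal sketch rather than the exact diff; in particular, the
do_test(test, mode) call is only illustrative of the plumbing described
above:

#include <kunit/test.h>

static void tcrypt_test(struct kunit *test)
{
	/* tcrypt's existing entry point, with the `test` context plumbed
	 * through so failures are reported via KUNIT_FAIL() and friends
	 * instead of an accumulated int return value. */
	do_test(test, mode);
}

static struct kunit_case tcrypt_test_cases[] = {
	KUNIT_CASE(tcrypt_test),
	{}
};

static struct kunit_suite tcrypt_test_suite = {
	.name = "tcrypt",
	.test_cases = tcrypt_test_cases,
};
kunit_test_suite(tcrypt_test_suite);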

I haven't reformatted the code for now just to make the changes a bit
easier to read and skim over. checkpatch.pl isn't happy about the
spacing in the second patch.

== Other pros ==

If we want, we can go down the path of trying to structure the tests in
more idiomatic KUnit fashion to get some benefits like:
* automatically freeing memory at the end of tests when it's allocated
  via kunit_kmalloc() and friends, instead of having to maintain cleanup
  labels (see the sketch after this list)
  * this can be made to call arbitrary cleanup functions as well
  * this hopefully makes the tests more readable, at the expense of a
    bit more runtime overhead dynamically tracking these allocations
  * doing this just for the kmalloc()s and __get_free_page() calls
    directly in tcrypt.c saves 100+ lines of code

* flagging test cases with KASAN issues (and eventually UBSAN)

* a somewhat easier way to dynamically run subsets of tests via glob
  * e.g. kunit.py run 'crypto.*sha512*'
  * KUnit currently only supports filtering by suite ("crypto"), but we
    should have test-level filtering support soonish
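
To illustrate the kunit_kmalloc() point above, here's a sketch of what a
restructured test case could look like (illustrative only, not code from
these patches; assumes <kunit/test.h>):

static void example_case(struct kunit *test)
{
	/* The allocation is tracked by KUnit and freed automatically
	 * when the test case finishes, pass or fail, so error paths
	 * need no kfree() calls or "goto out_free"-style labels. */
	u8 *buf = kunit_kmalloc(test, PAGE_SIZE, GFP_KERNEL);

	KUNIT_ASSERT_NOT_ERR_OR_NULL(test, buf);

	/* ... exercise the algorithm using buf ... */
}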

== Cons ==

The main con is that these example patches slightly change how the
tests are built and run, namely:
* building the test now requires CONFIG_KUNIT
* running `insmod` on the module will always return 0, even if tests
  fail
  * the test instead prints out (K)TAP output to denote pass/fail
  * this is the format kselftest uses, but it's not fully standardized
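
For example, a failure would show up in the kernel log roughly like this
(illustrative (K)TAP output, not captured from these patches):

TAP version 14
1..1
    # Subtest: tcrypt
    1..1
    not ok 1 - tcrypt_test
not ok 1 - tcrypt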

And if we eventually try to restructure the test as mentioned above:
* more disruptive changes to how the tests are run
  * we'd have to move away from using the "mode" parameter
* a decent amount of code churn

Daniel Latypov (2):
  crypto: tcrypt: minimal conversion to run under KUnit
  crypto: tcrypt: call KUNIT_FAIL() instead of pr_err()

 crypto/Kconfig  |    5 +-
 crypto/tcrypt.c | 1063 +++++++++++++++++++++++------------------------
 2 files changed, 524 insertions(+), 544 deletions(-)


base-commit: e9338abf0e186336022293d2e454c106761f262b

Comments

Herbert Xu July 23, 2021, 6:43 a.m. UTC | #1
On Thu, Jul 15, 2021 at 02:31:37PM -0700, Daniel Latypov wrote:
> == Questions ==
>
> * does this seem like it would make running the test easier?

I don't mind.  tcrypt these days isn't used so much for correctness
testing.  It's mostly being used for speed testing.  A secondary
use is to instantiate templates.

> * does `tvmem` actually need page-aligned buffers?

I think it may be needed for those split-SG test cases where
we deliberately create a buffer that straddles a page boundary.

> * I have no clue how FIPS intersects with all of this.

It doesn't really matter because in FIPS mode when a correctness
test fails the kernel panics.

>   * would it be fine to leave the test code built-in for FIPS instead of
>   returning -EAGAIN?

The returning -EAGAIN is irrelevant in FIPS mode.  It's more of
an aid in normal mode when you use tcrypt for speed testing.

Thanks,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
Daniel Latypov July 23, 2021, 7:31 p.m. UTC | #2
On Thu, Jul 22, 2021 at 11:43 PM Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> On Thu, Jul 15, 2021 at 02:31:37PM -0700, Daniel Latypov wrote:
> > == Questions ==
> >
> > * does this seem like it would make running the test easier?
>
> I don't mind.  tcrypt these days isn't used so much for correctness
> testing.  It's mostly being used for speed testing.  A secondary
> use is to instantiate templates.

Thanks, that makes a lot of sense.
In that case, how useful would `kunit.py run` be? I.e. do people
mostly want to see numbers on bare metal?

The default mode of `kunit.py run` is to use ARCH=um.
I assume (for at least most of the library-type crypto code) it should
have the same performance characteristics, but that might not be the
case. I can try and get some numbers on that.

There's an option to make `kunit.py run` use ARCH=x86_64, but it'll be
in a QEMU VM, so again there's some performance overhead.
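
E.g. something like this (using the --arch flag from kunit.py's
recently added QEMU support):

$ ./tools/testing/kunit/kunit.py run --kunitconfig=crypto --arch=x86_64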

If either option seems useful, then perhaps a minimal patch like this
would be beneficial.
I can make it even smaller and less intrusive by restoring the "ret +=
..." code and having a single `KUNIT_EXPECT_EQ_MSG(test, ret, 0, "at
least one test case failed")` at the very end.
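
As a sketch, that minimal version would boil down to:

	/* at the very end of the single KUnit case, keeping the
	 * existing "ret += ..." accumulation code above */
	KUNIT_EXPECT_EQ_MSG(test, ret, 0, "at least one test case failed");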

It does not sound like patch #2, or any future attempt to make use of
KUnit features, is necessarily worth it, if correctness testing isn't
really the goal of tcrypt.c anymore.

> > * does `tvmem` actually need page-aligned buffers?
>
> I think it may be needed for those split-SG test cases where
> we deliberately create a buffer that straddles a page boundary.
>
> > * I have no clue how FIPS intersects with all of this.
>
> It doesn't really matter because in FIPS mode when a correctness
> test fails the kernel panics.
>
> >   * would it be fine to leave the test code built-in for FIPS instead of
> >   returning -EAGAIN?
>
> The returning -EAGAIN is irrelevant in FIPS mode.  It's more of
> an aid in normal mode when you use tcrypt for speed testing.
>
> Thanks,
> --
> Email: Herbert Xu <herbert@gondor.apana.org.au>
> Home Page: http://gondor.apana.org.au/~herbert/
> PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
Herbert Xu July 30, 2021, 2:55 a.m. UTC | #3
On Fri, Jul 23, 2021 at 12:31:28PM -0700, Daniel Latypov wrote:
>
> Thanks, that makes a lot of sense.
> In that case, how useful would `kunit.py run` be? I.e. do people
> mostly want to see numbers on bare metal?

I think it's a mix of both.  As in performance on bare metal and
under virtualisation may be of interest.  I don't think you're going
to be going through kunit for the speed tests though, because you
need to supply module parameters for tcrypt to do that.

Cheers,
-- 
Email: Herbert Xu <herbert@gondor.apana.org.au>
Home Page: http://gondor.apana.org.au/~herbert/
PGP Key: http://gondor.apana.org.au/~herbert/pubkey.txt
David Gow July 30, 2021, 5:33 a.m. UTC | #4
On Fri, Jul 30, 2021 at 10:55 AM Herbert Xu <herbert@gondor.apana.org.au> wrote:
>
> On Fri, Jul 23, 2021 at 12:31:28PM -0700, Daniel Latypov wrote:
> >
> > Thanks, that makes a lot of sense.
> > In that case, how useful would `kunit.py run` be? I.e. do people
> > mostly want to see numbers on bare metal?
>
> I think it's a mix of both.  As in performance on bare metal and
> under virtualisation may be of interest.  I don't think you're going
> to be going through kunit for the speed tests though, because you
> need to supply module parameters for tcrypt to do that.

FYI, there is a patch for kunit_tool which will allow kernel
parameters to be passed through:
https://patchwork.kernel.org/project/linux-kselftest/patch/20210715160819.1107685-1-dlatypov@google.com/
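
For example, with that patch applied, something like this should work
(assuming the --kernel_args flag it proposes, with tcrypt's `mode`
parameter as the example):

$ ./tools/testing/kunit/kunit.py run --kunitconfig=crypto \
    --kernel_args=tcrypt.mode=200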

That being said, no one has ever used the KUnit tooling for performance
testing before, as far as I know, so whether it turns out to be useful
remains to be seen. With this patch, it'd at least be an option if you
wanted to try it.

-- David