test: make tests should use pytest -ra

Message ID 20200418120852.104484-1-xypron.glpk@gmx.de
State New
Series test: make tests should use pytest -ra

Commit Message

Heinrich Schuchardt April 18, 2020, 12:08 p.m. UTC
By passing -ra to pytest we get a summary indicating which tests failed
and why tests were skipped.

Here is an example output:

======================== short test summary info =========================
SKIPPED [1] test/py/tests/test_efi_loader.py:81: No DHCP server available
SKIPPED [1] test/py/tests/test_efi_loader.py:100: No static network
configuration is defined
SKIPPED [2] test/py/tests/test_efi_loader.py:115: Network not initialized

--
2.25.1

Comments

Simon Glass April 19, 2020, 11:38 p.m. UTC | #1
On Sat, 18 Apr 2020 at 06:09, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
>
> By passing -ra to pytest we get a summary indicating which tests failed
> and why tests were skipped.
>
> Here is an example output:
>
> ======================== short test summary info =========================
> SKIPPED [1] test/py/tests/test_efi_loader.py:81: No DHCP server available
> SKIPPED [1] test/py/tests/test_efi_loader.py:100: No static network
> configuration is defined
> SKIPPED [2] test/py/tests/test_efi_loader.py:115: Network not initialized
>
> Signed-off-by: Heinrich Schuchardt <xypron.glpk at gmx.de>
> ---
>  test/run | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>

This is really noisy - I get lots of extra output. Can we make this an option?

Regards,
Simon
Heinrich Schuchardt April 20, 2020, 6:23 p.m. UTC | #2
On 4/20/20 1:38 AM, Simon Glass wrote:
> On Sat, 18 Apr 2020 at 06:09, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
>>
>> By passing -ra to pytest we get a summary indicating which tests failed
>> and why tests were skipped.
>>
>> Here is an example output:
>>
>> ======================== short test summary info =========================
>> SKIPPED [1] test/py/tests/test_efi_loader.py:81: No DHCP server available
>> SKIPPED [1] test/py/tests/test_efi_loader.py:100: No static network
>> configuration is defined
>> SKIPPED [2] test/py/tests/test_efi_loader.py:115: Network not initialized
>>
>> Signed-off-by: Heinrich Schuchardt <xypron.glpk at gmx.de>
>> ---
>>  test/run | 6 +++---
>>  1 file changed, 3 insertions(+), 3 deletions(-)
>>
>
> This is really noisy - I get lots of extra output. Can we make this an option?

When I run 'make tests' I get 41 out of 199 lines explaining skipped
and failed tests.

Lines like these are noise because there is no actionable information:

test/py/tests/test_fs/test_basic.py
sssssssssssssssssssssssssssssssssssssss [  0%]
test/py/tests/test_fs/test_ext.py ssssssssssssssssssssss [  0%]
test/py/tests/test_fs/test_mkdir.py ssssssssssss [  0%]
test/py/tests/test_fs/test_symlink.py ssss [  0%]
test/py/tests/test_fs/test_unlink.py ssssssssssssss [  0%]

This new line has actionable information:

SKIPPED [13] test/py/tests/test_fs/conftest.py:340: Setup failed for
filesystem: fat16

Next step is to change this line to provide a more useful output, e.g.

-    except CalledProcessError:
-        pytest.skip('Setup failed for filesystem: ' + fs_type)
+    except CalledProcessError as err:
+        pytest.skip('Setup failed for filesystem: ' + fs_type + \
+            ', {}'.format(err))
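
In isolation, the changed handler behaves like this minimal,
self-contained version (a sketch only; the image name is made up, and
the shell returns exit status 127 when the tool is missing, which
subprocess reports as a CalledProcessError):

import pytest
from subprocess import CalledProcessError, check_call

def test_fs_setup_example():
    fs_type = 'fat16'
    try:
        # Fails with exit status 127 if mkfs.vfat is not in PATH
        check_call('mkfs.vfat -F 16 example.img', shell=True)
    except CalledProcessError as err:
        # Include the failing command and exit status in the skip reason
        pytest.skip('Setup failed for filesystem: ' + fs_type +
                    ', {}'.format(err))

With that change, the summary line becomes: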

SKIPPED [13] test/py/tests/test_fs/conftest.py:340: Setup failed for
filesystem: fat16, Command 'mkfs.vfat -F 16
build-sandbox/persistent-data/3GB.fat16.img' returned non-zero exit
status 127.

Now we know that the test is wrong by assuming that mkfs.vfat is in
the path instead of using /usr/sbin/mkfs.vfat or /sbin/mkfs.vfat and we
can fix it.
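
A possible fix (a sketch only; the helper name is made up and this is
not the actual conftest.py code) is to resolve the tool explicitly,
also searching the sbin directories:

import os
import shutil

def find_tool(name):
    # Return an absolute path to 'name', searching sbin directories too
    extra = ['/sbin', '/usr/sbin', '/usr/local/sbin']
    search = os.environ.get('PATH', '') + os.pathsep + os.pathsep.join(extra)
    path = shutil.which(name, path=search)
    if path is None:
        raise FileNotFoundError('{} not found in PATH or sbin'.format(name))
    return path

A call like find_tool('mkfs.vfat') would then work whether or not the
sbin directories are in the user's PATH.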

We should get rid of all skipped tests especially on Travis CI and
Gitlab CI. Further we should provide instructions to set up a local
system to avoid skipping tests.

Simon, why do you want to remove the actionable information?

Best regards

Heinrich
Tom Rini April 20, 2020, 6:32 p.m. UTC | #3
On Mon, Apr 20, 2020 at 08:23:08PM +0200, Heinrich Schuchardt wrote:
> On 4/20/20 1:38 AM, Simon Glass wrote:
> > On Sat, 18 Apr 2020 at 06:09, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
> >>
> >> By passing -ra to pytest we get a summary indicating which tests failed
> >> and why tests were skipped.
> >>
> >> Here is an example output:
> >>
> >> ======================== short test summary info =========================
> >> SKIPPED [1] test/py/tests/test_efi_loader.py:81: No DHCP server available
> >> SKIPPED [1] test/py/tests/test_efi_loader.py:100: No static network
> >> configuration is defined
> >> SKIPPED [2] test/py/tests/test_efi_loader.py:115: Network not initialized
> >>
> >> Signed-off-by: Heinrich Schuchardt <xypron.glpk at gmx.de>
> >> ---
> >>  test/run | 6 +++---
> >>  1 file changed, 3 insertions(+), 3 deletions(-)
> >>
> >
> > This is really noisy - I get lots of extra output. Can we make this an option?
> 
> When I run 'make tests' I get 41 out of 199 lines explaining skipped
> and failed tests.
> 
> Lines like these are noise because there is no actionable information:
> 
> test/py/tests/test_fs/test_basic.py
> sssssssssssssssssssssssssssssssssssssss [  0%]
> test/py/tests/test_fs/test_ext.py ssssssssssssssssssssss [  0%]
> test/py/tests/test_fs/test_mkdir.py ssssssssssss [  0%]
> test/py/tests/test_fs/test_symlink.py ssss [  0%]
> test/py/tests/test_fs/test_unlink.py ssssssssssssss [  0%]
> 
> This new line has actionable information:
> 
> SKIPPED [13] test/py/tests/test_fs/conftest.py:340: Setup failed for
> filesystem: fat16
> 
> Next step is to change this line to provide a more useful output, e.g.
> 
> -    except CalledProcessError:
> -        pytest.skip('Setup failed for filesystem: ' + fs_type)
> +    except CalledProcessError as err:
> +        pytest.skip('Setup failed for filesystem: ' + fs_type + \
> +            ', {}'.format(err))
> 
> SKIPPED [13] test/py/tests/test_fs/conftest.py:340: Setup failed for
> filesystem: fat16, Command 'mkfs.vfat -F 16
> build-sandbox/persistent-data/3GB.fat16.img' returned non-zero exit
> status 127.
> 
> Now we know that the test is wrong by assuming that mkfs.vfat is in
> the path instead of using /usr/sbin/mkfs.vfat or /sbin/mkfs.vfat and we
> can fix it.
> 
> We should get rid of all skipped tests especially on Travis CI and
> Gitlab CI. Further we should provide instructions to set up a local
> system to avoid skipping tests.
> 
> Simon, why do you want to remove the actionable information?

OK, so I ran this through "qcheck" and I got a bit more output, some of
which indicates we should figure out how to enable these tests for
sandbox out of the box.

The follow-up patch I really want to see is passing -ra in the
travis/gitlab/azure file, as I very much agree we should get everything
running in CI or have a good reason why not (e.g. the MMC tests might be
hard to enable on all platforms via QEMU, but we should make a good try
at it).

And giving better error messages too would be very helpful so it's
clearer why we aren't running (and why some people would not want to
run) the FS tests, for example, as we really haven't figured out a good
100% non-root-user method to do those tests.
Simon Glass April 20, 2020, 6:45 p.m. UTC | #4
Hi Heinrich,

On Mon, 20 Apr 2020 at 12:23, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
>
> On 4/20/20 1:38 AM, Simon Glass wrote:
> > On Sat, 18 Apr 2020 at 06:09, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
> >>
> >> By passing -ra to pytest we get a summary indicating which tests failed
> >> and why tests were skipped.
> >>
> >> Here is an example output:
> >>
> >> ======================== short test summary info =========================
> >> SKIPPED [1] test/py/tests/test_efi_loader.py:81: No DHCP server available
> >> SKIPPED [1] test/py/tests/test_efi_loader.py:100: No static network
> >> configuration is defined
> >> SKIPPED [2] test/py/tests/test_efi_loader.py:115: Network not initialized
> >>
> >> Signed-off-by: Heinrich Schuchardt <xypron.glpk at gmx.de>
> >> ---
> >>  test/run | 6 +++---
> >>  1 file changed, 3 insertions(+), 3 deletions(-)
> >>
> >
> > This is really noisy - I get lots of extra output. Can we make this an option?
>
> When I run 'make tests' I get 41 out of 199 lines explaining skipped
> and failed tests.
>
> Lines like these are noise because there is no actionable information:
>
> test/py/tests/test_fs/test_basic.py
> sssssssssssssssssssssssssssssssssssssss [  0%]
> test/py/tests/test_fs/test_ext.py ssssssssssssssssssssss [  0%]
> test/py/tests/test_fs/test_mkdir.py ssssssssssss [  0%]
> test/py/tests/test_fs/test_symlink.py ssss [  0%]
> test/py/tests/test_fs/test_unlink.py ssssssssssssss [  0%]
>
> This new line has actionable information:
>
> SKIPPED [13] test/py/tests/test_fs/conftest.py:340: Setup failed for
> filesystem: fat16
>
> Next step is to change this line to provide a more useful output, e.g.
>
> -    except CalledProcessError:
> -        pytest.skip('Setup failed for filesystem: ' + fs_type)
> +    except CalledProcessError as err:
> +        pytest.skip('Setup failed for filesystem: ' + fs_type + \
> +            ', {}'.format(err))
>
> SKIPPED [13] test/py/tests/test_fs/conftest.py:340: Setup failed for
> filesystem: fat16, Command 'mkfs.vfat -F 16
> build-sandbox/persistent-data/3GB.fat16.img' returned non-zero exit
> status 127.
>
> Now we know that the test is wrong by assuming that mkfs.vfat is in
> the path instead of using /usr/sbin/mkfs.vfat or /sbin/mkfs.vfat and we
> can fix it.
>
> We should get rid of all skipped tests especially on Travis CI and
> Gitlab CI. Further we should provide instructions to set up a local
> system to avoid skipping tests.
>
> Simon, why do you want to remove the actionable information?

I don't want to remove it. It isn't there now!

Let's fix it before we enable it. Otherwise it is just noise. The
device tree fiasco is a real pain.

BTW the fs tests are flaky for me so I seldom run them.

Regards,
Simon
Heinrich Schuchardt April 20, 2020, 7 p.m. UTC | #5
On 4/20/20 8:32 PM, Tom Rini wrote:
> On Mon, Apr 20, 2020 at 08:23:08PM +0200, Heinrich Schuchardt wrote:
>> On 4/20/20 1:38 AM, Simon Glass wrote:
>>> On Sat, 18 Apr 2020 at 06:09, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
>>>>
>>>> By passing -ra to pytest we get a summary indicating which tests failed
>>>> and why tests were skipped.
>>>>
>>>> Here is an example output:
>>>>
>>>> ======================== short test summary info =========================
>>>> SKIPPED [1] test/py/tests/test_efi_loader.py:81: No DHCP server available
>>>> SKIPPED [1] test/py/tests/test_efi_loader.py:100: No static network
>>>> configuration is defined
>>>> SKIPPED [2] test/py/tests/test_efi_loader.py:115: Network not initialized
>>>>
>>>> Signed-off-by: Heinrich Schuchardt <xypron.glpk at gmx.de>
>>>> ---
>>>>  test/run | 6 +++---
>>>>  1 file changed, 3 insertions(+), 3 deletions(-)
>>>>
>>>
>>> This is really noisy - I get lots of extra output. Can we make this an option?
>>
>> When I run 'make tests' I get 41 out of 199 lines explaining skipped
>> and failed tests.
>>
>> Lines like these are noise because there is no actionable information:
>>
>> test/py/tests/test_fs/test_basic.py
>> sssssssssssssssssssssssssssssssssssssss [  0%]
>> test/py/tests/test_fs/test_ext.py ssssssssssssssssssssss [  0%]
>> test/py/tests/test_fs/test_mkdir.py ssssssssssss [  0%]
>> test/py/tests/test_fs/test_symlink.py ssss [  0%]
>> test/py/tests/test_fs/test_unlink.py ssssssssssssss [  0%]
>>
>> This new line has actionable information:
>>
>> SKIPPED [13] test/py/tests/test_fs/conftest.py:340: Setup failed for
>> filesystem: fat16
>>
>> Next step is to change this line to provide a more useful output, e.g.
>>
>> -    except CalledProcessError:
>> -        pytest.skip('Setup failed for filesystem: ' + fs_type)
>> +    except CalledProcessError as err:
>> +        pytest.skip('Setup failed for filesystem: ' + fs_type + \
>> +            ', {}'.format(err))
>>
>> SKIPPED [13] test/py/tests/test_fs/conftest.py:340: Setup failed for
>> filesystem: fat16, Command 'mkfs.vfat -F 16
>> build-sandbox/persistent-data/3GB.fat16.img' returned non-zero exit
>> status 127.
>>
>> Now we know that the test is wrong by assuming that mkfs.vfat is in
>> the path instead of using /usr/sbin/mkfs.vfat or /sbin/mkfs.vfat and we
>> can fix it.
>>
>> We should get rid of all skipped tests especially on Travis CI and
>> Gitlab CI. Further we should provide instructions to set up a local
>> system to avoid skipping tests.
>>
>> Simon, why do you want to remove the actionable information?
>
> OK, so I ran this through "qcheck" and I got a bit more output, some of
> which indicates we should figure out how to enable these tests for
> sandbox out of the box.
>
> The follow-up patch I really want to see is passing -ra in the
> travis/gitlab/azure file, as I very much agree we should get everything
> running in CI or have a good reason why not (e.g. the MMC tests might be
> hard to enable on all platforms via QEMU, but we should make a good try
> at it).
>
> And giving better error messages too would be very helpful so it's
> clearer why we aren't running (and why some people would not want to
> run) the FS tests, for example, as we really haven't figured out a good
> 100% non-root-user method to do those tests.
>

The tests run fine on my Debian system with
https://github.com/xypron2/u-boot/commit/f465d52a5a539761d9e5602331280ce07a1bcbca
where I assume mkfs.* and fsck.ext4 are in /sbin. The same should work
on Ubuntu.

On current distributions there are also symlinks in /usr/sbin/, but
https://askubuntu.com/questions/674847/sudo-usr-sbin-mkfs-ext4-command-not-found
indicates that this does not hold true for older distributions.

I will submit it once it has been tested in Travis CI and Gitlab CI.

Best regards

Heinrich
Heinrich Schuchardt April 20, 2020, 7:03 p.m. UTC | #6
On 4/20/20 8:45 PM, Simon Glass wrote:
> Hi Heinrich,
>
> On Mon, 20 Apr 2020 at 12:23, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
>>
>> On 4/20/20 1:38 AM, Simon Glass wrote:
>>> On Sat, 18 Apr 2020 at 06:09, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
>>>>
>>>> By passing -ra to pytest we get a summary indicating which tests failed
>>>> and why tests were skipped.
>>>>
>>>> Here is an example output:
>>>>
>>>> ======================== short test summary info =========================
>>>> SKIPPED [1] test/py/tests/test_efi_loader.py:81: No DHCP server available
>>>> SKIPPED [1] test/py/tests/test_efi_loader.py:100: No static network
>>>> configuration is defined
>>>> SKIPPED [2] test/py/tests/test_efi_loader.py:115: Network not initialized
>>>>
>>>> Signed-off-by: Heinrich Schuchardt <xypron.glpk at gmx.de>
>>>> ---
>>>>  test/run | 6 +++---
>>>>  1 file changed, 3 insertions(+), 3 deletions(-)
>>>>
>>>
>>> This is really noisy - I get lots of extra output. Can we make this an option?
>>
>> When I run 'make tests' I get 41 out of 199 lines explaining skipped
>> and failed tests.
>>
>> Lines like these are noise because there is no actionable information:
>>
>> test/py/tests/test_fs/test_basic.py
>> sssssssssssssssssssssssssssssssssssssss [  0%]
>> test/py/tests/test_fs/test_ext.py ssssssssssssssssssssss [  0%]
>> test/py/tests/test_fs/test_mkdir.py ssssssssssss [  0%]
>> test/py/tests/test_fs/test_symlink.py ssss [  0%]
>> test/py/tests/test_fs/test_unlink.py ssssssssssssss [  0%]
>>
>> This new line has actionable information:
>>
>> SKIPPED [13] test/py/tests/test_fs/conftest.py:340: Setup failed for
>> filesystem: fat16
>>
>> Next step is to change this line to provide a more useful output, e.g.
>>
>> -    except CalledProcessError:
>> -        pytest.skip('Setup failed for filesystem: ' + fs_type)
>> +    except CalledProcessError as err:
>> +        pytest.skip('Setup failed for filesystem: ' + fs_type + \
>> +            ', {}'.format(err))
>>
>> SKIPPED [13] test/py/tests/test_fs/conftest.py:340: Setup failed for
>> filesystem: fat16, Command 'mkfs.vfat -F 16
>> build-sandbox/persistent-data/3GB.fat16.img' returned non-zero exit
>> status 127.
>>
>> Now we know that the test is wrong by assuming that mkfs.vfat is in
>> the path instead of using /usr/sbin/mkfs.vfat or /sbin/mkfs.vfat and we
>> can fix it.
>>
>> We should get rid of all skipped tests especially on Travis CI and
>> Gitlab CI. Further we should provide instructions to set up a local
>> system to avoid skipping tests.
>>
>> Simon, why do you want to remove the actionable information?
>
> I don't want to remove it. It isn't there now!
>
> Let's fix it before we enable it. Otherwise it is just noise. The
> device tree fiasco is a real pain.
>
> BTW the fs tests are flaky for me so I seldom run them.

What do you mean by flaky?

Do you mean unreliable (cf.
https://www.urbandictionary.com/define.php?term=flaky)?

What is unreliable about the tests?

Best regards

Heinrich

>
> Regards,
> Simon
>
Simon Glass April 21, 2020, 5:36 p.m. UTC | #7
Hi Heinrich,

On Mon, 20 Apr 2020 at 13:03, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
>
> On 4/20/20 8:45 PM, Simon Glass wrote:
> > Hi Heinrich,
> >
> > On Mon, 20 Apr 2020 at 12:23, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
> >>
> >> On 4/20/20 1:38 AM, Simon Glass wrote:
> >>> On Sat, 18 Apr 2020 at 06:09, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
> >>>>
> >>>> By passing -ra to pytest we get a summary indicating which tests failed
> >>>> and why tests were skipped.
> >>>>
> >>>> Here is an example output:
> >>>>
> >>>> ======================== short test summary info =========================
> >>>> SKIPPED [1] test/py/tests/test_efi_loader.py:81: No DHCP server available
> >>>> SKIPPED [1] test/py/tests/test_efi_loader.py:100: No static network
> >>>> configuration is defined
> >>>> SKIPPED [2] test/py/tests/test_efi_loader.py:115: Network not initialized
> >>>>
> >>>> Signed-off-by: Heinrich Schuchardt <xypron.glpk at gmx.de>
> >>>> ---
> >>>>  test/run | 6 +++---
> >>>>  1 file changed, 3 insertions(+), 3 deletions(-)
> >>>>
> >>>
> >>> This is really noisy - I get lots of extra output. Can we make this an option?
> >>
> >> When I run 'make tests' I get 41 out of 199 lines explaining skipped
> >> and failed tests.
> >>
> >> Lines like these are noise because there is no actionable information:
> >>
> >> test/py/tests/test_fs/test_basic.py
> >> sssssssssssssssssssssssssssssssssssssss [  0%]
> >> test/py/tests/test_fs/test_ext.py ssssssssssssssssssssss [  0%]
> >> test/py/tests/test_fs/test_mkdir.py ssssssssssss [  0%]
> >> test/py/tests/test_fs/test_symlink.py ssss [  0%]
> >> test/py/tests/test_fs/test_unlink.py ssssssssssssss [  0%]
> >>
> >> This new line has actionable information:
> >>
> >> SKIPPED [13] test/py/tests/test_fs/conftest.py:340: Setup failed for
> >> filesystem: fat16
> >>
> >> Next step is to change this line to provide a more useful output, e.g.
> >>
> >> -    except CalledProcessError:
> >> -        pytest.skip('Setup failed for filesystem: ' + fs_type)
> >> +    except CalledProcessError as err:
> >> +        pytest.skip('Setup failed for filesystem: ' + fs_type + \
> >> +            ', {}'.format(err))
> >>
> >> SKIPPED [13] test/py/tests/test_fs/conftest.py:340: Setup failed for
> >> filesystem: fat16, Command 'mkfs.vfat -F 16
> >> build-sandbox/persistent-data/3GB.fat16.img' returned non-zero exit
> >> status 127.
> >>
> >> Now we know that the test is wrong by assuming that mkfs.vfat is in
> >> the path instead of using /usr/sbin/mkfs.vfat or /sbin/mkfs.vfat and we
> >> can fix it.
> >>
> >> We should get rid of all skipped tests especially on Travis CI and
> >> Gitlab CI. Further we should provide instructions to set up a local
> >> system to avoid skipping tests.
> >>
> >> Simon, why do you want to remove the actionable information?
> >
> > I don't want to remove it. It isn't there now!
> >
> > Let's fix it before we enable it. Otherwise it is just noise. The
> > device tree fiasco is a real pain.
> >
> > BTW the fs tests are flaky for me so I seldom run them.
>
> What do you mean by flaky?
>
> Do you mean unreliable (cf.
> https://www.urbandictionary.com/define.php?term=flaky)?

Yes!

>
> What is unreliable about the tests?

You have it above - the filesystem tests sometimes fail for me.

I think all the other tests are good, although I think there is one
that has a time delay in it that needs to be fixed.

Also we should really run the tests concurrently like binman does (see
tools/concurrencytest).

Regards,
Simon
Tom Rini April 21, 2020, 5:47 p.m. UTC | #8
On Tue, Apr 21, 2020 at 11:36:32AM -0600, Simon Glass wrote:
> Hi Heinrich,
> 
> On Mon, 20 Apr 2020 at 13:03, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
> >
> > On 4/20/20 8:45 PM, Simon Glass wrote:
> > > Hi Heinrich,
> > >
> > > On Mon, 20 Apr 2020 at 12:23, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
> > >>
> > >> On 4/20/20 1:38 AM, Simon Glass wrote:
> > >>> On Sat, 18 Apr 2020 at 06:09, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
> > >>>>
> > >>>> By passing -ra to pytest we get a summary indicating which tests failed
> > >>>> and why tests were skipped.
> > >>>>
> > >>>> Here is an example output:
> > >>>>
> > >>>> ======================== short test summary info =========================
> > >>>> SKIPPED [1] test/py/tests/test_efi_loader.py:81: No DHCP server available
> > >>>> SKIPPED [1] test/py/tests/test_efi_loader.py:100: No static network
> > >>>> configuration is defined
> > >>>> SKIPPED [2] test/py/tests/test_efi_loader.py:115: Network not initialized
> > >>>>
> > >>>> Signed-off-by: Heinrich Schuchardt <xypron.glpk at gmx.de>
> > >>>> ---
> > >>>>  test/run | 6 +++---
> > >>>>  1 file changed, 3 insertions(+), 3 deletions(-)
> > >>>>
> > >>>
> > >>> This is really noisy - I get lots of extra output. Can we make this an option?
> > >>
> > >> When I run 'make tests' I get 41 out of 199 lines explaining skipped
> > >> and failed tests.
> > >>
> > >> Lines like these are noise because there is no actionable information:
> > >>
> > >> test/py/tests/test_fs/test_basic.py
> > >> sssssssssssssssssssssssssssssssssssssss [  0%]
> > >> test/py/tests/test_fs/test_ext.py ssssssssssssssssssssss [  0%]
> > >> test/py/tests/test_fs/test_mkdir.py ssssssssssss [  0%]
> > >> test/py/tests/test_fs/test_symlink.py ssss [  0%]
> > >> test/py/tests/test_fs/test_unlink.py ssssssssssssss [  0%]
> > >>
> > >> This new line has actionable information:
> > >>
> > >> SKIPPED [13] test/py/tests/test_fs/conftest.py:340: Setup failed for
> > >> filesystem: fat16
> > >>
> > >> Next step is to change this line to provide a more useful output, e.g.
> > >>
> > >> -    except CalledProcessError:
> > >> -        pytest.skip('Setup failed for filesystem: ' + fs_type)
> > >> +    except CalledProcessError as err:
> > >> +        pytest.skip('Setup failed for filesystem: ' + fs_type + \
> > >> +            ', {}'.format(err))
> > >>
> > >> SKIPPED [13] test/py/tests/test_fs/conftest.py:340: Setup failed for
> > >> filesystem: fat16, Command 'mkfs.vfat -F 16
> > >> build-sandbox/persistent-data/3GB.fat16.img' returned non-zero exit
> > >> status 127.
> > >>
> > >> Now we know that the test is wrong by assuming that mkfs.vfat is in
> > >> the path instead of using /usr/sbin/mkfs.vfat or /sbin/mkfs.vfat and we
> > >> can fix it.
> > >>
> > >> We should get rid of all skipped tests especially on Travis CI and
> > >> Gitlab CI. Further we should provide instructions to set up a local
> > >> system to avoid skipping tests.
> > >>
> > >> Simon, why do you want to remove the actionable information?
> > >
> > > I don't want to remove it. It isn't there now!
> > >
> > > Let's fix it before we enable it. Otherwise it is just noise. The
> > > device tree fiasco is a real pain.
> > >
> > > BTW the fs tests are flaky for me so I seldom run them.
> >
> > What do you mean by flaky?
> >
> > Do you mean unreliable (cf.
> > https://www.urbandictionary.com/define.php?term=flaky)?
> 
> Yes!
> 
> >
> > What is unreliable about the tests?
> 
> You have it above - the filesystem tests sometimes fail for me.

I think I've seen some of the FAT tests fail historically, but not recently.
The biggest problem I see with them is that they are skipped or run for
seemingly random reasons, under gitlab.  And the root patch here would
help to see why.  For example:
https://gitlab.denx.de/u-boot/u-boot/-/jobs/80886 skips them but
https://gitlab.denx.de/u-boot/u-boot/-/jobs/80850 runs them.  May be a
runner config problem?

> I think all the other tests are good, although I think there is one
> that has a time delay in it that needs to be fixed.
> 
> Also we should really run the tests concurrently like binman does (see
> tools/concurrencytest).

I'm not sure how we could run most tests concurrently and keep things
generic.   We can spawn N sandbox binaries but only one physical board.
Simon Glass April 21, 2020, 6:13 p.m. UTC | #9
Hi Tom,

On Tue, 21 Apr 2020 at 11:47, Tom Rini <trini at konsulko.com> wrote:
>
> On Tue, Apr 21, 2020 at 11:36:32AM -0600, Simon Glass wrote:
> > Hi Heinrich,
> >
> > On Mon, 20 Apr 2020 at 13:03, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
> > >
> > > On 4/20/20 8:45 PM, Simon Glass wrote:
> > > > Hi Heinrich,
> > > >
> > > > On Mon, 20 Apr 2020 at 12:23, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
> > > >>
> > > >> On 4/20/20 1:38 AM, Simon Glass wrote:
> > > >>> On Sat, 18 Apr 2020 at 06:09, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
> > > >>>>
> > > >>>> By passing -ra to pytest we get a summary indicating which tests failed
> > > >>>> and why tests were skipped.
> > > >>>>
> > > >>>> Here is an example output:
> > > >>>>
> > > >>>> ======================== short test summary info =========================
> > > >>>> SKIPPED [1] test/py/tests/test_efi_loader.py:81: No DHCP server available
> > > >>>> SKIPPED [1] test/py/tests/test_efi_loader.py:100: No static network
> > > >>>> configuration is defined
> > > >>>> SKIPPED [2] test/py/tests/test_efi_loader.py:115: Network not initialized
> > > >>>>
> > > >>>> Signed-off-by: Heinrich Schuchardt <xypron.glpk at gmx.de>
> > > >>>> ---
> > > >>>>  test/run | 6 +++---
> > > >>>>  1 file changed, 3 insertions(+), 3 deletions(-)
> > > >>>>
> > > >>>
> > > >>> This is really noisy - I get lots of extra output. Can we make this an option?
> > > >>
> > > >> When I run 'make tests' I get 41 out of 199 lines explaining skipped
> > > >> and failed tests.
> > > >>
> > > >> Lines like these are noise because there is no actionable information:
> > > >>
> > > >> test/py/tests/test_fs/test_basic.py
> > > >> sssssssssssssssssssssssssssssssssssssss [  0%]
> > > >> test/py/tests/test_fs/test_ext.py ssssssssssssssssssssss [  0%]
> > > >> test/py/tests/test_fs/test_mkdir.py ssssssssssss [  0%]
> > > >> test/py/tests/test_fs/test_symlink.py ssss [  0%]
> > > >> test/py/tests/test_fs/test_unlink.py ssssssssssssss [  0%]
> > > >>
> > > >> This new line has actionable information:
> > > >>
> > > >> SKIPPED [13] test/py/tests/test_fs/conftest.py:340: Setup failed for
> > > >> filesystem: fat16
> > > >>
> > > >> Next step is to change this line to provide a more useful output, e.g.
> > > >>
> > > >> -    except CalledProcessError:
> > > >> -        pytest.skip('Setup failed for filesystem: ' + fs_type)
> > > >> +    except CalledProcessError as err:
> > > >> +        pytest.skip('Setup failed for filesystem: ' + fs_type + \
> > > >> +            ', {}'.format(err))
> > > >>
> > > >> SKIPPED [13] test/py/tests/test_fs/conftest.py:340: Setup failed for
> > > >> filesystem: fat16, Command 'mkfs.vfat -F 16
> > > >> build-sandbox/persistent-data/3GB.fat16.img' returned non-zero exit
> > > >> status 127.
> > > >>
> > > >> Now we know that the test is wrong by assuming that mkfs.vfat is in
> > > >> the path instead of using /usr/sbin/mkfs.vfat or /sbin/mkfs.vfat and we
> > > >> can fix it.
> > > >>
> > > >> We should get rid of all skipped tests especially on Travis CI and
> > > >> Gitlab CI. Further we should provide instructions to set up a local
> > > >> system to avoid skipping tests.
> > > >>
> > > >> Simon, why do you want to remove the actionable information?
> > > >
> > > > I don't want to remove it. It isn't there now!
> > > >
> > > > Let's fix it before we enable it. Otherwise it is just noise. The
> > > > device tree fiasco is a real pain.
> > > >
> > > > BTW the fs tests are flaky for me so I seldom run them.
> > >
> > > What do you mean by flaky?
> > >
> > > Do you mean unreliable (cf.
> > > https://www.urbandictionary.com/define.php?term=flaky)?
> >
> > Yes!
> >
> > >
> > > What is unreliable about the tests?
> >
> > You have it above - the filesystem tests sometimes fail for me.
>
> I think I've seen some of the FAT tests fail historically, but not recently.
> The biggest problem I see with them is that they are skipped or run for
> seemingly random reasons, under gitlab.  And the root patch here would
> help to see why.  For example:
> https://gitlab.denx.de/u-boot/u-boot/-/jobs/80886 skips them but
> https://gitlab.denx.de/u-boot/u-boot/-/jobs/80850 runs them.  May be a
> runner config problem?

Oh gosh that is odd.

>
> > I think all the other tests are good, although I think there is one
> > that has a time delay in it that needs to be fixed.
> >
> > Also we should really run the tests concurrently like binman does (see
> > tools/concurrencytest).
>
> I'm not sure how we could run most tests concurrently and keep things
> generic.   We can spawn N sandbox binaries but only one physical board.

Yes this is only for sandbox. It is pretty easy to turn on when it is allowed.

The current code in binman does this:

use_concurrent = True
try:
    from concurrencytest import ConcurrentTestSuite, fork_for_tests
except:
    use_concurrent = False


then:

    if use_concurrent and processes != 1:
        concurrent_suite = ConcurrentTestSuite(suite,
                fork_for_tests(processes or multiprocessing.cpu_count()))
        concurrent_suite.run(result)
    else:
        suite.run(result)
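
Filled out with imports, that pattern is roughly the following (a
self-contained sketch with a trivial stand-in test case; it assumes the
concurrencytest module is importable and falls back to a serial run
otherwise):

import multiprocessing
import unittest

use_concurrent = True
try:
    from concurrencytest import ConcurrentTestSuite, fork_for_tests
except ImportError:
    use_concurrent = False

class ExampleTest(unittest.TestCase):
    def test_trivial(self):
        self.assertEqual(1 + 1, 2)

def run_suite(processes=None):
    suite = unittest.TestLoader().loadTestsFromTestCase(ExampleTest)
    result = unittest.TestResult()
    if use_concurrent and processes != 1:
        # Fork one worker per CPU (or 'processes' workers); each worker
        # runs a slice of the suite and results are merged into 'result'
        concurrent_suite = ConcurrentTestSuite(suite,
                fork_for_tests(processes or multiprocessing.cpu_count()))
        concurrent_suite.run(result)
    else:
        suite.run(result)
    return result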

Regards,
Simon
Tom Rini April 21, 2020, 8:01 p.m. UTC | #10
On Tue, Apr 21, 2020 at 12:13:08PM -0600, Simon Glass wrote:
> Hi Tom,
> 
> On Tue, 21 Apr 2020 at 11:47, Tom Rini <trini at konsulko.com> wrote:
> >
> > On Tue, Apr 21, 2020 at 11:36:32AM -0600, Simon Glass wrote:
> > > Hi Heinrich,
> > >
> > > On Mon, 20 Apr 2020 at 13:03, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
> > > >
> > > > On 4/20/20 8:45 PM, Simon Glass wrote:
> > > > > Hi Heinrich,
> > > > >
> > > > > On Mon, 20 Apr 2020 at 12:23, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
> > > > >>
> > > > >> On 4/20/20 1:38 AM, Simon Glass wrote:
> > > > >>> On Sat, 18 Apr 2020 at 06:09, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
> > > > >>>>
> > > > >>>> By passing -ra to pytest we get a summary indicating which tests failed
> > > > >>>> and why tests were skipped.
> > > > >>>>
> > > > >>>> Here is an example output:
> > > > >>>>
> > > > >>>> ======================== short test summary info =========================
> > > > >>>> SKIPPED [1] test/py/tests/test_efi_loader.py:81: No DHCP server available
> > > > >>>> SKIPPED [1] test/py/tests/test_efi_loader.py:100: No static network
> > > > >>>> configuration is defined
> > > > >>>> SKIPPED [2] test/py/tests/test_efi_loader.py:115: Network not initialized
> > > > >>>>
> > > > >>>> Signed-off-by: Heinrich Schuchardt <xypron.glpk at gmx.de>
> > > > >>>> ---
> > > > >>>>  test/run | 6 +++---
> > > > >>>>  1 file changed, 3 insertions(+), 3 deletions(-)
> > > > >>>>
> > > > >>>
> > > > >>> This is really noisy - I get lots of extra output. Can we make this an option?
> > > > >>
> > > > >> When I run 'make tests' I get 41 out of 199 lines explaining skipped
> > > > >> and failed tests.
> > > > >>
> > > > >> Lines like these are noise because there is no actionable information:
> > > > >>
> > > > >> test/py/tests/test_fs/test_basic.py
> > > > >> sssssssssssssssssssssssssssssssssssssss [  0%]
> > > > >> test/py/tests/test_fs/test_ext.py ssssssssssssssssssssss [  0%]
> > > > >> test/py/tests/test_fs/test_mkdir.py ssssssssssss [  0%]
> > > > >> test/py/tests/test_fs/test_symlink.py ssss [  0%]
> > > > >> test/py/tests/test_fs/test_unlink.py ssssssssssssss [  0%]
> > > > >>
> > > > >> This new line has actionable information:
> > > > >>
> > > > >> SKIPPED [13] test/py/tests/test_fs/conftest.py:340: Setup failed for
> > > > >> filesystem: fat16
> > > > >>
> > > > >> Next step is to change this line to provide a more useful output, e.g.
> > > > >>
> > > > >> -    except CalledProcessError:
> > > > >> -        pytest.skip('Setup failed for filesystem: ' + fs_type)
> > > > >> +    except CalledProcessError as err:
> > > > >> +        pytest.skip('Setup failed for filesystem: ' + fs_type + \
> > > > >> +            ', {}'.format(err))
> > > > >>
> > > > >> SKIPPED [13] test/py/tests/test_fs/conftest.py:340: Setup failed for
> > > > >> filesystem: fat16, Command 'mkfs.vfat -F 16
> > > > >> build-sandbox/persistent-data/3GB.fat16.img' returned non-zero exit
> > > > >> status 127.
> > > > >>
> > > > >> Now we know that the test is wrong by assuming that mkfs.vfat is in
> > > > >> the path instead of using /usr/sbin/mkfs.vfat or /sbin/mkfs.vfat and we
> > > > >> can fix it.
> > > > >>
> > > > >> We should get rid of all skipped tests especially on Travis CI and
> > > > >> Gitlab CI. Further we should provide instructions to set up a local
> > > > >> system to avoid skipping tests.
> > > > >>
> > > > >> Simon, why do you want to remove the actionable information?
> > > > >
> > > > > I don't want to remove it. It isn't there now!
> > > > >
> > > > > Let's fix it before we enable it. Otherwise it is just noise. The
> > > > > device tree fiasco is a real pain.
> > > > >
> > > > > BTW the fs tests are flaky for me so I seldom run them.
> > > >
> > > > What do you mean by flaky?
> > > >
> > > > Do you mean unreliable (cf.
> > > > https://www.urbandictionary.com/define.php?term=flaky)?
> > >
> > > Yes!
> > >
> > > >
> > > > What is unreliable about the tests?
> > >
> > > You have it above - the filesystem tests sometimes fail for me.
> >
> > I think I've seen some of the FAT tests fail historically, but not recently.
> > The biggest problem I see with them is that they are skipped or run for
> > seemingly random reasons, under gitlab.  And the root patch here would
> > help to see why.  For example:
> > https://gitlab.denx.de/u-boot/u-boot/-/jobs/80886 skips them but
> > https://gitlab.denx.de/u-boot/u-boot/-/jobs/80850 runs them.  May be a
> > runner config problem?
> 
> Oh gosh that is odd.
> 
> >
> > > I think all the other tests are good, although I think there is one
> > > that has a time delay in it that needs to be fixed.
> > >
> > > Also we should really run the tests concurrently like binman does (see
> > > tools/concurrencytest).
> >
> > I'm not sure how we could run most tests concurrently and keep things
> > generic.   We can spawn N sandbox binaries but only one physical board.
> 
> Yes this is only for sandbox. It is pretty easy to turn on when it is allowed.
> 
> The current code in binman does this:
> 
> use_concurrent = True
> try:
>     from concurrencytest import ConcurrentTestSuite, fork_for_tests
> except:
>     use_concurrent = False
> 
> 
> then:
> 
>     if use_concurrent and processes != 1:
>         concurrent_suite = ConcurrentTestSuite(suite,
>                 fork_for_tests(processes or multiprocessing.cpu_count()))
>         concurrent_suite.run(result)
>     else:
>         suite.run(result)

Right, OK.  But have you proof-of-concepted that around pytest?  Rewriting
all of our tests in a different Python framework seems like a big ask.
I can see it being useful if a big part of the bottleneck is running
check/qcheck in your own dev cycle.  I personally don't try to run my HW
tests concurrently even if I could for everything-not-Pi.
Simon Glass April 21, 2020, 8:47 p.m. UTC | #11
Hi Tom,

On Tue, 21 Apr 2020 at 14:01, Tom Rini <trini at konsulko.com> wrote:
>
> On Tue, Apr 21, 2020 at 12:13:08PM -0600, Simon Glass wrote:
> > Hi Tom,
> >
> > On Tue, 21 Apr 2020 at 11:47, Tom Rini <trini at konsulko.com> wrote:
> > >
> > > On Tue, Apr 21, 2020 at 11:36:32AM -0600, Simon Glass wrote:
> > > > Hi Heinrich,
> > > >
> > > > On Mon, 20 Apr 2020 at 13:03, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
> > > > >
> > > > > On 4/20/20 8:45 PM, Simon Glass wrote:
> > > > > > Hi Heinrich,
> > > > > >
> > > > > > On Mon, 20 Apr 2020 at 12:23, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
> > > > > >>
> > > > > >> On 4/20/20 1:38 AM, Simon Glass wrote:
> > > > > >>> On Sat, 18 Apr 2020 at 06:09, Heinrich Schuchardt <xypron.glpk at gmx.de> wrote:
> > > > > >>>>
> > > > > >>>> By passing -ra to pytest we get a summary indicating which tests failed
> > > > > >>>> and why tests were skipped.
> > > > > >>>>
> > > > > >>>> Here is an example output:
> > > > > >>>>
> > > > > >>>> ======================== short test summary info =========================
> > > > > >>>> SKIPPED [1] test/py/tests/test_efi_loader.py:81: No DHCP server available
> > > > > >>>> SKIPPED [1] test/py/tests/test_efi_loader.py:100: No static network
> > > > > >>>> configuration is defined
> > > > > >>>> SKIPPED [2] test/py/tests/test_efi_loader.py:115: Network not initialized
> > > > > >>>>
> > > > > >>>> Signed-off-by: Heinrich Schuchardt <xypron.glpk at gmx.de>
> > > > > >>>> ---
> > > > > >>>>  test/run | 6 +++---
> > > > > >>>>  1 file changed, 3 insertions(+), 3 deletions(-)
> > > > > >>>>
> > > > > >>>
> > > > > >>> This is really noisy - I get lots of extra output. Can we make this an option?
> > > > > >>
> > > > > >> When I run 'make tests' I get 41 out of 199 lines explaining skipped
> > > > > >> and failed tests.
> > > > > >>
> > > > > >> Lines like these are noise because there is no actionable information:
> > > > > >>
> > > > > >> test/py/tests/test_fs/test_basic.py
> > > > > >> sssssssssssssssssssssssssssssssssssssss [  0%]
> > > > > >> test/py/tests/test_fs/test_ext.py ssssssssssssssssssssss [  0%]
> > > > > >> test/py/tests/test_fs/test_mkdir.py ssssssssssss [  0%]
> > > > > >> test/py/tests/test_fs/test_symlink.py ssss [  0%]
> > > > > >> test/py/tests/test_fs/test_unlink.py ssssssssssssss [  0%]
> > > > > >>
> > > > > >> This new line has actionable information:
> > > > > >>
> > > > > >> SKIPPED [13] test/py/tests/test_fs/conftest.py:340: Setup failed for
> > > > > >> filesystem: fat16
> > > > > >>
> > > > > >> Next step is to change this line to provide a more useful output, e.g.
> > > > > >>
> > > > > >> -    except CalledProcessError:
> > > > > >> -        pytest.skip('Setup failed for filesystem: ' + fs_type)
> > > > > >> +    except CalledProcessError as err:
> > > > > >> +        pytest.skip('Setup failed for filesystem: ' + fs_type + \
> > > > > >> +            ', {}'.format(err))
> > > > > >>
> > > > > >> SKIPPED [13] test/py/tests/test_fs/conftest.py:340: Setup failed for
> > > > > >> filesystem: fat16, Command 'mkfs.vfat -F 16
> > > > > >> build-sandbox/persistent-data/3GB.fat16.img' returned non-zero exit
> > > > > >> status 127.
> > > > > >>
> > > > > >> Now we know that the test is wrong by assuming that mkfs.vfat is in
> > > > > >> the path instead of using /usr/sbin/mkfs.vfat or /sbin/mkfs.vfat and we
> > > > > >> can fix it.
> > > > > >>
> > > > > >> We should get rid of all skipped tests especially on Travis CI and
> > > > > >> Gitlab CI. Further we should provide instructions to set up a local
> > > > > >> system to avoid skipping tests.
> > > > > >>
> > > > > >> Simon, why do you want to remove the actionable information?
> > > > > >
> > > > > > I don't want to remove it. It isn't there now!
> > > > > >
> > > > > > Let's fix it before we enable it. Otherwise it is just noise. The
> > > > > > device tree fiasco is a real pain.
> > > > > >
> > > > > > BTW the fs tests are flaky for me so I seldom run them.
> > > > >
> > > > > What do you mean by flaky?
> > > > >
> > > > > Do you mean unreliable (cf.
> > > > > https://www.urbandictionary.com/define.php?term=flaky)?
> > > >
> > > > Yes!
> > > >
> > > > >
> > > > > What is unreliable about the tests?
> > > >
> > > > You have it above - the filesystem tests sometimes fail for me.
> > >
> > > I think I've seen some of the FAT tests fail historically, but not recently.
> > > The biggest problem I see with them is that they are skipped or run for
> > > seemingly random reasons, under gitlab.  And the root patch here would
> > > help to see why.  For example:
> > > https://gitlab.denx.de/u-boot/u-boot/-/jobs/80886 skips them but
> > > https://gitlab.denx.de/u-boot/u-boot/-/jobs/80850 runs them.  May be a
> > > runner config problem?
> >
> > Oh gosh that is odd.
> >
> > >
> > > > I think all the other tests are good, although I think there is one
> > > > that has a time delay in it that needs to be fixed.
> > > >
> > > > Also we should really run the tests concurrently like binman does (see
> > > > tools/concurrencytest).
> > >
> > > I'm not sure how we could run most tests concurrently and keep things
> > > generic.   We can spawn N sandbox binaries but only one physical board.
> >
> > Yes this is only for sandbox. It is pretty easy to turn on when it is allowed.
> >
> > The current code in binman does this:
> >
> > use_concurrent = True
> > try:
> >     from concurrencytest import ConcurrentTestSuite, fork_for_tests
> > except:
> >     use_concurrent = False
> >
> >
> > then:
> >
> >     if use_concurrent and processes != 1:
> >         concurrent_suite = ConcurrentTestSuite(suite,
> >                 fork_for_tests(processes or multiprocessing.cpu_count()))
> >         concurrent_suite.run(result)
> >     else:
> >         suite.run(result)
>
> Right, OK.  But have you proof-of-concepted that around pytest?  Rewriting
> all of our tests in a different Python framework seems like a big ask.

It doesn't require any changes to the tests. It is just a different
runner, although if you break the rules (tests must be independent) you
might not get away with it. At least I didn't make any changes in
binman. It makes a large difference there:

ellesmere:~/u$ time binman test
<unittest.result.TestResult run=262 errors=0 failures=0>

real 0m1.680s
...
ellesmere:~/u$ time binman test -P 1
<unittest.result.TestResult run=262 errors=0 failures=0>

real 0m8.843s

> I can see it being useful if a big part of the bottleneck is running
> check/qcheck in your own dev cycle.  I personally don't try to run my
> HW tests concurrently even if I could for everything-not-Pi.

Yes, I don't have a way to run tests concurrently in the lab either.
But in principle it could be done, since the host machine is mostly
idle. I haven't tried it with gitlab-runner either.
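
For pytest itself, the usual way to get the same effect without
rewriting any tests is the pytest-xdist plugin; a minimal sketch,
assuming the plugin is installed (the thread does not settle on this):

import multiprocessing
import pytest

if __name__ == '__main__':
    # '-n N' distributes tests across N worker processes (pytest-xdist);
    # '-ra' keeps the short skip/failure summary discussed above
    pytest.main(['-ra', '-n', str(multiprocessing.cpu_count())])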

Regards,
Simon

Patch

======================== short test summary info =========================
SKIPPED [1] test/py/tests/test_efi_loader.py:81: No DHCP server available
SKIPPED [1] test/py/tests/test_efi_loader.py:100: No static network
configuration is defined
SKIPPED [2] test/py/tests/test_efi_loader.py:115: Network not initialized

Signed-off-by: Heinrich Schuchardt <xypron.glpk at gmx.de>
---
 test/run | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/test/run b/test/run
index d635622c10..67d51d75f7 100755
--- a/test/run
+++ b/test/run
@@ -19,17 +19,17 @@  run_test() {
 failures=0

 # Run all tests that the standard sandbox build can support
-run_test "sandbox" ./test/py/test.py --bd sandbox --build -m "${mark_expr}"
+run_test "sandbox" ./test/py/test.py -ra --bd sandbox --build -m "${mark_expr}"

 # Run tests which require sandbox_spl
-run_test "sandbox_spl" ./test/py/test.py --bd sandbox_spl --build \
+run_test "sandbox_spl" ./test/py/test.py -ra --bd sandbox_spl --build \
 	-k 'test_ofplatdata or test_handoff'

 # Run tests for the flat-device-tree version of sandbox. This is a special
 # build which does not enable CONFIG_OF_LIVE for the live device tree, so we can
 # check that functionality is the same. The standard sandbox build (above) uses
 # CONFIG_OF_LIVE.
-run_test "sandbox_flattree" ./test/py/test.py --bd sandbox_flattree --build \
+run_test "sandbox_flattree" ./test/py/test.py -ra --bd sandbox_flattree --build \
 	-k test_ut

 # Set up a path to dtc (device-tree compiler) and libfdt.py, a library it