[Branch,~linaro-validation/lava-test/trunk] Rev 121: add doc how to write new cases

Message ID 20120302015810.15599.94702.launchpad@ackee.canonical.com
State Accepted

Commit Message

Spring Zhang March 2, 2012, 1:58 a.m. UTC
Merge authors:
  Spring Zhang (qzhang)
Related merge proposals:
  https://code.launchpad.net/~qzhang/lava-test/doc-add-case/+merge/95304
  proposed by: Spring Zhang (qzhang)
  review: Approve - Zygmunt Krynicki (zkrynicki)
------------------------------------------------------------
revno: 121 [merge]
committer: Spring Zhang <spring.zhang@linaro.org>
branch nick: lava-test-doc-add-case
timestamp: Fri 2012-03-02 09:56:22 +0800
message:
  add doc how to write new cases
modified:
  doc/usage.rst


--
lp:lava-test
https://code.launchpad.net/~linaro-validation/lava-test/trunk

Patch

=== modified file 'doc/usage.rst'
--- doc/usage.rst	2011-09-12 09:19:10 +0000
+++ doc/usage.rst	2012-03-01 03:04:30 +0000
@@ -135,18 +135,163 @@ 
 tests need to follow Linaro development work flow, get reviewed and finally
 merged. Depending on your situation this may be undesired.
 
-.. todo::
-
-    Describe how tests are discovered, loaded and used. It would be
-    nice to have a tutorial that walks the user through wrapping a
-    simple pass/fail test. 
+There is a wonderful guide on the Linaro wiki describing `How to Write Test
+Definitions
+<https://wiki.linaro.org/QA/AutomatedTestingFramework#Writing_Tests_Definitions>`_:
+
+Test definitions are simply a way of telling LAVA-Test how to install a test,
+run it, and interpret the results. Test definitions are written in a simplified
+Python format, and can be as short as a few lines. More advanced test
+definitions can be written by deriving from the base classes.
+
+Defining a simple test
+++++++++++++++++++++++
+
+**Example 1** The simplest possible example might look something like this::
+
+    from lava_test.core.runners import TestRunner
+    from lava_test.core.tests import Test
+
+    RUNSTEPS = ['echo "It works!"']
+    runme = TestRunner(RUNSTEPS)
+    testobj = Test(test_id="example1", runner=runme)
+
+In this example, we simply give it a list of commands to run in a shell,
+provided by RUNSTEPS. We pass RUNSTEPS to create a TestRunner instance, and
+that runner is then used to create a Test instance called 'testobj'. If you
+were to save this under the test_definitions directory as 'example1.py', then
+run './lava-test run example1' from the bin directory, you would have a test
+result for it under your results directory, with output saying "It works!".
+
+**Example 2** Usually, you will want to do more than just interact with things
+already on the system; a test suite typically needs to be installed before it
+can be run. For this example, let's say you have a test suite you can download
+from http://www.linaro.org/linarotest-0.1.tgz. NOTE: This file does not
+actually exist, but is used only for example purposes::
+
+    from lava_test.core.installers import TestInstaller
+    from lava_test.core.runners import TestRunner
+    from lava_test.core.tests import Test
+
+    INSTALLSTEPS = ['tar -xzf linarotest-0.1.tgz',
+                    'cd linarotest-0.1',
+                    'make install']
+    RUNSTEPS = ['cd linarotest-0.1/bin',
+                './runall']
+    installit = TestInstaller(INSTALLSTEPS,
+                              url='http://www.linaro.org/linarotest-0.1.tgz',
+                              md5='a9cb8a348e0d8b0a8247083d37392c89f')
+    runit = TestRunner(RUNSTEPS)
+    testobj = Test(test_id="LinaroTest", version="0.1", installer=installit,
+                   runner=runit)
+
+Before running the test in this example, an extra installation step will take
+place. Since we provided a url and md5, the file specified by the url will first
+be downloaded and its md5sum checked. An md5 is recommended for verifying the
+integrity of the download, but if it is not provided the check is simply
+skipped rather than treated as a failure. Next, the steps specified in
+INSTALLSTEPS will be executed, and finally those in RUNSTEPS.
+
+**Example 3** A slight variation on example 2 might be a case where you want to
+install a test that is already packaged in the archive. Rather than specifying
+a url to download the test from, you can simply do something like this
+instead::
+
+    ...
+    DEPS = ['linarotest']
+    installit = TestInstaller(deps=DEPS)
+    ...
+
+This is also how dependencies can be specified if you have, for instance,
+libraries that need to be installed before attempting to compile a test you want
+to run. Those dependencies will be installed before attempting to run any steps.
+In this example though, there is no need to specify a url, and no need to
+specify steps to run to build from a tarball. All that is needed is to specify a
+dependency which will take care of installing the test. Again, this is a
+fictitious example.
+
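+For illustration, a complete, self-contained version of this variant might look
+something like the following. It follows the same shape as Example 2, but the
+package name and the run command are made up for this sketch::
+
+    from lava_test.core.installers import TestInstaller
+    from lava_test.core.runners import TestRunner
+    from lava_test.core.tests import Test
+
+    # 'linarotest' is the fictitious package from the examples above.
+    DEPS = ['linarotest']
+    # With a packaged test there is nothing to download or build; listing the
+    # dependency is enough for the installation step.
+    RUNSTEPS = ['linarotest-runall']  # hypothetical command from the package
+    installit = TestInstaller(deps=DEPS)
+    runit = TestRunner(RUNSTEPS)
+    testobj = Test(test_id="example3", installer=installit, runner=runit)
+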
+Adding Results Parsing
+++++++++++++++++++++++
+
+Because every test has its own way of displaying results, there is no common,
+enforced way of interpreting the results from any given test. That means that
+every test definition also has to define a parser so that LAVA-Test can
+understand how to pick out the most useful bits of information from the output.
+What we've tried to do is make this as simple as possible for the most common
+cases, while providing the tools necessary to handle more complex output.
+
+To start off, there are some fields you are always going to want to either pull
+from the results, or define. For all tests:
+
+* test_case_id - This is just a field that uniquely identifies the test. It can
+  contain letters, numbers, underscores, dashes, or periods. If you use any
+  illegal characters, they will automatically be dropped by the TestParser base
+  class before parsing the results, and spaces will be converted to
+  underscores. If you wish to change this behaviour, make sure that you either
+  handle fixing the test_case_id in your parser, or override the
+  TestParser.fixids() method.
+* result - This is simply the result of the test. It applies to both
+  qualitative and quantitative tests, and its meaning is specific to the test
+  itself. The valid values for result are: "pass", "fail", "skip", or
+  "unknown".
+
+For performance tests, you will also want to have the following two fields:
+
+* measurement - the "score" or resulting measurement from the benchmark.
+* units - a string defining the units represented by the measurement in some
+  way that will be meaningful to someone looking at the results later.
+
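+To make these fields concrete, a single parsed result for a benchmark could end
+up looking roughly like the following (the values are taken from the stream
+example shown below; the exact internal representation may differ)::
+
+    {
+        'test_case_id': 'Copy',
+        'result': 'pass',
+        'measurement': 3573.4219,
+        'units': 'MB/s',
+    }
+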
+For results parsing, it's probably easier to look at some examples. Several
+tests have already been defined in the lava-test test_definitions directory
+that serve as useful examples. Here are some snippets to start off though:
+
+**Stream example**
+
+The stream test does several tests to measure memory bandwidth. The relevant
+portion of the output looks something like this::
+
+    Function      Rate (MB/s)   Avg time     Min time     Max time
+    Copy:        3573.4219       0.0090       0.0090       0.0094
+    Scale:       3519.1727       0.0092       0.0091       0.0095
+    Add:         4351.7842       0.0112       0.0110       0.0113
+    Triad:       4429.2382       0.0113       0.0108       0.0125
+
+So we have 4 test_case_ids here: Copy, Scale, Add, and Triad. For the result, we
+will just use pass for everything. Optionally though, if there were some
+threshold under which we knew it would be considered a fail, we could detect
+that and have it fail in that case. The number we really care about in the
+results is the rate, which has units of MB/s.
+
+First we need a pattern to match the lines and yield the test_case_id and the
+measurement::
+
+    PATTERN = r"^(?P<test_case_id>\w+):\W+(?P<measurement>\d+\.\d+)"
+
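+To see what the pattern captures, you can try it against one of the output
+lines above using nothing but the standard re module (a quick standalone check,
+independent of lava-test itself)::
+
+    import re
+
+    PATTERN = r"^(?P<test_case_id>\w+):\W+(?P<measurement>\d+\.\d+)"
+    line = "Copy:        3573.4219       0.0090       0.0090       0.0094"
+    match = re.search(PATTERN, line)
+    print(match.groupdict())
+    # {'test_case_id': 'Copy', 'measurement': '3573.4219'}
+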
+Passing this pattern when initializing a TestParser object will help it find
+the test_case_id and measurement in the lines that contain them, while not
+matching any other lines. We also want to append the pass result and the units
+to each test result found. There's a helper for that when creating the
+TestParser object, called appendall, which lets you give it a dict of values to
+append to all test results found at parse time. The full line to create the
+parser would be::
+
+    streamparser = lava_test.core.parsers.TestParser(
+        PATTERN, appendall={'units': 'MB/s', 'result': 'pass'})
+
+For the complete code, see the stream test definition in LAVA-Test.
+
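+To give a rough idea of how the pieces fit together, a stripped-down version of
+such a definition might look like this. The install and run steps below are
+placeholders, and it is assumed here that the Test constructor accepts a parser
+argument alongside installer and runner, as the in-tree definitions suggest::
+
+    from lava_test.core.installers import TestInstaller
+    from lava_test.core.parsers import TestParser
+    from lava_test.core.runners import TestRunner
+    from lava_test.core.tests import Test
+
+    PATTERN = r"^(?P<test_case_id>\w+):\W+(?P<measurement>\d+\.\d+)"
+
+    # Placeholder steps -- see the real stream definition for the actual ones.
+    streaminst = TestInstaller(['echo "install stream here"'])
+    streamrun = TestRunner(['echo "run stream here"'])
+    streamparser = TestParser(PATTERN,
+                              appendall={'units': 'MB/s', 'result': 'pass'})
+    testobj = Test(test_id="stream-example", installer=streaminst,
+                   runner=streamrun, parser=streamparser)
+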
+**LTP**
+
+Another useful case to look at is LTP, because it is a qualitative test with
+several possible result codes. TestParser also supports being created with a
+fixupdict, a dict that maps a test's own result strings to the valid LAVA-Test
+result strings. For instance, LTP has result strings such as "TPASS", "TFAIL",
+"TCONF", and "TBROK", which can be mapped to values the dashboard will accept::
+
+    FIXUPS = {"TBROK":"fail",
+              "TCONF":"skip",
+              "TFAIL":"fail",
+              "TINFO":"unknown",
+              "TPASS":"pass",
+              "TWARN":"unknown"}
+
+Now, when creating the TestParser object, we can pass fixupdict=FIXUPS so that
+it knows how to properly translate the result strings.
+
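+For example, combined with a pattern written for LTP's output format (the
+pattern below is only a rough placeholder, not the one the real LTP definition
+uses), the parser could be created like this::
+
+    from lava_test.core.parsers import TestParser
+
+    FIXUPS = {"TBROK": "fail",
+              "TCONF": "skip",
+              "TFAIL": "fail",
+              "TINFO": "unknown",
+              "TPASS": "pass",
+              "TWARN": "unknown"}
+
+    # Placeholder: capture a test_case_id and one of the LTP result strings.
+    LTP_PATTERN = r"^(?P<test_case_id>\S+)\s+\d+\s+(?P<result>T\w+)"
+    ltpparser = TestParser(LTP_PATTERN, fixupdict=FIXUPS)
+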
+The full LTP test definition actually derives its own TestParser class to deal
+with additional peculiarities of LTP output. This is sometimes necessary, but
+when common features are identified that would make it possible to eliminate or
+simplify cases like this, they should be merged into the LAVA-Test libraries.
 
 Maintaining out-of-tree tests
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 For some kinds of tests (proprietary, non-generic, in rapid development, fused
 with application code) contributing their definition to upstream LAVA Test
-project would be impractical. 
+project would be impractical.
 
 In such cases the test maintainer can still leverage LAVA to actually run and
 process the test without being entangled in the review process or going through