
[Branch,~linaro-validation/lava-dispatcher/trunk] Rev 463: a few lava_test_shell things:

Message ID 20121122004612.12689.35823.launchpad@ackee.canonical.com
State Accepted

Commit Message

Michael Hudson-Doyle Nov. 22, 2012, 12:46 a.m. UTC
Merge authors:
  Michael Hudson-Doyle (mwhudson)
Related merge proposals:
  https://code.launchpad.net/~mwhudson/lava-dispatcher/more-obvious-json-disk-bundle-equivalence/+merge/135299
  proposed by: Michael Hudson-Doyle (mwhudson)
------------------------------------------------------------
revno: 463 [merge]
committer: Michael Hudson-Doyle <michael.hudson@linaro.org>
branch nick: trunk
timestamp: Thu 2012-11-22 13:45:16 +1300
message:
  a few lava_test_shell things:
  * Shuffle the layout of the results directory around on disk to make it more like a bundle
  * Adds a lava-test-case script that a test can use to record information about a test case.
  * Document the above and the lava-test-case-attach script
added:
  lava_test_shell/lava-test-case
modified:
  doc/lava_test_shell.rst
  lava_dispatcher/actions/lava_test_shell.py
  lava_dispatcher/lava_test_shell.py
  lava_test_shell/lava-test-case-attach
  lava_test_shell/lava-test-runner-android
  lava_test_shell/lava-test-runner-ubuntu
  lava_test_shell/lava-test-shell


--
lp:lava-dispatcher
https://code.launchpad.net/~linaro-validation/lava-dispatcher/trunk


Patch

=== modified file 'doc/lava_test_shell.rst'
--- doc/lava_test_shell.rst	2012-11-09 16:36:56 +0000
+++ doc/lava_test_shell.rst	2012-11-21 01:23:19 +0000
@@ -3,12 +3,12 @@ 
 
 The ``lava_test_shell`` action provides a way to employ a more black-box style
 testing approach with the target device. The test definition format is quite
-flexible allows for some interesting things.
-
-Minimal Test Definition
-=======================
-
-::
+flexible and allows for some interesting things.
+
+Quick start
+===========
+
+A minimal test definition looks like this::
 
   metadata:
       format: Lava-Test Test Definition 1.0
@@ -19,10 +19,93 @@ 
           - echo "test-1: pass"
           - echo "test-2: fail"
 
+  parse:
       pattern: "(?P<test_case_id>.*-*):\\s+(?P<result>(pass|fail))"
 
-The main thing to note is that the parse pattern requires regex expressions
-like \\s to be escaped, so it must be \\\\s
+Note that the parse pattern has quoting rules similar to Python's, so
+\\s must be escaped as \\\\s and so on.
+
+A lava-test-shell test is run by:
+
+ * "compiling" the above test definition into a shell script
+ * copying this script onto the device and arranging for it to be run
+   when the device boots
+ * booting the device and letting the test run
+ * retrieving the output from the device and turning it into a test
+   result bundle
+
+Writing a test for lava-test-shell
+==================================
+
+For the majority of cases, the above approach is the easiest thing to
+do: write shell code that outputs "test-case-id: result" for each test
+case you are interested in.  This is similar to how the lava-test
+parsing works, so until we get around to writing documentation here,
+see
+http://lava-test.readthedocs.org/en/latest/usage.html#adding-results-parsing.
+
+The advantage of the parsing approach is that it means your test is
+easy to work on independently from LAVA: simply write a script that
+produces the right sort of output, and then provide a very small
+amount of glue to wire it up in LAVA.  However, when you need it,
+there is also a more powerful, LAVA-specific, way of writing tests.
+When a test runs, ``$PATH`` is arranged so that some LAVA-specific
+utilities are available:
+
+ * ``lava-test-case``
+ * ``lava-test-case-attach``
+
+
+lava-test-case
+--------------
+
+lava-test-case records the results of a single test case.  For example::
+
+  steps:
+    - "lava-test-case simpletestcase --result pass"
+
+It has two forms.  One takes arguments to describe the outcome of the
+test case and the other takes the shell command to run -- the exit
+code of this shell command is used to produce the test result.
+
+Both forms take the name of the testcase as the first argument.
+
+The first form takes these additional arguments:
+
+ * ``--result $RESULT``: $RESULT should be one of pass/fail/skip/unknown
+ * ``--measurement $MEASUREMENT``: A numerical measurement associated with the test result
+ * ``--units $UNITS``: The units of $MEASUREMENT
+
+``--result`` must always be specified.  For example::
+
+  run:
+    steps:
+      - "lava-test-case bottle-count --result pass --measurement 99 --units bottles"
+
+The second form is indicated by the ``--shell`` argument, for example::
+
+  run:
+    steps:
+      - "lava-test-case fail-test --shell false"
+      - "lava-test-case pass-test --shell true"
+
+
+lava-test-case-attach
+---------------------
+
+This attaches a file to a test result with a particular ID, for example::
+
+  steps:
+    - "echo content > file.txt"
+    - "lava-test-case test-attach --result pass"
+    - "lava-test-case-attach test-attach file.txt text/plain"
+
+The arguments are:
+
+ 1. test case id
+ 2. the file to attach
+ 3. (optional) the MIME type of the file (if no MIME type is passed, a
+    guess is made based on the filename)
 
 Handling Dependencies (Ubuntu)
 ==============================

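The parse pattern in doc/lava_test_shell.rst above is, once the YAML quoting
is resolved, an ordinary Python regular expression with named groups (the
doubled backslashes in the test definition collapse to single ones).  A
minimal illustrative sketch of what the matching amounts to::

  import re

  # The pattern as it reaches the dispatcher after YAML unescaping:
  # "\\s" in the test definition becomes "\s" here.
  pattern = re.compile(r"(?P<test_case_id>.*-*):\s+(?P<result>(pass|fail))")

  for line in ["test-1: pass", "test-2: fail"]:
      match = pattern.match(line)
      if match:
          print(match.groupdict())
          # -> {'test_case_id': 'test-1', 'result': 'pass'}
          #    {'test_case_id': 'test-2', 'result': 'fail'}
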
=== modified file 'lava_dispatcher/actions/lava_test_shell.py'
--- lava_dispatcher/actions/lava_test_shell.py	2012-11-21 22:55:57 +0000
+++ lava_dispatcher/actions/lava_test_shell.py	2012-11-21 23:22:01 +0000
@@ -61,30 +61,45 @@ 
 #
 # /lava/
 #    results/
-#       cpuinfo.txt                Hardware info.
-#       meminfo.txt                Ditto.
-#       build.txt                  Software info.
-#       pkgs.txt                   Ditto
+#       hwcontext/                 Each test_run in the bundle has the same
+#                                  hw & sw context info attached to it.
+#          cpuinfo.txt             Hardware info.
+#          meminfo.txt             Ditto.
+#       swcontext/
+#          build.txt               Software info.
+#          pkgs.txt                Ditto
 #       ${IDX}_${TEST_ID}-${TIMESTAMP}/
-#          testdef.yml             Attached to the test run in the bundle for
-#                                  archival purposes.
-#          install.sh              Ditto.
-#          run.sh                  Ditto.
-#          stdout.log              The standard output of run.sh.
-#          stderr.log              The standard error of run.sh (actually not
-#                                  created currently)
-#          return_code             The exit code of run.sh.
-#          attachments/            Contains attachments for test results.
+#          testdef.yml
+#          stdout.log
+#          return_code          The exit code of run.sh.
+#          attachments/
+#             install.sh
+#             run.sh
+#             ${FILENAME}          The attached data.
+#             ${FILENAME}.mimetype  The mime type of the attachment.
+#             attributes/
+#                ${ATTRNAME}    Content is value of attribute
+#          tags/
+#             ${TAGNAME}           Content of file is ignored.
+#          results/
 #             ${TEST_CASE_ID}/     Names the test result.
-#                ${FILENAME}           The attached data.
-#                ${FILENAME}.mimetype  The mime type of the attachment.
+#                result            (Optional)
+#                measurement
+#                units
+#                message
+#                timestamp
+#                duration
+#                attributes/
+#                   ${ATTRNAME}    Content is value of attribute
+#                attachments/      Contains attachments for test results.
+#                   ${FILENAME}           The attached data.
+#                   ${FILENAME}.mimetype  The mime type of the attachment.
 #
 # After the test run has completed, the /lava/results directory is pulled over
 # to the host and turned into a bundle for submission to the dashboard.
 
-import json
 import yaml
-import glob
+from glob import glob
 import time
 import logging
 import os
@@ -94,6 +109,8 @@ 
 import subprocess
 import tempfile
 
+from linaro_dashboard_bundle.io import DocumentIO
+
 import lava_dispatcher.lava_test_shell as lava_test_shell
 import lava_dispatcher.utils as utils
 
@@ -107,10 +124,12 @@ 
 LAVA_TEST_UPSTART = '%s/lava-test-runner.conf' % LAVA_TEST_DIR
 LAVA_TEST_INITD = '%s/lava-test-runner.init.d' % LAVA_TEST_DIR
 LAVA_TEST_SHELL = '%s/lava-test-shell' % LAVA_TEST_DIR
+LAVA_TEST_CASE = '%s/lava-test-case' % LAVA_TEST_DIR
 LAVA_TEST_CASE_ATTACH = '%s/lava-test-case-attach' % LAVA_TEST_DIR
 
 Target.android_deployment_data['lava_test_runner'] = LAVA_TEST_ANDROID
 Target.android_deployment_data['lava_test_shell'] = LAVA_TEST_SHELL
+Target.android_deployment_data['lava_test_case'] = LAVA_TEST_CASE
 Target.android_deployment_data['lava_test_case_attach'] = LAVA_TEST_CASE_ATTACH
 Target.android_deployment_data['lava_test_sh_cmd'] = '/system/bin/mksh'
 Target.android_deployment_data['lava_test_dir'] = '/data/lava'
@@ -118,6 +137,7 @@ 
 
 Target.ubuntu_deployment_data['lava_test_runner'] = LAVA_TEST_UBUNTU
 Target.ubuntu_deployment_data['lava_test_shell'] = LAVA_TEST_SHELL
+Target.ubuntu_deployment_data['lava_test_case'] = LAVA_TEST_CASE
 Target.ubuntu_deployment_data['lava_test_case_attach'] = LAVA_TEST_CASE_ATTACH
 Target.ubuntu_deployment_data['lava_test_sh_cmd'] = '/bin/sh'
 Target.ubuntu_deployment_data['lava_test_dir'] = '/lava'
@@ -125,6 +145,7 @@ 
 
 Target.oe_deployment_data['lava_test_runner'] = LAVA_TEST_UBUNTU
 Target.oe_deployment_data['lava_test_shell'] = LAVA_TEST_SHELL
+Target.oe_deployment_data['lava_test_case'] = LAVA_TEST_CASE
 Target.oe_deployment_data['lava_test_case_attach'] = LAVA_TEST_CASE_ATTACH
 Target.oe_deployment_data['lava_test_sh_cmd'] = '/bin/sh'
 Target.oe_deployment_data['lava_test_dir'] = '/lava'
@@ -285,22 +306,18 @@ 
 
     def _copy_runner(self, mntdir, target):
         runner = target.deployment_data['lava_test_runner']
-        shell = target.deployment_data['lava_test_shell']
         shutil.copy(runner, '%s/bin/lava-test-runner' % mntdir)
         os.chmod('%s/bin/lava-test-runner' % mntdir, XMOD)
-        with open(shell, 'r') as fin:
-            with open('%s/bin/lava-test-shell' % mntdir, 'w') as fout:
-                shcmd = target.deployment_data['lava_test_sh_cmd']
-                fout.write("#!%s\n\n" % shcmd)
-                fout.write(fin.read())
-                os.fchmod(fout.fileno(), XMOD)
-
-        tc = target.deployment_data['lava_test_case_attach']
-        with open(tc, 'r') as fin:
-            with open('%s/bin/lava-test-case-attach' % mntdir, 'w') as fout:
-                fout.write('#!%s\n\n' % shcmd)
-                fout.write(fin.read())
-                os.fchmod(fout.fileno(), XMOD)
+
+        shcmd = target.deployment_data['lava_test_sh_cmd']
+
+        for key in ['lava_test_shell', 'lava_test_case', 'lava_test_case_attach']:
+            fname = target.deployment_data[key]
+            with open(fname, 'r') as fin:
+                with open('%s/bin/%s' % (mntdir, os.path.basename(fname)), 'w') as fout:
+                    fout.write("#!%s\n\n" % shcmd)
+                    fout.write(fin.read())
+                    os.fchmod(fout.fileno(), XMOD)
 
     def _bzr_info(self, url, bzrdir):
         cwd = os.getcwd()
@@ -390,7 +407,7 @@ 
                     f.write('%s\n' % cmd)
 
         if testdef_repo:
-            for filepath in glob.glob(os.path.join(testdef_repo, '*')):
+            for filepath in glob(os.path.join(testdef_repo, '*')):
                 shutil.copy2(filepath, hostdir)
             logging.info('copied all test files')
 
@@ -448,7 +465,7 @@ 
             (fd, name) = tempfile.mkstemp(
                 prefix='lava-test-shell', suffix='.bundle', dir=rdir)
             with os.fdopen(fd, 'w') as f:
-                json.dump(bundle, f)
+                DocumentIO.dump(f, bundle)
 
     def _assert_target(self, target):
         """ Ensure the target has the proper deployment data required by this

=== modified file 'lava_dispatcher/lava_test_shell.py'
--- lava_dispatcher/lava_test_shell.py	2012-11-19 20:02:08 +0000
+++ lava_dispatcher/lava_test_shell.py	2012-11-22 00:43:48 +0000
@@ -19,7 +19,7 @@ 
 # with this program; if not, see <http://www.gnu.org/licenses>.
 
 import datetime
-import errno
+import decimal
 import mimetypes
 import yaml
 import logging
@@ -115,8 +115,71 @@ 
     return ctx
 
 
-def _get_test_results(testdef, stdout, attachments_dir):
-    results = []
+def _attachments_from_dir(dir):
+    attachments = []
+    for filename, filepath in _directory_names_and_paths(dir, ignore_missing=True):
+        if filename.endswith('.mimetype'):
+            continue
+        mime_type = _read_content(filepath + '.mimetype', ignore_missing=True)
+        if not mime_type:
+            mime_type = mimetypes.guess_type(filepath)[0]
+            if mime_type is None:
+                mime_type = 'application/octet-stream'
+        attachments.append(
+            create_attachment(filename, _read_content(filepath), mime_type))
+    return attachments
+
+
+def _attributes_from_dir(dir):
+    attributes = {}
+    for filename, filepath in _directory_names_and_paths(dir, ignore_missing=True):
+        if os.path.isfile(filepath):
+            attributes[filename] = _read_content(filepath)
+    return attributes
+
+
+def _result_from_dir(dir):
+    result = {
+        'test_case_id': os.path.basename(dir),
+        }
+
+    for fname in 'result', 'measurement', 'units', 'message', 'timestamp', 'duration':
+        fpath = os.path.join(dir, fname)
+        if os.path.isfile(fpath):
+            result[fname] = _read_content(fpath).strip()
+
+    if 'measurement' in result:
+        try:
+            result['measurement'] = decimal.Decimal(result['measurement'])
+        except decimal.InvalidOperation:
+            logging.warning("Invalid measurement for %s: %s" % (dir, result['measurement']))
+            del result['measurement']
+
+    result['attachments'] = _attachments_from_dir(os.path.join(dir, 'attachments'))
+    result['attributes'] = _attributes_from_dir(os.path.join(dir, 'attributes'))
+
+    return result
+
+
+def _merge_results(dest, src):
+    tc_id = dest['test_case_id']
+    assert tc_id == src['test_case_id']
+    for attrname in 'result', 'measurement', 'units', 'message', 'timestamp', 'duration':
+        if attrname in dest:
+            if attrname in src:
+                if dest[attrname] != src[attrname]:
+                    logging.warning(
+                        'differing values for %s in result for %s: %s and %s',
+                        attrname, tc_id, dest[attrname], src[attrname])
+        else:
+            if attrname in src:
+                dest[attrname] = src[attrname]
+    dest.setdefault('attachments', []).extend(src.get('attachments', []))
+    dest.setdefault('attributes', {}).update(src.get('attributes', {}))
+
+
+def _get_test_results(test_run_dir, testdef, stdout):
+    results_from_log_file = []
     fixupdict = {}
 
     if 'parse' in testdef:
@@ -141,53 +204,59 @@ 
                 if res['result'] not in ('pass', 'fail', 'skip', 'unknown'):
                     logging.error('bad test result line: %s' % line.strip())
                     continue
-            tc_id = res.get('test_case_id')
-            if tc_id is not None:
-                d = os.path.join(attachments_dir, tc_id)
-                if os.path.isdir(d):
-                    attachments = os.listdir(d)
-                    for filename in attachments:
-                        if filename.endswith('.mimetype'):
-                            continue
-                        filepath = os.path.join(d, filename)
-                        if os.path.exists(filepath + '.mimetype'):
-                            mime_type = open(filepath + '.mimetype').read().strip()
-                        else:
-                            mime_type = mimetypes.guess_type(filepath)[0]
-                            if mime_type is None:
-                                mime_type = 'application/octet-stream'
-                        attachment = create_attachment(filename, open(filepath).read(), mime_type)
-                        res.setdefault('attachments', []).append(attachment)
-
-            results.append(res)
-
-    return results
-
-
-def _get_attachments(results_dir, dirname, testdef, stdout):
-    files = ('stderr.log', 'return_code', 'run.sh', 'install.sh')
+
+            results_from_log_file.append(res)
+
+    results_from_directories = []
+    results_from_directories_by_id = {}
+
+    result_names_and_paths = _directory_names_and_paths(
+        os.path.join(test_run_dir, 'results'), ignore_missing=True)
+    result_names_and_paths = [
+        (name, path) for (name, path) in result_names_and_paths
+        if os.path.isdir(path)]
+    result_names_and_paths.sort(key=lambda (name, path): os.path.getmtime(path))
+
+    for name, path in result_names_and_paths:
+        r = _result_from_dir(path)
+        results_from_directories_by_id[name] = (r, len(results_from_directories))
+        results_from_directories.append(r)
+
+    for res in results_from_log_file:
+        if res.get('test_case_id') in results_from_directories_by_id:
+            dir_res, index = results_from_directories_by_id[res['test_case_id']]
+            results_from_directories[index] = None
+            _merge_results(res, dir_res)
+
+    for res in results_from_directories:
+        if res is not None:
+            results_from_log_file.append(res)
+
+    return results_from_log_file
+
+
+def _get_run_attachments(test_run_dir, testdef, stdout):
     attachments = []
 
     attachments.append(create_attachment('stdout.log', stdout))
     attachments.append(create_attachment('testdef.yaml', testdef))
+    return_code = _read_content(os.path.join(test_run_dir, 'return_code'), ignore_missing=True)
+    if return_code:
+        attachments.append(create_attachment('return_code', return_code))
 
-    for f in files:
-        fname = '%s/%s' % (dirname, f)
-        buf = _get_content(results_dir, fname, ignore_errors=True)
-        if buf:
-            attachments.append(create_attachment(f, buf))
+    attachments.extend(
+        _attachments_from_dir(os.path.join(test_run_dir, 'attachments')))
 
     return attachments
 
 
-def _get_test_run(results_dir, dirname, hwcontext, swcontext):
+def _get_test_run(test_run_dir, hwcontext, swcontext):
     now = datetime.datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ')
 
-    testdef = _get_content(results_dir, '%s/testdef.yaml' % dirname)
-    stdout = _get_content(results_dir, '%s/stdout.log' % dirname)
-    attachments = _get_attachments(results_dir, dirname, testdef, stdout)
-
-    attachments_dir = os.path.join(results_dir, dirname, 'attachments')
+    testdef = _read_content(os.path.join(test_run_dir, 'testdef.yaml'))
+    stdout = _read_content(os.path.join(test_run_dir, 'stdout.log'))
+    attachments = _get_run_attachments(test_run_dir, testdef, stdout)
+    attributes = _attributes_from_dir(os.path.join(test_run_dir, 'attributes'))
 
     testdef = yaml.load(testdef)
 
@@ -196,22 +265,26 @@ 
         'analyzer_assigned_date': now,
         'analyzer_assigned_uuid': str(uuid4()),
         'time_check_performed': False,
-        'test_results': _get_test_results(testdef, stdout, attachments_dir),
+        'test_results': _get_test_results(test_run_dir, testdef, stdout),
         'software_context': swcontext,
         'hardware_context': hwcontext,
         'attachments': attachments,
+        'attributes': attributes,
     }
 
 
-def _get_content(results_dir, fname, ignore_errors=False):
-    try:
-        with open(os.path.join(results_dir, fname), 'r') as f:
-            return f.read()
-    except IOError as e:
-        if e.errno != errno.ENOENT or not ignore_errors:
-            logging.exception('Error while reading %s' % fname)
-        if ignore_errors:
-            return ''
+def _read_content(filepath, ignore_missing=False):
+    if not os.path.exists(filepath) and ignore_missing:
+        return ''
+    with open(filepath, 'r') as f:
+        return f.read()
+
+
+def _directory_names_and_paths(dirpath, ignore_missing=False):
+    if not os.path.exists(dirpath) and ignore_missing:
+        return []
+    return [(filename, os.path.join(dirpath, filename))
+            for filename in os.listdir(dirpath)]
 
 
 def get_bundle(results_dir, sw_sources):
@@ -220,19 +293,21 @@ 
     the LAVA dashboard
     """
     testruns = []
-    cpuinfo = _get_content(results_dir, './cpuinfo.txt', ignore_errors=True)
-    meminfo = _get_content(results_dir, './meminfo.txt', ignore_errors=True)
+    cpuinfo = _read_content(os.path.join(results_dir, 'hwcontext/cpuinfo.txt'), ignore_missing=True)
+    meminfo = _read_content(os.path.join(results_dir, 'hwcontext/meminfo.txt'), ignore_missing=True)
     hwctx = _get_hw_context(cpuinfo, meminfo)
 
-    build = _get_content(results_dir, './build.txt')
-    pkginfo = _get_content(results_dir, './pkgs.txt', ignore_errors=True)
+    build = _read_content(os.path.join(results_dir, 'swcontext/build.txt'))
+    pkginfo = _read_content(os.path.join(results_dir, 'swcontext/pkgs.txt'), ignore_missing=True)
     swctx = _get_sw_context(build, pkginfo, sw_sources)
 
-    for d in os.listdir(results_dir):
-        if os.path.isdir(os.path.join(results_dir, d)):
+    for test_run_name, test_run_path in _directory_names_and_paths(results_dir):
+        if test_run_name in ('hwcontext', 'swcontext'):
+            continue
+        if os.path.isdir(test_run_path):
             try:
-                testruns.append(_get_test_run(results_dir, d, hwctx, swctx))
+                testruns.append(_get_test_run(test_run_path, hwctx, swctx))
             except:
-                logging.exception('error processing results for: %s' % d)
+                logging.exception('error processing results for: %s' % test_run_name)
 
     return {'test_runs': testruns, 'format': 'Dashboard Bundle Format 1.5'}

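To see how the new on-disk layout ties the shell helpers to the bundle, here
is a rough sketch of what reading back a single result directory amounts to,
simplified from _result_from_dir above.  The path and its contents are
hypothetical, e.g. as written by "lava-test-case bottle-count --result pass
--measurement 99 --units bottles"::

  import decimal
  import os

  # Hypothetical ${IDX}_${TEST_ID}-${TIMESTAMP} run directory on the device.
  result_dir = '/lava/results/0_smoke-tests-1353545116/results/bottle-count'

  result = {'test_case_id': os.path.basename(result_dir)}
  for fname in ('result', 'measurement', 'units', 'message', 'timestamp', 'duration'):
      fpath = os.path.join(result_dir, fname)
      if os.path.isfile(fpath):
          with open(fpath) as f:
              result[fname] = f.read().strip()

  # A measurement, if present, is converted to a Decimal for the bundle.
  if 'measurement' in result:
      result['measurement'] = decimal.Decimal(result['measurement'])

  # result would end up roughly as:
  # {'test_case_id': 'bottle-count', 'result': 'pass',
  #  'measurement': Decimal('99'), 'units': 'bottles'}
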
=== added file 'lava_test_shell/lava-test-case'
--- lava_test_shell/lava-test-case	1970-01-01 00:00:00 +0000
+++ lava_test_shell/lava-test-case	2012-11-21 01:21:56 +0000
@@ -0,0 +1,69 @@ 
+#NOTE the lava_test_shell_action fills in the proper interpreter path
+# above during target deployment
+
+usage () {
+    echo "Usage: lava-test-case TEST_CASE_ID --shell cmds ..."
+    echo "   or: lava-test-case TEST_CASE_ID --result RESULT [--units UNITS] "
+    echo "                                   [--measurement MEASUREMENT]"
+    echo ""
+    echo "Either run or record the results of a particular test case"
+}
+
+TEST_CASE_ID="$1"
+shift
+if [ -z "$TEST_CASE_ID" ]; then
+    usage
+    exit 1
+fi
+if [ "$1" = "--shell" ]; then
+    shift
+    "$@"
+    if [ $? -eq 0 ]; then
+        RESULT=pass
+    else
+        RESULT=fail
+    fi
+else
+    while [ $# -gt 0 ]; do
+        case $1 in
+            --result)
+                shift
+                RESULT=$1
+                shift
+                ;;
+            --units)
+                shift
+                UNITS=$1
+                shift
+                ;;
+            --measurement)
+                shift
+                MEASUREMENT=$1
+                shift
+                ;;
+            *)
+                usage
+                exit 1
+                ;;
+        esac
+    done
+fi
+
+# $LAVA_RESULT_DIR is set by lava-test-shell
+result_dir="$LAVA_RESULT_DIR/results/$TEST_CASE_ID"
+mkdir -p "$result_dir"
+
+if [ -z "${RESULT+x}" ]; then
+    echo "--result must be specified"
+    exit 1
+else
+    echo $RESULT > $result_dir/result
+fi
+
+if [ -n "${UNITS+x}" ]; then
+    echo $UNITS > $result_dir/units
+fi
+
+if [ -n "${MEASUREMENT+x}" ]; then
+    echo $MEASUREMENT > $result_dir/measurement
+fi
\ No newline at end of file

=== modified file 'lava_test_shell/lava-test-case-attach'
--- lava_test_shell/lava-test-case-attach	2012-11-19 19:53:13 +0000
+++ lava_test_shell/lava-test-case-attach	2012-11-21 01:21:56 +0000
@@ -1,15 +1,15 @@ 
 #NOTE the lava_test_shell_action fills in the proper interpreter path
 # above during target deployment
 
-set -x
-
 # basename is not present on AOSP builds, but the /*\// thing does not
 # work with dash (Ubuntu builds) or busybox (OpenEmbedded).  Both of
 # those have basename though.
 which basename || basename () { echo ${1/*\//}; }
 
 usage () {
-    echo "USAGE"
+    echo "Usage: lava-test-case-attach TEST_CASE_ID FILE [MIME_TYPE]"
+    echo ""
+    echo "Attach FILE to the test case TEST_CASE_ID."
 }
 
 TEST_CASE_ID="$1"
@@ -30,9 +30,11 @@ 
     usage
     exit 1
 fi
-# $LAVA_ATTACHMENT_DIR is set by lava-test-shell
-mkdir -p "$LAVA_ATTACHMENT_DIR/$TEST_CASE_ID"
-cp "$FILE" "$LAVA_ATTACHMENT_DIR/$TEST_CASE_ID"
+
+# $LAVA_RESULT_DIR is set by lava-test-shell
+case_attachment_dir="$LAVA_RESULT_DIR/results/$TEST_CASE_ID/attachments"
+mkdir -p "$case_attachment_dir"
+cp "$FILE" "$case_attachment_dir"
 if [ ! -z "$MIMETYPE" ]; then
-    echo "$MIMETYPE" > "$LAVA_ATTACHMENT_DIR/$TEST_CASE_ID/$(basename FILE).mimetype"
+    echo "$MIMETYPE" > "$case_attachment_dir/$(basename $FILE).mimetype"
 fi

=== modified file 'lava_test_shell/lava-test-runner-android'
--- lava_test_shell/lava-test-runner-android	2012-11-15 21:21:10 +0000
+++ lava_test_shell/lava-test-runner-android	2012-11-21 01:21:56 +0000
@@ -19,8 +19,9 @@ 
 
 hwcontext()
 {
-	cpuinfo=${RESULTSDIR}/cpuinfo.txt
-	meminfo=${RESULTSDIR}/meminfo.txt
+	mkdir -p ${RESULTSDIR}/hwcontext
+	cpuinfo=${RESULTSDIR}/hwcontext/cpuinfo.txt
+	meminfo=${RESULTSDIR}/hwcontext/meminfo.txt
 
 	[ -f ${cpuinfo} ] || cat /proc/cpuinfo > ${cpuinfo}
 	[ -f ${meminfo} ] || cat /proc/meminfo > ${meminfo}
@@ -28,8 +29,9 @@ 
 
 swcontext()
 {
-	build=${RESULTSDIR}/build.txt
-	pkgs=${RESULTSDIR}/pkgs.txt
+	mkdir -p ${RESULTSDIR}/swcontext
+	build=${RESULTSDIR}/swcontext/build.txt
+	pkgs=${RESULTSDIR}/swcontext/pkgs.txt
 
 	[ -f ${build} ] || getprop ro.build.display.id > ${build}
 	[ -f ${pkgs} ] || pm list packages -v > ${pkgs}
@@ -50,7 +52,7 @@ 
 
 	export PATH=${BINDIR}:${PATH}
 	echo "${PREFIX} started"
-	[ -d ${RESULTSDIR} ] || mkdir -p ${RESULTSDIR}
+	mkdir -p ${RESULTSDIR}
 
 	echo "${PREFIX} disabling suspend and waiting for home screen ..."
 	disablesuspend.sh
@@ -90,9 +92,10 @@ 
 		echo "${PREFIX} running ${test} under lava-test-shell..."
 		odir=${RESULTSDIR}/${test}-`date +%s`
 		mkdir ${odir}
+		mkdir ${odir}/attachments/
 		cp ${line}/testdef.yaml ${odir}/
-		cp ${line}/run.sh ${odir}/
-		[ -f ${line}/install.sh ] && cp ${line}/install.sh ${odir}/
+		cp ${line}/run.sh ${odir}/attachments/
+		[ -f ${line}/install.sh ] && cp ${line}/install.sh ${odir}/attachments/
 		lava-test-shell --output_dir ${odir} /system/bin/sh -e "${line}/run.sh"
 		echo "${PREFIX} ${test} exited with: `cat ${odir}/return_code`"
 	done < ${WORKFILE}

=== modified file 'lava_test_shell/lava-test-runner-ubuntu'
--- lava_test_shell/lava-test-runner-ubuntu	2012-11-15 21:21:10 +0000
+++ lava_test_shell/lava-test-runner-ubuntu	2012-11-19 20:09:19 +0000
@@ -10,8 +10,9 @@ 
 
 hwcontext()
 {
-	cpuinfo=${RESULTSDIR}/cpuinfo.txt
-	meminfo=${RESULTSDIR}/meminfo.txt
+	mkdir -p ${RESULTSDIR}/hwcontext
+	cpuinfo=${RESULTSDIR}/hwcontext/cpuinfo.txt
+	meminfo=${RESULTSDIR}/hwcontext/meminfo.txt
 
 	[ -f ${cpuinfo} ] || cat /proc/cpuinfo > ${cpuinfo}
 	[ -f ${meminfo} ] || cat /proc/meminfo > ${meminfo}
@@ -19,8 +20,9 @@ 
 
 swcontext()
 {
-	build=${RESULTSDIR}/build.txt
-	pkgs=${RESULTSDIR}/pkgs.txt
+	mkdir -p ${RESULTSDIR}/swcontext
+	build=${RESULTSDIR}/swcontext/build.txt
+	pkgs=${RESULTSDIR}/swcontext/pkgs.txt
 
 	[ -f ${build} ] || cat /etc/lsb-release | grep DESCRIPTION | cut -d\" -f2 > ${build}
 	# this does a query of installed packaged that will look similar to
@@ -41,7 +43,7 @@ 
 
 export PATH=${BINDIR}:${PATH}
 echo "${PREFIX} started"
-[ -d ${RESULTSDIR} ] || mkdir -p ${RESULTSDIR}
+mkdir -p ${RESULTSDIR}
 
 # move the workfile to something timestamped and run that. This
 # prevents us from running the same thing again after a reboot
@@ -74,9 +76,10 @@ 
 	echo "${PREFIX} running ${test} under lava-test-shell..."
 	odir=${RESULTSDIR}/${test}-`date +%s`
 	mkdir ${odir}
+	mkdir ${odir}/attachments/
 	cp ${line}/testdef.yaml ${odir}/
-	cp ${line}/run.sh ${odir}/
-	[ -f ${line}/install.sh ] && cp ${line}/install.sh ${odir}/
+	cp ${line}/run.sh ${odir}/attachments/
+	[ -f ${line}/install.sh ] && cp ${line}/install.sh ${odir}/attachments/
 	lava-test-shell --output_dir ${odir} /bin/sh -e "${line}/run.sh"
 	echo "${PREFIX} ${test} exited with: `cat ${odir}/return_code`"
 done < ${WORKFILE}

=== modified file 'lava_test_shell/lava-test-shell'
--- lava_test_shell/lava-test-shell	2012-11-16 00:51:25 +0000
+++ lava_test_shell/lava-test-shell	2012-11-18 22:20:57 +0000
@@ -5,8 +5,7 @@ 
 ODIR=$1
 shift
 TEST=$*
-RC=0
-export LAVA_ATTACHMENT_DIR=${ODIR}/attachments
+export LAVA_RESULT_DIR=${ODIR}
 {
 	$TEST
 	echo $? > ${ODIR}/return_code