diff mbox

[Branch,~linaro-validation/lava-dispatcher/trunk] Rev 468: support for host-side handling of signals sent from the DUT

Message ID 20121126193117.24910.25112.launchpad@ackee.canonical.com
State Accepted

Commit Message

Michael Hudson-Doyle Nov. 26, 2012, 7:31 p.m. UTC
Merge authors:
  Michael Hudson-Doyle (mwhudson)
Related merge proposals:
  https://code.launchpad.net/~mwhudson/lava-dispatcher/signals/+merge/135566
  proposed by: Michael Hudson-Doyle (mwhudson)
------------------------------------------------------------
revno: 468 [merge]
committer: Michael Hudson-Doyle <michael.hudson@linaro.org>
branch nick: trunk
timestamp: Tue 2012-11-27 08:30:28 +1300
message:
  support for host-side handling of signals sent from the DUT
added:
  doc/external_measurement.rst
  lava_dispatcher/signals/
  lava_dispatcher/signals/__init__.py
  lava_dispatcher/signals/duration.py
modified:
  doc/index.rst
  doc/lava_test_shell.rst
  lava_dispatcher/actions/lava_test_shell.py
  lava_dispatcher/lava_test_shell.py
  lava_dispatcher/utils.py
  lava_test_shell/lava-test-case
  lava_test_shell/lava-test-runner-android
  lava_test_shell/lava-test-runner-ubuntu
  setup.py


--
lp:lava-dispatcher
https://code.launchpad.net/~linaro-validation/lava-dispatcher/trunk


Patch

=== added file 'doc/external_measurement.rst'
--- doc/external_measurement.rst	1970-01-01 00:00:00 +0000
+++ doc/external_measurement.rst	2012-11-22 21:58:51 +0000
@@ -0,0 +1,110 @@ 
+Hooks, Signals and External Measurement
+=======================================
+
+.. warning::
+   This is work in progress!  Expect changes in details until at least early 2013.
+
+It is sometimes the case that an interesting test cannot be run solely
+on the device being tested: additional data from somewhere else is
+required.  For example, a test of the sound subsystem may want to
+generate audio, play it, capture it on another system and then compare
+the generated and captured audio.  A `lava-test-shell`_ test can be
+written to send **signals** indicating when a test case starts and
+finishes, and these signals can be handled by a **handler** specified
+in the test definition.
+
+.. _`lava-test-shell`: lava_test_shell.html
+
+Signals
+-------
+
+A signal is a message from the system being tested ("device") to the
+system the dispatcher is running on ("host").  The messaging is
+synchronous and uni-directional: lava-test-shell on the device will
+wait for the signal to be processed, and there is no way for the
+device to receive data from the host.
+
+Generally speaking, we expect a test author will only be interested in
+handling the "start test case" and "end test case" signals that are
+sent by ``lava-test-case --shell``.
+
+Handler
+-------
+
+A handler is a Python class that subclasses:
+
+.. autoclass:: lava_dispatcher.signals.SignalHandler
+
+This class defines three methods that you almost certainly want to
+override:
+
+ 1. ``start_testcase(self, test_case_id):``
+
+    Called when a testcase starts on the device.  The return value of
+    this method is passed to both ``end_testcase`` and
+    ``postprocess_test_result``.
+
+    The expected case is something like: starting a process that
+    captures some data from or about the device and returning a
+    dictionary that indicates the pid of that process and where its
+    output is going.
+
+ 2. ``end_testcase(self, test_case_id, case_data):``
+
+    Called when a testcase ends on the device.  ``case_data`` is
+    whatever the corresponding ``start_testcase`` call returned.
+
+    The expected case here is that you will terminate the process that
+    was started by ``start_testcase``.
+
+ 3. ``postprocess_test_result(self, test_result, case_data):``
+
+    Here you are expected to add the data that was recorded during the
+    test run to the results.  You need to know about the bundle format
+    to do this.
+
+These methods are invoked with catch-all exception handlers around
+them so you don't have to be super careful in their implementation: it
+should not be possible to crash the whole dispatcher with a typo in
+one of them.
+
+Here is a very simple complete handler::
+
+  import datetime
+  import time
+
+  from json_schema_validator.extensions import timedelta_extension
+
+  from lava_dispatcher.signals import SignalHandler
+
+  class AddDuration(SignalHandler):
+
+      def start_testcase(self, test_case_id):
+          return {
+              'starttime': time.time()
+              }
+
+      def end_testcase(self, test_case_id, data):
+          data['endtime'] = time.time()
+
+      def postprocess_test_result(self, test_result, data):
+          delta = datetime.timedelta(seconds=data['endtime'] - data['starttime'])
+          test_result['duration'] = timedelta_extension.to_json(delta)
+
+Specifying a handler
+--------------------
+
+A handler is named in the test definition, for example::
+
+  handler:
+    handler-name: add-duration
+
+The name is that of an `entry point`_ in the
+``lava.signal_handlers`` group.  The entry point must be provided by
+a package installed into the instance that the dispatcher is running
+from.
+
+.. _`entry point`: http://packages.python.org/distribute/pkg_resources.html#entry-points
+
+We will soon provide a way to bundle the signal handler along with the
+test definition.
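
As a sketch of the `entry point`_ mechanism described in this file: a package could expose a handler to the dispatcher by registering it under the ``lava.signal_handlers`` group in its ``setup.py``. The package and module names below are hypothetical; only the group name and the ``add-duration`` handler name come from this patch.

```python
# Hypothetical setup.py fragment registering the AddDuration handler
# under the 'lava.signal_handlers' entry-point group, which the
# dispatcher queries via pkg_resources.iter_entry_points.
ENTRY_POINTS = {
    'lava.signal_handlers': [
        # 'add-duration' is the handler-name used in the test
        # definition; the right-hand side names the class to load.
        'add-duration = example_handlers:AddDuration',
    ],
}

# In a real setup.py this dict would be passed to setuptools, e.g.:
#   setup(name='example-lava-handlers', ..., entry_points=ENTRY_POINTS)
```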

=== modified file 'doc/index.rst'
--- doc/index.rst	2012-10-11 02:51:21 +0000
+++ doc/index.rst	2012-11-22 02:03:49 +0000
@@ -33,6 +33,8 @@ 
    configuration.rst
    jobfile.rst
    usage.rst
+   lava_test_shell.rst
+   external_measurement.rst
    proxy.rst
 
 * :ref:`search`

=== modified file 'doc/lava_test_shell.rst'
--- doc/lava_test_shell.rst	2012-11-21 01:23:19 +0000
+++ doc/lava_test_shell.rst	2012-11-22 19:52:03 +0000
@@ -55,6 +55,11 @@ 
  * ``lava-test-case``
  * ``lava-test-case-attach``
 
+You need to use ``lava-test-case`` (specifically, ``lava-test-case
+--shell``) when you are working with `hooks, signals and external
+measurement`_.
+
+.. _`hooks, signals and external measurement`: external_measurement.html
 
 lava-test-case
 --------------
@@ -89,6 +94,9 @@ 
       - "lava-test-case fail-test --shell false"
       - "lava-test-case pass-test --shell true"
 
+The ``--shell`` form also sends the start test case and end test case
+signals that are described in `hooks, signals and external
+measurement`_.
 
 lava-test-case-attach
 ---------------------
@@ -107,6 +115,7 @@ 
  3. (optional) the MIME type of the file (if no MIME type is passed, a
     guess is made based on the filename)
 
+
 Handling Dependencies (Ubuntu)
 ==============================
 

=== modified file 'lava_dispatcher/actions/lava_test_shell.py'
--- lava_dispatcher/actions/lava_test_shell.py	2012-11-21 23:22:01 +0000
+++ lava_dispatcher/actions/lava_test_shell.py	2012-11-23 01:41:43 +0000
@@ -45,9 +45,13 @@ 
 #                                  for test authors.
 #       lava-test-runner           The job that runs the tests on boot.
 #       lava-test-shell            A helper to run a test suite.
+#       lava-test-case             A helper to record information about a test
+#                                  result.
 #       lava-test-case-attach      A helper to attach a file to a test result.
 #    tests/
 #       ${IDX}_${TEST_ID}/         One directory per test to be executed.
+#          uuid                    The "analyzer_assigned_uuid" of the
+#                                  test_run that is being generated.
 #          testdef.yml             The test definition.
 #          install.sh              The install steps.
 #          run.sh                  The run steps.
@@ -71,14 +75,15 @@ 
 #       ${IDX}_${TEST_ID}-${TIMESTAMP}/
 #          testdef.yml
 #          stdout.log
-#          return_code          The exit code of run.sh.
+#          return_code             The exit code of run.sh.
+#          analyzer_assigned_uuid
 #          attachments/
 #             install.sh
 #             run.sh
 #             ${FILENAME}          The attached data.
 #             ${FILENAME}.mimetype  The mime type of the attachment.
-#             attributes/
-#                ${ATTRNAME}    Content is value of attribute
+#           attributes/
+#              ${ATTRNAME}         Content is value of attribute
 #          tags/
 #             ${TAGNAME}           Content of file is ignored.
 #          results/
@@ -98,26 +103,35 @@ 
 # After the test run has completed, the /lava/results directory is pulled over
 # to the host and turned into a bundle for submission to the dashboard.
 
-import yaml
 from glob import glob
-import time
 import logging
 import os
 import pexpect
+import pkg_resources
 import shutil
 import stat
 import subprocess
 import tempfile
+import time
+from uuid import uuid4
+
+import yaml
 
 from linaro_dashboard_bundle.io import DocumentIO
 
 import lava_dispatcher.lava_test_shell as lava_test_shell
-import lava_dispatcher.utils as utils
+from lava_dispatcher.signals import SignalDirector
+from lava_dispatcher import utils
 
 from lava_dispatcher.actions import BaseAction
 from lava_dispatcher.device.target import Target
 from lava_dispatcher.downloader import download_image
 
+# Reading from STDIN in the lava-test-shell doesn't work well because its
+# STDIN is /dev/console, which our scripts echo to. This just makes a
+# well-known fifo we can read the ACKs with.
+ACK_FIFO = '/lava_ack.fifo'
+
 LAVA_TEST_DIR = '%s/../../lava_test_shell' % os.path.dirname(__file__)
 LAVA_TEST_ANDROID = '%s/lava-test-runner-android' % LAVA_TEST_DIR
 LAVA_TEST_UBUNTU = '%s/lava-test-runner-ubuntu' % LAVA_TEST_DIR
@@ -139,7 +153,7 @@ 
 Target.ubuntu_deployment_data['lava_test_shell'] = LAVA_TEST_SHELL
 Target.ubuntu_deployment_data['lava_test_case'] = LAVA_TEST_CASE
 Target.ubuntu_deployment_data['lava_test_case_attach'] = LAVA_TEST_CASE_ATTACH
-Target.ubuntu_deployment_data['lava_test_sh_cmd'] = '/bin/sh'
+Target.ubuntu_deployment_data['lava_test_sh_cmd'] = '/bin/bash'
 Target.ubuntu_deployment_data['lava_test_dir'] = '/lava'
 Target.ubuntu_deployment_data['lava_test_results_part_attr'] = 'root_part'
 
@@ -183,6 +197,255 @@ 
 Target.android_deployment_data['lava_test_configure_startup'] = \
         _configure_android_startup
 
+def _get_testdef_git_repo(testdef_repo, tmpdir, revision):
+    cwd = os.getcwd()
+    gitdir = os.path.join(tmpdir, 'gittestrepo')
+    try:
+        subprocess.check_call(['git', 'clone', testdef_repo,
+                                  gitdir])
+        if revision:
+            os.chdir(gitdir)
+            subprocess.check_call(['git', 'checkout', revision])
+        return gitdir
+    except Exception as e:
+        logging.error('Unable to get test definition from git\n' + str(e))
+    finally:
+        os.chdir(cwd)
+
+
+def _get_testdef_bzr_repo(testdef_repo, tmpdir, revision):
+    bzrdir = os.path.join(tmpdir, 'bzrtestrepo')
+    try:
+        # As per bzr revisionspec, '-1' is "The last revision in a
+        # branch".
+        if revision is None:
+            revision = '-1'
+
+        subprocess.check_call(
+            ['bzr', 'branch', '-r', revision, testdef_repo, bzrdir],
+            env={'BZR_HOME': '/dev/null', 'BZR_LOG': '/dev/null'})
+        return bzrdir
+    except Exception as e:
+        logging.error('Unable to get test definition from bzr\n' + str(e))
+
+
+class TestDefinitionLoader(object):
+    """
+    A TestDefinitionLoader knows how to load test definitions from the data
+    provided in the job file.
+    """
+
+    def __init__(self, context, tmpbase):
+        self.testdefs = []
+        self.context = context
+        self.tmpbase = tmpbase
+        self.testdefs_by_uuid = {}
+
+    def _append_testdef(self, testdef_obj):
+        testdef_obj.load_signal_handler()
+        self.testdefs.append(testdef_obj)
+        self.testdefs_by_uuid[testdef_obj.uuid] = testdef_obj
+
+    def load_from_url(self, url):
+        tmpdir = utils.mkdtemp(self.tmpbase)
+        testdef_file = download_image(url, self.context, tmpdir)
+        with open(testdef_file, 'r') as f:
+            logging.info('loading test definition')
+            testdef = yaml.load(f)
+
+        idx = len(self.testdefs)
+
+        self._append_testdef(URLTestDefinition(idx, testdef))
+
+    def load_from_repo(self, testdef_repo):
+        tmpdir = utils.mkdtemp(self.tmpbase)
+        if 'git-repo' in testdef_repo:
+            repo = _get_testdef_git_repo(
+                testdef_repo['git-repo'], tmpdir, testdef_repo.get('revision'))
+            name = os.path.splitext(os.path.basename(testdef_repo['git-repo']))[0]
+            info = _git_info(testdef_repo['git-repo'], repo, name)
+
+        if 'bzr-repo' in testdef_repo:
+            repo = _get_testdef_bzr_repo(
+                testdef_repo['bzr-repo'], tmpdir, testdef_repo.get('revision'))
+            name = testdef_repo['bzr-repo'].replace('lp:', '').split('/')[-1]
+            info = _bzr_info(testdef_repo['bzr-repo'], repo, name)
+
+        test = testdef_repo.get('testdef', 'lavatest.yaml')
+        with open(os.path.join(repo, test), 'r') as f:
+            logging.info('loading test definition ...')
+            testdef = yaml.load(f)
+
+        idx = len(self.testdefs)
+        self._append_testdef(RepoTestDefinition(idx, testdef, repo, info))
+
+
+def _bzr_info(url, bzrdir, name):
+    cwd = os.getcwd()
+    try:
+        os.chdir('%s' % bzrdir)
+        revno = subprocess.check_output(['bzr', 'revno']).strip()
+        return {
+            'project_name': name,
+            'branch_vcs': 'bzr',
+            'branch_revision': revno,
+            'branch_url': url,
+            }
+    finally:
+        os.chdir(cwd)
+
+
+def _git_info(url, gitdir, name):
+    cwd = os.getcwd()
+    try:
+        os.chdir('%s' % gitdir)
+        commit_id = subprocess.check_output(
+            ['git', 'log', '-1', '--pretty=%H']).strip()
+        return {
+            'project_name': name,
+            'branch_vcs': 'git',
+            'branch_revision': commit_id,
+            'branch_url': url,
+            }
+    finally:
+        os.chdir(cwd)
+
+
+class URLTestDefinition(object):
+    """
+    A test definition that was loaded from a URL.
+    """
+
+    def __init__(self, idx, testdef):
+        self.testdef = testdef
+        self.idx = idx
+        self.test_run_id = '%s_%s' % (idx, self.testdef['metadata']['name'])
+        self.uuid = str(uuid4())
+        self._sw_sources = []
+        self.handler = None
+
+    def load_signal_handler(self):
+        hook_data = self.testdef.get('handler')
+        if not hook_data:
+            return
+        try:
+            handler_name = hook_data['handler-name']
+            logging.info("Loading handler named %s", handler_name)
+            handler_eps = list(
+                pkg_resources.iter_entry_points(
+                    'lava.signal_handlers', handler_name))
+            if len(handler_eps) == 0:
+                logging.error("No handler named %s found", handler_name)
+                return
+            elif len(handler_eps) > 1:
+                logging.warning(
+                    "Multiple handlers named %s found.  Picking one arbitrarily.",
+                    handler_name)
+            handler_ep = handler_eps[0]
+            logging.info("Loading handler from %s" % handler_ep.dist)
+            handler_cls = handler_ep.load()
+            self.handler = handler_cls(self, **hook_data.get('params', {}))
+        except Exception:
+            logging.exception("loading handler failed")
+
+    def _create_repos(self, testdir):
+        cwd = os.getcwd()
+        try:
+            os.chdir(testdir)
+
+            for repo in self.testdef['install'].get('bzr-repos', []):
+                logging.info("bzr branch %s" % repo)
+                # Pass non-existent BZR_HOME value, or otherwise bzr may
+                # have non-reproducible behavior because it may rely on
+                # bzr whoami value, presence of ssh keys, etc.
+                subprocess.check_call(['bzr', 'branch', repo],
+                    env={'BZR_HOME': '/dev/null', 'BZR_LOG': '/dev/null'})
+                name = repo.replace('lp:', '').split('/')[-1]
+                self._sw_sources.append(_bzr_info(repo, name, name))
+
+            for repo in self.testdef['install'].get('git-repos', []):
+                logging.info("git clone %s" % repo)
+                subprocess.check_call(['git', 'clone', repo])
+                name = os.path.splitext(os.path.basename(repo))[0]
+                self._sw_sources.append(_git_info(repo, name, name))
+        finally:
+            os.chdir(cwd)
+
+    def _create_target_install(self, hostdir, targetdir):
+        with open('%s/install.sh' % hostdir, 'w') as f:
+            f.write('set -ex\n')
+            f.write('cd %s\n' % targetdir)
+
+            # TODO how should we handle this for Android?
+            deps = self.testdef['install'].get('deps', [])
+            if deps:
+                f.write('sudo apt-get update\n')
+                f.write('sudo apt-get install -y ')
+                for dep in deps:
+                    f.write('%s ' % dep)
+                f.write('\n')
+
+            steps = self.testdef['install'].get('steps', [])
+            if steps:
+                for cmd in steps:
+                    f.write('%s\n' % cmd)
+
+    def copy_test(self, hostdir, targetdir):
+        """Copy the files needed to run this test to the device.
+
+        :param hostdir: The host-side path to copy the files to (a mount
+            of the device filesystem).
+        :param targetdir: The path `hostdir` will have when the device
+            boots.
+        """
+        utils.ensure_directory(hostdir)
+        with open('%s/testdef.yaml' % hostdir, 'w') as f:
+            f.write(yaml.dump(self.testdef))
+
+        with open('%s/uuid' % hostdir, 'w') as f:
+            f.write(self.uuid)
+
+        if 'install' in self.testdef:
+            self._create_repos(hostdir)
+            self._create_target_install(hostdir, targetdir)
+
+        with open('%s/run.sh' % hostdir, 'w') as f:
+            f.write('set -e\n')
+            f.write('export TESTRUN_ID=%s\n' % self.test_run_id)
+            f.write('[ -p %s ] && rm %s\n' % (ACK_FIFO, ACK_FIFO))
+            f.write('mkfifo %s\n' % ACK_FIFO)
+            f.write('cd %s\n' % targetdir)
+            f.write('UUID=`cat uuid`\n')
+            f.write('echo "<LAVA_SIGNAL_STARTRUN $TESTRUN_ID $UUID>"\n')
+            f.write('#wait up to 10 minutes for an ack from the dispatcher\n')
+            f.write('read -t 600 < %s\n' % ACK_FIFO)
+            steps = self.testdef['run'].get('steps', [])
+            if steps:
+                for cmd in steps:
+                    f.write('%s\n' % cmd)
+            f.write('echo "<LAVA_SIGNAL_ENDRUN $TESTRUN_ID $UUID>"\n')
+            f.write('#wait up to 10 minutes for an ack from the dispatcher\n')
+            f.write('read -t 600 < %s\n' % ACK_FIFO)
+
+
+class RepoTestDefinition(URLTestDefinition):
+    """
+    A test definition that was loaded from a VCS repository.
+
+    The difference is that the files from the repository are also copied to
+    the device.
+    """
+
+    def __init__(self, idx, testdef, repo, info):
+        URLTestDefinition.__init__(self, idx, testdef)
+        self.repo = repo
+        self._sw_sources.append(info)
+
+    def copy_test(self, hostdir, targetdir):
+        URLTestDefinition.copy_test(self, hostdir, targetdir)
+        for filepath in glob(os.path.join(self.repo, '*')):
+            shutil.copy2(filepath, hostdir)
+        logging.info('copied all test files')
+
 
 class cmd_lava_test_shell(BaseAction):
 
@@ -216,93 +479,44 @@ 
         target = self.client.target_device
         self._assert_target(target)
 
-        self._configure_target(target, testdef_urls, testdef_repos)
+        testdefs_by_uuid = self._configure_target(target, testdef_urls, testdef_repos)
+
+        signal_director = SignalDirector(self.client, testdefs_by_uuid)
 
         with target.runner() as runner:
-            patterns = [
+            start = time.time()
+            while self._keep_running(runner, timeout, signal_director):
+                elapsed = time.time() - start
+                timeout = int(timeout - elapsed)
+
+        self._bundle_results(target, signal_director, testdefs_by_uuid)
+
+    def _keep_running(self, runner, timeout, signal_director):
+        patterns = [
                 '<LAVA_TEST_RUNNER>: exiting',
                 pexpect.EOF,
                 pexpect.TIMEOUT,
+                '<LAVA_SIGNAL_(\S+) ([^>]+)>',
                 ]
-            idx = runner._connection.expect(patterns, timeout=timeout)
-            if idx == 0:
-                logging.info('lava_test_shell seems to have completed')
-            elif idx == 1:
-                logging.warn('lava_test_shell connection dropped')
-            elif idx == 2:
-                logging.warn('lava_test_shell has timed out')
-
-        self._bundle_results(target)
-
-    def _get_test_definition(self, d, ldir, testdef_src, tmpdir, isrepo=False):
-        repo = None
-        test = 'lavatest.yaml'
-
-        if isrepo:
-            if 'git-repo' in testdef_src:
-                repo = self._get_testdef_git_repo(testdef_src['git-repo'],
-                                                  tmpdir,
-                                                  testdef_src.get('revision'))
-
-            if 'bzr-repo' in testdef_src:
-                repo = self._get_testdef_bzr_repo(testdef_src['bzr-repo'],
-                                                  tmpdir,
-                                                  testdef_src.get('revision'))
-
-            if 'testdef' in testdef_src:
-                test = testdef_src['testdef']
-
-            with open(os.path.join(repo, test), 'r') as f:
-                testdef = yaml.load(f)
-        else:
-            test = download_image(testdef_src, self.context, tmpdir)
-            with open(test, 'r') as f:
-                testdef = yaml.load(f)
-
-        logging.info('loaded test definition ...')
-
-        # android mount the partition under /system, while ubuntu mounts under
-        # /, so we have hdir for where it is on the host and tdir for how the
-        # target will see the path
-        timestamp = str(time.time())
-        hdir = '%s/tests/%s_%s' % (d, timestamp, testdef['metadata']['name'])
-        tdir = '%s/tests/%s_%s' % (ldir, timestamp, testdef['metadata']['name'])
-        self._copy_test(hdir, tdir, testdef, repo)
-
-        return tdir
-
-    def _get_testdef_git_repo(self, testdef_repo, tmpdir, revision):
-        cwd = os.getcwd()
-        gitdir = os.path.join(tmpdir, 'gittestrepo')
-        try:
-            subprocess.check_call(['git', 'clone', testdef_repo, gitdir])
-            if revision:
-                os.chdir(gitdir)
-                subprocess.check_call(['git', 'checkout', revision])
-            return gitdir
-        except Exception as e:
-            logging.error('Unable to get test definition from git\n' + str(e))
-        finally:
-            os.chdir(cwd)
-
-    def _get_testdef_bzr_repo(self, testdef_repo, tmpdir, revision):
-        bzrdir = os.path.join(tmpdir, 'bzrtestrepo')
-        try:
-            # As per bzr revisionspec, '-1' is "The last revision in a
-            # branch".
-            if revision is None:
-                revision = '-1'
-
-            # Pass non-existent BZR_HOME value, or otherwise bzr may
-            # have non-reproducible behavior because it may rely on
-            # bzr whoami value, presence of ssh keys, etc.
-            subprocess.check_call(
-                ['bzr', 'branch', '-r', revision, testdef_repo, bzrdir],
-                env={'BZR_HOME': '/dev/null', 'BZR_LOG': '/dev/null'})
-
-            return bzrdir
-        except Exception as e:
-            logging.error('Unable to get test definition from bzr\n' + str(e))
+
+        idx = runner._connection.expect(patterns, timeout=timeout)
+        if idx == 0:
+            logging.info('lava_test_shell seems to have completed')
+        elif idx == 1:
+            logging.warn('lava_test_shell connection dropped')
+        elif idx == 2:
+            logging.warn('lava_test_shell has timed out')
+        elif idx == 3:
+            name, params = runner._connection.match.groups()
+            params = params.split()
+            try:
+                signal_director.signal(name, params)
+            except Exception:
+                logging.exception("on_signal failed")
+            runner._connection.sendline('echo LAVA_ACK > %s' % ACK_FIFO)
+            return True
+
+        return False
 
     def _copy_runner(self, mntdir, target):
         runner = target.deployment_data['lava_test_runner']
@@ -311,7 +525,7 @@ 
 
         shcmd = target.deployment_data['lava_test_sh_cmd']
 
-        for key in ['lava_test_shell', 'lava_test_case', 'lava_test_case_attach']:
+        for key in ['lava_test_shell', 'lava_test_case_attach']:
             fname = target.deployment_data[key]
             with open(fname, 'r') as fin:
                 with open('%s/bin/%s' % (mntdir, os.path.basename(fname)), 'w') as fout:
@@ -319,97 +533,13 @@ 
                     fout.write(fin.read())
                     os.fchmod(fout.fileno(), XMOD)
 
-    def _bzr_info(self, url, bzrdir):
-        cwd = os.getcwd()
-        try:
-            os.chdir('%s' % bzrdir)
-            revno = subprocess.check_output(['bzr', 'revno']).strip()
-            return {
-                'project_name': bzrdir,
-                'branch_vcs': 'bzr',
-                'branch_revision': revno,
-                'branch_url': url,
-                }
-        finally:
-            os.chdir(cwd)
-
-    def _git_info(self, url, gitdir):
-        cwd = os.getcwd()
-        try:
-            os.chdir('%s' % gitdir)
-            commit_id = subprocess.check_output(
-                ['git', 'log', '-1', '--pretty=%H']).strip()
-            return {
-                'project_name': url.rsplit('/')[-1],
-                'branch_vcs': 'git',
-                'branch_revision': commit_id,
-                'branch_url': url,
-                }
-        finally:
-            os.chdir(cwd)
-
-    def _create_repos(self, testdef, testdir):
-        cwd = os.getcwd()
-        try:
-            os.chdir(testdir)
-            for repo in testdef['install'].get('bzr-repos', []):
-                logging.info("bzr branch %s" % repo)
-                # Pass non-existent BZR_HOME value, or otherwise bzr may
-                # have non-reproducible behavior because it may rely on
-                # bzr whoami value, presence of ssh keys, etc.
-                subprocess.check_call(['bzr', 'branch', repo],
-                    env={'BZR_HOME': '/dev/null', 'BZR_LOG': '/dev/null'})
-                name = repo.replace('lp:', '').split('/')[-1]
-                self._sw_sources.append(self._bzr_info(repo, name))
-            for repo in testdef['install'].get('git-repos', []):
-                logging.info("git clone %s" % repo)
-                subprocess.check_call(['git', 'clone', repo])
-                name = os.path.splitext(os.path.basename(repo))[0]
-                self._sw_sources.append(self._git_info(repo, name))
-        finally:
-            os.chdir(cwd)
-
-    def _create_target_install(self, testdef, hostdir, targetdir):
-        with open('%s/install.sh' % hostdir, 'w') as f:
-            f.write('set -ex\n')
-            f.write('cd %s\n' % targetdir)
-
-            # TODO how should we handle this for Android?
-            if 'deps' in testdef['install'] and \
-                    testdef['install']['deps'] is not None:
-                f.write('sudo apt-get update\n')
-                f.write('sudo apt-get install -y ')
-                for dep in testdef['install']['deps']:
-                    f.write('%s ' % dep)
-                f.write('\n')
-
-            if 'steps' in testdef['install'] and \
-                    testdef['install']['steps'] is not None:
-                for cmd in testdef['install']['steps']:
-                    f.write('%s\n' % cmd)
-
-    def _copy_test(self, hostdir, targetdir, testdef, testdef_repo=None):
-        self._sw_sources = []
-        utils.ensure_directory(hostdir)
-        with open('%s/testdef.yaml' % hostdir, 'w') as f:
-            f.write(yaml.dump(testdef))
-
-        if 'install' in testdef:
-            self._create_repos(testdef, hostdir)
-            self._create_target_install(testdef, hostdir, targetdir)
-
-        with open('%s/run.sh' % hostdir, 'w') as f:
-            f.write('set -e\n')
-            f.write('cd %s\n' % targetdir)
-            if 'steps' in testdef['run'] \
-                    and testdef['run']['steps'] is not None:
-                for cmd in testdef['run']['steps']:
-                    f.write('%s\n' % cmd)
-
-        if testdef_repo:
-            for filepath in glob(os.path.join(testdef_repo, '*')):
-                shutil.copy2(filepath, hostdir)
-            logging.info('copied all test files')
+        tc = target.deployment_data['lava_test_case']
+        with open(tc, 'r') as fin:
+            with open('%s/bin/lava-test-case' % mntdir, 'w') as fout:
+                fout.write('#!%s\n\n' % shcmd)
+                fout.write('ACK_FIFO=%s\n' % ACK_FIFO)
+                fout.write(fin.read())
+                os.fchmod(fout.fileno(), XMOD)
 
     def _mk_runner_dirs(self, mntdir):
         utils.ensure_directory('%s/bin' % mntdir)
@@ -424,34 +554,37 @@ 
         with target.file_system(results_part, 'lava') as d:
             self._mk_runner_dirs(d)
             self._copy_runner(d, target)
-            testdirs = []
+
+            testdef_loader = TestDefinitionLoader(self.context, target.scratch_dir)
 
             if testdef_urls:
                 for url in testdef_urls:
-                    tdir = self._get_test_definition(d,
-                                                     ldir,
-                                                     url,
-                                                     target.scratch_dir,
-                                                     isrepo=False)
-                    testdirs.append(tdir)
+                    testdef_loader.load_from_url(url)
 
             if testdef_repos:
                 for repo in testdef_repos:
-                    tdir = self._get_test_definition(d,
-                                                     ldir,
-                                                     repo,
-                                                     target.scratch_dir,
-                                                     isrepo=True)
-                    testdirs.append(tdir)
+                    testdef_loader.load_from_repo(repo)
+
+            tdirs = []
+            for testdef in testdef_loader.testdefs:
+                # android mount the partition under /system, while ubuntu
+                # mounts under /, so we have hdir for where it is on the
+                # host and tdir for how the target will see the path
+                hdir = '%s/tests/%s' % (d, testdef.test_run_id)
+                tdir = '%s/tests/%s' % (ldir, testdef.test_run_id)
+                testdef.copy_test(hdir, tdir)
+                tdirs.append(tdir)
 
             with open('%s/lava-test-runner.conf' % d, 'w') as f:
-                for testdir in testdirs:
+                for testdir in tdirs:
                     f.write('%s\n' % testdir)
 
         with target.file_system(target.config.root_part, 'etc') as d:
             target.deployment_data['lava_test_configure_startup'](d)
 
-    def _bundle_results(self, target):
+        return testdef_loader.testdefs_by_uuid
+
+    def _bundle_results(self, target, signal_director, testdefs_by_uuid):
         """ Pulls the results from the target device and builds a bundle
         """
         results_part = target.deployment_data['lava_test_results_part_attr']
@@ -459,13 +592,15 @@ 
         rdir = self.context.host_result_dir
 
         with target.file_system(results_part, 'lava/results') as d:
-            bundle = lava_test_shell.get_bundle(d, self._sw_sources)
+            bundle = lava_test_shell.get_bundle(d, testdefs_by_uuid)
             utils.ensure_directory_empty(d)
 
-            (fd, name) = tempfile.mkstemp(
-                prefix='lava-test-shell', suffix='.bundle', dir=rdir)
-            with os.fdopen(fd, 'w') as f:
-                DocumentIO.dump(f, bundle)
+        signal_director.postprocess_bundle(bundle)
+
+        (fd, name) = tempfile.mkstemp(
+            prefix='lava-test-shell', suffix='.bundle', dir=rdir)
+        with os.fdopen(fd, 'w') as f:
+            DocumentIO.dump(f, bundle)
 
     def _assert_target(self, target):
         """ Ensure the target has the proper deployment data required by this

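The `_bundle_results` change above moves serialization out of the `file_system` context and keeps the `mkstemp`/`fdopen` pattern for writing the bundle atomically. That pattern can be sketched in isolation as follows, with `json` standing in for `DocumentIO` (an assumption; `DocumentIO.dump` is only assumed to expose a similar file-object interface):

```python
import json
import os
import tempfile

def write_bundle(bundle, result_dir):
    """Serialize a result bundle to a uniquely named file in result_dir.

    mkstemp returns an already-open file descriptor, so there is no race
    between choosing the file name and creating the file.
    """
    fd, name = tempfile.mkstemp(
        prefix='lava-test-shell', suffix='.bundle', dir=result_dir)
    with os.fdopen(fd, 'w') as f:
        json.dump(bundle, f)  # the dispatcher uses DocumentIO.dump here
    return name

bundle = {'test_runs': []}
tmpdir = tempfile.mkdtemp()
path = write_bundle(bundle, tmpdir)
```

Using `os.fdopen` on the descriptor from `mkstemp`, rather than reopening by name, avoids both a leaked descriptor and a name race.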
=== modified file 'lava_dispatcher/lava_test_shell.py'
--- lava_dispatcher/lava_test_shell.py	2012-11-22 00:43:48 +0000
+++ lava_dispatcher/lava_test_shell.py	2012-11-22 23:34:43 +0000
@@ -18,6 +18,13 @@ 
 # along
 # with this program; if not, see <http://www.gnu.org/licenses>.
 
+"""
+Import test results from disk.
+
+This module contains functions to create a bundle from the disk files created
+by a lava-test-shell run.
+"""
+
 import datetime
 import decimal
 import mimetypes
@@ -26,8 +33,6 @@ 
 import os
 import re
 
-from uuid import uuid4
-
 from lava_dispatcher.test_data import create_attachment
 
 
@@ -250,20 +255,22 @@ 
     return attachments
 
 
-def _get_test_run(test_run_dir, hwcontext, swcontext):
+def _get_test_run(test_run_dir, hwcontext, build, pkginfo, testdefs_by_uuid):
     now = datetime.datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%SZ')
 
     testdef = _read_content(os.path.join(test_run_dir, 'testdef.yaml'))
     stdout = _read_content(os.path.join(test_run_dir, 'stdout.log'))
+    uuid = _read_content(os.path.join(test_run_dir, 'analyzer_assigned_uuid'))
     attachments = _get_run_attachments(test_run_dir, testdef, stdout)
     attributes = _attributes_from_dir(os.path.join(test_run_dir, 'attributes'))
 
     testdef = yaml.load(testdef)
+    swcontext = _get_sw_context(build, pkginfo, testdefs_by_uuid[uuid]._sw_sources)
 
     return {
         'test_id': testdef.get('metadata').get('name'),
         'analyzer_assigned_date': now,
-        'analyzer_assigned_uuid': str(uuid4()),
+        'analyzer_assigned_uuid': uuid,
         'time_check_performed': False,
         'test_results': _get_test_results(test_run_dir, testdef, stdout),
         'software_context': swcontext,
@@ -287,7 +294,7 @@ 
             for filename in os.listdir(dirpath)]
 
 
-def get_bundle(results_dir, sw_sources):
+def get_bundle(results_dir, testdefs_by_uuid):
     """
     iterates through a results directory to build up a bundle formatted for
     the LAVA dashboard
@@ -299,14 +306,13 @@ 
 
     build = _read_content(os.path.join(results_dir, 'swcontext/build.txt'))
     pkginfo = _read_content(os.path.join(results_dir, 'swcontext/pkgs.txt'), ignore_missing=True)
-    swctx = _get_sw_context(build, pkginfo, sw_sources)
 
     for test_run_name, test_run_path in _directory_names_and_paths(results_dir):
         if test_run_name in ('hwcontext', 'swcontext'):
             continue
         if os.path.isdir(test_run_path):
             try:
-                testruns.append(_get_test_run(test_run_path, hwctx, swctx))
+                testruns.append(_get_test_run(test_run_path, hwctx, build, pkginfo, testdefs_by_uuid))
             except:
                 logging.exception('error processing results for: %s' % test_run_name)
 

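`get_bundle` walks the results directory and treats every subdirectory except the shared `hwcontext` and `swcontext` directories as one test run. The scan reduces to a short sketch (the helper name and the fake directory layout are illustrative):

```python
import os
import tempfile

def find_test_run_dirs(results_dir):
    """Yield per-test-run directories, skipping the shared
    hardware/software context directories."""
    for name in sorted(os.listdir(results_dir)):
        if name in ('hwcontext', 'swcontext'):
            continue
        path = os.path.join(results_dir, name)
        if os.path.isdir(path):
            yield path

# build a fake results tree like the one lava-test-runner leaves behind
root = tempfile.mkdtemp()
for name in ('hwcontext', 'swcontext', 'smoke-tests-0'):
    os.mkdir(os.path.join(root, name))

run_dirs = list(find_test_run_dirs(root))
```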
=== added directory 'lava_dispatcher/signals'
=== added file 'lava_dispatcher/signals/__init__.py'
--- lava_dispatcher/signals/__init__.py	1970-01-01 00:00:00 +0000
+++ lava_dispatcher/signals/__init__.py	2012-11-22 21:33:41 +0000
@@ -0,0 +1,156 @@ 
+#!/usr/bin/python
+
+# Copyright (C) 2012 Linaro Limited
+#
+# Author: Andy Doan <andy.doan@linaro.org>
+#
+# This file is part of LAVA Dispatcher.
+#
+# LAVA Dispatcher is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# LAVA Dispatcher is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along
+# with this program; if not, see <http://www.gnu.org/licenses>.
+
+import logging
+
+
+class BaseSignalHandler(object):
+
+    def __init__(self, testdef_obj):
+        self.testdef_obj = testdef_obj
+
+    def start(self):
+        pass
+
+    def end(self):
+        pass
+
+    def starttc(self, test_case_id):
+        pass
+
+    def endtc(self, test_case_id):
+        pass
+
+    def custom_signal(self, signame, params):
+        pass
+
+    def postprocess_test_run(self, test_run):
+        pass
+
+
+class SignalHandler(BaseSignalHandler):
+
+    def __init__(self, testdef_obj):
+        BaseSignalHandler.__init__(self, testdef_obj)
+        self._case_data = {}
+        self._cur_case_id = None
+        self._cur_case_data = None
+
+    def starttc(self, test_case_id):
+        if self._cur_case_data:
+            logging.warning(
+                "unexpected cur_case_data %s", self._cur_case_data)
+        self._cur_case_id = test_case_id
+        data = None
+        try:
+            data = self.start_testcase(test_case_id)
+        except:
+            logging.exception("start_testcase failed for %s", test_case_id)
+        self._cur_case_data = self._case_data[test_case_id] = data
+
+    def endtc(self, test_case_id):
+        if self._cur_case_id != test_case_id:
+            logging.warning(
+                "stoptc for %s received but expecting %s",
+                test_case_id, self._cur_case_id)
+        else:
+            try:
+                self.end_testcase(test_case_id, self._cur_case_data)
+            except:
+                logging.exception(
+                    "stop_testcase failed for %s", test_case_id)
+        self._cur_case_data = None
+
+    def postprocess_test_run(self, test_run):
+        for test_result in test_run['test_results']:
+            tc_id = test_result.get('test_case_id')
+            if not tc_id:
+                continue
+            if tc_id not in self._case_data:
+                continue
+            data = self._case_data[tc_id]
+            try:
+                self.postprocess_test_result(test_result, data)
+            except:
+                logging.exception("postprocess_test_result failed for %s", tc_id)
+
+    def start_testcase(self, test_case_id):
+        return {}
+
+    def end_testcase(self, test_case_id, data):
+        pass
+
+    def postprocess_test_result(self, test_result, case_data):
+        pass
+
+
+
+class SignalDirector(object):
+
+    def __init__(self, client, testdefs_by_uuid):
+        self.client = client
+        self.testdefs_by_uuid = testdefs_by_uuid
+        self._test_run_data = []
+        self._cur_handler = None
+
+    def signal(self, name, params):
+        handler = getattr(self, '_on_' + name, None)
+        if not handler and self._cur_handler:
+            handler = self._cur_handler.custom_signal
+            params = [name] + list(params)
+        if handler:
+            try:
+                handler(*params)
+            except:
+                logging.exception("handling signal %s failed", name)
+
+    def _on_STARTRUN(self, test_run_id, uuid):
+        self._cur_handler = None
+        testdef_obj = self.testdefs_by_uuid.get(uuid)
+        if testdef_obj:
+            self._cur_handler = testdef_obj.handler
+        if self._cur_handler:
+            self._cur_handler.start()
+
+    def _on_ENDRUN(self, test_run_id, uuid):
+        if self._cur_handler:
+            self._cur_handler.end()
+
+    def _on_STARTTC(self, test_case_id):
+        if self._cur_handler:
+            self._cur_handler.starttc(test_case_id)
+
+    def _on_ENDTC(self, test_case_id):
+        if self._cur_handler:
+            self._cur_handler.endtc(test_case_id)
+
+    def postprocess_bundle(self, bundle):
+        for test_run in bundle['test_runs']:
+            uuid = test_run['analyzer_assigned_uuid']
+            testdef_obj = self.testdefs_by_uuid.get(uuid)
+            if testdef_obj and testdef_obj.handler:
+                try:
+                    testdef_obj.handler.postprocess_test_run(test_run)
+                except:
+                    logging.exception(
+                        "postprocessing test run with uuid %s failed", uuid)
+

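`SignalDirector.signal` resolves a signal name to a method through the `_on_<NAME>` naming convention via `getattr`, and swallows handler exceptions so a misbehaving handler cannot kill the dispatch loop. The dispatch idea in isolation (class and signal names here are illustrative, not from the diff):

```python
import logging

class MiniDirector(object):
    """Dispatch named signals to _on_<NAME> methods, as SignalDirector does."""

    def __init__(self):
        self.seen = []

    def signal(self, name, params):
        handler = getattr(self, '_on_' + name, None)
        if handler is None:
            logging.warning("unknown signal %s", name)
            return
        try:
            handler(*params)
        except Exception:
            # a bad handler must not break the dispatcher
            logging.exception("handling signal %s failed", name)

    def _on_STARTTC(self, test_case_id):
        self.seen.append(('start', test_case_id))

    def _on_ENDTC(self, test_case_id):
        self.seen.append(('end', test_case_id))

d = MiniDirector()
d.signal('STARTTC', ['case0'])
d.signal('ENDTC', ['case0'])
d.signal('NOSUCH', [])   # logged and ignored, no exception
```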
=== added file 'lava_dispatcher/signals/duration.py'
--- lava_dispatcher/signals/duration.py	1970-01-01 00:00:00 +0000
+++ lava_dispatcher/signals/duration.py	2012-11-22 21:33:41 +0000
@@ -0,0 +1,20 @@ 
+import datetime
+import time
+
+from json_schema_validator.extensions import timedelta_extension
+
+from lava_dispatcher.signals import SignalHandler
+
+class AddDuration(SignalHandler):
+
+    def start_testcase(self, test_case_id):
+        return {
+            'starttime': time.time()
+            }
+
+    def end_testcase(self, test_case_id, data):
+        data['endtime'] = time.time()
+
+    def postprocess_test_result(self, test_result, data):
+        delta = datetime.timedelta(seconds=data['endtime'] - data['starttime'])
+        test_result['duration'] = timedelta_extension.to_json(delta)

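`AddDuration` records wall-clock timestamps around each test case and serializes the difference into the test result. The same bookkeeping can be sketched with plain `str()` standing in for `timedelta_extension.to_json` (an assumption; that helper's exact output format is not shown in this diff):

```python
import datetime
import time

case_data = {}

def start_testcase(test_case_id):
    # called on STARTTC
    case_data[test_case_id] = {'starttime': time.time()}

def end_testcase(test_case_id):
    # called on ENDTC
    case_data[test_case_id]['endtime'] = time.time()

def duration_for(test_case_id):
    data = case_data[test_case_id]
    delta = datetime.timedelta(seconds=data['endtime'] - data['starttime'])
    # the dispatcher uses timedelta_extension.to_json(delta) here
    return str(delta)

start_testcase('case0')
end_testcase('case0')
dur = duration_for('case0')
```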
=== modified file 'lava_dispatcher/utils.py'
--- lava_dispatcher/utils.py	2012-11-22 03:03:40 +0000
+++ lava_dispatcher/utils.py	2012-11-26 19:30:28 +0000
@@ -163,9 +163,9 @@ 
 
 class logging_spawn(pexpect.spawn):
 
-    def sendline(self, *args, **kw):
-        logging.debug("sendline : %s" % args[0])
-        return super(logging_spawn, self).sendline(*args, **kw)
+    def sendline(self, s=''):
+        logging.debug("sendline : %s" % s)
+        return super(logging_spawn, self).sendline(s)
 
     def send(self, *args, **kw):
         logging.debug("send : %s" % args[0])

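The `utils.py` change fixes a crash: with the old `*args` signature, a bare `sendline()` made `args[0]` raise `IndexError`, even though pexpect's own `sendline` defaults its argument to an empty string. A stub base class (standing in for `pexpect.spawn`, which is an assumption here) shows the fixed shape:

```python
import logging

class FakeSpawn(object):
    # stand-in for pexpect.spawn; the real sendline signature is sendline(s='')
    def __init__(self):
        self.sent = []

    def sendline(self, s=''):
        self.sent.append(s)
        return len(s) + 1  # pexpect returns the number of bytes written

class LoggingSpawn(FakeSpawn):
    def sendline(self, s=''):
        # mirroring lava_dispatcher.utils.logging_spawn after the fix:
        # a defaulted parameter means a bare sendline() never touches args[0]
        logging.debug("sendline : %s", s)
        return super(LoggingSpawn, self).sendline(s)

conn = LoggingSpawn()
conn.sendline()        # raised IndexError with the *args version
conn.sendline('boot')
```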
=== modified file 'lava_test_shell/lava-test-case'
--- lava_test_shell/lava-test-case	2012-11-21 01:21:56 +0000
+++ lava_test_shell/lava-test-case	2012-11-21 01:33:01 +0000
@@ -17,8 +17,13 @@ 
 fi
 if [ "$1" = "--shell" ]; then
     shift
+    echo "<LAVA_SIGNAL_STARTTC $TEST_CASE_ID>"
+    read -t 600 < $ACK_FIFO
     $*
-    if [ $? -eq 0 ]; then
+    rc=$?
+    echo "<LAVA_SIGNAL_ENDTC $TEST_CASE_ID>"
+    read -t 600 < $ACK_FIFO
+    if [ $rc -eq 0 ]; then
         RESULT=pass
     else
         RESULT=fail

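`lava-test-case` now emits markers such as `<LAVA_SIGNAL_STARTTC $TEST_CASE_ID>` on the console and blocks on `$ACK_FIFO` until the host acknowledges. The host side has to pick those markers out of the serial stream; a minimal parser sketch (the regex is an assumption, since the host-side matching code is not part of this diff):

```python
import re

SIGNAL_RE = re.compile(r'<LAVA_SIGNAL_(\w+)((?: [^>]+)*)>')

def parse_signal(line):
    """Return (name, params) if line carries a LAVA signal, else None."""
    m = SIGNAL_RE.search(line)
    if m is None:
        return None
    name = m.group(1)
    params = m.group(2).split()
    return name, params

sig = parse_signal('noise <LAVA_SIGNAL_STARTTC case0> more noise')
```

A `(name, params)` pair in this shape is exactly what `SignalDirector.signal` expects.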
=== modified file 'lava_test_shell/lava-test-runner-android'
--- lava_test_shell/lava-test-runner-android	2012-11-21 01:21:56 +0000
+++ lava_test_shell/lava-test-runner-android	2012-11-22 22:47:53 +0000
@@ -12,6 +12,11 @@ 
 # make sure we log to serial console
 exec >/dev/console
 
+# This is a total hack to make sure we wait until the shell prompt has
+# appeared before sending any signals.
+sleep 15
+
+
 PREFIX="<LAVA_TEST_RUNNER>:"
 WORKFILE="/data/lava/lava-test-runner.conf"
 RESULTSDIR="/data/lava/results"
@@ -94,6 +99,7 @@ 
 		mkdir ${odir}
 		mkdir ${odir}/attachments/
 		cp ${line}/testdef.yaml ${odir}/
+		cp ${line}/uuid ${odir}/analyzer_assigned_uuid
 		cp ${line}/run.sh ${odir}/attachments/
 		[ -f ${line}/install.sh ] && cp ${line}/install.sh ${odir}/attachments/
 		lava-test-shell --output_dir ${odir} /system/bin/sh -e "${line}/run.sh"

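Both runner scripts now copy the host-generated `uuid` file into the results directory as `analyzer_assigned_uuid`; that is how `_get_test_run` later matches a finished run back to its test definition and signal handler instead of minting a fresh `uuid4()`. The host-side lookup reduces to a sketch like this (the helper and placeholder values are illustrative):

```python
import os
import tempfile

def read_run_uuid(test_run_dir):
    """Read the uuid the dispatcher assigned to this run before boot."""
    path = os.path.join(test_run_dir, 'analyzer_assigned_uuid')
    with open(path) as f:
        return f.read().strip()

# fake the file the runner script copies into the results directory
run_dir = tempfile.mkdtemp()
with open(os.path.join(run_dir, 'analyzer_assigned_uuid'), 'w') as f:
    f.write('0000-1111\n')

testdefs_by_uuid = {'0000-1111': 'testdef-object-placeholder'}
testdef = testdefs_by_uuid[read_run_uuid(run_dir)]
```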
=== modified file 'lava_test_shell/lava-test-runner-ubuntu'
--- lava_test_shell/lava-test-runner-ubuntu	2012-11-19 20:09:19 +0000
+++ lava_test_shell/lava-test-runner-ubuntu	2012-11-22 01:43:14 +0000
@@ -1,8 +1,12 @@ 
-#!/bin/sh
+#!/bin/bash
 
 # make sure we log to serial console
 exec >/dev/console
 
+# This is a total hack to make sure we wait until the shell prompt has
+# appeared before sending any signals.
+sleep 15
+
 PREFIX="<LAVA_TEST_RUNNER>:"
 WORKFILE="/lava/lava-test-runner.conf"
 RESULTSDIR="/lava/results"
@@ -77,10 +81,13 @@ 
 	odir=${RESULTSDIR}/${test}-`date +%s`
 	mkdir ${odir}
 	mkdir ${odir}/attachments/
+	cp ${line}/uuid ${odir}/analyzer_assigned_uuid
 	cp ${line}/testdef.yaml ${odir}/
 	cp ${line}/run.sh ${odir}/attachments/
 	[ -f ${line}/install.sh ] && cp ${line}/install.sh ${odir}/attachments/
-	lava-test-shell --output_dir ${odir} /bin/sh -e "${line}/run.sh"
+	# run.sh includes a "read -t <timeout>" which isn't supported by dash
+	# so be sure to use bash
+	lava-test-shell --output_dir ${odir} /bin/bash -e "${line}/run.sh"
 	echo "${PREFIX} ${test} exited with: `cat ${odir}/return_code`"
 done < ${WORKFILE}
 

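The ubuntu runner switches to bash because the generated scripts block on `read -t 600 < $ACK_FIFO`, and dash's `read` builtin has no `-t` timeout option. The same bounded wait can be expressed on the host side with `select` (a sketch using an ordinary pipe in place of the device's named FIFO):

```python
import os
import select

def wait_for_ack(fd, timeout):
    """Block until fd is readable or timeout (seconds) expires.

    Returns the bytes read, or None on timeout -- the shell equivalent
    is `read -t <timeout> < $ACK_FIFO`.
    """
    readable, _, _ = select.select([fd], [], [], timeout)
    if not readable:
        return None
    return os.read(fd, 4096)

r, w = os.pipe()        # stands in for the named FIFO
os.write(w, b'ack\n')   # in this sketch the ack is already waiting
got = wait_for_ack(r, 1.0)
```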
=== modified file 'setup.py'
--- setup.py	2012-11-19 20:59:58 +0000
+++ setup.py	2012-11-21 00:57:15 +0000
@@ -16,6 +16,8 @@ 
     dispatch = lava.dispatcher.commands:dispatch
     connect = lava.dispatcher.commands:connect
     power-cycle = lava.dispatcher.commands:power_cycle
+    [lava.signal_handlers]
+    add-duration = lava_dispatcher.signals.duration:AddDuration
     """,
     packages=find_packages(),
     package_data= {
@@ -33,6 +35,7 @@ 
             'lava_test_shell/lava-test-runner-android',
             'lava_test_shell/lava-test-runner-ubuntu',
             'lava_test_shell/lava-test-runner.conf',
+            'lava_test_shell/lava-test-case',
             'lava_test_shell/lava-test-runner.init.d',
             'lava_test_shell/lava-test-shell',
             ])
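The `setup.py` change registers `AddDuration` under a new `[lava.signal_handlers]` entry point group, so handlers can be looked up by name at runtime rather than imported directly. Entry point values use the `module:attribute` form; resolving one by hand reduces to the following sketch (the dispatcher presumably relies on pkg_resources for this, so the helper here is illustrative):

```python
import importlib

def resolve_entry_point(value):
    """Resolve a 'package.module:attr' entry point value to the object."""
    module_name, _, attr = value.partition(':')
    module = importlib.import_module(module_name)
    return getattr(module, attr)

# resolving a stdlib name the same way setuptools would resolve
# 'lava_dispatcher.signals.duration:AddDuration'
cls = resolve_entry_point('collections:OrderedDict')
```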