=== modified file 'doc/conf.py'
@@ -60,9 +60,9 @@
# built documents.
#
# The short X.Y version.
-version = '0.23'
+version = '0.32'
# The full version, including alpha/beta/rc tags.
-release = '0.23'
+release = '0.32'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
=== added file 'doc/debugging.rst'
@@ -0,0 +1,119 @@
+.. _debugging:
+
+Debugging LAVA test definitions
+*******************************
+
+.. _singlenode:
+
+Convert Multi-Node jobs to single node
+======================================
+
+The scripts available in the :ref:`multinode_api` are not installed for
+test jobs which are not part of a MultiNode group, so a job which calls
+them outside a group will simply fail those tests with ``command not found``.
+
+Therefore, by reversing the :ref:`changes_to_json`, a MultiNode JSON file
+can be converted to singlenode.
+
+Other calls which may require communication with other devices may need
+to be removed from your YAML. This can be extended to retain a set of
+singlenode YAML files in which new wrapper scripts and new builds are
+tested.
+
+The Job Definition of one job within a MultiNode group may be a good
+starting point for creating a singlenode equivalent.
+
+.. _set_x:
+
+Always use set -x in wrapper scripts
+====================================
+
+With ``set -x``, the complete processing of the wrapper script
+becomes visible in the complete log.
+
+::
+
+ #!/bin/sh
+ set -e
+ set -x
+
+.. _shell_operators:
+
+Avoid using shell operators in YAML lines
+=========================================
+
+Pipes, redirects and nested sub shells will not work reliably when put
+directly into the YAML. Use a wrapper script (with :ref:`set -x <set_x>`).
+
+::
+
+ #!/bin/sh
+
+ set -e
+ set -x
+ ifconfig|grep "inet addr"|grep -v "127.0.0.1"|cut -d: -f2|cut -d' ' -f1
+
+Un-nested sub-shells do work::
+
+ - lava-test-case multinode-send-network --shell lava-send network hostname=$(hostname) fqdn=$(hostname -f)
+
+.. _check_messageid:
+
+Check that your message ID labels are consistent
+================================================
+
+A :ref:`lava_wait` must be preceded by a :ref:`lava_send` from at least
+one other device in the group, or the waiting device will :ref:`timeout <timeouts>`.
+
+This can be a particular problem if you remove test definitions from the
+JSON or edit a YAML file without checking other uses of the same file.
+
+``#`` can be used as a comment in YAML but JSON does not support
+comments, so take care.
+
+.. _parsers:
+
+Test your result parsers
+========================
+
+If the YAML uses a custom result parser, configure one of your YAML files
+to output the entire test result output to stdout so that you can
+reliably capture a representative block of output. Test your proposed
+result parser against the block using your favourite language.
+
+Comment out the parser from the YAML if there are particular problems,
+just to see what the default LAVA parsers can provide.
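
For example, a candidate pattern can be exercised against a captured line of output with standard tools before it goes into the YAML. The sample line and regex below are purely illustrative:

```shell
#!/bin/sh
# A line captured from a real test run (illustrative):
sample="test-networking: pass"
# Approximate the proposed parse pattern as an extended regex and
# check that it matches before committing the pattern to the YAML:
if echo "$sample" | grep -Eq '^[a-z0-9-]+: (pass|fail)$'; then
    result="match"
else
    result="no match"
fi
echo "$result"
```

Running the same check against a block of representative output quickly shows which lines the parser would miss.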
+
+.. _paths:
+
+Be obsessive about paths and scripts
+====================================
+
+* If you use ``cd`` in your YAML, always store where you were and where you end up using ``pwd``.
+* Output your location prior to calling local wrapper scripts.
+* Ensure that all wrapper scripts are executable in your VCS.
+* Ensure that the relevant interpreter is installed; e.g. python is not necessarily part of the test image.
+* Consider installing ``realpath`` and use that to debug your directory structure.
+
+  * Avoid the temptation of using absolute paths - LAVA may need to change the absolute locations.
+
+.. _failed_tests:
+
+A failed test is not necessarily a bug in the test
+==================================================
+
+Always check whether the test result came back as failed due to some
+cause other than the test definition itself. Particularly with MultiNode,
+a test result can fail due to some problem on a different board within
+the group.
+
+.. _json_files:
+
+Check your JSON files
+=====================
+
+Syntax problems will be picked up by LAVA when you submit but also check
+that the URLs listed in the JSON are correct. Keep your YAML descriptions,
+names and filenames unique so that it is easier to pick up if the JSON
+simply calls the wrong YAML test definition.
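
A local syntax check before submission is straightforward; a minimal sketch, assuming ``python3`` is available on the workstation (the file name and content are illustrative):

```shell
#!/bin/sh
# Write out a minimal JSON fragment and check that it parses
# before submitting the job:
cat > /tmp/myjob.json <<'EOF'
{"timeout": 900, "job_name": "syntax check example"}
EOF
if python3 -m json.tool /tmp/myjob.json > /dev/null 2>&1; then
    json_status="OK"
else
    json_status="invalid"
fi
echo "$json_status"
```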
+
+
=== modified file 'doc/index.rst'
@@ -37,6 +37,7 @@
external_measurement.rst
arm_energy_probe.rst
sdmux.rst
+ multinode.rst
proxy.rst
* :ref:`search`
=== added file 'doc/multinode-usecases.rst'
@@ -0,0 +1,8 @@
+MultiNode Use Cases
+###################
+
+.. toctree::
+ :maxdepth: 3
+
+ usecaseone.rst
+ usecasetwo.rst
=== added file 'doc/multinode.rst'
@@ -0,0 +1,291 @@
+Multi-Node LAVA
+###############
+
+LAVA multi-node support allows users to use LAVA to schedule, synchronise and
+combine the results from tests that span multiple targets. Jobs can be arranged
+as groups of devices (of any type) and devices within a group can operate
+independently or use the MultiNode API to communicate with other devices in the
+same group during tests.
+
+Within a MultiNode group, each device is assigned a role and each role specifies a
+``count`` of devices to include. Each role has a ``device_type`` and any number of roles can
+have the same ``device_type``. Each role can be assigned ``tags``.
+
+Once roles are defined, actions (including test images and test definitions) can be marked
+as applying to specific roles (if no role is specified, all roles use the action).
+
+If insufficient boards exist to meet the combined requirements of all the roles specified
+in the job, the job will be rejected.
+
+If there are not enough idle boards of the relevant types to meet the combined requirements
+of all the roles specified in the job, the job waits in the Submitted queue until all
+devices can be allocated.
+
+Once each board has booted the test image, the MultiNode API will be available for use within
+the test definition in the default PATH.
+
+.. toctree::
+ :maxdepth: 3
+
+ multinodeapi.rst
+ multinode-usecases.rst
+ debugging.rst
+
+Hardware requirements and virtualisation
+****************************************
+
+Multi-Node is explicitly about synchronising test operations across multiple boards and running
+Multi-Node jobs on a particular instance will have implications for the workload of that instance.
+This can become a particular problem if the instance is running on virtualised hardware with
+shared I/O, a limited amount of RAM or a limited number of available cores.
+
+For example, downloading, preparing and deploying test images can result in a lot of synchronous I/O,
+and if one instance is running both the server and the dispatcher, this can cause the load on that machine
+to rise significantly, possibly causing the server to become unresponsive.
+
+It is strongly recommended that Multi-Node instances use a separate dispatcher running on
+non-virtualised hardware so that the (possibly virtualised) server can continue to operate.
+
+Also, consider the number of boards connected to any one dispatcher. MultiNode jobs will commonly
+compress and decompress several test image files of several hundred megabytes at precisely the same
+time. Even with a powerful multi-core machine, this has been shown to cause appreciable load. It
+is worth considering matching the number of boards to the number of cores for parallel decompression
+and matching the amount of available RAM to the number and size of test images which are likely to
+be in use.
+
+Extending existing LAVA submissions
+***********************************
+
+To extend an existing JSON file to start a MultiNode job, some changes are required to define the
+``device_group``. If all devices in the group are to use the same actions, simply create a single
+role with a count for how many devices are necessary. Usually, a MultiNode job will need to assign
+different test definitions to different boards and this is done by adding more roles, splitting the
+number of devices between the differing roles and assigning different test definitions to each role.
+
+If a MultiNode job includes devices of more than one ``device_type``, there needs to be a role for
+each different ``device_type`` so that an appropriate image can be deployed.
+
+Where all roles share the same action (e.g. ``submit_results_on_host``), omit the role parameter from
+that action.
+
+If more than one, but not all, roles share one particular action, that action will need to be repeated
+within the JSON file, once for each role using that action.
+
+.. _changes_to_json:
+
+Changes to submission JSON
+==========================
+
+1. ``device`` or ``device_type`` moves into a **device_group** list.
+2. Each device type has a ``count`` assigned.
+
+   1. If a ``device`` is specified directly, the count needs to be one.
+   2. If ``device_type`` is used and the count is larger than one, enough
+      devices will be allocated to match the count and all such devices will
+      have the same role and use the same commands and the same actions.
+
+3. Add tags, if required, to each role.
+4. If specific actions should only be used for particular roles, add a
+ role field to the parameters of the action.
+5. If any action has no role specified, it will be actioned for all roles.
+
+Example JSON::
+
+ {
+ "timeout": 18000,
+ "job_name": "simple multinode job",
+ "logging_level": "INFO",
+ "device_group": [
+ {
+ "role": "omap4",
+ "count": 2,
+ "device_type": "panda",
+ "tags": [
+ "mytag1"
+ ]
+ },
+ {
+ "role": "omap3",
+ "count": 1,
+ "device_type": "beaglexm",
+ "tags": [
+ "mytag2"
+ ]
+ }
+ ],
+
+Using actions for particular roles
+==================================
+
+Example JSON::
+
+ "actions": [
+ {
+ "command": "deploy_linaro_image",
+ "parameters": {
+ "image": "file:///home/instance-manager/images/panda-raring_developer_20130529-347.img.gz",
+ "role": "omap4"
+ }
+ },
+ {
+ "command": "deploy_linaro_image",
+ "parameters": {
+ "image": "file:///home/instance-manager/images/beagle-ubuntu-desktop.img.gz",
+ "role": "omap3"
+ }
+ },
+ {
+ "command": "lava_test_shell",
+ "parameters": {
+ "testdef_repos": [
+ {
+ "git-repo": "git://git.linaro.org/qa/test-definitions.git",
+ "testdef": "ubuntu/smoke-tests-basic.yaml"
+ }
+ ],
+ "timeout": 1800
+ }
+ }
+    ]
+
+
+.. note:: Consider using http://jsonlint.com to check your JSON before submission.
+
+
+LAVA Multi-Node timeout behaviour
+*********************************
+
+The submitted JSON includes a timeout value. In single node LAVA, this is applied to each individual action
+executed on the device under test, not to the job as a whole, i.e. individual timeouts used in the JSON
+or internally within LAVA can be larger than the default timeout.
+
+In Multi-Node LAVA, this timeout is also applied to individual polling operations, so an individual lava-sync
+or a lava-wait will fail on any node which waits longer than the default timeout. The node will receive a failure
+response.
+
+.. _timeouts:
+
+Recommendations on timeouts
+===========================
+
+MultiNode operations have implications for the timeout values used in JSON submissions. If one of the
+synchronisation primitives times out, the sync will fail and the job itself will then time out.
+One reason for a MultiNode job to timeout is if one or more boards in the group failed to boot the
+test image correctly. In this situation, all the other boards will continue until the first
+synchronisation call is made in the test definition for that board.
+
+The time limit applied to a synchronisation primitive starts when the board makes the first request
+to the Coordinator for that sync. Slower boards may well only get to that point in the test definition
+after faster devices (especially KVM devices) have started their part of the sync and timed out
+themselves.
+
+Always review the top level timeout in the JSON submission - a value of 900 seconds (15 minutes) has
+been common during testing. Excessive timeouts would prevent other jobs from using boards where the
+waiting jobs have already failed due to a problem elsewhere in the group. If timeouts are too short,
+jobs will fail unnecessarily.
+
+Balancing timeouts
+^^^^^^^^^^^^^^^^^^
+
+Individual actions and commands can have differing timeouts, so avoid the temptation to change the
+default timeout when a particular action times out in a Multi-Node job. If a particular ``lava-test-shell``
+takes a long time, set an explicit timeout for that particular action:
+
+::
+
+ {
+ "timeout": 900,
+ "job_name": "netperf multinode tests",
+ "logging_level": "DEBUG",
+ }
+
+
+::
+
+ {
+ "command": "lava_test_shell",
+ "parameters": {
+ "testdef_repos": [
+ {
+ "git-repo": "git://git.linaro.org/people/guoqing.zhu/netperf-multinode.git",
+ "testdef": "netperf-multinode-c-network.yaml"
+ }
+ ],
+ "timeout": 2400,
+ "role": "client"
+ }
+ },
+ {
+ "command": "lava_test_shell",
+ "parameters": {
+ "testdef_repos": [
+ {
+ "git-repo": "git://git.linaro.org/people/guoqing.zhu/netperf-multinode.git",
+ "testdef": "netperf-multinode-s-network.yaml"
+ }
+ ],
+ "timeout": 1800,
+ "role": "server"
+ }
+ },
+
+
+Running a server on the device-under-test
+*****************************************
+
+If the server process runs as a daemon, the test definition will need to define something for the device
+under test to actually do or it will simply get to the end of the tests and reboot. For example, if the
+number of operations is known, one approach would be to batch up commands to the daemon, each batch being a test case.
+If the server program can run without being daemonised, it would need to be possible to close it down
+at the end of the test (normally this is the role of the sysadmin in charge of the server box itself).
+
+Making use of third party servers
+=================================
+
+A common part of a MultiNode setup is to download components from third party servers but once the test
+starts, latency and connectivity issues could interfere with the tests.
+
+Using wrapper scripts
+=====================
+
+Wrapper scripts make it easier to test your definitions before submitting to LAVA.
+The wrapper lives in a VCS repository which is specified as one of the testdef_repos and will be
+available in the same directory structure as the original repository. A wrapper script also
+helps the test to fail early instead of continuing through the remaining steps.
+
+MultiNode Result Bundles
+************************
+
+Results are generated by each device in the group. At submission time, one device in the group is
+selected to run the job which gets the aggregated result bundle for the entire group.
+
+LAVA Coordinator setup
+**********************
+
+Multi-Node LAVA requires a LAVA Coordinator which manages the messaging within a group of nodes involved in
+a Multi-Node job set according to this API. The LAVA Coordinator is a singleton to which nodes need to connect
+over a TCP port (default: 3079). A single LAVA Coordinator can manage groups from multiple instances.
+If the network configuration uses a firewall, ensure that this port is open for connections from Multi-Node dispatchers.
+
+If multiple coordinators are necessary on a single machine (e.g. to test different versions of the coordinator
+during development), each coordinator needs to be configured for a different port.
+
+If the dispatcher is installed on the same machine as the coordinator, the dispatcher can use the packaged
+configuration file with the default hostname of ``localhost``.
+
+Each dispatcher then needs a copy of the LAVA Coordinator configuration file, modified to point back to the
+hostname of the coordinator:
+
+Example JSON, modified for a coordinator on a machine with a fully qualified domain name::
+
+ {
+ "port": 3079,
+ "blocksize": 4096,
+ "poll_delay": 3,
+ "coordinator_hostname": "control.lab.org"
+ }
+
+An IP address can be specified instead, if appropriate.
+
+Each dispatcher needs to use the same port number and blocksize as is configured for the Coordinator
+on the specified machine. The poll_delay is the number of seconds each node will wait before polling
+the coordinator again.
=== added file 'doc/multinodeapi.rst'
@@ -0,0 +1,302 @@
+.. _multinode_api:
+
+MultiNode API
+=============
+
+The LAVA MultiNode API provides a simple way to pass messages using the serial port connection which
+is already available through LAVA. The API is not intended for transfers of large amounts of data. Test
+definitions which need to transfer files, long messages or other large amounts of data need to set up their
+own network configuration, access and download methods and do the transfer in the test definition.
+
+.. _lava_self:
+
+lava-self
+---------
+
+Prints the name of the current device.
+
+Usage: ``lava-self``
+
+.. _lava_role:
+
+lava-role
+---------
+
+Prints the role the current device is playing in a multi-node job.
+
+Usage: ``lava-role``
+
+*Example.* In a directory with several scripts, one for each role
+involved in the test::
+
+ $ ./run-$(lava-role)
+
+.. _lava-group:
+
+lava-group
+----------
+
+This command will produce in its standard output a representation of the
+device group that is participating in the multi-node test job.
+
+Usage: ``lava-group``
+
+The output format contains one line per device, and each line contains
+the hostname and the role that device is playing in the test, separated
+by a TAB character::
+
+ panda01 client
+ highbank01 loadbalancer
+ highbank02 backend
+ highbank03 backend
+
+.. _lava_send:
+
+lava-send
+---------
+
+Sends a message to the group, optionally passing associated key-value
+data pairs. Sending a message is a non-blocking operation. The message
+is guaranteed to be available to all members of the group, but some of
+them might never retrieve it.
+
+Usage: ``lava-send <message-id> [key1=val1 [key2=val2] ...]``
+
+Examples will be provided below, together with ``lava-wait`` and
+``lava-wait-all``.
+
+.. _lava_wait:
+
+lava-wait
+---------
+
+Waits until any other device in the group sends a message with the given
+ID. This call will block until such message is sent.
+
+Usage: ``lava-wait <message-id>``
+
+If there was data passed in the message, the key-value pairs will be
+printed to the cache file (``/tmp/lava_multi_node_cache.txt`` by default),
+one per line. If no key values were passed, nothing is printed.
+
+The message ID data is persistent for the life of the MultiNode group.
+The data can be retrieved at any later stage using ``lava-wait`` and as
+the data is already available, there will be no waiting time for repeat
+calls. If devices continue to send data with the associated message ID,
+that data will continue to be added to the data for that message ID and
+will be returned by subsequent calls to ``lava-wait`` for that message
+ID. Use a different message ID to collate different message data.
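
The key-value pairs in the cache file can be extracted with standard shell tools. A minimal sketch; the ``ipaddr`` and ``port`` keys and their values are illustrative, not part of the API:

```shell
#!/bin/sh
# Simulate the cache file left behind by lava-wait
# (in a real job, lava-wait writes this file itself):
cache=/tmp/lava_multi_node_cache.txt
cat > "$cache" <<'EOF'
ipaddr=192.168.1.20
port=5001
EOF
# Extract a single value by key rather than relying on line order:
ipaddr=$(grep '^ipaddr=' "$cache" | cut -d= -f2)
echo "$ipaddr"
```

Selecting by key keeps the wrapper script working even if later messages add extra pairs to the same message ID.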
+
+.. _lava_wait_all:
+
+lava-wait-all
+-------------
+
+Waits until **all** other devices in the group send a message with the
+given message ID. If ``<role>`` is passed, only wait until all devices
+with that given role send a message.
+
+Usage: ``lava-wait-all <message-id> [<role>]``
+
+If data was sent by the other devices with the message, the key-value
+pairs will be printed to the cache file (``/tmp/lava_multi_node_cache.txt``
+by default), one per line, prefixed with the target name and a colon.
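
Parsing that prefixed format is again a matter of standard tools. A sketch, with illustrative device names and values:

```shell
#!/bin/sh
# Simulate the cache file after lava-wait-all: each line carries
# the sending device name, a colon, then the key=value pair.
cache=/tmp/lava_multi_node_cache.txt
cat > "$cache" <<'EOF'
panda01:ipaddr=192.168.1.20
panda02:ipaddr=192.168.1.21
EOF
# Pick out the value sent by one particular device:
addr=$(grep '^panda02:' "$cache" | cut -d= -f2)
echo "$addr"
```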
+
+Some examples for ``lava-send``, ``lava-wait`` and
+``lava-wait-all`` are given below.
+
+Using ``lava-sync`` or ``lava-wait-all`` in a test definition effectively
+makes all boards in the group run at the speed of the slowest board in
+the group up to the point where the sync or wait is called.
+
+Ensure that the message-id matches an existing call to ``lava-send`` for
+each relevant test definition **before** that test definition calls
+``lava-wait-all`` or any device using that test definition will wait forever
+(and eventually timeout, failing the job).
+
+The message returned can include data from other devices which sent a
+message with the relevant message ID; only the wait itself is restricted
+to devices with the specified role.
+
+As with ``lava-wait``, the message ID is persistent for the duration of
+the MultiNode group.
+
+.. _lava_sync:
+
+lava-sync
+---------
+
+Global synchronization primitive. Sends a message, and waits for the
+same message from all of the other devices.
+
+Usage: ``lava-sync <message>``
+
+``lava-sync foo`` is effectively the same as ``lava-send foo`` followed
+by ``lava-wait-all foo``.
+
+.. _lava_network:
+
+lava-network
+------------
+
+Helper script to broadcast IP data from the test image, wait for data to be
+received by the rest of the group (or one role within the group) and then provide
+an interface to retrieve IP data about the group on the command line.
+
+Raising a suitable network interface is a job left for the designer of the test
+definition / image but once a network interface is available, ``lava-network``
+can be asked to broadcast this information to the rest of the group. At a later
+stage of the test, before the IP details of the group need to be used, call
+``lava-network collect`` to receive the same information about the rest of
+the group.
+
+All usage of lava-network needs to use a broadcast (which wraps a call to
+``lava-send``) and a collect (which wraps a call to ``lava-wait-all``). As a
+wrapper around ``lava-wait-all``, collect will block until the rest of the group
+(or devices in the group with the specified role) has made a broadcast.
+
+After the data has been collected, it can be queried for any board specified in
+the output of ``lava-group`` by specifying the parameter to query (as used in the
+broadcast)::
+
+ lava-network query panda19 ipv4
+ 192.168.3.56
+
+ lava-network query beaglexm04 ipv6
+ fe80::f2de:f1ff:fe46:8c21
+
+ lava-network query arndale02 hostname
+ server
+
+ lava-network query panda14 hostname-full
+ client.localdomain
+
+ lava-network query panda19 netmask
+ 255.255.255.0
+
+ lava-network query panda14 default-gateway
+ 192.168.1.1
+
+ lava-network query panda17 dns_2
+ 8.8.8.8
+
+``lava-network hosts`` can be used to output the list of all boards in the group
+which have returned a fully qualified domain name in a format suitable for
+``/etc/hosts``, appending to the specified file.
+
+Usage:
+
+ broadcast: ``lava-network broadcast [interface]``
+
+ collect: ``lava-network collect [interface] <role>``
+
+ query: ``lava-network query [hostname] [option]``
+
+ hosts: ``lava-network hosts [file]``
+
+Example 1: simple client-server multi-node test
+-----------------------------------------------
+
+Two devices, with roles ``client``, ``server``
+
+LAVA Test Shell test definition (say, ``example1.yaml``)::
+
+ run:
+ steps:
+ - ./run-`lava-role`.sh
+
+The test image or the test definition would then provide two scripts,
+with only one being run on each device, according to the role specified.
+
+``run-server.sh``::
+
+ #!/bin/sh
+
+ iperf -s &
+ lava-send server-ready username=testuser
+ lava-wait client-done
+
+Notes:
+
+* To make use of the server-ready message, some kind of client
+ needs to do a ``lava-wait server-ready``
+* There needs to be a support on a client to do the
+ ``lava-send client-done`` or the wait will fail on the server.
+* If there was more than one client, the server could call
+ ``lava-wait-all client-done`` instead.
+
+
+``run-client.sh``::
+
+ #!/bin/sh
+
+ lava-wait server-ready
+ server=$(grep '^username=' /tmp/lava_multi_node_cache.txt | cut -d= -f2)
+ iperf -c $server
+ # ... do something with output ...
+ lava-send client-done
+
+Notes:
+
+* The client waits for the server-ready message as its first task,
+ then does some work, then sends a message so that the server can
+ move on and do other tests.
+
+Example 2: variable number of clients
+-------------------------------------
+
+``run-server.sh``::
+
+ #!/bin/sh
+
+ start-server
+ lava-sync ready
+ lava-sync done
+
+``run-client.sh``::
+
+ #!/bin/sh
+
+ # refer to the server by name, assume internal DNS works
+ server=$(lava-group | grep 'server$' | cut -f 1)
+
+ lava-sync ready
+ run-client
+ lava-sync done
+
+Example 3: peer-to-peer application
+-----------------------------------
+
+Single role: ``peer``, any number of devices
+
+``run-peer.sh``::
+
+ #!/bin/sh
+
+ initialize-data
+ start-p2p-service
+ lava-sync running
+
+ push-data
+ for peer in $(lava-group | cut -f 1); do
+     if [ "$peer" != "$(lava-self)" ]; then
+         query-data "$peer"
+     fi
+ done
+
+
+Example 4: using lava-network
+-----------------------------
+
+If the available roles include ``server`` and there is a board named
+``database``::
+
+ #!/bin/sh
+ ifconfig eth0 up
+ # possibly do your own check that this worked
+ lava-network broadcast eth0
+ # do whatever other tasks may be suitable here, then wait...
+ lava-network collect eth0 server
+ # continue with tests and get the information.
+ lava-network query database ipv4
=== added file 'doc/usecaseone.rst'
@@ -0,0 +1,521 @@
+.. _use_case_one:
+
+Use Case One - Setting up a simple client:server test definition.
+*****************************************************************
+
+One device needs to obtain / prepare some data and then make the data
+available to another device in the same group.
+
+Source Code
+===========
+
+* The YAML snippets in this example are not complete, for a working example of the code, see:
+
+ https://git.linaro.org/gitweb?p=people/neilwilliams/multinode-yaml.git;a=blob_plain;f=forwarder.yaml;hb=refs/heads/master
+
+ https://git.linaro.org/gitweb?p=people/neilwilliams/multinode-yaml.git;a=blob_plain;f=receiver.yaml;hb=refs/heads/master
+
+ https://git.linaro.org/gitweb?p=people/neilwilliams/multinode-yaml.git;a=blob_plain;f=json/beagleblack-use-case.json;hb=HEAD
+
+Requirements
+============
+
+1. A mechanism to obtain the data, presumably from some third-party source
+2. A sync to ensure that the file is ready to be offered to the other device
+
+ 2.1. This ensures that the attempt to receive does not start early
+
+3. A message to the original board that the data has been received and verified
+
+ 3.1. This ensures that any cleanup of the data does not happen before the transfer is complete.
+
+Methods
+=======
+
+* Install a package which can obtain the data from the third party source
+* Install a package which can provide the means to get the data to the other board
+
+Control flow
+============
+
++------------------------------+----------------------------------------+
+|sender starts | receiver starts |
++------------------------------+----------------------------------------+
+|sender obtains the data | receiver waits for sender to be ready |
++------------------------------+----------------------------------------+
+|sender modifies the data | wait |
++------------------------------+----------------------------------------+
+|sender notifies receiver | wait |
++------------------------------+----------------------------------------+
+|sender waits for completion | receiver initiates transfer |
++------------------------------+----------------------------------------+
+|wait | receiver notifies sender of completion |
++------------------------------+----------------------------------------+
+|sender cleans up | receiver processes the modified data |
++------------------------------+----------------------------------------+
+
+It is clear from the flow that the sender and the receiver are doing
+different things at different times and may well need different packages
+installed. The simplest way to manage this is to have two YAML files.
+
+In this example, sender is going to use wget to obtain the data and
+apache to offer it to the receiver. The receiver will only need wget.
+The example won't actually modify the data, but for the purposes of the
+example, the documentation will ignore the fact that the receiver could
+just get the data directly.
+
+Preparing the YAML
+==================
+
+The name field specified in the YAML will be used later as the basis
+of the filter. To start each YAML file, ensure that the metadata contains
+two metadata fields:
+
+* format : **Lava-Test Test Definition 1.0**
+* description : your own descriptive text
+
+It is useful to also add the maintainer field with your email address
+as this will be needed later if the test is to be added to one of the
+formal test sets.
+
+::
+
+ metadata:
+ format: Lava-Test Test Definition 1.0
+ name: multinode-usecaseone
+ description: "MultiNode network test commands"
+ maintainer:
+ - neil.williams@linaro.org
+
+Installing packages for use in a test
+-------------------------------------
+
+If your test image raises a usable network interface by default on boot,
+the YAML can specify a list of packages which need to be installed for
+this test definition:
+
+::
+
+ install:
+ deps:
+ - wget
+ - apache2
+
+If your test needs to raise the network interface itself, the package
+installation will need to be done in the run steps::
+
+ run:
+ steps:
+ - lava-test-case linux-linaro-ubuntu-route-ifconfig-up --shell ifconfig eth0 up
+ - lava-test-case apt-update --shell apt-get update
+ - lava-test-case install-deps --shell apt-get -y install wget apache2
+
+Note that although KVM devices can use apt, the network interface fails
+the LAVA test, so use the manual install steps for non-bridged KVM devices.
+
+Preparing the test to send data
+-------------------------------
+
+``modify-data.sh`` would, presumably, unpack the data, modify it in
+some way and pack it back up again. In this example, it would be a no-op
+but note that it still needs to exist in the top level directory of your
+VCS repo and be executable.
+
+Any packages required by ``modify-data.sh`` need to be added to the install
+deps of sender.yaml. Providing useful contents of ``modify-data.sh`` is
+left as an exercise for the reader.
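
A placeholder ``modify-data.sh`` sufficient to exercise the control flow might look like this; the marker text and the default file path are illustrative, and a real script would transform the data rather than merely tag it:

```shell
#!/bin/sh
# Placeholder modify-data.sh: a real version would unpack the
# downloaded data, change it and repack it. Here we only append
# a marker so that the "modification" is visible to the receiver.
set -e
FILE=${1:-/tmp/testfile}
echo "modified-by-sender" >> "$FILE"
```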
+
+Modification happens before the :ref:`lava_sync` ``download`` which tells the
+receiver that the data is ready to be transferred.
+
+The sender then waits for the receiver to acknowledge a correct download
+using :ref:`lava_sync` ``received`` and cleans up.
+
+sender.yaml
+^^^^^^^^^^^
+
+::
+
+ install:
+ deps:
+ - wget
+ - apache2
+
+ run:
+ steps:
+ - lava-test-case multinode-network --shell lava-network broadcast eth0
+ - lava-test-case wget-file --shell wget -O /var/www/testfile http://releases.linaro.org/latest/android/arndale/userdata.tar.bz2
+ - ./modify-data.sh
+ - lava-test-case file-sync --shell lava-sync download
+ - lava-test-case done-sync --shell lava-sync received
+ - lava-test-case remove-tgz --shell rm /var/www/testfile
+
+Handling the transfer to the receiver
+-------------------------------------
+
+The receiver needs to know where to find the data. The sender can ensure that the
+file is in a particular location; it is up to the YAML to obtain the rest of the
+information, i.e. the network address of the sender. This example assumes that the
+data is modified in some undisclosed manner by the ``./modify-data.sh``
+script which is part of your testdef_repo before the receiver is notified.
+
+The LAVA :ref:`multinode_api` provides ways of querying the network information of devices
+within the group. In order to offer the data via apache, the sender needs to
+raise a suitable network interface, so it calls ifconfig as a lava test case
+first and then uses the lava-network API call to broadcast network information
+about itself.
+
+Equally, the receiver needs to raise a network interface, broadcast
+its network information and then collect the network information for
+the group.
+
+Note that collect is a blocking call - each of the devices needs to
+broadcast before collect will return. (There is support for collecting
+data only for specific roles but that's outside the scope of this example.)
+
+receiver.yaml
+^^^^^^^^^^^^^
+
+::
+
+ install:
+ deps:
+ - wget
+
+ run:
+ steps:
+ - lava-test-case linux-linaro-ubuntu-route-ifconfig-up --shell ifconfig eth0 up
+ - lava-test-case multinode-network --shell lava-network broadcast eth0
+ - lava-test-case multinode-get-network --shell lava-network collect eth0
+ - lava-test-case file-sync --shell lava-sync download
+ - lava-test-case wget-from-group --shell ./get-data.sh
+ - lava-test-case get-sync --shell lava-sync received
+ - lava-test-case list-file --shell ls -l /tmp/testfile
+ - lava-test-case remove-file --shell rm /tmp/testfile
+
+
+The receiver then needs to obtain that network information and process
+it to get the full URL of the data. To do command line processing and
+pipes, a helper script is needed:
+
+get-data.sh
+^^^^^^^^^^^
+
+Always use **set -x** in any wrapper or helper scripts which you expect
+to use in a test run, so that test failures can be debugged from the log.
+
+Ensure that the scripts are marked as executable in your VCS and
+that the appropriate interpreter is installed in your test image.
+
+::
+
+ #!/bin/sh
+ set -e
+ set -x
+ DEVICE=`lava-group | grep -m1 receiver|cut -f2`
+ SOURCE=`lava-network query $DEVICE ipv4|grep -v LAVA|cut -d: -f2`
+ wget -O /tmp/testfile http://${SOURCE}/testfile
+
+
+The ``$DEVICE`` variable simply matches the first device name in this group
+which contains the string 'receiver' (which comes from the ``role``
+specified in the JSON) and holds the full name of that device,
+e.g. multinode-kvm02 or staging-beagleblack03.
+
+This device name is then passed to ``lava-network query`` to get the ipv4
+details of that device within this group. The value of ``$SOURCE``
+is an IPv4 address of the sender (assuming that your JSON defines a
+role for the sender whose name contains the string 'receiver').
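The shell pipeline in ``get-data.sh`` can be hard to debug when the output format is not what you expect. As an illustration only (the exact ``lava-group`` and ``lava-network`` output formats are assumptions here, inferred from the ``cut`` fields used in the script), the same extraction logic looks like this in Python:

```python
def first_device_matching(group_output, fragment):
    """Mimics: lava-group | grep -m1 receiver | cut -f2
    Assumes tab-separated lines with the device name in field 2."""
    for line in group_output.splitlines():
        if fragment in line:
            return line.split("\t")[1]
    return None


def parse_ipv4(query_output):
    """Mimics: lava-network query $DEVICE ipv4 | grep -v LAVA | cut -d: -f2
    Assumes a 'key:value' line among LAVA-prefixed log noise."""
    for line in query_output.splitlines():
        if "LAVA" not in line and ":" in line:
            return line.split(":")[1]
    return None
```

Rewriting the pipeline as explicit steps like this makes each assumption about the output format visible and testable.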
+
+Finally, ``get-data.sh`` does the work of receiving the data from
+the sender. The verification of the data is left as an exercise for
+the reader - one simple method would be for the sender to checksum the
+(modified) data and use ``lava-send`` to make that checksum available
+to devices within the group. The receiver can then use ``lava-wait``
+to get that checksum.
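That verification step could be sketched as follows; this helper is hypothetical and not part of the published YAML, and the ``lava-send`` call shown in the comment is only one way the sender might publish the result:

```python
import hashlib


def file_checksum(path, algorithm="sha256"):
    """Checksum the (modified) data in blocks. The sender could publish
    the result as a key value pair, for example with:
    lava-send checksum sum=$(sha256sum /var/www/testfile | cut -d' ' -f1)
    and the receiver would compare against its local copy after lava-wait."""
    digest = hashlib.new(algorithm)
    with open(path, "rb") as handle:
        for block in iter(lambda: handle.read(4096), b""):
            digest.update(block)
    return digest.hexdigest()
```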
+
+Once ``get-data.sh`` returns, the receiver notifies the sender that
+the transfer is complete, processes the data as it sees fit and cleans up.
+
+Preparing the JSON
+===================
+
+The JSON ties the YAML test definition with the hardware and software to
+run the test definition. The JSON is also where multiple test
+definitions are combined into a single MultiNode test.
+
+General settings
+----------------
+
+.. warning:: **Timeout values need to be reduced from single node examples**
+
+ - each synchronisation primitive uses the timeout from the general settings,
+ - always check your timeout value - 900 is recommended.
+
+::
+
+ {
+ "health_check": false,
+ "logging_level": "DEBUG",
+ "timeout": 900,
+ "job_name": "client-server test"
+ }
+
+
+device_group
+^^^^^^^^^^^^
+
+The device_group collates the device-types and the role of each device
+type in the group along with the number of boards to allocate to each
+role.
+
+If count is larger than one, enough devices will be allocated to match
+the count and all such devices will have the same role and use the same
+commands and the same actions. (The job will be rejected if there are
+not enough devices available to satisfy the count.)
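The allocation rule can be sketched as a small model; ``allocate_group`` and its pool format are hypothetical and only illustrate the semantics described above, not the LAVA scheduler API:

```python
def allocate_group(device_group, pools):
    """Assign 'count' devices from each device_type pool to each role;
    every device under a role runs the same commands and actions.
    A job which cannot be satisfied is rejected, modelled here by
    raising ValueError."""
    allocation = {}
    for entry in device_group:
        pool = pools.get(entry["device_type"], [])
        if len(pool) < entry["count"]:
            raise ValueError("not enough '%s' devices for role '%s'"
                             % (entry["device_type"], entry["role"]))
        # remove the allocated devices from the shared pool
        allocation[entry["role"]] = [pool.pop(0) for _ in range(entry["count"])]
    return allocation
```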
+
+::
+
+ {
+ "device_group": [
+ {
+ "role": "sender",
+ "count": 1,
+ "device_type": "beaglebone-black",
+ "tags": [
+ "use-case-one"
+ ]
+ },
+ {
+ "role": "receiver",
+ "count": 1,
+ "device_type": "kvm",
+ "tags": [
+ "use-case-one"
+ ]
+ }
+ ]
+ }
+
+
+actions
+-------
+
+When mixing different device_types in one group, the images to deploy
+will probably vary, so use the role parameter to determine which image
+gets used on which board(s).
+
+deploy_linaro_image
+^^^^^^^^^^^^^^^^^^^
+
+::
+
+ {
+ "actions": [
+ {
+ "command": "deploy_linaro_image",
+ "parameters": {
+ "image": "http://images.validation.linaro.org/kvm-debian-wheezy.img.gz",
+ "role": "receiver"
+ }
+ },
+ {
+ "command": "deploy_linaro_image",
+ "parameters": {
+ "image": "http://linaro-gateway/beaglebone/beaglebone_20130625-379.img.gz",
+ "role": "sender"
+ }
+ }
+ ]
+ }
+
+
+lava_test_shell
+^^^^^^^^^^^^^^^
+
+If specific actions should only be used for particular roles, add a role
+field to the parameters of the action.
+
+If any action has no role specified, it will be actioned for all roles.
+
+For Use Case One, we have a different YAML file for each role, so
+we have two lava_test_shell commands.
+
+::
+
+ [
+ {
+ "command": "lava_test_shell",
+ "parameters": {
+ "testdef_repos": [
+ {
+ "git-repo": "git://git.linaro.org/people/neilwilliams/multinode-yaml.git",
+ "testdef": "forwarder.yaml"
+ }
+ ],
+ "role": "sender"
+ }
+ },
+ {
+ "command": "lava_test_shell",
+ "parameters": {
+ "testdef_repos": [
+ {
+ "git-repo": "git://git.linaro.org/people/neilwilliams/multinode-yaml.git",
+ "testdef": "receiver.yaml"
+ }
+ ],
+ "role": "receiver"
+ }
+ }
+ ]
+
+
+submit_results
+^^^^^^^^^^^^^^
+
+The results for the entire group get aggregated into a single result
+bundle. Ensure that the bundle stream exists on the specified server
+and that you have permission to add to that stream.
+
+::
+
+ {
+ "command": "submit_results_on_host",
+ "parameters": {
+ "stream": "/anonymous/use-cases/",
+ "server": "http://validation.linaro.org/RPC2/"
+ }
+ }
+
+Prepare a filter for the results
+================================
+
+Now decide how you are going to analyse the results of tests using
+this definition, using the name of the test definition specified in
+the YAML metadata.
+
+Unique names versus shared names
+--------------------------------
+
+Each YAML file can have a different name, or the name can be shared amongst
+many YAML files, at which point those files form one test definition,
+irrespective of what each YAML file actually does. Sharing the name means
+that the results of the test definition always show up under the same test
+name. Whilst this can be useful, be aware that if you subsequently re-use
+one of the YAML files sharing a name in a test which does not use the other
+YAML files sharing the same name, there will be gaps in your data. When the
+filter is later used to prepare a graph, these gaps can make it look as if
+the test failed for a period of time when, in fact, not all of the tests in
+the shared test definition were run.
+
+A single filter can combine the results of multiple tests, so it is
+generally more flexible to have a unique name in each YAML file and
+combine the tests in the filters.
+
+If you use a unique test definition name for every YAML file, ensure that
+each name is descriptive and relevant so that you can pick the right test
+definition from the list of all tests when preparing the filter. If you
+share test definition names, you will have a shorter list to search.
+
+Filters also allow results to be split by the device type and, in
+Multi-Node, by the role. Each of these parameters is defined by the JSON,
+not the YAML, so care is required when designing your filters to cover
+all uses of the test definition without hiding the data in a set of
+unrelated results.
+
+Create a filter
+---------------
+
+To create or modify filters (and the graphs which can be based on them)
+you will need appropriate permissions on the LAVA instance to which you
+are submitting your JSON.
+
+On the website for the instance running the tests, click on Dashboard
+and Filters. If you have permissions, there will be a link entitled
+*Add new filter...*.
+
+The filter name should include most of the data about what this filter
+is intended to do, without whitespace. This name will be preserved through
+to the name of the graph based on this filter and can be changed later if
+necessary. Choose whether to make the filter public and select the bundle
+stream(s) to add into the filter.
+
+If the filter is to aggregate all results for a test across all
+devices and all roles, simply leave the *Attributes* empty. Otherwise,
+*Add a required attribute* and start typing to see the available fields.
+
+To filter by a particular device_type, choose **target.device_type**.
+
+To filter by a particular role (Multi-Node only), choose **role**.
+
+Click *Add a test* to get the list of test definition names for which
+results are available.
+
+Within a test definition, a filter can also select only particular test
+cases. In this Use Case, for example, the filter could choose only the
+``multinode-network``, ``multinode-get-network`` or ``file-sync``
+test cases. Continue to add tests and/or test cases - the more tests
+and/or test cases are added to the filter, the fewer results will
+match.
+
+Click the *Preview* button to apply the filter to the current set of
+results **without saving the filter**.
+
+In the preview, if there are columns with no data or rows with no data
+for specific columns, these will show up as missing data in the filter
+and in graphs based on this filter. This is an indication that you need
+to refine either the filter or the test definitions to get a cohesive
+set of results.
+
+If you are happy with the filter, click on save.
+
+The suggested filter for this use case would simply have a suitable name,
+no required attributes and a single test defined - using a shared name
+specified in each of the YAML files.
+
+::
+
+ Bundle streams /anonymous/instance-manager/
+ Test cases multinode-network any
+
+Prepare a graph based on the filter
+===================================
+
+A graph needs an image and the image needs to be part of an image set to
+be visible in the dashboard image reports. Currently, these steps need
+to be done by an admin for the instance concerned.
+
+Once the image exists and it has been added to an image set, changes in
+the filter will be reflected in the graph without the need for
+administrator changes.
+
+Each graph is the result of a single image which itself is based on a
+single filter. Multiple images are collated into image sets.
+
+Summary
+=======
+
+The full version of this use case is available:
+
+http://git.linaro.org/gitweb?p=people/neilwilliams/multinode-yaml.git;a=blob_plain;f=json/kvm-beagleblack-group.json;hb=HEAD
+
+Example test results are visible here:
+
+http://multinode.validation.linaro.org/dashboard/image-reports/kvm-multinode
+
+http://multinode.validation.linaro.org/dashboard/streams/anonymous/instance-manager/bundles/da117e83d7b137930f98d44b8989dbe0f0c827a4/
+
+This example uses a kvm device as the receiver only because the test
+environment did not have a bridged configuration; the internal networking
+of the kvm meant that although the kvm could connect to the
+beaglebone-black, the beaglebone-black could not connect to the kvm.
+
+https://git.linaro.org/gitweb?p=people/neilwilliams/multinode-yaml.git;a=blob_plain;f=json/beagleblack-use-case.json;hb=HEAD
+
+https://staging.validation.linaro.org/dashboard/image-reports/beagleblack-usecase
+
+https://staging.validation.linaro.org/dashboard/streams/anonymous/codehelp/bundles/cf4eb9e0022232e97aaec2737b3cd436cd37ab14/
+
+This example uses two beaglebone-black devices.
=== added file 'doc/usecasetwo.rst'
@@ -0,0 +1,224 @@
+.. _use_case_two:
+
+Use Case Two - Setting up the same job on multiple devices
+**********************************************************
+
+One test definition (or one set of test definitions) to be run on
+multiple devices of the same device type.
+
+Source Code
+===========
+
+The test definition itself could be an unchanged singlenode test definition, e.g.
+
+ https://git.linaro.org/gitweb?p=qa/test-definitions.git;a=blob_plain;f=ubuntu/smoke-tests-basic.yaml;hb=refs/heads/master
+
+Alternatively, it could use the MultiNode API to synchronise the devices, e.g.
+
+ https://git.linaro.org/gitweb?p=people/neilwilliams/multinode-yaml.git;a=blob_plain;f=multinode01.yaml;hb=refs/heads/master
+
+ https://git.linaro.org/gitweb?p=people/neilwilliams/multinode-yaml.git;a=blob_plain;f=multinode02.yaml;hb=refs/heads/master
+
+ https://git.linaro.org/gitweb?p=people/neilwilliams/multinode-yaml.git;a=blob_plain;f=multinode03.yaml;hb=refs/heads/master
+
+Requirements
+============
+
+ * Multiple devices running the same test definition.
+ * Running multiple test definitions at the same time on all devices in the group.
+ * Synchronising multiple devices during a test.
+ * Filtering the results by device name.
+
+Preparing the YAML
+==================
+
+In the first part of this use case, the same YAML file is to be used to
+test multiple devices. Select your YAML file and, if appropriate, edit
+the name in the metadata.
+
+Preparing the JSON
+===================
+
+The change from a standard single-node JSON file is to expand the
+``device_type`` or ``device`` field to a ``device_group``.
+
+The change for multiple devices in MultiNode is within the ``device_group``. To run the test
+on multiple devices of the same type, simply increase the ``count``:
+
+::
+
+ {
+ "device_group": [
+ {
+ "role": "bear",
+ "count": 2,
+ "device_type": "panda",
+ "tags": [
+ "use-case-two"
+ ]
+ }
+ ]
+ }
+
+If the rest of the JSON refers to a ``role`` other than the one specified
+in the ``device_group``, those JSON sections are ignored.
+
+If other actions in the JSON do not mention a ``role``, the action will
+occur on all devices in the ``device_group``. So with a single role,
+it only matters that a role exists in the ``device_group``.
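The matching rule described above can be sketched as a small filter; this is illustrative only, not the dispatcher's actual implementation:

```python
def actions_for_role(actions, role):
    """Select the actions a device with the given role will run:
    an action without a role applies to every device in the
    device_group, otherwise the roles must match."""
    selected = []
    for action in actions:
        action_role = action.get("parameters", {}).get("role")
        if action_role is None or action_role == role:
            selected.append(action)
    return selected
```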
+
+actions
+-------
+
+::
+
+ {
+ "command": "deploy_linaro_image",
+ "parameters": {
+ "image": "https://releases.linaro.org/13.03/ubuntu/panda/panda-quantal_developer_20130328-278.img.gz",
+ "role": "bear"
+ }
+ }
+
+lava_test_shell
+^^^^^^^^^^^^^^^
+
+To run multiple test definitions from one or multiple testdef repositories,
+expand the testdef_repos array:
+
+.. tip:: Remember the JSON syntax.
+
+ - list continuations need commas; the final item in a list does not.
+
+::
+
+ {
+ "command": "lava_test_shell",
+ "parameters": {
+ "testdef_repos": [
+ {
+ "git-repo": "git://git.linaro.org/people/neilwilliams/multinode-yaml.git",
+ "testdef": "multinode01.yaml"
+ },
+ {
+ "git-repo": "git://git.linaro.org/people/neilwilliams/multinode-yaml.git",
+ "testdef": "multinode02.yaml"
+ },
+ {
+ "git-repo": "git://git.linaro.org/people/neilwilliams/multinode-yaml.git",
+ "testdef": "multinode03.yaml"
+ }
+ ],
+ "role": "bear"
+ }
+ }
+
+submit_results
+^^^^^^^^^^^^^^
+
+The results for the entire group get aggregated into a single result
+bundle.
+
+::
+
+ {
+ "command": "submit_results_on_host",
+ "parameters": {
+ "stream": "/anonymous/instance-manager/",
+ "server": "http://validation.linaro.org/RPC2/"
+ }
+ }
+
+Prepare a filter for the results
+================================
+
+The filter for this use case uses a ``required attribute``
+of **target.device_type** to only show results for the specified
+devices (to cover reuse of the YAML on other boards later).
+
+It is also possible to add a second filter which matches a specific **target**
+device.
+
+Adding synchronisation
+======================
+
+So far, the multiple devices have been started together but then had no
+further interaction.
+
+The :ref:`multinode_api` supports communication between devices within
+a group and provides synchronisation primitives. The simplest of these
+primitives, :ref:`lava_sync` was used in :ref:`use_case_one` but there are more
+possibilities available.
+
+:ref:`lava_sync` is a special case of a :ref:`lava_send` followed by a
+:ref:`lava_wait_all`.
+
+Sending messages
+----------------
+
+Messages can be sent using :ref:`lava_send` which is a non-blocking call.
+At a later point, another device in the group can collect the message
+using ``lava-wait`` or ``lava-wait-all`` which will block until
+the message is available.
+
+The message can be a simple identifier (e.g. 'download' or 'ready') and
+is visible to all devices in the group.
+
+Key value pairs can also be sent using the API to broadcast particular
+information.
+
+If multiple devices send the same message ID, the data is collated by
+the LAVA Coordinator. Key value pairs sent with any message ID are
+tagged with the device name which sent the key value pairs.
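As a toy model of that collation (not the real LAVA Coordinator implementation), key value pairs sent under one message ID are stored per sending device:

```python
class MessageBoard(object):
    """Toy model of Coordinator collation: data sent with lava-send is
    kept per message ID, keyed by the device which sent it."""

    def __init__(self):
        self.messages = {}

    def send(self, device, message_id, **pairs):
        # a later send from the same device replaces its earlier pairs
        self.messages.setdefault(message_id, {})[device] = pairs

    def collect(self, message_id):
        return self.messages.get(message_id, {})
```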
+
+Receiving messages
+------------------
+
+Message reception will block until the message is available.
+
+For :ref:`lava_wait`, the message is deemed available as soon as any device
+in the group has sent a message with the matching ID. If no devices have
+sent such a message, any device asking for ``lava-wait`` on that ID
+will block until a different board uses ``lava-send`` with the expected
+message ID.
+
+For :ref:`lava_wait_all`, the message is only deemed available if **all
+devices in the group** have already sent a message with the expected message
+ID. Therefore, using ``lava-wait-all`` requires a preceding
+``lava-send``.
+
+When using ``lava-wait-all MESSAGEID ROLE``, the message is only deemed
+available if **all devices with the matching role in the group** have
+sent a message with the expected message ID. If the receiving device has
+the specified role, that device must use a ``lava-send`` for the same
+message ID before using ``lava-wait-all MESSAGEID ROLE``.
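The unblocking rules for the blocking calls can be modelled as a single predicate; this is an illustrative sketch, not the Coordinator's actual code:

```python
def message_available(sent, group_roles, message_id, wait_all=False, role=None):
    """True when a lava-wait / lava-wait-all on message_id would unblock.
    sent maps message_id -> set of device names which have sent it;
    group_roles maps device name -> role."""
    senders = sent.get(message_id, set())
    if not wait_all:
        # lava-wait: any single sender is enough
        return len(senders) > 0
    if role is not None:
        # lava-wait-all MESSAGEID ROLE: all devices with that role
        members = [d for d, r in group_roles.items() if r == role]
    else:
        # lava-wait-all MESSAGEID: all devices in the group
        members = list(group_roles)
    return all(d in senders for d in members)
```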
+
+::
+
+ - lava-test-case multinode-send-network --shell lava-send ready
+ - lava-test-case multinode-get-network --shell lava-wait ready
+
+It is up to the test writer to ensure that, when :ref:`lava_wait` is used,
+the message ID is sufficiently unique that the first use of that
+message ID denotes the correct point in the YAML.
+
+::
+
+ - lava-test-case multinode-send-message --shell lava-send sending source=$(lava-self) role=$(lava-role) hostname=$(hostname -f) kernver=$(uname -r) kernhost=$(uname -n)
+ - lava-test-case multinode-wait-message --shell lava-wait-all sending
+
+This example will wait until all devices in the group have sent the
+message ID ``sending`` (with or without the associated key value pairs).
+
+Summary
+=======
+
+http://git.linaro.org/gitweb?p=people/neilwilliams/multinode-yaml.git;a=blob_plain;f=json/panda-only-group.json;hb=refs/heads/master
+
+http://multinode.validation.linaro.org/dashboard/image-reports/panda-multinode
+
=== modified file 'lava/dispatcher/commands.py'
@@ -7,7 +7,7 @@
from json_schema_validator.errors import ValidationError
from lava.tool.command import Command
from lava.tool.errors import CommandError
-
+from lava.dispatcher.node import NodeDispatcher
import lava_dispatcher.config
from lava_dispatcher.config import get_config, get_device_config, get_devices
from lava_dispatcher.job import LavaTestJob, validate_job_data
@@ -112,6 +112,13 @@
jobdata = stream.read()
json_jobdata = json.loads(jobdata)
+ # detect multinode and start a NodeDispatcher to work with the LAVA Coordinator.
+ if not self.args.validate:
+ if 'target_group' in json_jobdata:
+ node = NodeDispatcher(json_jobdata, oob_file, self.args.output_dir)
+ node.run()
+ # the NodeDispatcher has started and closed.
+ exit(0)
if self.args.target is None:
if 'target' not in json_jobdata:
logging.error("The job file does not specify a target device. "
=== added file 'lava/dispatcher/node.py'
@@ -0,0 +1,411 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+#
+# node.py
+#
+# Copyright 2013 Linaro Limited
+# Author Neil Williams <neil.williams@linaro.org>
+#
+# This program is free software; you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation; either version 2 of the License, or
+# (at your option) any later version.
+#
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with this program; if not, write to the Free Software
+# Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston,
+# MA 02110-1301, USA.
+#
+#
+
+import socket
+from socket import gethostname
+import json
+import logging
+import os
+import copy
+import sys
+import time
+from lava_dispatcher.config import get_config
+from lava_dispatcher.job import LavaTestJob
+
+
+class Poller(object):
+ """
+ Blocking, synchronous socket poller which repeatedly tries to connect
+ to the Coordinator, get a very fast response and then implement the
+ wait.
+ If the node needs to wait, it will get a {"response": "wait"}
+ If the node should stop polling and send data back to the board, it will
+ get a {"response": "ack", "message": "blah blah"}
+ """
+
+ json_data = None
+ blocks = 4 * 1024
+ # how long between polls (in seconds)
+ poll_delay = 1
+ timeout = 0
+
+ def __init__(self, data_str):
+ try:
+ self.json_data = json.loads(data_str)
+ except ValueError:
+ logging.error("bad JSON")
+ exit(1)
+ if 'port' not in self.json_data:
+ logging.error("Misconfigured NodeDispatcher - port not specified")
+ if 'blocksize' not in self.json_data:
+ logging.error("Misconfigured NodeDispatcher - blocksize not specified")
+ self.blocks = int(self.json_data['blocksize'])
+ if "poll_delay" in self.json_data:
+ self.poll_delay = int(self.json_data["poll_delay"])
+ if 'timeout' in self.json_data:
+ self.timeout = self.json_data['timeout']
+
+ def poll(self, msg_str):
+ """
+ Blocking, synchronous polling of the Coordinator on the configured port.
+ Single send operations greater than 0xFFFF are rejected to prevent truncation.
+ :param msg_str: The message to send to the Coordinator, as a JSON string.
+ :return: a JSON string of the response to the poll
+ """
+ # starting value for the delay between polls
+ delay = 1
+ msg_len = len(msg_str)
+ if msg_len > 0xFFFE:
+ logging.error("Message was too long to send!")
+ return
+ c = 0
+ response = None
+ while True:
+ c += self.poll_delay
+ s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+ s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
+ try:
+ s.connect((self.json_data['host'], self.json_data['port']))
+ logging.debug("Connecting to LAVA Coordinator on %s:%s" % (self.json_data['host'], self.json_data['port']))
+ delay = self.poll_delay
+ except socket.error as e:
+ logging.warn("socket error on connect: %d %s %s" %
+ (e.errno, self.json_data['host'], self.json_data['port']))
+ time.sleep(delay)
+ delay += 2
+ s.close()
+ continue
+ logging.debug("sending message: %s" % msg_str[:42])
+ # blocking synchronous call
+ try:
+ # send the length as 32bit hexadecimal
+ ret_bytes = s.send("%08X" % msg_len)
+ if ret_bytes == 0:
+ logging.debug("zero bytes sent for length - connection closed?")
+ continue
+ ret_bytes = s.send(msg_str)
+ if ret_bytes == 0:
+ logging.debug("zero bytes sent for message - connection closed?")
+ continue
+ except socket.error as e:
+ logging.warn("socket error '%d' on send" % e.errno)
+ s.close()
+ continue
+ s.shutdown(socket.SHUT_WR)
+ try:
+ header = s.recv(8) # 32bit limit as a hexadecimal
+ if not header or header == '':
+ logging.debug("empty header received?")
+ continue
+ msg_count = int(header, 16)
+ recv_count = 0
+ response = ''
+ while recv_count < msg_count:
+ response += s.recv(self.blocks)
+ recv_count += self.blocks
+ except socket.error as e:
+ logging.warn("socket error '%d' on response" % e.errno)
+ s.close()
+ continue
+ s.close()
+ if not response:
+ time.sleep(delay)
+ # if no response, wait and try again
+ logging.debug("failed to get a response, setting a wait")
+ response = json.dumps({"response": "wait"})
+ try:
+ json_data = json.loads(response)
+ except ValueError:
+ logging.error("response starting '%s' was not JSON" % response[:42])
+ break
+ if json_data['response'] != 'wait':
+ break
+ else:
+ if not (c % int(10 * self.poll_delay)):
+ logging.info("Waiting ... %d of %d secs" % (c, self.timeout))
+ time.sleep(delay)
+ # apply the default timeout to each poll operation.
+ if c > self.timeout:
+ response = json.dumps({"response": "nack"})
+ break
+ return response
+
+
+def readSettings(filename):
+ """
+ NodeDispatchers need to use the same port and blocksize as the Coordinator,
+ so read the same conffile.
+ The protocol header is hard-coded into the server & here.
+ """
+ settings = {
+ "port": 3079,
+ "blocksize": 4 * 1024,
+ "poll_delay": 1,
+ "coordinator_hostname": "localhost"
+ }
+ with open(filename) as stream:
+ jobdata = stream.read()
+ json_default = json.loads(jobdata)
+ if "port" in json_default:
+ settings['port'] = json_default['port']
+ if "blocksize" in json_default:
+ settings['blocksize'] = json_default["blocksize"]
+ if "poll_delay" in json_default:
+ settings['poll_delay'] = json_default['poll_delay']
+ if "coordinator_hostname" in json_default:
+ settings['coordinator_hostname'] = json_default['coordinator_hostname']
+ return settings
+
+
+class NodeDispatcher(object):
+
+ group_name = ''
+ client_name = ''
+ group_size = 0
+ target = ''
+ role = ''
+ poller = None
+ oob_file = sys.stderr
+ output_dir = None
+ base_msg = None
+ json_data = None
+
+ def __init__(self, json_data, oob_file=sys.stderr, output_dir=None):
+ """
+ Parse the modified JSON to identify the group name,
+ requested port for the group - node comms
+ and get the designation for this node in the group.
+ """
+ settings = readSettings("/etc/lava-coordinator/lava-coordinator.conf")
+ self.json_data = json_data
+ # FIXME: do this with a schema once the API settles
+ if 'target_group' not in json_data:
+ raise ValueError("Invalid JSON to work with the MultiNode Coordinator: no target_group.")
+ self.group_name = json_data['target_group']
+ if 'group_size' not in json_data:
+ raise ValueError("Invalid JSON to work with the Coordinator: no group_size")
+ self.group_size = json_data["group_size"]
+ if 'target' not in json_data:
+ raise ValueError("Invalid JSON for a child node: no target designation.")
+ self.target = json_data['target']
+ if 'timeout' not in json_data:
+ raise ValueError("Invalid JSON - no default timeout specified.")
+ if "sub_id" not in json_data:
+ logging.info("Error in JSON - no sub_id specified. Results cannot be aggregated.")
+ json_data['sub_id'] = None
+ if 'port' in json_data:
+ # lava-coordinator provides a conffile for the port and blocksize.
+ logging.debug("Port is no longer supported in the incoming JSON. Using %d" % settings["port"])
+ if 'role' in json_data:
+ self.role = json_data['role']
+ # hostname of the server for the connection.
+ if 'hostname' in json_data:
+ # lava-coordinator provides a conffile for the group_hostname
+ logging.debug("Coordinator hostname is no longer supported in the incoming JSON. Using %s"
+ % settings['coordinator_hostname'])
+ self.base_msg = {"port": settings['port'],
+ "blocksize": settings['blocksize'],
+ "step": settings["poll_delay"],
+ "timeout": json_data['timeout'],
+ "host": settings['coordinator_hostname'],
+ "client_name": json_data['target'],
+ "group_name": json_data['target_group'],
+ # hostname here is the node hostname, not the server.
+ "hostname": gethostname(),
+ "role": self.role,
+ }
+ self.client_name = json_data['target']
+ self.poller = Poller(json.dumps(self.base_msg))
+ self.oob_file = oob_file
+ self.output_dir = output_dir
+
+ def run(self):
+ """
+ Initialises the node into the group, registering the group if necessary
+ (via group_size) and *waiting* until the rest of the group nodes also
+ register before starting the actual job,
+ """
+ init_msg = {"request": "group_data", "group_size": self.group_size}
+ init_msg.update(self.base_msg)
+ logging.info("Starting Multi-Node communications for group '%s'" % self.group_name)
+ logging.debug("init_msg %s" % json.dumps(init_msg))
+ response = json.loads(self.poller.poll(json.dumps(init_msg)))
+ logging.info("Starting the test run for %s in group %s" % (self.client_name, self.group_name))
+ self.run_tests(self.json_data, response)
+ # send a message to the GroupDispatcher to close the group (when all nodes have sent fin_msg)
+ fin_msg = {"request": "clear_group", "group_size": self.group_size}
+ fin_msg.update(self.base_msg)
+ logging.debug("fin_msg %s" % json.dumps(fin_msg))
+ self.poller.poll(json.dumps(fin_msg))
+
+ def __call__(self, args):
+ """ Makes the NodeDispatcher callable so that the test shell can send messages just using the
+ NodeDispatcher object.
+ This function blocks until the specified API call returns. Some API calls may involve a
+ substantial period of polling.
+ :param args: JSON string of the arguments of the API call to make
+ :return: A Python object containing the reply dict from the API call
+ """
+ try:
+ return self._select(json.loads(args))
+ except KeyError:
+ logging.warn("Unable to handle request for: %s" % args)
+
+ def _select(self, json_data):
+ """ Determines which API call has been requested, makes the call, blocks and returns the reply.
+ :param json_data: Python object of the API call
+ :return: Python object containing the reply dict.
+ """
+ reply_str = ''
+ if not json_data:
+ logging.debug("Empty args")
+ return
+ if 'request' not in json_data:
+ logging.debug("Bad call")
+ return
+ if json_data["request"] == "aggregate":
+ # no message processing here, just the bundles.
+ return self._aggregation(json_data)
+ messageID = json_data['messageID']
+ if json_data['request'] == "lava_sync":
+ logging.info("requesting lava_sync '%s'" % messageID)
+ reply_str = self.request_sync(messageID)
+ elif json_data['request'] == 'lava_wait':
+ logging.info("requesting lava_wait '%s'" % messageID)
+ reply_str = self.request_wait(messageID)
+ elif json_data['request'] == 'lava_wait_all':
+ if 'role' in json_data and json_data['role'] is not None:
+ reply_str = self.request_wait_all(messageID, json_data['role'])
+ logging.info("requesting lava_wait_all '%s' '%s'" % (messageID, json_data['role']))
+ else:
+ logging.info("requesting lava_wait_all '%s'" % messageID)
+ reply_str = self.request_wait_all(messageID)
+ elif json_data['request'] == "lava_send":
+ logging.info("requesting lava_send %s" % messageID)
+ reply_str = self.request_send(messageID, json_data['message'])
+ reply = json.loads(str(reply_str))
+ if 'message' in reply:
+ return reply['message']
+ else:
+ return reply['response']
+
+ def _aggregation(self, json_data):
+ """ Internal call to send the bundle message to the coordinator so that the node
+ with sub_id zero will get the complete bundle and everyone else a blank bundle.
+ :param json_data: Arbitrary data from the job which will form the result bundle
+ """
+ if json_data["bundle"] is None:
+ logging.info("Notifying LAVA Controller of job completion")
+ else:
+ logging.info("Passing results bundle to LAVA Coordinator.")
+ reply_str = self._send(json_data)
+ reply = json.loads(str(reply_str))
+ if 'message' in reply:
+ return reply['message']
+ else:
+ return reply['response']
+
+ def _send(self, msg):
+ """ Internal call to perform the API call via the Poller.
+ :param msg: The call-specific message to be wrapped in the base_msg primitive.
+ :return: Python object of the reply dict.
+ """
+ new_msg = copy.deepcopy(self.base_msg)
+ new_msg.update(msg)
+ if 'bundle' in new_msg:
+ logging.debug("sending result bundle")
+ else:
+ logging.debug("sending Message %s" % json.dumps(new_msg))
+ return self.poller.poll(json.dumps(new_msg))
+
+ def request_wait_all(self, messageID, role=None):
+ """
+ Asks the Coordinator to send back a particular messageID
+ and blocks until that messageID is available for all nodes in
+ this group or all nodes with the specified role in this group.
+ """
+ # FIXME: if this node has not called request_send for the
+ # messageID used for a wait_all, the node should log a warning
+ # of a broken test definition.
+ if role:
+ return self._send({"request": "lava_wait_all",
+ "messageID": messageID,
+ "waitrole": role})
+ else:
+ return self._send({"request": "lava_wait_all",
+ "messageID": messageID})
+
+ def request_wait(self, messageID):
+ """
+ Asks the Coordinator to send back a particular messageID
+ and blocks until that messageID is available for this node
+ """
+ # use self.target as the node ID
+ wait_msg = {"request": "lava_wait",
+ "messageID": messageID,
+ "nodeID": self.target}
+ return self._send(wait_msg)
+
+ def request_send(self, messageID, message):
+ """
+ Sends a message to the group via the Coordinator. The
+ message is guaranteed to be available to all members of the
+ group. The message is only picked up when a client in the group
+ calls lava_wait or lava_wait_all.
+ The message must be valid JSON, not a simple string.
+ { "messageID": "string", "message": { "key": "value"} }
+ The message can consist of just the messageID:
+ { "messageID": "string" }
+ """
+ send_msg = {"request": "lava_send",
+ "messageID": messageID,
+ "message": message}
+ return self._send(send_msg)
+
+ def request_sync(self, msg):
+ """
+ Creates and sends a message requesting lava_sync.
+ """
+ sync_msg = {"request": "lava_sync", "messageID": msg}
+ return self._send(sync_msg)
+
+ def run_tests(self, json_jobdata, group_data):
+ if 'response' in group_data and group_data['response'] == 'nack':
+ logging.error("Unable to initialise a Multi-Node group - timed out waiting for other devices.")
+ return
+ config = get_config()
+ if 'logging_level' in json_jobdata:
+ logging.root.setLevel(json_jobdata["logging_level"])
+ else:
+ logging.root.setLevel(config.logging_level)
+ if 'target' not in json_jobdata:
+ logging.error("The job file does not specify a target device.")
+ exit(1)
+ jobdata = json.dumps(json_jobdata)
+ if self.output_dir and not os.path.isdir(self.output_dir):
+ os.makedirs(self.output_dir)
+ job = LavaTestJob(jobdata, self.oob_file, config, self.output_dir)
+ # pass this NodeDispatcher down so that the lava_test_shell can __call__ nodeTransport to write a message
+ job.run(self, group_data)
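The `_send` helper above merges each call-specific request into a deep copy of the base message before serialising it for the Poller. A minimal sketch of that wrapping, using an illustrative `base_msg` rather than the exact dispatcher field set:

```python
import copy
import json

# Illustrative base message; the real dispatcher populates this from
# the job (group name, hostname, role, etc.).
base_msg = {"group_name": "group-1", "hostname": "device01", "role": "client"}

def wrap_message(msg):
    """Merge a call-specific message over the shared base message."""
    new_msg = copy.deepcopy(base_msg)  # never mutate the shared base
    new_msg.update(msg)                # call-specific fields take precedence
    return json.dumps(new_msg)

wire = wrap_message({"request": "lava_send", "messageID": "network"})
decoded = json.loads(wire)
# decoded carries both the base fields and the request fields
```

Deep-copying matters here: a shallow copy or in-place `update` of `base_msg` would leak request fields from one API call into the next.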
=== modified file 'lava_dispatcher/__init__.py'
@@ -18,4 +18,4 @@
# along
# with this program; if not, see <http://www.gnu.org/licenses>.
-__version__ = (0, 32, 2, "final", 0)
+__version__ = (0, 33, 1, "dev", 0)
=== modified file 'lava_dispatcher/actions/deploy.py'
@@ -55,6 +55,7 @@
'image': {'type': 'string', 'optional': True},
'rootfstype': {'type': 'string', 'optional': True},
'bootloader': {'type': 'string', 'optional': True, 'default': 'u_boot'},
+ 'role': {'type': 'string', 'optional': True},
},
'additionalProperties': False,
}
=== modified file 'lava_dispatcher/actions/launch_control.py'
@@ -24,7 +24,7 @@
import tempfile
import urlparse
import xmlrpclib
-
+import simplejson
from lava_tool.authtoken import AuthenticatingServerProxy, MemoryAuthBackend
from linaro_dashboard_bundle.io import DocumentIO
@@ -182,6 +182,10 @@
return bundles
def run(self, server, stream, result_disk="testrootfs", token=None):
+ main_bundle = self.collect_bundles(result_disk=result_disk)
+ self.submit_bundle(main_bundle, server, stream, token)
+
+ def collect_bundles(self, server=None, stream=None, result_disk="testrootfs", token=None):
all_bundles = []
status = 'pass'
err_msg = ''
@@ -205,8 +209,7 @@
self.context.test_data.add_result('gather_results', status, err_msg)
main_bundle = self.combine_bundles(all_bundles)
-
- self.submit_bundle(main_bundle, server, stream, token)
+ return main_bundle
def combine_bundles(self, all_bundles):
if not all_bundles:
@@ -228,6 +231,10 @@
for test_run in main_bundle['test_runs']:
attributes = test_run.get('attributes', {})
attributes.update(self.context.test_data.get_metadata())
+ if "group_size" in attributes:
+ grp_size = attributes['group_size']
+ del attributes['group_size']
+ attributes['group_size'] = "%d" % grp_size
test_run['attributes'] = attributes
return main_bundle
@@ -247,6 +254,56 @@
logging.warning("Fault string: %s" % err.faultString)
raise OperationFailed("could not push to dashboard")
+ def submit_pending(self, bundle, server, stream, token, group_name):
+ """ Called from the dispatcher job when a MultiNode job requests to
+ submit results but the job does not have sub_id zero. The bundle is
+ cached in the dashboard until the coordinator allows sub_id zero to
+ call submit_group_list.
+ :param bundle: A single bundle which is part of the group
+ :param server: Where the bundle will be cached
+ :param token: token to allow access
+ :param group_name: MultiNode group unique ID
+ :raise: OperationFailed if the xmlrpclib call fails
+ """
+ dashboard = _get_dashboard(server, token)
+ json_bundle = simplejson.dumps(bundle)
+ try:
+ # make the put_pending xmlrpc call to store the bundle in the dashboard until the group is complete.
+ result = dashboard.put_pending(json_bundle, stream, group_name)
+ print >> self.context.oob_file, "dashboard-put-pending:", result
+ logging.info("Dashboard: bundle %s is pending in %s" % (result, group_name))
+ except xmlrpclib.Fault, err:
+ logging.warning("xmlrpclib.Fault occurred")
+ logging.warning("Fault code: %d" % err.faultCode)
+ logging.warning("Fault string: %s" % err.faultString)
+ raise OperationFailed("could not push pending bundle to dashboard")
+
+ def submit_group_list(self, bundle, server, stream, token, group_name):
+ """ Called from the dispatcher job when a MultiNode job has been
+ allowed by the coordinator to aggregate the group bundles as
+ all jobs in the group have registered bundle checksums with the coordinator.
+ :param bundle: The single bundle from this job to be added to the pending list.
+ :param server: Where the aggregated bundle will be submitted
+ :param stream: The bundle stream to use
+ :param token: The token to allow access
+ :param group_name: MultiNode group unique ID
+ :raise: OperationFailed if the xmlrpclib call fails
+ """
+ dashboard = _get_dashboard(server, token)
+ json_bundle = simplejson.dumps(bundle)
+ job_name = self.context.job_data.get("job_name", "LAVA Results")
+ try:
+ # make the put_group xmlrpc call to aggregate the bundles for the entire group & submit.
+ result = dashboard.put_group(json_bundle, job_name, stream, group_name)
+ print >> self.context.oob_file, "dashboard-group:", result, job_name
+ self.context.output.write_named_data('result-bundle', result)
+ logging.info("Dashboard: bundle %s is to be aggregated into %s" % (result, group_name))
+ except xmlrpclib.Fault, err:
+ logging.warning("xmlrpclib.Fault occurred")
+ logging.warning("Fault code: %d" % err.faultCode)
+ logging.warning("Fault string: %s" % err.faultString)
+ raise OperationFailed("could not push group bundle to dashboard")
+
class cmd_submit_results_on_host(cmd_submit_results):
pass
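The `combine_bundles` change above re-inserts the integer `group_size` metadata value as a string so the dashboard stores attributes with a consistent type. A small sketch of that normalisation, on sample data:

```python
# Sketch of the attribute normalisation applied to each test_run:
# an integer group_size is deleted and re-added as its string form.
def normalise_attributes(attributes):
    if "group_size" in attributes:
        grp_size = attributes["group_size"]
        del attributes["group_size"]
        attributes["group_size"] = "%d" % grp_size
    return attributes

attrs = normalise_attributes({"group_size": 3, "target": "device01"})
# attrs["group_size"] is now the string "3"
```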
=== modified file 'lava_dispatcher/actions/lava_test_shell.py'
@@ -134,6 +134,16 @@
from lava_dispatcher.downloader import download_image
LAVA_TEST_DIR = '%s/../../lava_test_shell' % os.path.dirname(__file__)
+LAVA_MULTI_NODE_TEST_DIR = '%s/../../lava_test_shell/multi_node' % os.path.dirname(__file__)
+
+LAVA_GROUP_FILE = 'lava-group'
+LAVA_ROLE_FILE = 'lava-role'
+LAVA_SELF_FILE = 'lava-self'
+LAVA_SEND_FILE = 'lava-send'
+LAVA_SYNC_FILE = 'lava-sync'
+LAVA_WAIT_FILE = 'lava-wait'
+LAVA_WAIT_ALL_FILE = 'lava-wait-all'
+LAVA_MULTI_NODE_CACHE_FILE = '/tmp/lava_multi_node_cache.txt'
Target.android_deployment_data['distro'] = 'android'
Target.android_deployment_data['lava_test_sh_cmd'] = '/system/bin/mksh'
@@ -508,20 +518,21 @@
'items': {'type': 'object',
'properties':
{'git-repo': {'type': 'string',
- 'optional': True},
- 'bzr-repo': {'type': 'string',
- 'optional': True},
- 'tar-repo': {'type': 'string',
- 'optional': True},
- 'revision': {'type': 'string',
- 'optional': True},
- 'testdef': {'type': 'string',
- 'optional': True}
+ 'optional': True},
+ 'bzr-repo': {'type': 'string',
+ 'optional': True},
+ 'tar-repo': {'type': 'string',
+ 'optional': True},
+ 'revision': {'type': 'string',
+ 'optional': True},
+ 'testdef': {'type': 'string',
+ 'optional': True}
},
'additionalProperties': False},
'optional': True
},
'timeout': {'type': 'integer', 'optional': True},
+ 'role': {'type': 'string', 'optional': True},
},
'additionalProperties': False,
}
@@ -531,7 +542,7 @@
testdefs_by_uuid = self._configure_target(target, testdef_urls, testdef_repos)
- signal_director = SignalDirector(self.client, testdefs_by_uuid)
+ signal_director = SignalDirector(self.client, testdefs_by_uuid, self.context)
with target.runner() as runner:
runner.wait_for_prompt(timeout)
@@ -544,6 +555,7 @@
if timeout == -1:
timeout = runner._connection.timeout
initial_timeout = timeout
+ signal_director.set_connection(runner._connection)
while self._keep_running(runner, timeout, signal_director):
elapsed = time.time() - start
timeout = int(initial_timeout - elapsed)
@@ -556,6 +568,7 @@
pexpect.EOF,
pexpect.TIMEOUT,
'<LAVA_SIGNAL_(\S+) ([^>]+)>',
+ '<LAVA_MULTI_NODE> <LAVA_(\S+) ([^>]+)>',
]
idx = runner._connection.expect(patterns, timeout=timeout)
@@ -575,6 +588,16 @@
logging.exception("on_signal failed")
runner._connection.sendline('echo LAVA_ACK')
return True
+ elif idx == 4:
+ name, params = runner._connection.match.groups()
+ logging.debug("Received Multi_Node API <LAVA_%s>" % name)
+ params = params.split()
+ ret = False
+ try:
+ ret = signal_director.signal(name, params)
+ except:
+ logging.exception("on_signal(Multi_Node) failed")
+ return ret
return False
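The new expect pattern added above lets the runner recognise multi-node API calls on the serial console. A sketch of how that pattern matches and how the parameters are split, using a sample console line:

```python
import re

# The same pattern added to the runner's expect list above.
pattern = r'<LAVA_MULTI_NODE> <LAVA_(\S+) ([^>]+)>'

# Sample console output from a device calling lava-send.
line = '<LAVA_MULTI_NODE> <LAVA_SEND network hostname=device01>'

match = re.search(pattern, line)
name, params = match.groups()   # e.g. ('SEND', 'network hostname=device01')
params = params.split()         # whitespace-separated parameter list
```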
@@ -598,6 +621,37 @@
fout.write(fin.read())
os.fchmod(fout.fileno(), XMOD)
+ def _inject_multi_node_api(self, mntdir, target):
+ shell = target.deployment_data['lava_test_sh_cmd']
+
+ # Generic scripts
+ scripts_to_copy = glob(os.path.join(LAVA_MULTI_NODE_TEST_DIR, 'lava-*'))
+
+ for fname in scripts_to_copy:
+ with open(fname, 'r') as fin:
+ foutname = os.path.basename(fname)
+ with open('%s/bin/%s' % (mntdir, foutname), 'w') as fout:
+ fout.write("#!%s\n\n" % shell)
+ # Target-specific scripts (add ENV to the generic ones)
+ if foutname == LAVA_GROUP_FILE:
+ fout.write('LAVA_GROUP="\n')
+ if 'roles' in self.context.group_data:
+ for client_name in self.context.group_data['roles']:
+ fout.write(r"\t%s\t%s\n" % (client_name, self.context.group_data['roles'][client_name]))
+ else:
+ logging.debug("group data MISSING")
+ fout.write('"\n')
+ elif foutname == LAVA_ROLE_FILE:
+ fout.write("TARGET_ROLE='%s'\n" % self.context.test_data.metadata['role'])
+ elif foutname == LAVA_SELF_FILE:
+ fout.write("LAVA_HOSTNAME='%s'\n" % self.context.test_data.metadata['target.hostname'])
+ else:
+ fout.write("LAVA_TEST_BIN='%s/bin'\n" % target.deployment_data['lava_test_dir'])
+ fout.write("LAVA_MULTI_NODE_CACHE='%s'\n" % LAVA_MULTI_NODE_CACHE_FILE)
+ if self.context.test_data.metadata['logging_level'] == 'DEBUG':
+ fout.write("LAVA_MULTI_NODE_DEBUG='yes'\n")
+ fout.write(fin.read())
+ os.fchmod(fout.fileno(), XMOD)
def _mk_runner_dirs(self, mntdir):
utils.ensure_directory('%s/bin' % mntdir)
@@ -613,6 +667,8 @@
with target.file_system(results_part, 'lava') as d:
self._mk_runner_dirs(d)
self._copy_runner(d, target)
+ if 'target_group' in self.context.test_data.metadata:
+ self._inject_multi_node_api(d, target)
testdef_loader = TestDefinitionLoader(self.context, target.scratch_dir)
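The `_inject_multi_node_api` method above prepends a target-specific preamble to each generic helper script: the shell shebang first, then environment variables, then the original script body, with the result made executable. A self-contained sketch under sample values (`write_helper` and its arguments are illustrative, not dispatcher API):

```python
import os
import stat
import tempfile

def write_helper(dest_dir, name, shell, env, body):
    """Write a helper script: shebang, env preamble, then the body."""
    path = os.path.join(dest_dir, name)
    with open(path, 'w') as fout:
        fout.write("#!%s\n\n" % shell)
        for key, value in env.items():
            fout.write("%s='%s'\n" % (key, value))
        fout.write(body)
    # mark the script executable, as os.fchmod(..., XMOD) does above
    os.chmod(path, os.stat(path).st_mode | stat.S_IXUSR)
    return path

tmp = tempfile.mkdtemp()
path = write_helper(tmp, 'lava-self', '/bin/sh',
                    {'LAVA_HOSTNAME': 'device01'},
                    'echo "$LAVA_HOSTNAME"\n')
```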
=== modified file 'lava_dispatcher/config.py'
@@ -235,7 +235,6 @@
if not config_files:
raise Exception("no config files named %r found" % (name + ".conf"))
config_files.reverse()
- logging.debug("About to read %s", str(config_files))
for path in config_files:
_read_into(path, cp)
return cp
=== modified file 'lava_dispatcher/context.py'
@@ -129,7 +129,7 @@
def run_command(self, command, failok=True):
"""run command 'command' with output going to output-dir if specified"""
if isinstance(command, (str, unicode)):
- command = ['sh', '-c', command]
+ command = ['nice', 'sh', '-c', command]
logging.debug("Executing on host : '%r'" % command)
output_args = {
'stdout': self.logfile_read,
@@ -151,3 +151,13 @@
def finish(self):
self.client.finish()
+ def assign_transport(self, transport):
+ self.transport = transport
+
+ def assign_group_data(self, group_data):
+ """
+ :param group_data: Arbitrary data related to the
+ group configuration, passed in via the GroupDispatcher
+ Used by lava-group
+ """
+ self.group_data = group_data
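The `run_command` change above wraps plain string commands so they run under `nice`, lowering the host-side priority of helper shells spawned by the dispatcher. A sketch of the wrapping (Python 3 here, so only `str` is checked rather than `(str, unicode)`):

```python
import subprocess

def build_command(command):
    """Wrap a string command to run via nice, as run_command now does."""
    if isinstance(command, str):
        command = ['nice', 'sh', '-c', command]
    return command

argv = build_command('echo hello')
out = subprocess.check_output(argv).decode()
# out == 'hello\n'
```

List-form commands are passed through unchanged, so callers that already supply an argv list keep their existing behaviour.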
=== removed file 'lava_dispatcher/default-config/lava-dispatcher/device-types/aa9.conf'
@@ -1,27 +0,0 @@
-client_type = bootloader
-
-boot_cmds =
- setenv bootcmd "'fatload mmc 0:3 0x40000000 uImage; fatload mmc 0:3 0x41100000 uInitrd; fatload mmc 0:3 0x41000000 board.dtb; bootm 0x40000000 0x41100000 0x41000000'",
- setenv bootargs "'console=ttyS0,115200n8 root=LABEL=testrootfs rootwait ro'",
- boot
-
-boot_cmds_android =
- setenv bootcmd "'fatload mmc 0:3 0x40000000 uImage; fatload mmc 0:3 0x41100000 uInitrd; fatload mmc 0:3 0x41000000 mb8ac0300eb.dtb; bootm 0x40000000 0x41100000 0x41000000'",
- setenv bootargs "'console=ttyS0,115200n8 init=/init rootwait rw androidboot.hardware=fujitsusemiconductormb8ac0300-e'",
- boot
-
-image_boot_msg = Starting kernel
-
-possible_partitions_files =
- init.partitions.rc
- fstab.partitions
- init.rc
- fstab.fujitsusemiconductormb8ac0300-e
-
-bootloader_prompt = u-boot
-
-boot_options =
- boot_cmds
-
-[boot_cmds]
-default = boot_cmds
=== added file 'lava_dispatcher/default-config/lava-dispatcher/device-types/aa9.conf'
@@ -0,0 +1,27 @@
+client_type = bootloader
+
+boot_cmds =
+ setenv bootcmd "'fatload mmc 0:3 0x40000000 uImage; fatload mmc 0:3 0x41100000 uInitrd; fatload mmc 0:3 0x41000000 board.dtb; bootm 0x40000000 0x41100000 0x41000000'",
+ setenv bootargs "'console=ttyS0,115200n8 root=LABEL=testrootfs rootwait ro'",
+ boot
+
+boot_cmds_android =
+ setenv bootcmd "'fatload mmc 0:3 0x40000000 uImage; fatload mmc 0:3 0x41100000 uInitrd; fatload mmc 0:3 0x41000000 mb8ac0300eb.dtb; bootm 0x40000000 0x41100000 0x41000000'",
+ setenv bootargs "'console=ttyS0,115200n8 init=/init rootwait rw androidboot.hardware=fujitsusemiconductormb8ac0300-e'",
+ boot
+
+image_boot_msg = Starting kernel
+
+possible_partitions_files =
+ init.partitions.rc
+ fstab.partitions
+ init.rc
+ fstab.fujitsusemiconductormb8ac0300-e
+
+bootloader_prompt = u-boot
+
+boot_options =
+ boot_cmds
+
+[boot_cmds]
+default = boot_cmds
=== removed file 'lava_dispatcher/default-config/lava-dispatcher/device-types/capri.conf'
@@ -1,46 +0,0 @@
-client_type = capri
-
-# The ADB command line.
-#
-# In the case where there are multiple android devices plugged into a
-# single host, this connection command must be overriden on each device to
-# include the serial number of the device, e.g.
-#
-# serial_number = XXXXXXXXXXXXXXXX
-# adb_command = adb -s %(serial_number)s
-adb_command = adb
-
-# The fastboot command.
-#
-# The same as above: if you have more than one device, you will want to
-# override this in your device config to add a serial number, e.g.
-#
-# serial_number = XXXXXXXXXXXXXXXX
-# fastboot_command = fastboot -s %(serial_number)s
-#
-# Of course, in the case you override both adb_command *and* fastboot_command,
-# you don't need to specify `serial_number` twice.
-fastboot_command = fastboot
-
-# Working directory for temporary files. By default, the usual place for LAVA
-# images will be used.
-#
-# This is useful when the lava dispatcher is controlling the device under test which is
-# physically plugged to other machines by setting adb_command to something like
-# "ssh <phone-host> adb" and fastboot_command to something like "ssh
-# <phone-host> fastboot". adb and fastboot always operate on local files, so
-# you need your local files to also be seen as local files on the host where
-# adb/fastboot are executed.
-#
-# In this case, you should set shared_working_directory to a shared directory
-# between the machine running the dispatcher and the machine where the phone is
-# plugged. This shared directory must have the same path in both machines.
-# For example, you can have your /var/tmp/lava mounted at /var/tmp/lava at
-# <phone-host> (or the other way around).
-shared_working_directory =
-
-connection_command = %(adb_command)s shell
-
-enable_network_after_boot_android = false
-android_adb_over_usb = true
-android_adb_over_tcp = false
=== added file 'lava_dispatcher/default-config/lava-dispatcher/device-types/capri.conf'
@@ -0,0 +1,46 @@
+client_type = capri
+
+# The ADB command line.
+#
+# In the case where there are multiple android devices plugged into a
+ # single host, this connection command must be overridden on each device to
+# include the serial number of the device, e.g.
+#
+# serial_number = XXXXXXXXXXXXXXXX
+# adb_command = adb -s %(serial_number)s
+adb_command = adb
+
+# The fastboot command.
+#
+# The same as above: if you have more than one device, you will want to
+# override this in your device config to add a serial number, e.g.
+#
+# serial_number = XXXXXXXXXXXXXXXX
+# fastboot_command = fastboot -s %(serial_number)s
+#
+# Of course, in the case you override both adb_command *and* fastboot_command,
+# you don't need to specify `serial_number` twice.
+fastboot_command = fastboot
+
+# Working directory for temporary files. By default, the usual place for LAVA
+# images will be used.
+#
+# This is useful when the lava dispatcher is controlling the device under test which is
+# physically plugged to other machines by setting adb_command to something like
+# "ssh <phone-host> adb" and fastboot_command to something like "ssh
+# <phone-host> fastboot". adb and fastboot always operate on local files, so
+# you need your local files to also be seen as local files on the host where
+# adb/fastboot are executed.
+#
+# In this case, you should set shared_working_directory to a shared directory
+# between the machine running the dispatcher and the machine where the phone is
+# plugged. This shared directory must have the same path in both machines.
+# For example, you can have your /var/tmp/lava mounted at /var/tmp/lava at
+# <phone-host> (or the other way around).
+shared_working_directory =
+
+connection_command = %(adb_command)s shell
+
+enable_network_after_boot_android = false
+android_adb_over_usb = true
+android_adb_over_tcp = false
=== modified file 'lava_dispatcher/default-config/lava-dispatcher/device-types/mx53loco.conf'
@@ -35,3 +35,4 @@
[boot_cmds]
default = boot_cmds
+read_boot_cmds_from_image = 0
=== removed file 'lava_dispatcher/default-config/lava-dispatcher/device-types/nexus10.conf'
@@ -1,46 +0,0 @@
-client_type = nexus10
-
-# The ADB command line.
-#
-# In the case where there are multiple android devices plugged into a
-# single host, this connection command must be overriden on each device to
-# include the serial number of the device, e.g.
-#
-# serial_number = XXXXXXXXXXXXXXXX
-# adb_command = adb -s %(serial_number)s
-adb_command = adb
-
-# The fastboot command.
-#
-# The same as above: if you have more than one device, you will want to
-# override this in your device config to add a serial number, e.g.
-#
-# serial_number = XXXXXXXXXXXXXXXX
-# fastboot_command = fastboot -s %(serial_number)s
-#
-# Of course, in the case you override both adb_command *and* fastboot_command,
-# you don't need to specify `serial_number` twice.
-fastboot_command = fastboot
-
-# Working directory for temporary files. By default, the usual place for LAVA
-# images will be used.
-#
-# This is useful when the lava dispatcher is controlling the device under test which is
-# physically plugged to other machines by setting adb_command to something like
-# "ssh <phone-host> adb" and fastboot_command to something like "ssh
-# <phone-host> fastboot". adb and fastboot always operate on local files, so
-# you need your local files to also be seen as local files on the host where
-# adb/fastboot are executed.
-#
-# In this case, you should set shared_working_directory to a shared directory
-# between the machine running the dispatcher and the machine where the phone is
-# plugged. This shared directory must have the same path in both machines.
-# For example, you can have your /var/tmp/lava mounted at /var/tmp/lava at
-# <phone-host> (or the other way around).
-shared_working_directory =
-
-connection_command = %(adb_command)s shell
-
-enable_network_after_boot_android = false
-android_adb_over_usb = true
-android_adb_over_tcp = false
=== added file 'lava_dispatcher/default-config/lava-dispatcher/device-types/nexus10.conf'
@@ -0,0 +1,46 @@
+client_type = nexus10
+
+# The ADB command line.
+#
+# In the case where there are multiple android devices plugged into a
+# single host, this connection command must be overriden on each device to
+# include the serial number of the device, e.g.
+#
+# serial_number = XXXXXXXXXXXXXXXX
+# adb_command = adb -s %(serial_number)s
+adb_command = adb
+
+# The fastboot command.
+#
+# The same as above: if you have more than one device, you will want to
+# override this in your device config to add a serial number, e.g.
+#
+# serial_number = XXXXXXXXXXXXXXXX
+# fastboot_command = fastboot -s %(serial_number)s
+#
+# Of course, in the case you override both adb_command *and* fastboot_command,
+# you don't need to specify `serial_number` twice.
+fastboot_command = fastboot
+
+# Working directory for temporary files. By default, the usual place for LAVA
+# images will be used.
+#
+# This is useful when the lava dispatcher is controlling the device under test which is
+# physically plugged to other machines by setting adb_command to something like
+# "ssh <phone-host> adb" and fastboot_command to something like "ssh
+# <phone-host> fastboot". adb and fastboot always operate on local files, so
+# you need your local files to also be seen as local files on the host where
+# adb/fastboot are executed.
+#
+# In this case, you should set shared_working_directory to a shared directory
+# between the machine running the dispatcher and the machine where the phone is
+# plugged. This shared directory must have the same path in both machines.
+# For example, you can have your /var/tmp/lava mounted at /var/tmp/lava at
+# <phone-host> (or the other way around).
+shared_working_directory =
+
+connection_command = %(adb_command)s shell
+
+enable_network_after_boot_android = false
+android_adb_over_usb = true
+android_adb_over_tcp = false
=== added file 'lava_dispatcher/default-config/lava-dispatcher/device-types/rtsm_foundation-armv8.conf'
@@ -0,0 +1,20 @@
+client_type=fastmodel
+
+# how long the disablesuspend script should take to complete
+# fm takes longer than other android images do
+disablesuspend_timeout = 500
+
+# how long ubuntu takes to boot to prompt
+boot_linaro_timeout = 500
+
+# if you do dhcp on boot, adb will not work (asac) on fastmodels
+enable_network_after_boot_android = 0
+
+# we do usermode networking over the loopback
+default_network_interface = lo
+
+simulator_axf_files = img-foundation.axf
+
+simulator_version_command = /opt/arm/Foundation_v8pkg/Foundation_v8 --version | grep "ARM V8 Foundation Model" | sed 's/ARM V8 Foundation Model //'
+
+simulator_command = sudo -u www-data /opt/arm/Foundation_v8pkg/Foundation_v8 --image={AXF} --block-device={IMG} --network=nat
=== removed file 'lava_dispatcher/default-config/lava-dispatcher/device-types/rtsm_foundation-armv8.conf'
@@ -1,20 +0,0 @@
-client_type=fastmodel
-
-# how long the disablesuspend script should take to complete
-# fm takes longer than other android images do
-disablesuspend_timeout = 500
-
-# how long ubuntu takes to boot to prompt
-boot_linaro_timeout = 500
-
-# if you do dhcp on boot, adb will not work (asac) on fastmodels
-enable_network_after_boot_android = 0
-
-# we do usermode networking over the loopback
-default_network_interface = lo
-
-simulator_axf_files = img-foundation.axf
-
-simulator_version_command = /opt/arm/Foundation_v8pkg/Foundation_v8 --version | grep "ARM V8 Foundation Model" | sed 's/ARM V8 Foundation Model //'
-
-simulator_command = sudo -u www-data /opt/arm/Foundation_v8pkg/Foundation_v8 --image={AXF} --block-device={IMG} --network=nat
=== added file 'lava_dispatcher/default-config/lava-dispatcher/device-types/rtsm_ve-a15x1-a7x1.conf'
@@ -0,0 +1,117 @@
+client_type=fastmodel
+
+# how long the disablesuspend script should take to complete
+# fm takes longer than other android images do
+disablesuspend_timeout = 500
+
+# how long ubuntu takes to boot to prompt
+boot_linaro_timeout = 800
+
+# if you do dhcp on boot, adb will not work (asac) on fastmodels
+enable_network_after_boot_android = 0
+
+# we do usermode networking over the loopback
+default_network_interface = lo
+
+bootloader_prompt = Start:
+
+interrupt_boot_prompt = The default boot selection will start in
+
+interrupt_boot_command = break
+
+# UEFI boot commands
+boot_cmds = sendline a,
+ expect Choice:,
+ sendline 1,
+ expect Select the Boot Device:,
+ sendline 2,
+ expect File path of the EFI Application or the kernel:,
+ sendline uImage,
+ expect [a/g/l],
+ sendline l,
+ expect Add an initrd: [y/n],
+ sendline y,
+ expect File path of the initrd:,
+ sendline uInitrd,
+ expect Arguments to pass to the binary:,
+ sendline 'console=ttyAMA0,38400n8 root=/dev/mmcblk0p2 rootwait ro mem=1024M',
+ expect File path of the local FDT:,
+ sendline rtsm\\rtsm_ve-ca15x1-ca7x1.dtb,
+ expect Description for this new Entry:,
+ sendline Test Image,
+ expect Choice:,
+ sendline 5,
+ expect Start:,
+ sendline 2
+
+simulator_axf_files =
+ img.axf
+ linux-system-ISW.axf
+ linux-system-semi.axf
+
+simulator_kernel_files =
+ uImage
+ vmlinuz.*
+
+simulator_initrd_files =
+ uInitrd
+ initrd.*
+
+simulator_dtb = rtsm_ve-ca15x1-ca7x1.dtb
+simulator_uefi = uefi_rtsm_ve-ca15.bin
+
+license_file = 8224@localhost
+sim_bin = /opt/arm/RTSM_A15-A7x14_VE/Linux64_RTSM_VE_Cortex-A15x1-A7x1/RTSM_VE_Cortex-A15x1-A7x1
+android_adb_port = 6555
+
+simulator_version_command = %(sim_bin)s --version | grep "Fast Models" | sed 's/Fast Models \[//' | sed 's/\]//'
+
+simulator_boot_wrapper = -a coretile.cluster0.*={AXF}
+
+simulator_command = sudo -u www-data ARMLMD_LICENSE_FILE="%(license_file)s" %(sim_bin)s
+
+boot_options =
+ motherboard.mmc.p_mmc_file
+ motherboard.hostbridge.userNetPorts
+ motherboard.smsc_91c111.enabled
+ motherboard.hostbridge.userNetworking
+ motherboard.flashloader0.fname
+ motherboard.flashloader1.fname
+ motherboard.flashloader1.fnameWrite
+ coretile.cache_state_modelled
+ coretile.cluster0.cpu0.semihosting-enable
+ coretile.cluster0.cpu0.semihosting-cmd_line
+
+[motherboard.mmc.p_mmc_file]
+default = {IMG}
+
+[motherboard.hostbridge.userNetPorts]
+default="%(android_adb_port)s=%(android_adb_port)s"
+
+[motherboard.smsc_91c111.enabled]
+default = 1
+allowed = 0,1
+
+[motherboard.hostbridge.userNetworking]
+default = 1
+allowed = 0,1
+
+[motherboard.flashloader0.fname]
+default = {UEFI}
+
+[motherboard.flashloader1.fname]
+default = uefi-vars.fd
+
+[motherboard.flashloader1.fnameWrite]
+default = uefi-vars.fd
+
+[coretile.cache_state_modelled]
+default = 0
+allowed = 0,1
+
+[coretile.cluster0.cpu0.semihosting-enable]
+default = 1
+allowed = 0,1
+
+[coretile.cluster0.cpu0.semihosting-cmd_line]
+default = "--kernel {KERNEL} --dtb {DTB} --initrd {INITRD} -- console=ttyAMA0,38400n8 root=/dev/mmcblk0p2 rootwait ro mem=1024M"
=== removed file 'lava_dispatcher/default-config/lava-dispatcher/device-types/rtsm_ve-a15x1-a7x1.conf'
@@ -1,117 +0,0 @@
-client_type=fastmodel
-
-# how long the disablesuspend script should take to complete
-# fm takes longer than other android images do
-disablesuspend_timeout = 500
-
-# how long ubuntu takes to boot to prompt
-boot_linaro_timeout = 800
-
-# if you do dhcp on boot, adb will not work (asac) on fastmodels
-enable_network_after_boot_android = 0
-
-# we do usermode networking over the loopback
-default_network_interface = lo
-
-bootloader_prompt = Start:
-
-interrupt_boot_prompt = The default boot selection will start in
-
-interrupt_boot_command = break
-
-# UEFI boot commands
-boot_cmds = sendline a,
- expect Choice:,
- sendline 1,
- expect Select the Boot Device:,
- sendline 2,
- expect File path of the EFI Application or the kernel:,
- sendline uImage,
- expect [a/g/l],
- sendline l,
- expect Add an initrd: [y/n],
- sendline y,
- expect File path of the initrd:,
- sendline uInitrd,
- expect Arguments to pass to the binary:,
- sendline 'console=ttyAMA0,38400n8 root=/dev/mmcblk0p2 rootwait ro mem=1024M',
- expect File path of the local FDT:,
- sendline rtsm\\rtsm_ve-ca15x1-ca7x1.dtb,
- expect Description for this new Entry:,
- sendline Test Image,
- expect Choice:,
- sendline 5,
- expect Start:,
- sendline 2
-
-simulator_axf_files =
- img.axf
- linux-system-ISW.axf
- linux-system-semi.axf
-
-simulator_kernel_files =
- uImage
- vmlinuz.*
-
-simulator_initrd_files =
- uInitrd
- initrd.*
-
-simulator_dtb = rtsm_ve-ca15x1-ca7x1.dtb
-simulator_uefi = uefi_rtsm_ve-ca15.bin
-
-license_file = 8224@localhost
-sim_bin = /opt/arm/RTSM_A15-A7x14_VE/Linux64_RTSM_VE_Cortex-A15x1-A7x1/RTSM_VE_Cortex-A15x1-A7x1
-android_adb_port = 6555
-
-simulator_version_command = %(sim_bin)s --version | grep "Fast Models" | sed 's/Fast Models \[//' | sed 's/\]//'
-
-simulator_boot_wrapper = -a coretile.cluster0.*={AXF}
-
-simulator_command = sudo -u www-data ARMLMD_LICENSE_FILE="%(license_file)s" %(sim_bin)s
-
-boot_options =
- motherboard.mmc.p_mmc_file
- motherboard.hostbridge.userNetPorts
- motherboard.smsc_91c111.enabled
- motherboard.hostbridge.userNetworking
- motherboard.flashloader0.fname
- motherboard.flashloader1.fname
- motherboard.flashloader1.fnameWrite
- coretile.cache_state_modelled
- coretile.cluster0.cpu0.semihosting-enable
- coretile.cluster0.cpu0.semihosting-cmd_line
-
-[motherboard.mmc.p_mmc_file]
-default = {IMG}
-
-[motherboard.hostbridge.userNetPorts]
-default="%(android_adb_port)s=%(android_adb_port)s"
-
-[motherboard.smsc_91c111.enabled]
-default = 1
-allowed = 0,1
-
-[motherboard.hostbridge.userNetworking]
-default = 1
-allowed = 0,1
-
-[motherboard.flashloader0.fname]
-default = {UEFI}
-
-[motherboard.flashloader1.fname]
-default = uefi-vars.fd
-
-[motherboard.flashloader1.fnameWrite]
-default = uefi-vars.fd
-
-[coretile.cache_state_modelled]
-default = 0
-allowed = 0,1
-
-[coretile.cluster0.cpu0.semihosting-enable]
-default = 1
-allowed = 0,1
-
-[coretile.cluster0.cpu0.semihosting-cmd_line]
-default = "--kernel {KERNEL} --dtb {DTB} --initrd {INITRD} -- console=ttyAMA0,38400n8 root=/dev/mmcblk0p2 rootwait ro mem=1024M"
=== removed file 'lava_dispatcher/default-config/lava-dispatcher/device-types/rtsm_ve-a15x4-a7x4.conf'
@@ -1,117 +0,0 @@
-client_type=fastmodel
-
-# how long the disablesuspend script should take to complete
-# fm takes longer than other android images do
-disablesuspend_timeout = 500
-
-# how long ubuntu takes to boot to prompt
-boot_linaro_timeout = 800
-
-# if you do dhcp on boot, adb will not work (asac) on fastmodels
-enable_network_after_boot_android = 0
-
-# we do usermode networking over the loopback
-default_network_interface = lo
-
-bootloader_prompt = Start:
-
-interrupt_boot_prompt = The default boot selection will start in
-
-interrupt_boot_command = break
-
-# UEFI boot commands
-boot_cmds = sendline a,
- expect Choice:,
- sendline 1,
- expect Select the Boot Device:,
- sendline 2,
- expect File path of the EFI Application or the kernel:,
- sendline uImage,
- expect [a/g/l],
- sendline l,
- expect Add an initrd: [y/n],
- sendline y,
- expect File path of the initrd:,
- sendline uInitrd,
- expect Arguments to pass to the binary:,
- sendline 'console=ttyAMA0,38400n8 root=/dev/mmcblk0p2 rootwait ro mem=1024M',
- expect File path of the local FDT:,
- sendline rtsm\\rtsm_ve-ca15x4-ca7x4.dtb,
- expect Description for this new Entry:,
- sendline Test Image,
- expect Choice:,
- sendline 5,
- expect Start:,
- sendline 2
-
-simulator_axf_files =
- img.axf
- linux-system-ISW.axf
- linux-system-semi.axf
-
-simulator_kernel_files =
- uImage
- vmlinuz.*
-
-simulator_initrd_files =
- uInitrd
- initrd.*
-
-simulator_dtb = rtsm_ve-ca15x4-ca7x4.dtb
-simulator_uefi = uefi_rtsm_ve-ca15.bin
-
-license_file = 8224@localhost
-sim_bin = /opt/arm/RTSM_A15-A7x14_VE/Linux64_RTSM_VE_Cortex-A15x4-A7x4/RTSM_VE_Cortex-A15x4-A7x4
-android_adb_port = 6555
-
-simulator_version_command = %(sim_bin)s --version | grep "Fast Models" | sed 's/Fast Models \[//' | sed 's/\]//'
-
-simulator_boot_wrapper = -a coretile.cluster0.*={AXF}
-
-simulator_command = sudo -u www-data ARMLMD_LICENSE_FILE="%(license_file)s" %(sim_bin)s
-
-boot_options =
- motherboard.mmc.p_mmc_file
- motherboard.hostbridge.userNetPorts
- motherboard.smsc_91c111.enabled
- motherboard.hostbridge.userNetworking
- motherboard.flashloader0.fname
- motherboard.flashloader1.fname
- motherboard.flashloader1.fnameWrite
- coretile.cache_state_modelled
- coretile.cluster0.cpu0.semihosting-enable
- coretile.cluster0.cpu0.semihosting-cmd_line
-
-[motherboard.mmc.p_mmc_file]
-default = {IMG}
-
-[motherboard.hostbridge.userNetPorts]
-default="%(android_adb_port)s=%(android_adb_port)s"
-
-[motherboard.smsc_91c111.enabled]
-default = 1
-allowed = 0,1
-
-[motherboard.hostbridge.userNetworking]
-default = 1
-allowed = 0,1
-
-[motherboard.flashloader0.fname]
-default = {UEFI}
-
-[motherboard.flashloader1.fname]
-default = uefi-vars.fd
-
-[motherboard.flashloader1.fnameWrite]
-default = uefi-vars.fd
-
-[coretile.cache_state_modelled]
-default = 0
-allowed = 0,1
-
-[coretile.cluster0.cpu0.semihosting-enable]
-default = 1
-allowed = 0,1
-
-[coretile.cluster0.cpu0.semihosting-cmd_line]
-default = "--kernel {KERNEL} --dtb {DTB} --initrd {INITRD} -- console=ttyAMA0,38400n8 root=/dev/mmcblk0p2 rootwait ro mem=1024M"
=== added file 'lava_dispatcher/default-config/lava-dispatcher/device-types/rtsm_ve-a15x4-a7x4.conf'
@@ -0,0 +1,117 @@
+client_type=fastmodel
+
+# how long the disablesuspend script should take to complete
+# fm takes longer than other android images do
+disablesuspend_timeout = 500
+
+# how long ubuntu takes to boot to prompt
+boot_linaro_timeout = 800
+
+# if you do dhcp on boot, adb will not work (asac) on fastmodels
+enable_network_after_boot_android = 0
+
+# we do usermode networking over the loopback
+default_network_interface = lo
+
+bootloader_prompt = Start:
+
+interrupt_boot_prompt = The default boot selection will start in
+
+interrupt_boot_command = break
+
+# UEFI boot commands
+boot_cmds = sendline a,
+ expect Choice:,
+ sendline 1,
+ expect Select the Boot Device:,
+ sendline 2,
+ expect File path of the EFI Application or the kernel:,
+ sendline uImage,
+ expect [a/g/l],
+ sendline l,
+ expect Add an initrd: [y/n],
+ sendline y,
+ expect File path of the initrd:,
+ sendline uInitrd,
+ expect Arguments to pass to the binary:,
+ sendline 'console=ttyAMA0,38400n8 root=/dev/mmcblk0p2 rootwait ro mem=1024M',
+ expect File path of the local FDT:,
+ sendline rtsm\\rtsm_ve-ca15x4-ca7x4.dtb,
+ expect Description for this new Entry:,
+ sendline Test Image,
+ expect Choice:,
+ sendline 5,
+ expect Start:,
+ sendline 2
+
+simulator_axf_files =
+ img.axf
+ linux-system-ISW.axf
+ linux-system-semi.axf
+
+simulator_kernel_files =
+ uImage
+ vmlinuz.*
+
+simulator_initrd_files =
+ uInitrd
+ initrd.*
+
+simulator_dtb = rtsm_ve-ca15x4-ca7x4.dtb
+simulator_uefi = uefi_rtsm_ve-ca15.bin
+
+license_file = 8224@localhost
+sim_bin = /opt/arm/RTSM_A15-A7x14_VE/Linux64_RTSM_VE_Cortex-A15x4-A7x4/RTSM_VE_Cortex-A15x4-A7x4
+android_adb_port = 6555
+
+simulator_version_command = %(sim_bin)s --version | grep "Fast Models" | sed 's/Fast Models \[//' | sed 's/\]//'
+
+simulator_boot_wrapper = -a coretile.cluster0.*={AXF}
+
+simulator_command = sudo -u www-data ARMLMD_LICENSE_FILE="%(license_file)s" %(sim_bin)s
+
+boot_options =
+ motherboard.mmc.p_mmc_file
+ motherboard.hostbridge.userNetPorts
+ motherboard.smsc_91c111.enabled
+ motherboard.hostbridge.userNetworking
+ motherboard.flashloader0.fname
+ motherboard.flashloader1.fname
+ motherboard.flashloader1.fnameWrite
+ coretile.cache_state_modelled
+ coretile.cluster0.cpu0.semihosting-enable
+ coretile.cluster0.cpu0.semihosting-cmd_line
+
+[motherboard.mmc.p_mmc_file]
+default = {IMG}
+
+[motherboard.hostbridge.userNetPorts]
+default="%(android_adb_port)s=%(android_adb_port)s"
+
+[motherboard.smsc_91c111.enabled]
+default = 1
+allowed = 0,1
+
+[motherboard.hostbridge.userNetworking]
+default = 1
+allowed = 0,1
+
+[motherboard.flashloader0.fname]
+default = {UEFI}
+
+[motherboard.flashloader1.fname]
+default = uefi-vars.fd
+
+[motherboard.flashloader1.fnameWrite]
+default = uefi-vars.fd
+
+[coretile.cache_state_modelled]
+default = 0
+allowed = 0,1
+
+[coretile.cluster0.cpu0.semihosting-enable]
+default = 1
+allowed = 0,1
+
+[coretile.cluster0.cpu0.semihosting-cmd_line]
+default = "--kernel {KERNEL} --dtb {DTB} --initrd {INITRD} -- console=ttyAMA0,38400n8 root=/dev/mmcblk0p2 rootwait ro mem=1024M"
=== removed file 'lava_dispatcher/default-config/lava-dispatcher/device-types/rtsm_ve-armv8.conf'
@@ -1,128 +0,0 @@
-client_type=fastmodel
-
-# how long the disablesuspend script should take to complete
-# fm takes longer than other android images do
-disablesuspend_timeout = 500
-
-# how long ubuntu takes to boot to prompt
-boot_linaro_timeout = 500
-
-#after enabled the network, we can set it to true
-enable_network_after_boot_android = 1
-
-# change to use eth0 after we enabled the network
-default_network_interface = eth0
-
-bootloader_prompt = Start:
-
-interrupt_boot_prompt = The default boot selection will start in
-
-interrupt_boot_command = break
-
-# UEFI boot commands
-boot_cmds = sendline a,
- expect Choice:,
- sendline 1,
- expect Select the Boot Device:,
- sendline 2,
- expect File path of the EFI Application or the kernel:,
- sendline uImage,
- expect [a/g/l],
- sendline l,
- expect Add an initrd: [y/n],
- sendline y,
- expect File path of the initrd:,
- sendline uInitrd,
- expect Arguments to pass to the binary:,
- sendline 'console=ttyAMA0,38400n8 root=/dev/mmcblk0p2 rootwait ro mem=1024M',
- expect File path of the local FDT:,
- sendline rtsm\\rtsm_ve-ca15x1-ca7x1.dtb,
- expect Description for this new Entry:,
- sendline Test Image,
- expect Choice:,
- sendline 5,
- expect Start:,
- sendline 2
-
-simulator_axf_files = linux-system.axf
-
-license_file = 8224@localhost
-sim_bin = /opt/arm/RTSMv8_VE/bin/model_shell64
-sim_model = /opt/arm/RTSMv8_VE/models/Linux64_GCC-4.1/RTSM_VE_AEMv8A.so
-android_adb_port = 5555
-interfaceName = armv8_01
-
-simulator_version_command = %(sim_bin)s --version | grep "Model Shell" | sed 's/Model Shell //'
-
-simulator_boot_wrapper = -a {AXF}
-
-simulator_command = sudo -u www-data ARMLMD_LICENSE_FILE="%(license_file)s" %(sim_bin)s %(sim_model)s
-
-boot_options =
- motherboard.mmc.p_mmc_file
- motherboard.smsc_91c111.enabled
- cluster.NUM_CORES
- cluster.cpu0.unpredictable_WPMASKANDBAS
- cluster.cpu0.unpredictable_non-contigous_BAS
- cluster.cpu1.unpredictable_WPMASKANDBAS
- cluster.cpu1.unpredictable_non-contigous_BAS
- cluster.cpu2.unpredictable_WPMASKANDBAS
- cluster.cpu2.unpredictable_non-contigous_BAS
- cluster.cpu3.unpredictable_WPMASKANDBAS
- cluster.cpu3.unpredictable_non-contigous_BAS
- cluster.take_ccfail_undef
- motherboard.hostbridge.interfaceName
- motherboard.smsc_91c111.mac_address
-
-[motherboard.smsc_91c111.mac_address]
-default="auto"
-
-[motherboard.hostbridge.interfaceName]
-default="%(interfaceName)s"
-
-[motherboard.mmc.p_mmc_file]
-default = {IMG}
-
-[motherboard.smsc_91c111.enabled]
-default = 1
-allowed = 0,1
-
-[cluster.NUM_CORES]
-default = 1
-allowed = 0,1
-
-[cluster.cpu0.unpredictable_WPMASKANDBAS]
-default = 0
-allowed = 0,1
-
-[cluster.cpu0.unpredictable_non-contigous_BAS]
-default = 0
-allowed = 0,1
-
-[cluster.cpu1.unpredictable_WPMASKANDBAS]
-default = 0
-allowed = 0,1
-
-[cluster.cpu1.unpredictable_non-contigous_BAS]
-default = 0
-allowed = 0,1
-
-[cluster.cpu2.unpredictable_WPMASKANDBAS]
-default = 0
-allowed = 0,1
-
-[cluster.cpu2.unpredictable_non-contigous_BAS]
-default = 0
-allowed = 0,1
-
-[cluster.cpu3.unpredictable_WPMASKANDBAS]
-default = 0
-allowed = 0,1
-
-[cluster.cpu3.unpredictable_non-contigous_BAS]
-default = 0
-allowed = 0,1
-
-[cluster.take_ccfail_undef]
-default = 0
-allowed = 0,1
=== added file 'lava_dispatcher/default-config/lava-dispatcher/device-types/rtsm_ve-armv8.conf'
@@ -0,0 +1,128 @@
+client_type=fastmodel
+
+# how long the disablesuspend script should take to complete
+# fm takes longer than other android images do
+disablesuspend_timeout = 500
+
+# how long ubuntu takes to boot to prompt
+boot_linaro_timeout = 500
+
+# once the network is enabled, this can be set to 1
+enable_network_after_boot_android = 1
+
+# changed to use eth0 now that the network is enabled
+default_network_interface = eth0
+
+bootloader_prompt = Start:
+
+interrupt_boot_prompt = The default boot selection will start in
+
+interrupt_boot_command = break
+
+# UEFI boot commands
+boot_cmds = sendline a,
+ expect Choice:,
+ sendline 1,
+ expect Select the Boot Device:,
+ sendline 2,
+ expect File path of the EFI Application or the kernel:,
+ sendline uImage,
+ expect [a/g/l],
+ sendline l,
+ expect Add an initrd: [y/n],
+ sendline y,
+ expect File path of the initrd:,
+ sendline uInitrd,
+ expect Arguments to pass to the binary:,
+ sendline 'console=ttyAMA0,38400n8 root=/dev/mmcblk0p2 rootwait ro mem=1024M',
+ expect File path of the local FDT:,
+ sendline rtsm\\rtsm_ve-ca15x1-ca7x1.dtb,
+ expect Description for this new Entry:,
+ sendline Test Image,
+ expect Choice:,
+ sendline 5,
+ expect Start:,
+ sendline 2
+
+simulator_axf_files = linux-system.axf
+
+license_file = 8224@localhost
+sim_bin = /opt/arm/RTSMv8_VE/bin/model_shell64
+sim_model = /opt/arm/RTSMv8_VE/models/Linux64_GCC-4.1/RTSM_VE_AEMv8A.so
+android_adb_port = 5555
+interfaceName = armv8_01
+
+simulator_version_command = %(sim_bin)s --version | grep "Model Shell" | sed 's/Model Shell //'
+
+simulator_boot_wrapper = -a {AXF}
+
+simulator_command = sudo -u www-data ARMLMD_LICENSE_FILE="%(license_file)s" %(sim_bin)s %(sim_model)s
+
+boot_options =
+ motherboard.mmc.p_mmc_file
+ motherboard.smsc_91c111.enabled
+ cluster.NUM_CORES
+ cluster.cpu0.unpredictable_WPMASKANDBAS
+ cluster.cpu0.unpredictable_non-contigous_BAS
+ cluster.cpu1.unpredictable_WPMASKANDBAS
+ cluster.cpu1.unpredictable_non-contigous_BAS
+ cluster.cpu2.unpredictable_WPMASKANDBAS
+ cluster.cpu2.unpredictable_non-contigous_BAS
+ cluster.cpu3.unpredictable_WPMASKANDBAS
+ cluster.cpu3.unpredictable_non-contigous_BAS
+ cluster.take_ccfail_undef
+ motherboard.hostbridge.interfaceName
+ motherboard.smsc_91c111.mac_address
+
+[motherboard.smsc_91c111.mac_address]
+default="auto"
+
+[motherboard.hostbridge.interfaceName]
+default="%(interfaceName)s"
+
+[motherboard.mmc.p_mmc_file]
+default = {IMG}
+
+[motherboard.smsc_91c111.enabled]
+default = 1
+allowed = 0,1
+
+[cluster.NUM_CORES]
+default = 1
+allowed = 0,1
+
+[cluster.cpu0.unpredictable_WPMASKANDBAS]
+default = 0
+allowed = 0,1
+
+[cluster.cpu0.unpredictable_non-contigous_BAS]
+default = 0
+allowed = 0,1
+
+[cluster.cpu1.unpredictable_WPMASKANDBAS]
+default = 0
+allowed = 0,1
+
+[cluster.cpu1.unpredictable_non-contigous_BAS]
+default = 0
+allowed = 0,1
+
+[cluster.cpu2.unpredictable_WPMASKANDBAS]
+default = 0
+allowed = 0,1
+
+[cluster.cpu2.unpredictable_non-contigous_BAS]
+default = 0
+allowed = 0,1
+
+[cluster.cpu3.unpredictable_WPMASKANDBAS]
+default = 0
+allowed = 0,1
+
+[cluster.cpu3.unpredictable_non-contigous_BAS]
+default = 0
+allowed = 0,1
+
+[cluster.take_ccfail_undef]
+default = 0
+allowed = 0,1
=== modified file 'lava_dispatcher/device/master.py'
@@ -245,10 +245,10 @@
def _format_testpartition(self, runner, fstype):
logging.info("Format testboot and testrootfs partitions")
runner.run('umount /dev/disk/by-label/testrootfs', failok=True)
- runner.run('mkfs -t %s -q /dev/disk/by-label/testrootfs -L testrootfs'
+ runner.run('nice mkfs -t %s -q /dev/disk/by-label/testrootfs -L testrootfs'
% fstype, timeout=1800)
runner.run('umount /dev/disk/by-label/testboot', failok=True)
- runner.run('mkfs.vfat /dev/disk/by-label/testboot -n testboot')
+ runner.run('nice mkfs.vfat /dev/disk/by-label/testboot -n testboot')
def _generate_tarballs(self, image_file):
self._customize_linux(image_file)
@@ -335,7 +335,7 @@
parent_dir, target_name = os.path.split(targetdir)
- runner.run('tar -czf /tmp/fs.tgz -C %s %s' %
+ runner.run('nice tar -czf /tmp/fs.tgz -C %s %s' %
(parent_dir, target_name))
runner.run('cd /tmp') # need to be in same dir as fs.tgz
self.proc.sendline('python -m SimpleHTTPServer 0 2>/dev/null')
@@ -355,7 +355,7 @@
tfdir = os.path.join(self.scratch_dir, str(time.time()))
try:
os.mkdir(tfdir)
- self.context.run_command('tar -C %s -xzf %s' % (tfdir, tf))
+ self.context.run_command('nice tar -C %s -xzf %s' % (tfdir, tf))
yield os.path.join(tfdir, target_name)
finally:
@@ -387,7 +387,7 @@
runner.run('umount /mnt')
def _wait_for_master_boot(self):
- self.proc.expect(self.config.image_boot_msg, timeout=300)
+ self.proc.expect(self.config.image_boot_msg, timeout=30)
self._wait_for_prompt(self.proc, self.config.master_str, timeout=300)
def boot_master_image(self):
@@ -665,9 +665,9 @@
session.run('mv /mnt/lava/boot/uInitrd ~/tmp')
session.run('cd ~/tmp/')
- session.run('dd if=uInitrd of=uInitrd.data ibs=64 skip=1')
+ session.run('nice dd if=uInitrd of=uInitrd.data ibs=64 skip=1')
session.run('mv uInitrd.data ramdisk.cpio.gz')
- session.run('gzip -d -f ramdisk.cpio.gz; cpio -i -F ramdisk.cpio')
+ session.run('nice gzip -d -f ramdisk.cpio.gz; cpio -i -F ramdisk.cpio')
session.run(
'sed -i "/export PATH/a \ \ \ \ export PS1 \'%s\'" init.rc' %
@@ -684,11 +684,11 @@
_update_uInitrd_partitions(session, f)
session.run("cat %s" % f, failok=True)
- session.run('cpio -i -t -F ramdisk.cpio | cpio -o -H newc | \
+ session.run('nice cpio -i -t -F ramdisk.cpio | cpio -o -H newc | \
gzip > ramdisk_new.cpio.gz')
session.run(
- 'mkimage -A arm -O linux -T ramdisk -n "Android Ramdisk Image" \
+ 'nice mkimage -A arm -O linux -T ramdisk -n "Android Ramdisk Image" \
-d ramdisk_new.cpio.gz uInitrd')
session.run('cd -')
@@ -745,7 +745,7 @@
def _purge_linaro_android_sdcard(session):
logging.info("Reformatting Linaro Android sdcard filesystem")
- session.run('mkfs.vfat /dev/disk/by-label/sdcard -n sdcard')
+ session.run('nice mkfs.vfat /dev/disk/by-label/sdcard -n sdcard')
session.run('udevadm trigger')
@@ -760,7 +760,7 @@
def _deploy_linaro_android_data(session, datatbz2):
data_label = _android_data_label(session)
session.run('umount /dev/disk/by-label/%s' % data_label, failok=True)
- session.run('mkfs.ext4 -q /dev/disk/by-label/%s -L %s' %
+ session.run('nice mkfs.ext4 -q /dev/disk/by-label/%s -L %s' %
(data_label, data_label))
session.run('udevadm trigger')
session.run('mkdir -p /mnt/lava/data')
=== modified file 'lava_dispatcher/downloader.py'
@@ -41,7 +41,7 @@
process = None
try:
process = subprocess.Popen(
- ['ssh', url.netloc, 'cat', url.path],
+ ['nice', 'ssh', url.netloc, 'cat', url.path],
shell=False,
stdout=subprocess.PIPE
)
=== modified file 'lava_dispatcher/job.py'
@@ -23,7 +23,8 @@
import pexpect
import time
import traceback
-
+import hashlib
+import simplejson
from json_schema_validator.schema import Schema
from json_schema_validator.validator import Validator
@@ -64,6 +65,34 @@
'type': 'string',
'optional': True,
},
+ 'device_group': {
+ 'type': 'array',
+ 'additionalProperties': False,
+ 'optional': True,
+ 'items': {
+ 'type': 'object',
+ 'properties': {
+ 'role': {
+ 'optional': False,
+ 'type': 'string',
+ },
+ 'count': {
+ 'optional': False,
+ 'type': 'integer',
+ },
+ 'device_type': {
+ 'optional': False,
+ 'type': 'string',
+ },
+ 'tags': {
+ 'type': 'array',
+ 'uniqueItems': True,
+ 'items': {'type': 'string'},
+ 'optional': True,
+ },
+ },
+ },
+ },
'job_name': {
'type': 'string',
'optional': True,
@@ -76,6 +105,26 @@
'type': 'string',
'optional': True,
},
+ 'target_group': {
+ 'type': 'string',
+ 'optional': True,
+ },
+ 'port': {
+ 'type': 'integer',
+ 'optional': True,
+ },
+ 'hostname': {
+ 'type': 'string',
+ 'optional': True,
+ },
+ 'role': {
+ 'type': 'string',
+ 'optional': True,
+ },
+ 'group_size': {
+ 'type': 'integer',
+ 'optional': True,
+ },
'timeout': {
'type': 'integer',
'optional': False,
@@ -136,7 +185,9 @@
except:
return None
- def run(self):
+ def run(self, transport=None, group_data=None):
+ self.context.assign_transport(transport)
+ self.context.assign_group_data(group_data)
validate_job_data(self.job_data)
self._set_logging_level()
lava_commands = get_all_cmds()
@@ -157,6 +208,31 @@
self.context.test_data.add_tags(self.tags)
+ if 'target' in self.job_data:
+ metadata['target'] = self.job_data['target']
+ self.context.test_data.add_metadata(metadata)
+
+ if 'logging_level' in self.job_data:
+ metadata['logging_level'] = self.job_data['logging_level']
+ self.context.test_data.add_metadata(metadata)
+
+ if 'target_group' in self.job_data:
+ metadata['target_group'] = self.job_data['target_group']
+ self.context.test_data.add_metadata(metadata)
+
+ if 'role' in self.job_data:
+ metadata['role'] = self.job_data['role']
+ self.context.test_data.add_metadata(metadata)
+
+ if 'group_size' in self.job_data:
+ metadata['group_size'] = self.job_data['group_size']
+ self.context.test_data.add_metadata(metadata)
+
+ logging.info("[ACTION-B] Multi Node test!")
+ logging.info("[ACTION-B] target_group is (%s)." % self.context.test_data.metadata['target_group'])
+ else:
+ logging.info("[ACTION-B] Single node test!")
+
try:
job_length = len(self.job_data['actions'])
job_num = 0
@@ -177,6 +253,7 @@
status = 'fail'
action.run(**params)
except ADBConnectError as err:
+ logging.info("ADBConnectError")
if cmd.get('command') == 'boot_linaro_android_image':
logging.warning(('[ACTION-E] %s failed to create the'
' adb connection') % (cmd['command']))
@@ -195,6 +272,7 @@
## mark it as pass if the second boot works
status = 'pass'
except TimeoutError as err:
+ logging.info("TimeoutError")
if cmd.get('command').startswith('lava_android_test'):
logging.warning("[ACTION-E] %s times out." %
(cmd['command']))
@@ -214,15 +292,23 @@
self.context.client.proc.sendline("")
time.sleep(5)
self.context.client.boot_linaro_android_image()
+ else:
+ logging.warn("Unhandled timeout condition")
+ continue
except CriticalError as err:
+ logging.info("CriticalError")
raise
except (pexpect.TIMEOUT, GeneralError) as err:
+ logging.warn("pexpect timed out, pass with status %s" % status)
pass
except Exception as err:
+ logging.info("General Exception")
raise
else:
+ logging.info("setting status pass")
status = 'pass'
finally:
+ logging.info("finally status %s" % status)
err_msg = ""
if status == 'fail':
# XXX mwhudson, 2013-01-17: I have no idea what this
@@ -255,7 +341,10 @@
self.context.test_data.add_metadata({
'target.device_version': device_version
})
- if submit_results:
+ if 'target_group' in self.job_data:
+ # all nodes call aggregate, even if there is no submit_results command
+ self._aggregate_bundle(transport, lava_commands, submit_results)
+ elif submit_results:
params = submit_results.get('parameters', {})
action = lava_commands[submit_results['command']](
self.context)
@@ -270,6 +359,57 @@
raise
self.context.finish()
+ def _aggregate_bundle(self, transport, lava_commands, submit_results):
+ if "sub_id" not in self.job_data:
+ raise ValueError("Invalid MultiNode JSON - missing sub_id")
+ # all nodes call aggregate, even if there is no submit_results command
+ base_msg = {
+ "request": "aggregate",
+ "bundle": None,
+ "sub_id": self.job_data['sub_id']
+ }
+ if not submit_results:
+ transport(json.dumps(base_msg))
+ return
+ # need to collate this bundle before submission, then send to the coordinator.
+ params = submit_results.get('parameters', {})
+ action = lava_commands[submit_results['command']](self.context)
+ token = None
+ group_name = self.job_data['target_group']
+ if 'token' in params:
+ token = params['token']
+ # the transport layer knows the client_name for this bundle.
+ bundle = action.collect_bundles(**params)
+ # catch parse errors in bundles
+ try:
+ bundle_str = simplejson.dumps(bundle)
+ except Exception as e:
+ logging.error("Unable to parse bundle '%s' - %s" % (bundle, e))
+ transport(json.dumps(base_msg))
+ return
+ sha1 = hashlib.sha1()
+ sha1.update(bundle_str)
+ base_msg['bundle'] = sha1.hexdigest()
+ reply = transport(json.dumps(base_msg))
+ # if this is sub_id zero, this will wait until the last call to aggregate
+ # and then the reply is the full list of bundle checksums.
+ if reply == "ack":
+ # coordinator has our checksum for this bundle, submit as pending to launch_control
+ action.submit_pending(bundle, params['server'], params['stream'], token, group_name)
+ logging.info("Result bundle %s has been submitted to Dashboard as pending." % base_msg['bundle'])
+ return
+ elif reply == "nack":
+ logging.error("Unable to submit result bundle checksum to coordinator")
+ return
+ else:
+ if self.job_data["sub_id"].endswith(".0"):
+ # submit this bundle, add it to the pending list which is indexed by group_name and post the set
+ logging.info("Submitting bundle '%s' and aggregating with pending group results." % base_msg['bundle'])
+ action.submit_group_list(bundle, params['server'], params['stream'], token, group_name)
+ return
+ else:
+ raise ValueError("API error - collated bundle has been sent to the wrong node.")
+
def _set_logging_level(self):
# set logging level is optional
level = self.logging_level
=== modified file 'lava_dispatcher/signals/__init__.py'
@@ -21,6 +21,7 @@
import contextlib
import logging
import tempfile
+import json
from lava_dispatcher.utils import rmtree
@@ -123,13 +124,25 @@
pass
+class FailedCall(Exception):
+ """
+ Just need a plain Exception to trigger the failure of the
+ signal handler and set keep_running to False.
+ """
+
+ def __init__(self, call):
+ Exception.__init__(self, "%s call failed" % call)
+
+
class SignalDirector(object):
- def __init__(self, client, testdefs_by_uuid):
+ def __init__(self, client, testdefs_by_uuid, context):
self.client = client
self.testdefs_by_uuid = testdefs_by_uuid
self._test_run_data = []
self._cur_handler = None
+ self.context = context
+ self.connection = None
def signal(self, name, params):
handler = getattr(self, '_on_' + name, None)
@@ -141,6 +154,11 @@
handler(*params)
except:
logging.exception("handling signal %s failed", name)
+ return False
+ return True
+
+ def set_connection(self, connection):
+ self.connection = connection
def _on_STARTRUN(self, test_run_id, uuid):
self._cur_handler = None
@@ -162,6 +180,75 @@
if self._cur_handler:
self._cur_handler.endtc(test_case_id)
+ def _on_SEND(self, *args):
+ arg_length = len(args)
+ if arg_length == 1:
+ msg = {"request": "lava_send", "messageID": args[0], "message": None}
+ else:
+ message_id = args[0]
+ remainder = args[1:arg_length]
+ logging.debug("%d key value pair(s) to be sent." % int(len(remainder)))
+ data = {}
+ for message in remainder:
+ detail = str.split(message, "=")
+ if len(detail) == 2:
+ data[detail[0]] = detail[1]
+ msg = {"request": "lava_send", "messageID": message_id, "message": data}
+ logging.debug("Handling signal <LAVA_SEND %s>" % msg)
+ reply = self.context.transport(json.dumps(msg))
+ if reply == "nack":
+ raise FailedCall("LAVA_SEND nack")
+
+ def _on_SYNC(self, message_id):
+ if not self.connection:
+ logging.error("No connection available for on_SYNC")
+ return
+ logging.debug("Handling signal <LAVA_SYNC %s>" % message_id)
+ msg = {"request": "lava_sync", "messageID": message_id, "message": None}
+ reply = self.context.transport(json.dumps(msg))
+ message_str = ""
+ if reply == "nack":
+ message_str = " nack"
+ else:
+ message_str = ""
+ ret = self.connection.sendline("<LAVA_SYNC_COMPLETE%s>" % message_str)
+ logging.debug("runner._connection.sendline wrote %d bytes" % ret)
+
+ def _on_WAIT(self, message_id):
+ if not self.connection:
+ logging.error("No connection available for on_WAIT")
+ return
+ logging.debug("Handling signal <LAVA_WAIT %s>" % message_id)
+ msg = {"request": "lava_wait", "messageID": message_id, "message": None}
+ reply = self.context.transport(json.dumps(msg))
+ message_str = ""
+ if reply == "nack":
+ message_str = " nack"
+ else:
+ for target, messages in reply.items():
+ for key, value in messages.items():
+ message_str += " %s:%s=%s" % (target, key, value)
+ self.connection.sendline("<LAVA_WAIT_COMPLETE%s>" % message_str)
+
+ def _on_WAIT_ALL(self, message_id, role=None):
+ if not self.connection:
+ logging.error("No connection available for on_WAIT_ALL")
+ return
+ logging.debug("Handling signal <LAVA_WAIT_ALL %s>" % message_id)
+ msg = {"request": "lava_wait_all", "messageID": message_id, "role": role}
+ reply = self.context.transport(json.dumps(msg))
+ message_str = ""
+ if reply == "nack":
+ message_str = " nack"
+ else:
+            # the reply format is:
+            # "{target: {key1: value1, key2: value2, key3: value3},
+            #  target2: {key1: value1, key2: value2, key3: value3}}"
+ for target, messages in reply.items():
+ for key, value in messages.items():
+ message_str += " %s:%s=%s" % (target, key, value)
+ self.connection.sendline("<LAVA_WAIT_ALL_COMPLETE%s>" % message_str)
+
def postprocess_bundle(self, bundle):
for test_run in bundle['test_runs']:
uuid = test_run['analyzer_assigned_uuid']
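The `_on_SEND` handler added above accepts a message ID plus optional `key=value` arguments; only well-formed pairs become message data. A small re-expression of that parsing logic (sketch only; the real handler logs, posts via `self.context.transport`, and raises `FailedCall` on a nack):

```python
def build_send_message(message_id, *args):
    """Construct a lava_send request the way _on_SEND does: tokens
    containing exactly one '=' become message data, anything else is
    silently dropped; with no extra args the message payload is None."""
    if not args:
        return {"request": "lava_send", "messageID": message_id,
                "message": None}
    data = {}
    for message in args:
        detail = message.split("=")
        if len(detail) == 2:
            data[detail[0]] = detail[1]
    return {"request": "lava_send", "messageID": message_id, "message": data}
```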
=== added directory 'lava_dispatcher/tests/test-config/bin'
=== removed directory 'lava_dispatcher/tests/test-config/bin'
=== removed file 'lava_dispatcher/tests/test-config/bin/fake-qemu'
@@ -1,3 +0,0 @@
-#!/bin/sh
-
-echo 'QEMU emulator version 1.5.0 (Debian 1.5.0+dfsg-4), Copyright (c) 2003-2008 Fabrice Bellard'
=== added file 'lava_dispatcher/tests/test-config/bin/fake-qemu'
@@ -0,0 +1,3 @@
+#!/bin/sh
+
+echo 'QEMU emulator version 1.5.0 (Debian 1.5.0+dfsg-4), Copyright (c) 2003-2008 Fabrice Bellard'
=== modified file 'lava_dispatcher/tests/test_device_version.py'
@@ -18,7 +18,6 @@
# along with this program; if not, see <http://www.gnu.org/licenses>.
import re
-import lava_dispatcher.config
from lava_dispatcher.tests.helper import LavaDispatcherTestCase, create_device_config, create_config
import os
@@ -28,6 +27,7 @@
from lava_dispatcher.context import LavaContext
from lava_dispatcher.config import get_config
+
def _create_fastmodel_target():
config = create_device_config('fastmodel01', {'device_type': 'fastmodel',
'simulator_binary': '/path/to/fastmodel',
@@ -57,6 +57,6 @@
def test_qemu(self):
fake_qemu = os.path.join(os.path.dirname(__file__), 'test-config', 'bin', 'fake-qemu')
- target = _create_qemu_target({ 'qemu_binary': fake_qemu })
+ target = _create_qemu_target({'qemu_binary': fake_qemu})
device_version = target.get_device_version()
assert(re.search('^[0-9.]+', device_version))
=== modified file 'lava_dispatcher/utils.py'
@@ -78,7 +78,7 @@
"""
cmd = 'tar -C %s -czf %s %s' % (rootdir, tfname, basedir)
if asroot:
- cmd = 'sudo %s' % cmd
+ cmd = 'nice sudo %s' % cmd
if logging_system(cmd):
raise CriticalError('Unable to make tarball of: %s' % rootdir)
@@ -99,7 +99,7 @@
a list of all the files (full path). This is being used to get around
issues that python's tarfile seems to have with unicode
"""
- if logging_system('tar -C %s -xzf %s' % (tmpdir, tfname)):
+ if logging_system('nice tar -C %s -xzf %s' % (tmpdir, tfname)):
raise CriticalError('Unable to extract tarball: %s' % tfname)
return _list_files(tmpdir)
=== added directory 'lava_test_shell/multi_node'
=== added file 'lava_test_shell/multi_node/lava-group'
@@ -0,0 +1,19 @@
+#!/bin/sh
+#
+# This file is for MultiNode tests
+#
+#This command will produce in its standard output a representation of the
+#device group that is participating in the multi-node test job.
+#
+#Usage: ``lava-group``
+#
+#The output format contains one line per device, and each line contains
+#the hostname and the role that device is playing in the test, separated
+#by a TAB character::
+#
+# panda01 client
+# highbank01 loadbalancer
+# highbank02 backend
+# highbank03 backend
+
+# quote to avoid word splitting; printf still interprets \t and \n escapes
+printf "${LAVA_GROUP}"
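The TAB-separated `lava-group` output described above is easy to consume from a test helper. A hedged Python sketch (the function name and role grouping are illustrative, not part of the API):

```python
def parse_group(output):
    """Parse lava-group output (one 'hostname<TAB>role' line per device)
    into a dict mapping role -> list of hostnames."""
    roles = {}
    for line in output.strip().splitlines():
        hostname, role = line.split("\t")
        roles.setdefault(role, []).append(hostname)
    return roles
```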
=== added file 'lava_test_shell/multi_node/lava-multi-node.lib'
@@ -0,0 +1,210 @@
+#!/bin/sh
+#
+# This file is for MultiNode tests
+#
+
+MESSAGE_PREFIX="<LAVA_MULTI_NODE>"
+MESSAGE_COMMAND="<${LAVA_MULTI_NODE_API}"
+MESSAGE_HEAD="$MESSAGE_PREFIX $MESSAGE_COMMAND"
+#MESSAGE_ID="<$1>"
+MESSAGE_ACK="<${LAVA_MULTI_NODE_API}_ACK>"
+
+MESSAGE_REPLY="<${LAVA_MULTI_NODE_API}_COMPLETE"
+MESSAGE_REPLY_ACK="<${LAVA_MULTI_NODE_API}_COMPLETE_ACK>"
+
+LAVA_MULTI_NODE_EXIT_ERROR=1
+
+_get_key_value_pattern () {
+ echo $@|\
+ tr ' ' '\n' |\
+ sed -n '/\b\w\w*[=]\w\w*\b/p'|\
+ tr '\n' ' '
+}
+
+_lava_multi_node_debug () {
+
+if [ -n "$LAVA_MULTI_NODE_DEBUG" ] ; then
+ echo "${MESSAGE_COMMAND}_DEBUG $@ $(date)>"
+fi
+
+}
+
+_lava_multi_node_send () {
+
+_lava_multi_node_debug "$FUNCNAME started"
+
+result=$(echo $1 | grep "..*=..*")
+
+if [ -n "$1" -a "${result}x" = "x" ] ; then
+ echo ${MESSAGE_HEAD} $@">"
+else
+ _lava_multi_node_debug "$FUNCNAME error messageID : " "$result"
+ exit $LAVA_MULTI_NODE_EXIT_ERROR
+fi
+
+_lava_multi_node_debug "$FUNCNAME finished"
+
+}
+
+_lava_multi_node_process_message () {
+
+_lava_multi_node_debug "$FUNCNAME save message to $LAVA_MULTI_NODE_CACHE"
+#clean old cache file
+rm $LAVA_MULTI_NODE_CACHE 2>/dev/null
+
+until [ -z "$1" ] ; do
+ result=$(echo $1 | grep "..*=..*")
+ if [ "${result}x" != "x" ] ; then
+ echo $1 >> $LAVA_MULTI_NODE_CACHE
+ elif [ "${1}x" = "nackx" ] ; then
+		echo "Error: received $1 from coordinator, exiting $LAVA_MULTI_NODE_API!"
+ exit $LAVA_MULTI_NODE_EXIT_ERROR
+ else
+		echo "Warning: unrecognized message $1"
+ fi
+ shift
+done
+}
+
+lava_multi_node_send () {
+
+_lava_multi_node_debug "$FUNCNAME preparing"
+
+_lava_multi_node_send $@
+
+while [ -n "$MESSAGE_NEED_ACK" -a "${SHELL}x" = "/bin/bashx" ] ; do
+_lava_multi_node_debug "$FUNCNAME waiting for ack"
+ read -t $MESSAGE_TIMEOUT line
+ result=$(echo $line | grep "${MESSAGE_ACK}")
+ if [ "${result}x" != "x" ] ; then
+# echo ${MESSAGE_ACK}
+ break
+ fi
+ _lava_multi_node_send $@
+done
+
+_lava_multi_node_debug "$FUNCNAME finished"
+
+}
+
+lava_multi_node_wait_for_signal () {
+
+_lava_multi_node_debug "$FUNCNAME starting to wait"
+
+while read line; do
+ result=$(echo $line | grep "${MESSAGE_REPLY}>")
+ if [ "${result}x" != "x" ] ; then
+ if [ -n "$MESSAGE_NEED_ACK" ] ; then
+ echo ${MESSAGE_REPLY_ACK}
+ fi
+ break
+ fi
+done
+
+_lava_multi_node_debug "$FUNCNAME waiting over"
+
+}
+
+lava_multi_node_wait_for_message () {
+
+_lava_multi_node_debug "$FUNCNAME starting to wait"
+
+if [ -n "$1" ] ; then
+ export LAVA_MULTI_NODE_CACHE=$1
+fi
+
+while read line; do
+ result=$(echo $line | grep "${MESSAGE_REPLY}")
+ if [ "${result}x" != "x" ] ; then
+ line=${line##*${MESSAGE_REPLY}}
+ _lava_multi_node_process_message ${line%%>*}
+ if [ -n "$MESSAGE_NEED_ACK" ] ; then
+ echo ${MESSAGE_REPLY_ACK}
+ fi
+ break
+ fi
+done
+
+_lava_multi_node_debug "$FUNCNAME waiting over"
+
+}
+
+lava_multi_node_get_network_info () {
+
+_NETWORK_INTERFACE=$1
+_RAW_STREAM_V4=`ifconfig $_NETWORK_INTERFACE |grep "inet "`
+_RAW_STREAM_V6=`ifconfig $_NETWORK_INTERFACE |grep "inet6 "`
+_RAW_STREAM_MAC=`ifconfig $_NETWORK_INTERFACE |grep "ether "`
+
+_IPV4_STREAM_IP=`echo $_RAW_STREAM_V4 | cut -f2 -d" "`
+_IPV4_STREAM_NM=`echo $_RAW_STREAM_V4 | cut -f4 -d" "`
+_IPV4_STREAM_BC=`echo $_RAW_STREAM_V4 | cut -f6 -d" "`
+_IPV4_STREAM="ipv4="$_IPV4_STREAM_IP" netmask="$_IPV4_STREAM_NM" \
+broadcast="$_IPV4_STREAM_BC
+
+_IPV6_STREAM_IP=`echo $_RAW_STREAM_V6 | cut -f2 -d" "`
+_IPV6_STREAM="ipv6="$_IPV6_STREAM_IP
+
+_MAC_STREAM="mac="`echo $_RAW_STREAM_MAC | cut -f2 -d" "`
+
+_HOSTNAME_STREAM="hostname="`hostname`
+
+_HOSTNAME_FULL_STREAM="hostname-full="`hostname -f`
+
+_DEF_GATEWAY_STREAM="default-gateway="`route -n |grep "UG "| cut -f10 -d" "`
+
+#get DNS configure
+_Counter=1
+for line in `cat /etc/resolv.conf | grep "nameserver"| cut -d " " -f 2` ; do
+ export _DNS_${_Counter}_STREAM=$line
+ _Counter=`expr ${_Counter} + 1`
+done
+_DNS_STREAM="dns_1=${_DNS_1_STREAM} dns_2=${_DNS_2_STREAM} \
+dns_3=${_DNS_3_STREAM}"
+
+_get_key_value_pattern $_IPV4_STREAM $_IPV6_STREAM $_MAC_STREAM \
+$_HOSTNAME_STREAM $_HOSTNAME_FULL_STREAM $_DEF_GATEWAY_STREAM $_DNS_STREAM
+
+}
+
+lava_multi_node_check_cache () {
+
+if [ -n "$1" ] ; then
+ export LAVA_MULTI_NODE_CACHE=$1
+fi
+
+if [ ! -f $LAVA_MULTI_NODE_CACHE ] ; then
+	_lava_multi_node_debug "$FUNCNAME: cache file $LAVA_MULTI_NODE_CACHE not found!"
+ exit $LAVA_MULTI_NODE_EXIT_ERROR
+fi
+
+}
+
+lava_multi_node_print_host_info () {
+
+_HOSTNAME=$1
+_INFO=$2
+_RAW_STREAM=`cat $LAVA_MULTI_NODE_NETWORK_CACHE |grep "$_HOSTNAME:$_INFO="`
+
+if [ -n "$_RAW_STREAM" ] ; then
+ echo $_RAW_STREAM|cut -d'=' -f2
+fi
+
+}
+
+lava_multi_node_make_hosts () {
+
+for line in `grep ":ipv4" $LAVA_MULTI_NODE_NETWORK_CACHE` ; do
+ _IP_STREAM=`echo $line | cut -d'=' -f2`
+ _TARGET_STREAM=`echo $line | cut -d':' -f1`
+ _HOSTNAME_STREAM=`grep "$_TARGET_STREAM:hostname=" \
+$LAVA_MULTI_NODE_NETWORK_CACHE | cut -d'=' -f2`
+ if [ -n "$_HOSTNAME_STREAM" ]; then
+ printf "$_IP_STREAM\t$_HOSTNAME_STREAM\n" >> $1
+ else
+ printf "$_IP_STREAM\t$_TARGET_STREAM\n" >> $1
+ fi
+done
+
+}
+
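The helpers above all read a cache file of ``host:key=value`` lines written during ``lava-network collect``. The lookup that ``lava_multi_node_print_host_info`` performs can be sketched outside a LAVA job as follows; the sample cache contents and the temporary file are invented for illustration:

```shell
#!/bin/sh
# Build a hypothetical network cache in the host:key=value format
# the library expects (normally written by lava_multi_node_wait_for_message).
CACHE=$(mktemp)
cat > "$CACHE" <<EOF
server:ipv4=192.168.3.56
server:hostname=server
client:ipv4=192.168.3.57
EOF

# Same grep-then-cut pipeline as lava_multi_node_print_host_info:
# select the "<host>:<key>=" line and print everything after the "=".
query () {
    _line=$(grep "$1:$2=" "$CACHE")
    if [ -n "$_line" ]; then
        echo "$_line" | cut -d'=' -f2
    fi
}

SERVER_IP=$(query server ipv4)
echo "$SERVER_IP"
rm -f "$CACHE"
```

This mirrors the ``lava-network query server`` example from the script header, which prints ``192.168.3.56``.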
=== added file 'lava_test_shell/multi_node/lava-network'
@@ -0,0 +1,104 @@
+#!/bin/sh
+#
+#This file is for Multi-Node tests
+#lava-network
+#-----------------
+#Helper script to broadcast IP data from the test image, wait for data
+#to be received by the rest of the group (or one role within the group)
+#and then provide an interface to retrieve IP data about the group on
+#the command line.
+#
+#Raising a suitable network interface is a job left for the designer of
+#the test definition / image but once a network interface is available,
+#lava-network can be asked to broadcast this information to the rest of
+#the group. At a later stage of the test, before the IP details of the
+#group need to be used, call lava-network collect to receive the same
+#information about the rest of the group.
+#
+#All usage of lava-network needs to use a broadcast (which wraps a call
+#to lava-send) and a collect (which wraps a call to lava-wait-all). As
+#a wrapper around lava-wait-all, collect will block until the rest of
+#the group (or devices in the group with the specified role) has made a
+#broadcast.
+#
+#After the data has been collected, it can be queried for any board
+#specified in the output of lava-group:
+#
+#lava-network query server
+#192.168.3.56
+#
+#Usage:
+# broadcast network info:
+# lava-network broadcast [interface]
+# collect network info:
+# lava-network collect [interface] <role>
+# query specific host info:
+# lava-network query [hostname] [info]
+# export hosts file:
+# lava-network hosts [path of hosts]
+#
+#The interface argument is mandatory for broadcast and collect, hostname
+#and info are mandatory for query, the hosts path is mandatory for
+#hosts, and role is optional for collect.
+
+
+LAVA_MULTI_NODE_API="LAVA_NETWORK"
+#MESSAGE_TIMEOUT=5
+#MESSAGE_NEED_ACK=yes
+
+_LAVA_NETWORK_ID="network_info"
+_LAVA_NETWORK_ARG_MIN=2
+
+. $LAVA_TEST_BIN/lava-multi-node.lib
+
+LAVA_MULTI_NODE_NETWORK_CACHE="/tmp/lava_multi_node_network_cache.txt"
+
+_lava_multi_node_debug "$LAVA_MULTI_NODE_API checking arguments..."
+if [ $# -lt $_LAVA_NETWORK_ARG_MIN ]; then
+ _lava_multi_node_debug "$FUNCNAME Not enough arguments."
+ exit $LAVA_MULTI_NODE_EXIT_ERROR
+fi
+
+_lava_multi_node_debug "$LAVA_MULTI_NODE_API handle sub-command..."
+case "$1" in
+ "broadcast")
+ _lava_multi_node_debug "$LAVA_MULTI_NODE_API handle broadcast command..."
+ LAVA_MULTI_NODE_API="LAVA_SEND"
+ MESSAGE_COMMAND="<${LAVA_MULTI_NODE_API}"
+ export MESSAGE_ACK="<${LAVA_MULTI_NODE_API}_ACK>"
+ export MESSAGE_REPLY="<${LAVA_MULTI_NODE_API}_COMPLETE"
+ export MESSAGE_REPLY_ACK="<${LAVA_MULTI_NODE_API}_COMPLETE_ACK>"
+ export MESSAGE_HEAD="$MESSAGE_PREFIX $MESSAGE_COMMAND"
+ NETWORK_INFO_STREAM=`lava_multi_node_get_network_info $2`
+ lava_multi_node_send $_LAVA_NETWORK_ID $NETWORK_INFO_STREAM
+ ;;
+
+ "collect")
+ _lava_multi_node_debug "$LAVA_MULTI_NODE_API handle collect command..."
+ LAVA_MULTI_NODE_API="LAVA_WAIT_ALL"
+ MESSAGE_COMMAND="<${LAVA_MULTI_NODE_API}"
+ export MESSAGE_ACK="<${LAVA_MULTI_NODE_API}_ACK>"
+ export MESSAGE_REPLY="<${LAVA_MULTI_NODE_API}_COMPLETE"
+ export MESSAGE_REPLY_ACK="<${LAVA_MULTI_NODE_API}_COMPLETE_ACK>"
+ export MESSAGE_HEAD="$MESSAGE_PREFIX $MESSAGE_COMMAND"
+ lava_multi_node_send $_LAVA_NETWORK_ID $3
+ lava_multi_node_wait_for_message $LAVA_MULTI_NODE_NETWORK_CACHE
+ ;;
+
+ "query")
+ _lava_multi_node_debug "$LAVA_MULTI_NODE_API handle query command..."
+ lava_multi_node_check_cache $LAVA_MULTI_NODE_NETWORK_CACHE
+ lava_multi_node_print_host_info $2 $3
+ ;;
+
+ "hosts")
+ _lava_multi_node_debug "$LAVA_MULTI_NODE_API handle hosts command..."
+ lava_multi_node_check_cache $LAVA_MULTI_NODE_NETWORK_CACHE
+ lava_multi_node_make_hosts $2
+ ;;
+
+ *)
+ _lava_multi_node_debug "$LAVA_MULTI_NODE_API command $1 is not supported."
+ exit $LAVA_MULTI_NODE_EXIT_ERROR
+ ;;
+esac
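The ``hosts`` sub-command delegates to ``lava_multi_node_make_hosts``, which turns the cache into ``/etc/hosts`` entries. A standalone sketch of that loop, using an invented cache where one device never broadcast a hostname (to exercise the fallback to the target name):

```shell
#!/bin/sh
# Hypothetical cache: panda02 has an ipv4 entry but no hostname entry.
CACHE=$(mktemp)
HOSTS=$(mktemp)
cat > "$CACHE" <<EOF
panda01:ipv4=192.168.1.10
panda01:hostname=panda01
panda02:ipv4=192.168.1.11
EOF

# Same shape as lava_multi_node_make_hosts: for every ipv4 line, emit
# "IP<TAB>name", preferring the broadcast hostname over the target name.
for line in $(grep ":ipv4" "$CACHE"); do
    _ip=$(echo "$line" | cut -d'=' -f2)
    _target=$(echo "$line" | cut -d':' -f1)
    _name=$(grep "$_target:hostname=" "$CACHE" | cut -d'=' -f2)
    # Fall back to the target name when no hostname was broadcast.
    printf "%s\t%s\n" "$_ip" "${_name:-$_target}" >> "$HOSTS"
done

RESULT=$(cat "$HOSTS")
echo "$RESULT"
rm -f "$CACHE" "$HOSTS"
```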
=== added file 'lava_test_shell/multi_node/lava-role'
@@ -0,0 +1,14 @@
+#!/bin/sh
+#
+#This file is for Multi-Node tests
+#
+#Prints the role the current device is playing in a multi-node job.
+#
+#Usage: ``lava-role``
+#
+#*Example.* In a directory with several scripts, one for each role
+#involved in the test::
+#
+# $ ./run-`lava-role`.sh
+
+echo ${TARGET_ROLE}
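The dispatch pattern from the ``lava-role`` header can be tried outside a MultiNode job by faking ``TARGET_ROLE`` (which the dispatcher normally exports); the role script below is invented for the sketch:

```shell
#!/bin/sh
# Outside a real MultiNode job TARGET_ROLE is unset, so fake it here.
TARGET_ROLE="client"

# Create a per-role script, standing in for run-client.sh in a test bundle.
WORKDIR=$(mktemp -d)
cat > "$WORKDIR/run-client.sh" <<'EOF'
#!/bin/sh
echo "running client steps"
EOF
chmod +x "$WORKDIR/run-client.sh"

# lava-role just echoes ${TARGET_ROLE}; the same expansion is inlined here,
# selecting the script that matches this device's role.
RESULT=$("$WORKDIR/run-${TARGET_ROLE}.sh")
echo "$RESULT"
rm -rf "$WORKDIR"
```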
=== added file 'lava_test_shell/multi_node/lava-self'
@@ -0,0 +1,9 @@
+#!/bin/sh
+#
+#This file is for Multi-Node tests
+#
+#Prints the name of the current device.
+#
+#Usage: ``lava-self``
+
+echo ${LAVA_HOSTNAME}
=== added file 'lava_test_shell/multi_node/lava-send'
@@ -0,0 +1,17 @@
+#!/bin/sh
+#
+#This file is for Multi-Node tests
+#
+#Sends a message to the group, optionally passing associated key-value
+#data pairs. Sending a message is a non-blocking operation. The message
+#is guaranteed to be available to all members of the group, but some of
+#them might never retrieve it.
+#
+#Usage: ``lava-send <message-id> [key1=val1 [key2=val2] ...]``
+LAVA_MULTI_NODE_API="LAVA_SEND"
+#MESSAGE_TIMEOUT=5
+#MESSAGE_NEED_ACK=yes
+
+. $LAVA_TEST_BIN/lava-multi-node.lib
+
+lava_multi_node_send $1 $(_get_key_value_pattern $@)
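``lava-send`` passes its whole argument list through ``_get_key_value_pattern``, which lives in ``lava-multi-node.lib`` and is not shown in this patch. A hypothetical sketch of what such a filter might do — keep only ``key=value`` arguments, dropping the message-id — is below; the real helper may differ:

```shell
#!/bin/sh
# Hypothetical key=value filter: emit only arguments containing "=",
# joined by single spaces, so "lava-send msgid k1=v1 k2=v2" forwards
# just the pairs. This is a sketch, not the library implementation.
filter_key_values () {
    _out=""
    for _arg in "$@"; do
        case "$_arg" in
            *=*) _out="${_out:+$_out }$_arg" ;;
        esac
    done
    echo "$_out"
}

PAIRS=$(filter_key_values network_info ipv4=10.0.0.1 hostname=panda01)
echo "$PAIRS"
```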
=== added file 'lava_test_shell/multi_node/lava-sync'
@@ -0,0 +1,20 @@
+#!/bin/sh
+#
+#This file is for Multi-Node tests
+#
+#Global synchronization primitive. Sends a message, and waits for the
+#same message from all of the other devices.
+#
+#Usage: ``lava-sync <message>``
+#
+#``lava-sync foo`` is effectively the same as ``lava-send foo`` followed
+#by ``lava-wait-all foo``.
+LAVA_MULTI_NODE_API="LAVA_SYNC"
+#MESSAGE_TIMEOUT=5
+#MESSAGE_NEED_ACK=yes
+
+. $LAVA_TEST_BIN/lava-multi-node.lib
+
+lava_multi_node_send $1
+
+lava_multi_node_wait_for_message
=== added file 'lava_test_shell/multi_node/lava-wait'
@@ -0,0 +1,21 @@
+#!/bin/sh
+#
+#This file is for Multi-Node tests
+#
+#Waits until any other device in the group sends a message with the given
+#ID. This call will block until such a message is sent.
+#
+#Usage: ``lava-wait <message-id>``
+#
+#If data was passed in the message, the key-value pairs will be
+#printed on the standard output, one per line. If no key-value pairs
+#were passed, nothing is printed.
+LAVA_MULTI_NODE_API="LAVA_WAIT"
+#MESSAGE_TIMEOUT=5
+#MESSAGE_NEED_ACK=yes
+
+. $LAVA_TEST_BIN/lava-multi-node.lib
+
+lava_multi_node_send $1
+
+lava_multi_node_wait_for_message
=== added file 'lava_test_shell/multi_node/lava-wait-all'
@@ -0,0 +1,23 @@
+#!/bin/sh
+#
+#This file is for Multi-Node tests
+#
+#Waits until **all** other devices in the group send a message with the
+#given message ID. If ``<role>`` is passed, only wait until all devices
+#with that given role send a message.
+#
+#Usage: ``lava-wait-all <message-id> [<role>]``
+#
+#If data was sent by the other devices with the message, the key-value
+#pairs will be printed one per line, prefixed with the device name and
+#whitespace.
+LAVA_MULTI_NODE_API="LAVA_WAIT_ALL"
+#MESSAGE_TIMEOUT=5
+#MESSAGE_NEED_ACK=yes
+
+. $LAVA_TEST_BIN/lava-multi-node.lib
+
+lava_multi_node_send $1 $2
+
+lava_multi_node_wait_for_message
+
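The header above describes the ``lava-wait-all`` output format: one line per pair, prefixed with the device name and whitespace. Parsing that format can be sketched with sample lines standing in for real output (which is only available inside a MultiNode job); device names and values here are invented:

```shell
#!/bin/sh
# Sample of the "<device> <key>=<value>" lines lava-wait-all is
# documented to print; in a real job this would be captured from
# "lava-wait-all <message-id>".
SAMPLE="panda01 ipv4=192.168.1.10
panda02 ipv4=192.168.1.11"

# Pick out the value broadcast by one particular device: anchor on the
# device-name prefix, then strip everything up to the "=".
get_value () {
    echo "$SAMPLE" | grep "^$1 " | cut -d'=' -f2
}

IP=$(get_value panda02)
echo "$IP"
```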
=== modified file 'requirements.txt'
@@ -1,6 +1,7 @@
django
django-openid-auth
-linaro-django-jsonfield
+pexpect
python-openid
lockfile
python-daemon
+setproctitle
=== modified file 'setup.py'
@@ -42,7 +42,7 @@
'lava_test_shell/lava-test-runner-android',
'lava_test_shell/lava-test-runner-ubuntu',
'lava_test_shell/lava-test-shell',
- ])
+ ])
],
install_requires=[
"json-schema-validator >= 2.3",