
[ARCH] Add testing.dox

Message ID 1414156665-11712-1-git-send-email-mike.holmes@linaro.org
State Not Applicable

Commit Message

Mike Holmes Oct. 24, 2014, 1:17 p.m. UTC
Detail how ODP API testing will be performed.

Signed-off-by: Mike Holmes <mike.holmes@linaro.org>
---

This patch is best reviewed by setting API_DOC_PATH in the environment before
building this document because it makes several links to the API doc

 testing.dox | 311 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 311 insertions(+)
 create mode 100644 testing.dox

Comments

Anders Roxell Oct. 28, 2014, 2:28 p.m. UTC | #1
On 2014-10-24 09:17, Mike Holmes wrote:
> Detail how ODP API testing will be performed.
> 
> Signed-off-by: Mike Holmes <mike.holmes@linaro.org>
> ---
> 
> This patch is best reviewed by setting API_DOC_PATH in the environment before
> building this document because it makes several links to the API doc
> 
>  testing.dox | 311 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 311 insertions(+)
>  create mode 100644 testing.dox
> 
> diff --git a/testing.dox b/testing.dox
> new file mode 100644
> index 0000000..54bb073
> --- /dev/null
> +++ b/testing.dox
> @@ -0,0 +1,311 @@
> +/**

Copyright header?

> +
> +@page testing Testing, Verification & Validation
> +
> +@tableofcontents
> +
> +@section  overview Overview
> +The goals for ODP API testing are:
> + - Ensure that all platform implementations operate identically from the perspective of the application, except for optional APIs or optional features of an API.
> + - Ensure that a given API performs to the specification defined in the published ODP Architecture & API document in source code control.
> + - Catch differences in the API definition between the version of the test suite and the implementation.
> + - GCOV or a similar tool will be used to ensure complete coverage of the API; note that the implementation may have less than 100% coverage, but the API must be covered 100%. Both branch and line coverage will be reported.
> + - New APIs are considered delivered when there is a test suite to complement the API definition.
> + - The test framework should be very lightweight and configurable.
> + - Provide a mechanism to publish compliance to a released API version.
> +
> +There are three elements to the overall test strategy.
> +
> +- @ref compliance
> +The simplest is the ODP Compliance test suite, which is updated with every commit that adds or modifies an ODP API.
> +It is intended to be very lightweight and is implemented using CUnit for the APIs, with simple scripting for the documentation.
> +
> +- @ref functional
> +Functional tests attempt to perform more in-depth processing whose output is verified, for example encrypting and then decrypting a packet and checking that the result matches the input (similar to LTP). These tests may require external equipment.

newline after full stop.

> +
> +- @ref benchmarking
> +Provides no pass/fail indication, but instead returns a metric on the performance of some portion of the implementation of the API.
> +
> +@section logical_groups Test suite API groups
> +
> +The following logical areas for the API need to be addressed for the 1.0 release cycle.
> +The API logical groups will be used in the test directory structure and in defining the unit test suites
> +to gather the tests into groups of related functionality.
> +
> +API logical group | Headers
> + :--------------- | :---
> +Buffers           | odp_buffer.h odp_buffer_pool.h
> +Packets           | odp_packet_flags.h odp_packet.h
> +Classification    | odp_classification.h
> +Crypto            | odp_crypto.h
> +IPC               | not defined
> +Pktio             | not defined
> +Queues            | odp_queue.h
> +Scheduler         | odp_schedule.h  odp_coremask.h
> +Shared Memory     | odp_shared_memory.h
> +Synchronizers / Barriers | odp_rwlock.h odp_sync.h odp_atomic.h odp_barrier.h odp_spinlock.h odp_ticketlock.h
> +Threads           | odp_thread.h
> +Timers            | odp_timer.h 
> +System            | odp_time.h odp_perf.h
> +Logging / Abort   | odp_version.h odp_debug.h odp_system_info.h
> +Initialization    | odp_init.h
> +
> +@section directory Directory structure
> +
> +The test directory contains all the tests that must be passed for a given revision of the API to be declared compliant.
> +The tests cover:
> + - the unit tests
> + - documentation tests
> + - functional & benchmark tests
> +
> +@verbatim
> +test
> +.
> +├── functional
> +│   ├── <test executable>
> +│   ├── <API group>/<test executable>
> +│
> +├── documentation
> +│   ├── <test case>
> +│
> +└── cunit
> +    ├── <test executable>
> +    ├── <API group>/<test executable>
> +
> +@endverbatim
> +
> +@section compliance ODP-Compliance test suite
> +Because this suite is extended with every API addition, it needs to be very lightweight.
> +This suite would be run by any new platform attempting to be API compliant; it tries to pass in the obvious combinations of arguments that arise from reading the API definition.
> +For example, consider a function of the following form:
> +
> +@code
> +/* a is in the range 0 to 10;
> + * returns a * 2, or -1 if a is out of range */
> +int foo(unsigned int a);
> +@endcode
> +
> +Tests for foo's arguments would be:
> + - a = -1: check that the return value is -1
> + - a = 0: check that the return value is 0
> + - a = 5: check that the return value is 10
> + - a = 10: check that the return value is 20
> + - a = 11: check that the return value is -1
> +
> +The framework supporting these test cases is CUnit, which provides batch and interactive operation.
> +Unit test cases do not use any external equipment.
> +
> +@section cunit CUnit
> +CUnit http://cunit.sourceforge.net/doc/index.html is a system for writing, administering, and running unit tests in C.
> +It is built as a static library which is linked with the user's testing code.
> +CUnit uses a simple framework for building test structures, and provides a rich set of assertions for testing common data types.
> +In addition, several different interfaces are provided for running tests and reporting results.
> +These include automated interfaces for code-controlled testing and reporting, as well as interactive interfaces allowing the user to run tests and view results dynamically.
> +
> +@subsection structure Structure
> +CUnit is a combination of a platform-independent framework with various user interfaces.
> +The core framework provides basic support for managing a test registry, suites, and test cases.
> +The user interfaces facilitate interaction with the framework to run tests and view results.
> +CUnit is organized like a conventional unit testing framework:
> +
> +@verbatim
> +                     Test Registry
> +                            |
> +             ------------------------------
> +             |                            |
> +          Suite '1'      . . . .       Suite 'N'
> +             |                            |
> +       ---------------             ---------------
> +       |             |             |             |
> +    Test '11' ... Test '1M'     Test 'N1' ... Test 'NM'
> +@endverbatim
> +
> +Individual test cases are packaged into suites, which are registered with the active test registry.
> +Suites can have setup and tear down functions which are automatically called before and after running the suite's tests.
> +All suites/tests in the registry may be run using a single function call, or selected suites or tests can be run.
> +
> +@subsection general_usage General Usage
> +A typical sequence of steps for using the CUnit framework is:
> + - Write functions for tests (and suite init/cleanup if necessary).
> + - Initialize the test registry - CU_initialize_registry()
> + - Add suites to the test registry - CU_add_suite()
> + - Add tests to the suites - CU_add_test()
> + - Run tests using an appropriate interface, e.g. CU_console_run_tests()
> + - Clean up the test registry - CU_cleanup_registry()
> +
> +@code
> +/* Copyright (c) 2014, Linaro Limited
> + * All rights reserved.
> + *
> + * SPDX-License-Identifier:     BSD-3-Clause
> + */
> +
> +#include "odp.h"
> +#include "CUnit/Basic.h"
> +
> +#define DEFAULT_MSG_POOL_SIZE         (4*1024*1024)
> +#define DEFAULT_MSG_SIZE         (8)
> +
> +static void test_odp_init_global(void)
> +{
> +        int status;
> +
> +        status = odp_init_global();
> +        CU_ASSERT(status == 0);
> +}
> +
> +static int init(void)
> +{
> +        printf("ODP version:%s\n", odp_version_api_str());
> +        return 0;
> +}
> +
> +
> +int main(void)
> +{
> +        CU_pSuite pSuite = NULL;
> +        /* initialize the CUnit test registry */
> +        if (CUE_SUCCESS != CU_initialize_registry())
> +                return CU_get_error();
> +        /* add a suite to the registry */
> +        pSuite = CU_add_suite("odp_init", init, NULL);
> +        if (NULL == pSuite) {
> +                CU_cleanup_registry();
> +                return CU_get_error();
> +        }
> +        /* add the tests to the suite */
> +        if (NULL == CU_ADD_TEST(pSuite, test_odp_init_global)) {
> +                CU_cleanup_registry();
> +                return CU_get_error();
> +        }
> +        /* Run all tests using the CUnit Basic interface */
> +        CU_basic_set_mode(CU_BRM_VERBOSE);
> +        CU_basic_run_tests();
> +        CU_cleanup_registry();
> +        return CU_get_error();
> +}
> +@endcode
> +
> +@subsection results Normal results
> +
> +@code
> +     CUnit - A unit testing framework for C - Version 2.1-2

Version 2.1-3 is what we are using in CI.

> +     http://cunit.sourceforge.net/
> +
> +ODP version:0.0.1
> +
> +Suite: odp_init
> +  Test: test_odp_init_global ...
> +Buffer pool init global
> +  pool_entry_s size     192
> +  pool_entry_t size     192
> +  odp_buffer_hdr_t size 120
> +
> +Queue init ... done
> +Queue init global
> +  struct queue_entry_s size 192
> +  queue_entry_t size        192
> +
> +Schedule init ... done
> +Timer init ...done
> +passed
> +
> +Run Summary:    Type  Total    Ran Passed Failed Inactive
> +              suites      1      1    n/a      0        0
> +               tests      1      1      1      0        0
> +             asserts      1      1      1      0      n/a
> +@endcode
> +
> +
> +@subsection compliance_xml Compliance results (XML)
> +The CUnit results may also be gathered in XML format; this format may be used to submit an implementation's CUnit test case results for inclusion on the ODP website as compliant with a given release.
> +
> +~~~{.xml}
> +<?xml version="1.0" ?>
> +<?xml-stylesheet type="text/xsl" href="CUnit-Run.xsl" ?>
> +<!DOCTYPE CUNIT_TEST_RUN_REPORT SYSTEM "CUnit-Run.dtd">
> +<CUNIT_TEST_RUN_REPORT>
> +  <CUNIT_HEADER/>
> +  <CUNIT_RESULT_LISTING>
> +    <CUNIT_RUN_SUITE>
> +      <CUNIT_RUN_SUITE_SUCCESS>
> +        <SUITE_NAME> odp initialization </SUITE_NAME>
> +        <CUNIT_RUN_TEST_RECORD>
> +          <CUNIT_RUN_TEST_SUCCESS>
> +            <TEST_NAME> test_odp_init_global </TEST_NAME>
> +          </CUNIT_RUN_TEST_SUCCESS>
> +        </CUNIT_RUN_TEST_RECORD>
> +      </CUNIT_RUN_SUITE_SUCCESS>
> +    </CUNIT_RUN_SUITE>
> +  </CUNIT_RESULT_LISTING>
> +  <CUNIT_RUN_SUMMARY>
> +    <CUNIT_RUN_SUMMARY_RECORD>
> +      <TYPE> Suites </TYPE>
> +      <TOTAL> 1 </TOTAL>
> +      <RUN> 1 </RUN>
> +      <SUCCEEDED> - NA - </SUCCEEDED>
> +      <FAILED> 0 </FAILED>
> +      <INACTIVE> 0 </INACTIVE>
> +    </CUNIT_RUN_SUMMARY_RECORD>
> +    <CUNIT_RUN_SUMMARY_RECORD>
> +      <TYPE> Test Cases </TYPE>
> +      <TOTAL> 1 </TOTAL>
> +      <RUN> 1 </RUN>
> +      <SUCCEEDED> 1 </SUCCEEDED>
> +      <FAILED> 0 </FAILED>
> +      <INACTIVE> 0 </INACTIVE>
> +    </CUNIT_RUN_SUMMARY_RECORD>
> +    <CUNIT_RUN_SUMMARY_RECORD>
> +      <TYPE> Assertions </TYPE>
> +      <TOTAL> 1 </TOTAL>
> +      <RUN> 1 </RUN>
> +      <SUCCEEDED> 1 </SUCCEEDED>
> +      <FAILED> 0 </FAILED>
> +      <INACTIVE> n/a </INACTIVE>
> +    </CUNIT_RUN_SUMMARY_RECORD>
> +  </CUNIT_RUN_SUMMARY>
> +  <CUNIT_FOOTER> File Generated By CUnit v2.1-2 - Thu Oct 23 15:02:46 2014

Same version number here... this needs to be updated too, right?

> + </CUNIT_FOOTER>
> +</CUNIT_TEST_RUN_REPORT>
> +~~~
> +
> +@section functional Functional tests
> +
> +These tests are generally more elaborate, with the goal of proving a complex piece of functionality.
> +@verbatim
> +test
> +.
> +├── functional
> +│   ├── <test executable>
> +│   ├── <API group>/<test executable>
> +@endverbatim
> + - There is no framework for these tests, and they generally take command-line arguments to modify their behaviour.
> + - These tests do provide a pass/fail indication.
> + - These tests may require network interfaces and external equipment.
> +
> +@section benchmarking Benchmarks
> +
> +Benchmarks are currently also stored with the functional tests; this may be reassessed depending on the test volume.
> +@verbatim
> +test
> +.
> +├── functional
> +│   ├── <test executable>
> +│   ├── <API group>/<test executable>
> +@endverbatim
> + - These tests are generally more elaborate, with the goal of quantifying the capacity of a complex piece of functionality.
> + - There is no framework for these benchmarks, and they generally take command-line arguments to modify their behaviour.
> + - They do not provide a pass/fail indication.
> +
> +@section test_versioning Test suite versioning and its relationship to ODP versioning
> +
> +For ODP to declare, for example, a 1.0 tag in its git repository, it must have passed the corresponding test suite.
> +It is expected that the test suite will be improved asynchronously to the API, and it will be tagged to indicate the ODP API version for which it provides compliance.
> +
> +For example:
> +
> +When ODP v3.0 is released, it will be validated against the test suite at the version tagged ODP v3.0.
> +
> +It is possible that a version of the ODP API is released and that the test suite used has a flaw; in this case the ODP API will be re-released, listing the correct test suite version to use. This change does not affect the ODP API, and so the release number will change only in its minor digits.

newline after a full stop.

Cheers,
Anders

> +
> +*/
> -- 
> 2.1.0
> 
> 

Patch

diff --git a/testing.dox b/testing.dox
new file mode 100644
index 0000000..54bb073
--- /dev/null
+++ b/testing.dox
@@ -0,0 +1,311 @@ 
+/**
+
+@page testing Testing, Verification & Validation
+
+@tableofcontents
+
+@section  overview Overview
+The goals for ODP API testing are:
+ - Ensure that all platform implementations operate identically from the perspective of the application, except for optional APIs or optional features of an API.
+ - Ensure that a given API performs to the specification defined in the published ODP Architecture & API document in source code control.
+ - Catch differences in the API definition between the version of the test suite and the implementation.
+ - GCOV or a similar tool will be used to ensure complete coverage of the API; note that the implementation may have less than 100% coverage, but the API must be covered 100%. Both branch and line coverage will be reported.
+ - New APIs are considered delivered when there is a test suite to complement the API definition.
+ - The test framework should be very lightweight and configurable.
+ - Provide a mechanism to publish compliance to a released API version.
+
+There are three elements to the overall test strategy.
+
+- @ref compliance
+The simplest is the ODP Compliance test suite, which is updated with every commit that adds or modifies an ODP API.
+It is intended to be very lightweight and is implemented using CUnit for the APIs, with simple scripting for the documentation.
+
+- @ref functional
+Functional tests attempt to perform more in-depth processing whose output is verified, for example encrypting and then decrypting a packet and checking that the result matches the input (similar to LTP). These tests may require external equipment.
+
+- @ref benchmarking
+Provides no pass/fail indication, but instead returns a metric on the performance of some portion of the implementation of the API.
+
+@section logical_groups Test suite API groups
+
+The following logical areas for the API need to be addressed for the 1.0 release cycle.
+The API logical groups will be used in the test directory structure and in defining the unit test suites
+to gather the tests into groups of related functionality.
+
+API logical group | Headers
+ :--------------- | :---
+Buffers           | odp_buffer.h odp_buffer_pool.h
+Packets           | odp_packet_flags.h odp_packet.h
+Classification    | odp_classification.h
+Crypto            | odp_crypto.h
+IPC               | not defined
+Pktio             | not defined
+Queues            | odp_queue.h
+Scheduler         | odp_schedule.h  odp_coremask.h
+Shared Memory     | odp_shared_memory.h
+Synchronizers / Barriers | odp_rwlock.h odp_sync.h odp_atomic.h odp_barrier.h odp_spinlock.h odp_ticketlock.h
+Threads           | odp_thread.h
+Timers            | odp_timer.h 
+System            | odp_time.h odp_perf.h
+Logging / Abort   | odp_version.h odp_debug.h odp_system_info.h
+Initialization    | odp_init.h
+
+@section directory Directory structure
+
+The test directory contains all the tests that must be passed for a given revision of the API to be declared compliant.
+The tests cover:
+ - the unit tests
+ - documentation tests
+ - functional & benchmark tests
+
+@verbatim
+test
+.
+├── functional
+│   ├── <test executable>
+│   ├── <API group>/<test executable>
+│
+├── documentation
+│   ├── <test case>
+│
+└── cunit
+    ├── <test executable>
+    ├── <API group>/<test executable>
+
+@endverbatim
+
+@section compliance ODP-Compliance test suite
+Because this suite is extended with every API addition, it needs to be very lightweight.
+This suite would be run by any new platform attempting to be API compliant; it tries to pass in the obvious combinations of arguments that arise from reading the API definition.
+For example, consider a function of the following form:
+
+@code
+/* a is in the range 0 to 10;
+ * returns a * 2, or -1 if a is out of range */
+int foo(unsigned int a);
+@endcode
+
+Tests for foo's arguments would be:
+ - a = -1: check that the return value is -1
+ - a = 0: check that the return value is 0
+ - a = 5: check that the return value is 10
+ - a = 10: check that the return value is 20
+ - a = 11: check that the return value is -1
+
+The framework supporting these test cases is CUnit, which provides batch and interactive operation.
+Unit test cases do not use any external equipment.
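+
+As a minimal sketch (assuming the hypothetical foo() declared above, which is not an ODP API), these boundary cases might be written as a single CUnit test function:
+
+@code
+#include "CUnit/Basic.h"
+
+/* Hypothetical function under test: returns a * 2 for 0..10, -1 otherwise */
+int foo(unsigned int a);
+
+static void test_foo(void)
+{
+        CU_ASSERT(foo((unsigned int)-1) == -1); /* out of range (wraps to a large value) */
+        CU_ASSERT(foo(0) == 0);                 /* lower bound */
+        CU_ASSERT(foo(5) == 10);                /* mid range */
+        CU_ASSERT(foo(10) == 20);               /* upper bound */
+        CU_ASSERT(foo(11) == -1);               /* out of range */
+}
+@endcode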
+
+@section cunit CUnit
+CUnit http://cunit.sourceforge.net/doc/index.html is a system for writing, administering, and running unit tests in C.
+It is built as a static library which is linked with the user's testing code.
+CUnit uses a simple framework for building test structures, and provides a rich set of assertions for testing common data types.
+In addition, several different interfaces are provided for running tests and reporting results.
+These include automated interfaces for code-controlled testing and reporting, as well as interactive interfaces allowing the user to run tests and view results dynamically.
+
+@subsection structure Structure
+CUnit is a combination of a platform-independent framework with various user interfaces.
+The core framework provides basic support for managing a test registry, suites, and test cases.
+The user interfaces facilitate interaction with the framework to run tests and view results.
+CUnit is organized like a conventional unit testing framework:
+
+@verbatim
+                     Test Registry
+                            |
+             ------------------------------
+             |                            |
+          Suite '1'      . . . .       Suite 'N'
+             |                            |
+       ---------------             ---------------
+       |             |             |             |
+    Test '11' ... Test '1M'     Test 'N1' ... Test 'NM'
+@endverbatim
+
+Individual test cases are packaged into suites, which are registered with the active test registry.
+Suites can have setup and tear down functions which are automatically called before and after running the suite's tests.
+All suites/tests in the registry may be run using a single function call, or selected suites or tests can be run.
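+
+For illustration only (the suite, setup, and teardown names below are hypothetical, not part of this patch), registering a suite with both callbacks and running just that suite might look like:
+
+@code
+#include "CUnit/Basic.h"
+
+static int suite_setup(void)    { return 0; } /* called before the suite's tests */
+static int suite_teardown(void) { return 0; } /* called after the suite's tests  */
+
+static void example_test(void)  { CU_ASSERT(1); }
+
+static void run_one_suite(void)
+{
+        /* assumes CU_initialize_registry() has already been called */
+        CU_pSuite suite = CU_add_suite("example", suite_setup, suite_teardown);
+
+        CU_ADD_TEST(suite, example_test);
+        CU_basic_run_suite(suite);  /* run only this suite              */
+        /* CU_basic_run_tests();       or run everything registered     */
+}
+@endcode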
+
+@subsection general_usage General Usage
+A typical sequence of steps for using the CUnit framework is:
+ - Write functions for tests (and suite init/cleanup if necessary).
+ - Initialize the test registry - CU_initialize_registry()
+ - Add suites to the test registry - CU_add_suite()
+ - Add tests to the suites - CU_add_test()
+ - Run tests using an appropriate interface, e.g. CU_console_run_tests()
+ - Clean up the test registry - CU_cleanup_registry()
+
+@code
+/* Copyright (c) 2014, Linaro Limited
+ * All rights reserved.
+ *
+ * SPDX-License-Identifier:     BSD-3-Clause
+ */
+
+#include "odp.h"
+#include "CUnit/Basic.h"
+
+#define DEFAULT_MSG_POOL_SIZE         (4*1024*1024)
+#define DEFAULT_MSG_SIZE         (8)
+
+static void test_odp_init_global(void)
+{
+        int status;
+
+        status = odp_init_global();
+        CU_ASSERT(status == 0);
+}
+
+static int init(void)
+{
+        printf("ODP version:%s\n", odp_version_api_str());
+        return 0;
+}
+
+
+int main(void)
+{
+        CU_pSuite pSuite = NULL;
+        /* initialize the CUnit test registry */
+        if (CUE_SUCCESS != CU_initialize_registry())
+                return CU_get_error();
+        /* add a suite to the registry */
+        pSuite = CU_add_suite("odp_init", init, NULL);
+        if (NULL == pSuite) {
+                CU_cleanup_registry();
+                return CU_get_error();
+        }
+        /* add the tests to the suite */
+        if (NULL == CU_ADD_TEST(pSuite, test_odp_init_global)) {
+                CU_cleanup_registry();
+                return CU_get_error();
+        }
+        /* Run all tests using the CUnit Basic interface */
+        CU_basic_set_mode(CU_BRM_VERBOSE);
+        CU_basic_run_tests();
+        CU_cleanup_registry();
+        return CU_get_error();
+}
+@endcode
+
+@subsection results Normal results
+
+@code
+     CUnit - A unit testing framework for C - Version 2.1-2
+     http://cunit.sourceforge.net/
+
+ODP version:0.0.1
+
+Suite: odp_init
+  Test: test_odp_init_global ...
+Buffer pool init global
+  pool_entry_s size     192
+  pool_entry_t size     192
+  odp_buffer_hdr_t size 120
+
+Queue init ... done
+Queue init global
+  struct queue_entry_s size 192
+  queue_entry_t size        192
+
+Schedule init ... done
+Timer init ...done
+passed
+
+Run Summary:    Type  Total    Ran Passed Failed Inactive
+              suites      1      1    n/a      0        0
+               tests      1      1      1      0        0
+             asserts      1      1      1      0      n/a
+@endcode
+
+
+@subsection compliance_xml Compliance results (XML)
+The CUnit results may also be gathered in XML format; this format may be used to submit an implementation's CUnit test case results for inclusion on the ODP website as compliant with a given release.
+
+~~~{.xml}
+<?xml version="1.0" ?>
+<?xml-stylesheet type="text/xsl" href="CUnit-Run.xsl" ?>
+<!DOCTYPE CUNIT_TEST_RUN_REPORT SYSTEM "CUnit-Run.dtd">
+<CUNIT_TEST_RUN_REPORT>
+  <CUNIT_HEADER/>
+  <CUNIT_RESULT_LISTING>
+    <CUNIT_RUN_SUITE>
+      <CUNIT_RUN_SUITE_SUCCESS>
+        <SUITE_NAME> odp initialization </SUITE_NAME>
+        <CUNIT_RUN_TEST_RECORD>
+          <CUNIT_RUN_TEST_SUCCESS>
+            <TEST_NAME> test_odp_init_global </TEST_NAME>
+          </CUNIT_RUN_TEST_SUCCESS>
+        </CUNIT_RUN_TEST_RECORD>
+      </CUNIT_RUN_SUITE_SUCCESS>
+    </CUNIT_RUN_SUITE>
+  </CUNIT_RESULT_LISTING>
+  <CUNIT_RUN_SUMMARY>
+    <CUNIT_RUN_SUMMARY_RECORD>
+      <TYPE> Suites </TYPE>
+      <TOTAL> 1 </TOTAL>
+      <RUN> 1 </RUN>
+      <SUCCEEDED> - NA - </SUCCEEDED>
+      <FAILED> 0 </FAILED>
+      <INACTIVE> 0 </INACTIVE>
+    </CUNIT_RUN_SUMMARY_RECORD>
+    <CUNIT_RUN_SUMMARY_RECORD>
+      <TYPE> Test Cases </TYPE>
+      <TOTAL> 1 </TOTAL>
+      <RUN> 1 </RUN>
+      <SUCCEEDED> 1 </SUCCEEDED>
+      <FAILED> 0 </FAILED>
+      <INACTIVE> 0 </INACTIVE>
+    </CUNIT_RUN_SUMMARY_RECORD>
+    <CUNIT_RUN_SUMMARY_RECORD>
+      <TYPE> Assertions </TYPE>
+      <TOTAL> 1 </TOTAL>
+      <RUN> 1 </RUN>
+      <SUCCEEDED> 1 </SUCCEEDED>
+      <FAILED> 0 </FAILED>
+      <INACTIVE> n/a </INACTIVE>
+    </CUNIT_RUN_SUMMARY_RECORD>
+  </CUNIT_RUN_SUMMARY>
+  <CUNIT_FOOTER> File Generated By CUnit v2.1-2 - Thu Oct 23 15:02:46 2014
+ </CUNIT_FOOTER>
+</CUNIT_TEST_RUN_REPORT>
+~~~
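+
+As a sketch (the output file name root below is illustrative), such an XML report can be produced by using CUnit's automated interface in place of the CU_basic_* calls shown earlier:
+
+@code
+#include "CUnit/Automated.h"
+
+/* After registering suites and tests as in the example above: */
+CU_set_output_filename("odp_compliance");  /* results go to odp_compliance-Results.xml */
+CU_list_tests_to_file();                   /* optional: odp_compliance-Listing.xml     */
+CU_automated_run_tests();
+@endcode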
+
+@section functional Functional tests
+
+These tests are generally more elaborate, with the goal of proving a complex piece of functionality.
+@verbatim
+test
+.
+├── functional
+│   ├── <test executable>
+│   ├── <API group>/<test executable>
+@endverbatim
+ - There is no framework for these tests, and they generally take command-line arguments to modify their behaviour.
+ - These tests do provide a pass/fail indication.
+ - These tests may require network interfaces and external equipment.
+
+@section benchmarking Benchmarks
+
+Benchmarks are currently also stored with the functional tests; this may be reassessed depending on the test volume.
+@verbatim
+test
+.
+├── functional
+│   ├── <test executable>
+│   ├── <API group>/<test executable>
+@endverbatim
+ - These tests are generally more elaborate, with the goal of quantifying the capacity of a complex piece of functionality.
+ - There is no framework for these benchmarks, and they generally take command-line arguments to modify their behaviour.
+ - They do not provide a pass/fail indication.
+
+@section test_versioning Test suite versioning and its relationship to ODP versioning
+
+For ODP to declare, for example, a 1.0 tag in its git repository, it must have passed the corresponding test suite.
+It is expected that the test suite will be improved asynchronously to the API, and it will be tagged to indicate the ODP API version for which it provides compliance.
+
+For example:
+
+When ODP v3.0 is released, it will be validated against the test suite at the version tagged ODP v3.0.
+
+It is possible that a version of the ODP API is released and that the test suite used has a flaw; in this case the ODP API will be re-released, listing the correct test suite version to use. This change does not affect the ODP API, and so the release number will change only in its minor digits.
+
+*/