OpenDDS¶
Welcome to the documentation for OpenDDS 3.18.0!
It is available for download on GitHub.
Common Terms¶
Environment Variables¶
- ACE_ROOT¶
Root of the ACE source tree or installation prefix being used.
- DDS_ROOT¶
Root of the OpenDDS source tree or installation prefix being used.
- TAO_ROOT¶
Root of the TAO source tree or installation prefix being used.
Internal Documentation¶
This documentation is for those who want to contribute to OpenDDS and those who are just curious!
Documentation Guidelines¶
This Sphinx-based documentation is hosted on Read the Docs. It can also be built locally; to do this, follow the steps in the following section.
Building¶
Run docs/build.py, passing the kinds of documentation desired. Multiple kinds can be passed, and they are documented in the following sections.
Requirements¶
The script requires Python 3.6 or later and an internet connection if the script needs to download dependencies or check the validity of external links.
You might receive a message like this when running for the first time:
build.py: Creating venv...
The virtual environment was not created successfully because ensurepip is not
available. On Debian/Ubuntu systems, you need to install the python3-venv
package using the following command.
apt install python3.9-venv
If you do, then follow the directions it gives, remove the docs/.venv
directory, and try again.
HTML¶
HTML documentation can be built and viewed using docs/build.py -o html.
If it was built successfully, then the front page will be at docs/_build/html/index.html.
PDF¶
Note
This has additional dependencies on LaTeX that are documented here.
PDF documentation can be built and viewed using docs/build.py -o pdf.
If it was built successfully, then the PDF file will be at docs/_build/latex/opendds.pdf.
Dash¶
Documentation can be built for Dash, Zeal, and other Dash-compatible applications using doc2dash.
The command for this is docs/build.py dash.
This will create a docs/_build/OpenDDS.docset directory that must be manually moved to where other docsets are stored.
Strict Checks¶
docs/build.py strict will promote Sphinx warnings to errors and check to see if links resolve to a valid web page.
Note
The documentation includes dynamic links to files in the GitHub repo created by ghfile.
These links will be invalid until the git commit they were built under is pushed to a GitHub fork of OpenDDS.
This also means running the strict checks will cause those links to be marked as broken.
A workaround for this is to pass -c master or another commit, branch, or tag that is desired.
Building Manually¶
It is recommended to use build.py to build the documentation as it will handle dependencies automatically.
If necessary though, Sphinx can be run directly.
To build the documentation the dependencies need to be installed first.
Run this from the docs directory to do this:
pip3 install -r requirements.txt
Then sphinx-build can be run.
For example, to build the HTML documentation:
sphinx-build -M html . _build
RST/Sphinx Usage¶
See Sphinx reStructuredText Primer for basic RST usage.
Inline code such as class names like DataReader and other symbolic text such as commands like ls should use double backticks: ``TEXT``.
This distinguishes it as code, makes it easier to distinguish characters, and reduces the chance of needing to escape characters if they happen to be special for RST.
One sentence per line should be preferred.
This makes it easier to see what changed in a git diff or GitHub PR and easier to move sentences around in editors like Vim.
It also avoids inconsistencies involving what the maximum line length is.
This might make it more annoying to read the documentation raw, but that’s not the intended way to do so anyway.
GitHub Links¶
There are a few shortcuts for linking to the GitHub repository that are custom to OpenDDS. These come in the form of RST roles and are implemented in docs/sphinx_extensions/github_links.py.
ghfile¶
:ghfile:`README.md`
Turns into a link to the file or directory on GitHub.
The file or directory must exist in the repo.
It will try to point to the most specific version of the file:
If -c or --gh-links-commit was passed to build.py, then it will use the commit, branch, or tag that was passed along with it.
Else if OpenDDS is a release it will calculate the release tag and use that.
Else if OpenDDS is in a git repository it will use the commit hash.
Else it will use master.
ghissue¶
:ghissue:`213`
Turns into a link to issue #213 on GitHub.
ghpr¶
:ghpr:`1`
Turns into a link to pull request #1 on GitHub.
Unit Tests¶
The Goals of Unit Testing¶
The primary goal of a unit test is to provide informal evidence that a piece of code performs correctly. An alternative to unit testing is writing formal proofs. However, formal proofs are difficult, expensive, and unmaintainable given the changing nature of software. Unit tests, while necessarily incomplete, are a practical alternative.
Unit tests document how to use various algorithms and data structures and serve as an informal set of requirements. As such, a unit test should be developed with the idea that it will serve as a reference for future developers. Clarity in unit tests serves to accomplish their primary goal of establishing correctness. That is, a unit test that is difficult to understand casts doubt that the code being tested is correct. Consequently, unit tests should be clear and concise.
The confidence one has in a piece of code is often related to the number of code paths explored in it. This is often approximated by “code coverage.” That is, one can run the unit test with a coverage tool to see which code paths were exercised by the unit test. Code with higher coverage tends to have fewer bugs because the tester has often considered various corner-cases. Consequently, unit tests should aim for high code coverage.
Unit tests should be executed frequently to provide developers with instant feedback. This applies to the feature under development and the system as a whole. That is, developers should frequently execute all of the unit tests to make sure they haven’t broken functionality elsewhere in the system. The more frequently the tests are run, the smaller the increment of development and the easier it is to identify a breaking change. Thus, unit tests should execute quickly.
Code that is difficult to test will most likely be difficult to use. Code that is difficult to use correctly will lead to bugs in code that uses it. Consequently, unit tests are vital to the design of useful software as developing a unit test provides feedback on the design of the code under test. Often, when developing a unit test, one will find parts of the design that can be improved.
Unit tests should promote and not inhibit development. A robust set of unit tests allows a developer to aggressively refactor since the correctness of the system can be checked after the refactoring. However, unit tests do produce drag on development since they must be maintained as the code evolves. Thus, it is important that the unit test code be properly maintained so that they are an asset and not a liability.
Some of the goals mentioned above are in conflict. Adding code to increase coverage may make the tests less maintainable, slower, and more difficult to understand. The following metrics can be generated to measure the utility of the unit tests:
Code coverage
Test compilation time
Test execution time
Test code size vs. code size
Defect rate vs. code coverage (Are bugs appearing in code that is not tested as well?)
Unit Test Organization¶
The most basic unit when testing is the test case. A test case typically has four phases.
Setup - The system is initialized to a known state.
Exercise - The code under test is invoked.
Check - The resulting state of the system and outputs are checked.
Teardown - Any resources allocated in the test are deallocated.
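Below is a minimal sketch (the test and all names are hypothetical, not taken from the OpenDDS tests) of how these four phases typically look in a GTest-based test case:
#include <gtest/gtest.h>
#include <vector>

TEST(ExampleVectorTest, PushBackStoresValue)
{
  // Setup: initialize the system under test to a known state.
  std::vector<int> values;

  // Exercise: invoke the code under test.
  values.push_back(42);

  // Check: verify the resulting state and outputs.
  ASSERT_EQ(values.size(), 1u);
  EXPECT_EQ(values[0], 42);

  // Teardown: automatic here, since the vector releases its memory
  // when it goes out of scope.
}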
Test cases are grouped into a test suite.
Test suites are organized into a test plan.
We adopt file boundaries for organizing the unit tests for OpenDDS.
That is, the unit tests for a file group dds/DCPS/SomeFile.(h|cpp) will be located in tests/unit-tests/dds/DCPS/SomeFile.cpp.
The file tests/unit-tests/dds/DCPS/SomeFile.cpp is a test suite containing all of the test cases for dds/DCPS/SomeFile.(h|cpp).
The test plan for OpenDDS will execute all of the test suites under tests/unit-tests.
When the complete test plan takes too much time to execute, it will be sub-divided along module boundaries.
In regards to coverage, the coverage of dds/DCPS/SomeFile.(h|cpp) is measured by executing the tests in its test suite tests/unit-tests/dds/DCPS/SomeFile.cpp.
The purpose of this is to avoid indirect testing where a piece of code may get full coverage without ever being intentionally tested.
Unit Test Scope¶
A unit test should be completely deterministic with respect to the code paths that it exercises. This means the test code must have control over all relevant inputs, i.e., inputs that influence the code paths. To illustrate, the current time is relevant when testing algorithms that perform date related functions, e.g., code that is conditioned on a certificate being expired, while it is not relevant if it is only used when printing log messages. Sources of non-determinism include time, random numbers, schedulers, and the network. A dependency on the time is typically mitigated by mocking the service that returns the time. Random numbers can be handled the same way. A unit test should never sleep. Avoiding schedulers means a unit test should not have multiple processes and should not have multiple threads unless they cannot impact the code paths being tested. The network can be avoided by defining a suitable abstraction and mocking.
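As a rough illustration of mocking the time service (the interface and names below are hypothetical, not OpenDDS APIs), the code under test can take a clock abstraction so that the test controls this relevant input:
#include <gtest/gtest.h>

// Hypothetical clock interface; production code would implement it with the real system clock.
struct Clock {
  virtual ~Clock() {}
  virtual long now_seconds() const = 0;
};

// Test stand-in that returns whatever time the test dictates.
struct FixedClock : Clock {
  explicit FixedClock(long now) : now_(now) {}
  long now_seconds() const { return now_; }
  long now_;
};

// Code under test: expiration is conditioned on the time, so the time is a relevant input.
bool is_expired(const Clock& clock, long expiration_seconds)
{
  return clock.now_seconds() >= expiration_seconds;
}

TEST(ExampleClockTest, DetectsExpiration)
{
  FixedClock before(99);
  FixedClock after(101);
  EXPECT_FALSE(is_expired(before, 100));
  EXPECT_TRUE(is_expired(after, 100));
}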
Code that relies on event dispatching may use a mock dispatcher to control the sequence of events. One design that makes it possible to unit test in this way is to organize a module as a set of atomic event handlers around a plain old data structure core. The core should be easy to test. Event handlers are called for timers, I/O readiness, and method calls into the module. Event handlers update the core and can perform I/O and call into other modules. Inter-module calls are problematic in that they create the possibility for deadlock and other hazards. In the simplest designs, each module has a single lock that is acquired at the beginning of each event handler. The non-deterministic part of the module can be tested by isolating its dependencies on the operating system and other modules; typically by providing mock objects.
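A rough sketch of that design (illustrative only, not actual OpenDDS code) keeps the logic in a plain-data core that a unit test can drive directly, with thin event handlers serializing access through a single lock:
#include <gtest/gtest.h>
#include <mutex>
#include <vector>

// Plain-data core: no locks, no I/O, so every code path is deterministic in a test.
struct Core {
  std::vector<int> pending;
  int handled;
  Core() : handled(0) {}
  void on_sample(int sample) { pending.push_back(sample); }
  void on_timer()
  {
    handled += static_cast<int>(pending.size());
    pending.clear();
  }
};

// Module wrapper: each event handler acquires the single lock and updates the core.
class Module {
public:
  void handle_sample(int sample)
  {
    std::lock_guard<std::mutex> guard(lock_);
    core_.on_sample(sample);
  }
  void handle_timeout()
  {
    std::lock_guard<std::mutex> guard(lock_);
    core_.on_timer();
  }
private:
  std::mutex lock_;
  Core core_;
};

// The unit test exercises the core directly; no real timers, threads, or sockets are involved.
TEST(ExampleCoreTest, TimerProcessesPendingSamples)
{
  Core core;
  core.on_sample(1);
  core.on_sample(2);
  core.on_timer();
  EXPECT_EQ(core.handled, 2);
  EXPECT_TRUE(core.pending.empty());
}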
To illustrate the other side of determinism, consider other kinds of tests. Integration tests often use operating system services, e.g., threads and networking, to test partial or whole system functionality. A stress test executes the same code over and over hoping that non-determinism results in a different outcome. Performance tests may or may not admit non-determinism and focus on aggregate behavior as opposed to code-level correctness. Unit tests should focus on code-level correctness.
Isolating Dependencies¶
More often than not, the code under test will have dependencies on other objects. For each dependency, the test can either pass in a real object or a stand-in. Test stand-ins have a variety of names including mocks, spies, dummies, etc. depending on their function. Some take the position that everything should be mocked. The author takes the position that real objects should be preferred for the following reasons:
Less code to maintain
The design of the real objects improves to accommodate testing
Tests break in a more meaningful way when dependencies change, i.e., over time, a test stand-in may no longer behave in a realistic way
However, there are cases when a test stand-in is justified:
It is difficult to configure the real object
The real object lacks the necessary API for testing and adding it cannot be justified
The use of a mock assumes that an interface exists for the stand-in.
Writing a New Unit Test¶
Add the test to tests/unit-tests/dds/DCPS or the folder under it.
Name the test after the code it is meant to cover. For example, the AccessControlBuiltInImpl unit test covers the AccessControlBuiltInImpl.cpp file.
Add the test to the MPC file in its location.
If the test is a safety test, you will need to add it to the run_test_safety.pl located in tests/unit-tests/dds/DCPS.
Add the test to the .gitignore in its directory.
Add the path to the test in either tests/dcps_tests.lst or tests/security/security_tests.lst.
Using GTest¶
To use GTest in a test, add #include <gtest/gtest.h>.
Then add the googletest dependency to the MPC project for your test.
This provides you with many helpful tools to simplify the writing of tests.
When creating your test, the first step is to create a normal int main function.
Inside the function we need to initialize Google Test, then return the result of RUN_ALL_TESTS():
int main(int argc, char* argv[])
{
::testing::InitGoogleTest(&argc, argv);
return RUN_ALL_TESTS();
}
This call will automatically run all test modules and output the result of each test. You can create an individual test module with the following declaration:
TEST(TestModule, TestSubmodule)
{
}
Each of these tests contains evaluators.
The most common evaluators are EXPECT_EQ, EXPECT_TRUE, and EXPECT_FALSE.
EXPECT_EQ(X, 2);
EXPECT_EQ(Y, 3);
This will mark the test as a failure if either X does not equal 2, or Y does not equal 3.
EXPECT_TRUE and EXPECT_FALSE are equivalence checks against a boolean value.
In the following examples we pass X to a function is_even that returns true if the passed value is an even number and returns false otherwise.
EXPECT_TRUE(is_even(X));
This will mark the test as a failure if is_even(X) returns false.
EXPECT_FALSE(is_even(X));
This will mark the test as a failure if is_even(X) returns true.
There are more EXPECT_* and ASSERT_* macros, but these are the most common ones.
The difference between EXPECT and ASSERT is that an ASSERT will cease the test upon failure, whereas EXPECTs allow the test to continue.
For example, if you have multiple EXPECT_EQ checks, they will all always run.
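The following short sketch (a hypothetical test, not taken from the OpenDDS code base) shows why an ASSERT is useful when a later check would be unsafe after a failure:
#include <gtest/gtest.h>
#include <vector>

TEST(ExampleAssertVsExpect, AssertStopsBeforeUnsafeChecks)
{
  std::vector<int> values;
  values.push_back(2);

  // If this ASSERT fails, the test stops here, so the element accesses below never run.
  ASSERT_EQ(values.size(), 1u);

  // These EXPECTs are all evaluated, even if one of them fails.
  EXPECT_EQ(values[0], 2);
  EXPECT_TRUE(values[0] % 2 == 0);
}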
For more information, visit the Google Test documentation: https://github.com/google/googletest/blob/master/docs/primer.md.
Final Word¶
Ignore anything in this document that prevents you from writing unit tests.
GitHub Actions Summary and FAQ¶
Overview¶
GitHub Actions is the continuous integration solution currently being used to evaluate the readiness of pull requests. It builds OpenDDS and runs the test suite across a wide variety of operating systems and build configurations.
Legend for GitHub Actions Build Names¶
Operating System¶
u18/u20 - Ubuntu 18.04/Ubuntu 20.04
w16/w19 - Windows Server 2016 (Visual Studio 2017)/Windows Server 2019 (Visual Studio 2019)
m10 - MacOS 10.15
Build Configuration¶
x86 - Windows 32 bit. If not specified, x64 is implied.
re - Release build. If not specified, Debug is implied.
clang5/clang10/gcc6/gcc8/gcc10 - compiler used to build OpenDDS. If not specified, the default system compiler is used.
Build Type¶
stat - Static build
bsafe/esafe - Base Safety/Extended Safety build
sec - Security build
asan - Address Sanitizer build
Build Options¶
o1 - enables --optimize
d0 - enables --no-debug
i0 - enables --no-inline
p1 - enables --ipv6
w1 - enables wide characters
v1 - enables versioned namespace
cpp03 - --std=c++03
j/j8/j12 - Default System Java/Java8/Java12
ace7 - uses ace7tao3 rather than ace6tao2
xer0 - disables xerces
qt - enables --qt
ws - enables --wireshark
js0 - enables --no-rapidjson
Feature Mask¶
This is a mask in an attempt to keep names shorter.
FM-08
--no-built-in-topics
--no-content-subscription
--no-ownership-profile
--no-object-model-profile
--no-persistence-profile
FM-1f
--no-built-in-topics
FM-2c
--no-content-subscription
--no-object-model-profile
--no-persistence-profile
FM-2f
--no-content-subscription
FM-37
--no-content-filtered-topics
build_and_test.yml Workflow¶
Our main workflow which dictates our GitHub Actions run is .github/workflows/build_and_test.yml. It defines jobs, which are the tasks that are run by the CI.
Triggering the Build And Test Workflow¶
There are a couple of ways in which a run of the build and test workflow can be started.
Any pull request targeting master will automatically run the OpenDDS workflows. This form of workflow run will simulate a merge between the branch and master.
Push events on branches prefixed gh_wf_ will trigger workflow runs on the fork in which the branch resides.
These fork runs of GitHub Actions can be viewed in the “Actions” tab.
Runs of the workflow on forks will not simulate a merge between the branch and master.
Job Types¶
There are a number of job types that are contained in the file build_and_test.yml. Where possible, a configuration will contain 3 jobs. The first job that is run is ACE_TAO_. This will create an artifact which is used later by the OpenDDS build. The second job is build_, which uses the previous ACE_TAO_ job to configure and build OpenDDS. This job will then export an artifact to be used in the third step. The third step is the test_ job, which runs the appropriate tests for the associated OpenDDS configuration.
Certain builds do not follow this 3 step model. Safety Profile builds are done in one step due to cross-compile issues. Static and Release builds have a large footprint and therefore cannot fit the entire test suite onto a GitHub Actions runner. As a result, they only build and run a subset of the tests in their final jobs, but then have multiple final jobs to increase test coverage. These jobs are prefixed by:
compiler_ which runs the tests/DCPS/Compiler tests.
unit_ which runs the unit tests located in tests/DCPS/UnitTests and tests/unit-tests.
messenger_ which runs the tests in tests/DCPS/Messenger and tests/DCPS/C++11/Messenger.
To shorten the runtime of the continuous integration, some other builds will not run the test suite.
All builds with safety profile disabled and ownership profile enabled will run the tests/cmake tests.
Test runs which only contain CMake tests are prefixed by cmake_.
.lst Files¶
.lst files contain a list of tests with configuration options that will turn tests on or off.
The test_ jobs pass in tests/dcps_tests.lst.
Static and Release builds instead use tests/static_ci_tests.lst.
This separation of .lst files is due to how excluding all but a few tests in the dcps_tests.lst would require adding a new config option to every test we didn’t want to run.
There is a separate security test list, tests/security/security_tests.lst, which governs the security tests which are run when --security is passed to auto_run_tests.pl.
The last list file used by build_and_test.yml is tools/modeling/tests/modeling_tests.lst, which is included by passing --modeling to auto_run_tests.pl.
To disable a test in GitHub Actions, !GH_ACTIONS must be added next to the test in the .lst file.
These tests will not run when -Config GH_ACTIONS is passed alongside the lst file.
There are similar test blockers which only block for specific GitHub Actions configurations: !GHA_OPENDDS_SAFETY_PROFILE blocks Safety Profile builds from running a test.
These blocks are necessary because certain tests cannot properly run on GitHub Actions due to how the runners are configured.
See also
- Running Tests
For how auto_run_tests.pl works in general.
Test Results¶
The tests are run using autobuild which creates a number of output files that are turned into a GitHub artifact. This artifact is processed by the “Check Test Results” workflow which modifies the files with detailed summaries of the test runs. After all of the Check Test Results jobs are complete, the test results will be posted in either the build_and_test or lint workflows. It is random which one of the workflows the results will appear in, so be sure to check both. This is due to a known problem with the GitHub API.
Artifacts¶
Artifacts from the continuous integration run can be downloaded by clicking details on one of the Build & Test runs. Once all jobs are completed, a dropdown will appear on the bar next to “Re-run jobs”, called “Artifacts” which lists each artifact that can be downloaded.
Alternatively, clicking the “Summary” button at the top of the list of jobs will list all the available artifacts at the bottom of the page.
Using Artifacts to Replicate Builds¶
You can download the ACE_TAO_ and build_ artifacts then use them for a local build, so long as your operating system is the same as the one on the runner.
git clone the ACE_TAO branch which is targeted by the build. This is usually going to be ace6tao2.
git clone --recursive the OpenDDS branch on which the CI was run.
Merge OpenDDS master into your cloned branch.
Run tar xvfJ from inside the cloned ACE_TAO, targeting the ACE_TAO_*.tar.xz file.
Run tar xvfJ from inside the cloned OpenDDS, targeting the build_*.tar.xz file.
Adjust the setenv.sh located inside OpenDDS to match the new locations for your ACE_TAO and OpenDDS. The word “runner” should not appear within the setenv.sh once you are finished.
You should now have a working duplicate of the build that was run on GitHub Actions. This can be used for debugging as a way to quickly set up a problematic build.
Using Artifacts to View More Test Information¶
Test failures which are recorded on GitHub only contain a brief capture of output surrounding a failure.
This is useful for some tests, but it can often be helpful to view more of a test run.
This can be done by downloading the artifact for a test step you are viewing.
This test step artifact contains a number of files including output.log_Full.html.
This is the full log of all output from all test runs done for the corresponding job.
It should be opened in either a text editor or Firefox, as Chrome will have issues due to the length of the file.
Caching¶
The OpenDDS workflows create .tar.xz archives of certain build artifacts which can then be uploaded and shared between jobs (and the user) as part of GitHub Actions’ “artifact” API. A cache key comparison made using the relevant git commit SHA will determine whether to rebuild the artifact, or to use the cached artifact.
Running Tests¶
Main Test Suite¶
Building¶
Tests are not built by default; --tests must be passed to the configure script.
This will build all the tests.
There are a few ways to only have specific tests built:
If using Make, specify the targets instead of leaving it default to the all target.
Run MPC on the test directory and build separately. Make sure to also build the test’s dependencies.
Create a custom workspace with the tests and pass it to the configure script using the --workspace option. Also make sure to include the test’s dependencies.
Running¶
Note
Make sure ACE_ROOT and DDS_ROOT are set, which can be done by running source setenv.sh on Linux and macOS or call setenv.cmd on Windows.
OpenDDS’ main suite of tests is run by the tests/auto_run_tests.pl Perl script that reads lists of tests from files and selectively runs them based on how the script has been configured. By default it configures itself, but it can be configured manually.
For Unixes (Linux, macOS, BSDs, etc)¶
Run this in DDS_ROOT:
./bin/auto_run_tests.pl
For Windows¶
Run this in DDS_ROOT:
bin\auto_run_tests.pl
If OpenDDS was built in Release mode add -ExeSubDir Release.
If it was built as static libraries add -ExeSubDir Static_Debug or -ExeSubDir Static_Release.
Manual Configuration¶
Manual configuration is done by passing -Config, -Exclude, and test list file arguments to the script.
To manually configure what tests to run:
See the --list-configs or --show-configs options to see the existing configurations used by the tests.
See the test list files for the tests themselves:
tests/dcps_tests.lst
This is included by default. Use --no-dcps to exclude this list.
tests/security/security_tests.lst
Use --security to include this list.
java/tests/dcps_java_tests.lst
Use --java to include this list.
tools/modeling/tests/modeling_tests.lst
Use --modeling to include this list.
In a test list file each of the space-delimited words after the colon determines when the test is run.
Passing -Config RTPS will run tests that have RTPS and leave out tests with !RTPS.
Passing -Exclude RTPS will exclude all tests that have RTPS in the entry. This option matches using RegEx, so a test with SUPER_DUPER_RTPS will also be excluded. It also ignores inverse entries, so it will not exclude a test with !SUPER_DUPER_RTPS.
Assuming they were built, CMake tests are run if --cmake is passed. This uses CTest, which is a system that is separate from the one previously described.
See --help for all the available options.
Bench 2 Performance & Scalability Test Framework¶
Motivation¶
The Bench 2 framework grew out of a desire to be able to test the performance and scalability of OpenDDS in large and heterogeneous deployments, along with the ability to quickly develop and deploy new test scenarios across a potentially-unspecified number of machines.
Overview¶
The resulting design of the Bench 2 framework depends on three primary test applications: worker processes, one or more node controllers, and a test controller.

Bench 2 Overview¶
Worker¶
The worker application, true to its name, performs most of the work associated with any given test scenario.
It creates and exercises the DDS entities specified in its configuration file and gathers performance statistics related to discovery, data integrity, and performance.
The worker’s configuration file contains regions that may be used to represent OpenDDS’s configuration sections as well as individual DDS entities and the QoS policies to be used for their creation.
In addition, the worker configuration contains test timing values and descriptions of test actions to be taken (e.g. publishing and forwarding data read from subscriptions).
Upon test completion, the worker can write out a report file containing the performance statistics gathered during its run.
Node Controller¶
Each machine in the test environment will run (at least) one node_controller application which acts as a daemon and, upon request from a test_controller, will spawn one or more worker processes.
Each request will contain the configuration to use for the spawned workers and, upon successful exit, the workers’ report files will be read and sent back to the test_controller which requested them.
Failed worker processes (aborts, crashes) will be noted and have their output logs sent back to the requesting test_controller.
In addition to collecting worker reports, the node controller also gathers general system resource statistics during test execution (CPU and memory utilization) to be returned to the test controller at the end of the test.
Test Controller¶
Each execution of the test framework will use a test_controller to read in a scenario configuration file (an annotated collection of worker configuration file names) before listening for available node_controllers and parceling out the scenario’s worker configurations to the individual node_controllers.
The test_controller may also optionally adjust certain worker configuration values for the sake of the test (assigning a unique DDS partition to avoid collisions, coordinating worker test times, etc.).
After sending the allocated scenario to each of the available node controllers, the test controller waits to receive reports from each of the node controllers.
After receiving all the reports, the test_controller coalesces the performance statistics from each of the workers and presents the final results to the user (both on screen & in a results file).
Building Bench 2¶
Required Features¶
The primary requirements for building OpenDDS such that Bench 2 also gets built:
C++11 Support (--std=c++11)
RapidJSON present and enabled (--rapidjson)
Tests are being built (--tests)
Required Targets¶
If these elements are present, you can either build the entire test suite (slow) or use these 3 targets (faster), which also cover all the required libraries:
Bench_Worker
node_controller
test_controller
Running Bench 2¶
Environment Variables¶
To run Bench 2 executables with dynamically linked or shared libraries, you’ll want to make sure the Bench 2 libraries are in your library path.
Linux/Unix¶
Add ${DDS_ROOT}/performance-tests/bench/lib to your LD_LIBRARY_PATH
Windows¶
Add %DDS_ROOT%\performance-tests\bench\lib to your PATH
Assuming DDS_ROOT is already set on your system (from the configure script or from sourcing setenv.sh), there are convenience scripts to do this for you in the performance-tests/bench directory (set_bench_env[.sh/.cmd]).
Running a Bench 2 CI Test¶
In the event that you’re debugging a failing Bench 2 CI test, you can use performance-tests/bench/run_test.pl to execute the full scenario without first setting the environment as listed above.
This is because the Perl script sets it automatically before launching a single node_controller in the background and executing the test controller with the requested scenario.
The Perl script can be inspected in order to determine which scenarios have been made available in this way.
It can be modified to run other scenarios against a single node controller with relative ease.
Running Scenarios Manually¶
Assuming you already have scenario and worker configuration files defined, the general approach to running a scenario is to start one or more node_controllers (across one or more hosts) and then execute the test_controller with the desired scenario configuration.
Configuration Files¶
As a rule, Bench 2 uses JSON configuration files that directly map onto the C++ Platform Specific Model (PSM) of the IDL found in performance-tests/bench/idl and the IDL used in the DDS specification. This allows the test applications to easily convert between configuration files and C++ structures useful for the configuration of DDS entities.
Scenario Configuration Files¶
Scenario configuration files are used by the test controller to determine the number and type (configuration) of worker processes required for a particular test scenario. In addition, the scenario file may specify certain sets of workers to be run on the same node by placing them together in a node “prototype” (see below).
IDL Definition¶
struct WorkerPrototype {
// Filename of the JSON Serialized Bench::WorkerConfig
string config;
// Number of workers to spawn using this prototype (Must be >=1)
unsigned long count;
};
typedef sequence<WorkerPrototype> WorkerPrototypes;
struct NodePrototype {
// Assign to a node controller with a name that matches this wildcard
string name_wildcard;
WorkerPrototypes workers;
// Number of Nodes to spawn using this prototype (Must be >=1)
unsigned long count;
// This NodePrototype must have a Node to itself
boolean exclusive;
};
typedef sequence<NodePrototype> NodePrototypes;
// This is the root type of the scenario configuration file
struct ScenarioPrototype {
string name;
string desc;
// Workers that must be deployed in sets
NodePrototypes nodes;
// Workers that can be assigned to any node
WorkerPrototypes any_node;
/*
* Number of seconds to wait for the scenario to end.
* 0 means never timeout.
*/
unsigned long timeout;
};
Annotated Example¶
{
"name": "An Example",
"desc": "This shows the structure of the scenario configuration",
"nodes": [
{
"name_wildcard": "example_nc_*",
"workers": [
{
"config": "daemon.json",
"count": 1
},
{
"config": "spawn.json",
"count": 1
}
],
"count": 2,
"exclusive": false
}
],
"any_node": [
{
"config": "master.json",
"count": 1
}
],
"timeout": 120
}
This scenario configuration will launch 5 worker processes.
It will launch 2 pairs of “daemon” / “spawn” processes, with each member of each pair being kept together on the same node (i.e. same node_controller).
The pairs themselves may be split across nodes, but each “daemon” will be with at least one “spawn” and vice-versa.
They may also wind up all together on the same node, depending on the number of available nodes.
And finally, one “master” process will be started wherever there is room available.
The “name_wildcard” field is used to filter the node_controller instances that can be used to host the nodes in the current node config - only the node_controller instances with names matching the wildcard can be used.
If the “name_wildcard” is omitted or its value is empty, any node_controller can be used.
If node “prototypes” are marked exclusive, the test controller will attempt to allocate them exclusively to their own node controllers.
If not enough node controllers exist to honor all the exclusive nodes, the test controller will fail with an error message.
Worker Configuration Files¶
QoS Masking¶
In a typical DDS application, default QoS objects are often supplied by the entity factory so that the application developer can make required changes locally and not impact larger system configuration choices. As such, the QoS objects found within the JSON configuration file should be treated as a “delta” from the default configuration object of a parent factory class. So while the JSON “qos” element names will directly match the relevant IDL element names, there will also be an additional “qos_mask” element that lives alongside the “qos” element in order to specify which elements apply. For each QoS attribute “attribute” within the “qos” object, there will also be a boolean “has_attribute” within the “qos_mask” which informs the builder library that this attribute should indeed be applied against the default QoS object supplied by the parent factory class before the entity is created.
IDL Definition¶
struct TimeStamp {
long sec;
unsigned long nsec;
};
typedef sequence<string> StringSeq;
typedef sequence<double> DoubleSeq;
enum PropertyValueKind { PVK_TIME, PVK_STRING, PVK_STRING_SEQ, PVK_STRING_SEQ_SEQ, PVK_DOUBLE, PVK_DOUBLE_SEQ, PVK_ULL };
union PropertyValue switch (PropertyValueKind) {
case PVK_TIME:
TimeStamp time_prop;
case PVK_STRING:
string string_prop;
case PVK_STRING_SEQ:
StringSeq string_seq_prop;
case PVK_STRING_SEQ_SEQ:
StringSeqSeq string_seq_seq_prop;
case PVK_DOUBLE:
double double_prop;
case PVK_DOUBLE_SEQ:
DoubleSeq double_seq_prop;
case PVK_ULL:
unsigned long long ull_prop;
};
struct Property {
string name;
PropertyValue value;
};
typedef sequence<Property> PropertySeq;
struct ConfigProperty {
string name;
string value;
};
typedef sequence<ConfigProperty> ConfigPropertySeq;
// ConfigSection
struct ConfigSection {
string name;
ConfigPropertySeq properties;
};
typedef sequence<ConfigSection> ConfigSectionSeq;
// Writer
struct DataWriterConfig {
string name;
string topic_name;
string listener_type_name;
unsigned long listener_status_mask;
string transport_config_name;
DDS::DataWriterQos qos;
DataWriterQosMask qos_mask;
};
typedef sequence<DataWriterConfig> DataWriterConfigSeq;
// Reader
struct DataReaderConfig {
string name;
string topic_name;
string listener_type_name;
unsigned long listener_status_mask;
PropertySeq listener_properties;
string transport_config_name;
DDS::DataReaderQos qos;
DataReaderQosMask qos_mask;
StringSeq tags;
};
typedef sequence<DataReaderConfig> DataReaderConfigSeq;
// Publisher
struct PublisherConfig {
string name;
string listener_type_name;
unsigned long listener_status_mask;
string transport_config_name;
DDS::PublisherQos qos;
PublisherQosMask qos_mask;
DataWriterConfigSeq datawriters;
};
typedef sequence<PublisherConfig> PublisherConfigSeq;
// Subscription
struct SubscriberConfig {
string name;
string listener_type_name;
unsigned long listener_status_mask;
string transport_config_name;
DDS::SubscriberQos qos;
SubscriberQosMask qos_mask;
DataReaderConfigSeq datareaders;
};
typedef sequence<SubscriberConfig> SubscriberConfigSeq;
// Topic
struct ContentFilteredTopic {
string cft_name;
string cft_expression;
DDS::StringSeq cft_parameters;
};
typedef sequence<ContentFilteredTopic> ContentFilteredTopicSeq;
struct TopicConfig {
string name;
string type_name;
DDS::TopicQos qos;
TopicQosMask qos_mask;
string listener_type_name;
unsigned long listener_status_mask;
string transport_config_name;
ContentFilteredTopicSeq content_filtered_topics;
};
typedef sequence<TopicConfig> TopicConfigSeq;
// Participant
struct ParticipantConfig {
string name;
unsigned short domain;
DDS::DomainParticipantQos qos;
DomainParticipantQosMask qos_mask;
string listener_type_name;
unsigned long listener_status_mask;
string transport_config_name;
StringSeq type_names;
TopicConfigSeq topics;
PublisherConfigSeq publishers;
SubscriberConfigSeq subscribers;
};
typedef sequence<ParticipantConfig> ParticipantConfigSeq;
// TransportInstance
struct TransportInstanceConfig {
string name;
string type;
unsigned short domain;
};
typedef sequence<TransportInstanceConfig> TransportInstanceConfigSeq;
// Discovery
struct DiscoveryConfig {
string name;
string type; // "rtps" or "repo"
string ior; // "repo" URI (e.g. "file://repo.ior")
unsigned short domain;
};
typedef sequence<DiscoveryConfig> DiscoveryConfigSeq;
// Process
struct ProcessConfig {
ConfigSectionSeq config_sections;
DiscoveryConfigSeq discoveries;
TransportInstanceConfigSeq instances;
ParticipantConfigSeq participants;
};
// Worker
// This is the root structure of the worker configuration
// For the sake of readability, module names have been omitted
// All structures other than this one belong to the Builder module
struct WorkerConfig {
TimeStamp create_time;
TimeStamp enable_time;
TimeStamp start_time;
TimeStamp stop_time;
TimeStamp destruction_time;
PropertySeq properties;
ProcessConfig process;
ActionConfigSeq actions;
ActionReportSeq action_reports;
};
Annotated Example¶
{
"create_time": { "sec": -1, "nsec": 0 },
Since the timestamp is negative, this treats the time as relative and waits one second.
"enable_time": { "sec": -1, "nsec": 0 },
"start_time": { "sec": 0, "nsec": 0 },
Since the time is zero and thus neither absolute nor relative, this treats the time as indefinite and waits for keyboard input from the user.
"stop_time": { "sec": -10, "nsec": 0 },
Again, a relative timestamp. This time, it waits for 10 seconds for the test actions to run before stopping the test.
"destruction_time": { "sec": -1, "nsec": 0 },
"process": {
This is the primary section where all the DDS entities are described, along with configuration of OpenDDS.
"config_sections": [
The elements of this section are functionally identical to the sections of an OpenDDS .ini file with the same name.
Each config section is created programmatically within the worker process using the name provided and made available to the OpenDDS ServiceParticipant during entity creation.
The example here sets the value of both the DCPSSecurity and DCPSDebugLevel keys to 0 within the [common] section of the configuration.
{ "name": "common",
"properties": [
{ "name": "DCPSSecurity",
"value": "0"
},
{ "name": "DCPSDebugLevel",
"value": "0"
}
]
}
],
"discoveries": [
Even if there is no configuration section for it (see above), this allows us to create unique discovery instances per domain.
If both are specified, this will find and use / modify the one specified in the configuration section above.
Valid types are "rtps" and "repo" (requires additional "ior" element with valid URL).
{ "name": "bench_test_rtps",
"type": "rtps",
"domain": 7
}
],
"instances": [
Even if there is no configuration section for it (see above), this allows us to create unique transport instances.
If both are specified, this will find and use / modify the one specified in the configuration section above. Valid types are rtps_udp, tcp, udp, ip_multicast, shmem.
{ "name": "rtps_instance_01",
"type": "rtps_udp",
"domain": 7
}
],
"participants": [
The list of participants to create.
{ "name": "participant_01",
"domain": 7,
"transport_config_name": "rtps_instance_01",
The transport config that gets bound to this participant
"qos": { "entity_factory": { "autoenable_created_entities": false } },
"qos_mask": { "entity_factory": { "has_autoenable_created_entities": false } },
An example of QoS masking.
Note that in this example, the boolean flag is false, so the QoS mask is not actually applied.
In this case, both lines here were added to make switching back and forth between autoenable_created_entities easier (simply change the value of the bottom element "has_autoenable_created_entities" to "true").
"topics": [
List of topics to register for this participant
{ "name": "topic_01",
"type_name": "Bench::Data"
Note the type name.
"Bench::Data" is currently the only topic type supported by the Bench 2 framework.
That said, it contains a variably sized array of octets, allowing a configurable range of data payload sizes (see write_action below).
"content_filtered_topics": [
{
"cft_name": "cft_1",
"cft_expression": "filter_class > %0",
"cft_parameters": ["2"]
}
]
List of content filtered topics.
Note "cft_name".
Its value can be used in DataReader "topic_name" to use the content filter.
}
],
"subscribers": [
List of subscribers
{ "name": "subscriber_01",
"datareaders": [
List of DataReaders
{ "name": "datareader_01",
"topic_name": "topic_01",
"listener_type_name": "bench_drl",
"listener_status_mask": 4294967295,
Note the listener type and status mask.
"bench_drl" is a listener type registered by the Bench Worker application that does most of the heavy lifting in terms of stats calculation and reporting.
The mask is a fully-enabled bitmask for all listener events (i.e. 2^32 - 1).
"qos": { "reliability": { "kind": "RELIABLE_RELIABILITY_QOS" } },
"qos_mask": { "reliability": { "has_kind": true } },
DataReaders default to best effort QoS, so here we are setting the reader to reliable QoS and flagging the qos_mask appropriately in order to get a reliable datareader.
"tags": [ "my_topic", "reliable_transport" ]
The config can specify a list of tags associated with each data reader.
The statistics for each tag are computed in addition to the overall statistics and can be printed out at the end of the run by the test_controller.
}
]
}
],
"publishers": [
List of publishers within this participant
{ "name": "publisher_01",
"datawriters": [
List of DataWriters within this publisher
{ "name": "datawriter_01",
Note that each DDS entity is given a process-entity-unique name, which can be used below to locate / identify this entity.
"topic_name": "topic_01",
"listener_type_name": "bench_dwl",
"listener_status_mask": 4294967295
}
]
}
]
}
]
},
"actions": [
A list of worker ‘actions’ to start once the test ‘start’ period begins.
{
"name": "write_action_01",
"type": "write",
Current valid types are "write", "forward", and "set_cft_parameters".
"writers": [ "datawriter_01" ],
Note the datawriter name defined above is passed into the action’s writer list. This is used to locate the writer within the process.
"params": [
{ "name": "data_buffer_bytes",
The size of the octet array within the Bench::Data message.
Note, actual messages will be slightly larger than this value.
"value": { "_d": "PVK_ULL", "ull_prop": 512 }
},
{ "name": "write_frequency",
The frequency with which the write action attempts to write a message. In this case, twice a second.
"value": { "_d": "PVK_DOUBLE", "double_prop": 2.0 }
},
{ "name": "filter_class_start_value",
"value": { "_d": "PVK_ULL", "ull_prop": 0 }
},
{ "name": "filter_class_stop_value",
"value": { "_d": "PVK_ULL", "ull_prop": 0 }
},
{ "name": "filter_class_increment",
"value": { "_d": "PVK_ULL", "ull_prop": 0 }
}
Value range and increment for the "filter_class" data variable, used when writing data.
This variable is an unsigned integer intended to be used for content filtered topic “set_cft_parameters” actions.
]
},
{ "name": "cft_action_01",
"type": "set_cft_parameters",
"params": [
{ "name": "content_filtered_topic_name",
"value": { "_d": "PVK_STRING", "string_prop": "cft_1" }
},
{ "name": "max_count",
"value": { "_d": "PVK_ULL", "ull_prop": 3 }
},
Maximum count of “Set” actions to be taken.
{ "name": "param_count",
"value": { "_d": "PVK_ULL", "ull_prop": 1 }
},
Number of parameters to be set
{ "name": "set_frequency",
"value": { "_d": "PVK_DOUBLE", "double_prop": 2.0 }
},
The frequency of the set action, per second
{ "name": "acceptable_param_values",
"value": { "_d": "PVK_STRING_SEQ_SEQ", "string_seq_seq_prop": [ ["1", "2", "3"] ] }
},
Lists of allowed values to set to, for each parameter. The worker will iterate through the list sequentially unless the "random_order" flag (below) is specified
{ "name": "random_order",
"value": { "_d": "PVK_ULL", "ull_prop": 1 }
}
]
}
]
}
Detailed Application Descriptions¶
test_controller¶
As mentioned above, the test_controller application is the application responsible for running test scenarios and, as such, will probably wind up being the application most frequently run directly by testers.
The test_controller needs network visibility to at least one node_controller configured to run on the same domain. It expects, as arguments, the path to a directory containing config files (both scenario & worker) and the name of a scenario configuration file to run (without the .json extension).
For historical reasons, the config directory is often simply called example. The test_controller application also supports a number of optional configuration parameters, some of which are described in the section below.
Usage¶
test_controller CONFIG_PATH SCENARIO_NAME [OPTIONS]
test_controller --help|-h
This is a subset of the options.
Use the --help option to see all the options.
- CONFIG_PATH¶
Path to the directory of the test configurations and artifacts
- SCENARIO_NAME¶
Name of the scenario file in the test context without the .json extension.
- --domain N¶
The DDS Domain to use. The default is 89.
- --wait-for-nodes N¶
The number of seconds to wait for nodes before broadcasting the scenario to them. The default is 10 seconds.
- --timeout N¶
The number of seconds to wait for a scenario to complete. Overrides the value defined in the scenario. If N is 0, there is no timeout.
- --override-create-time N¶
Overwrite individual worker configs to create their DDS entities N seconds from now (absolute time reference)
- --override-start-time N¶
Overwrite individual worker configs to start their test actions (writes & forwards) N seconds from now (absolute time reference)
- --tag TAG¶
Specify a tag for which the performance statistics will be printed out (and saved to a results file). Multiple instances of this option can be specified, each for a single tag.
- --json-result-id ID¶
Specify a name to store the raw JSON report under. By default, this is not enabled. These results will contain the full raw Bench::TestController report, including all node controller and worker reports (and DDS entity reports).
node_controller¶
The node controller application is best thought of as a daemon, though the application can be run both in a long-running daemon mode and also a one-shot mode more appropriate for testing.
The daemon-exit-on-error mode additionally has the ability to exit the process every time an error is encountered, which is useful for restarting the application when errors are detected, if run as a part of an OS system environment (systemd, supervisord, etc).
Usage¶
node_controller [OPTIONS] one-shot|daemon|daemon-exit-on-error
- one-shot¶
Run a single batch of worker requests (configs > processes > reports) and report the results before exiting. Useful for one-off and local testing.
- daemon¶
Act as a long-running process that continually runs batches of worker requests, reporting the results. Attempts to recover from errors.
- daemon-exit-on-error¶
Act as a long-running process that continually runs batches of worker requests, reporting the results. Does not attempt to recover from errors.
- --domain N¶
The DDS Domain to use. The default is 89.
- --name STRING¶
Human friendly name for the node. Will be used by the test controller for referring to the node. During allocation of node controllers, the name is used to match against the “name_wildcard” fields of the node configs. Only node controllers whose names match the “name_wildcard” of a given node config can be allocated to that node config. Multiple nodes could have the same name.
worker¶
The worker application is meant to mimic the behavior of a single arbitrary OpenDDS test application. It uses the Bench builder library along with its JSON configuration file to first configure OpenDDS (including discovery & transports) and then create all required DDS entities using any desired DDS QoS attributes. Additionally, it allows the user to configure several test phase timing parameters, using either absolute or relative times:
DDS entity creation (create_time)
DDS entity “enabling” (enable_time) (only relevant if autoenable_created_entities QoS setting is false)
test actions start time (start_time)
test actions stop time (stop_time)
DDS entity destruction (destruction_time)
Finally, it also allows for the configuration and execution of test “actions” which take place between the “start” and “stop” times indicated in configuration. These may make use of the created DDS entities in order to simulate application behavior.
At the time of this writing, the three actions are “write”, which will write to a datawriter using data of a configurable size and frequency (and maximum count), “forward”, which will pass along the data read from one datareader to a datawriter, allowing for more complex test behaviors (including round-trip latency & jitter calculations), and "set_cft_parameters", which will change the content filtered topic parameter values dynamically.
In addition to reading a JSON configuration file, the worker is capable of writing a JSON report file that contains various test statistics gathered from listeners attached to the created DDS entities.
This report is read by the node_controller after the worker process ends and is then sent back to the waiting test_controller.
Usage¶
worker [OPTIONS] CONFIG_FILE
- --log LOG_FILE¶
The log file path. Will log to stdout if not passed.
- --report REPORT_FILE¶
The report file path.