Python framework tests

The python test framework is built on top of the ChipDeviceCtrl.py python controller API and the Mobly test framework. Python tests are interaction tests and can be used for certification testing and/or integration testing in CI.

Python tests are located in src/python_testing

Resources for getting started

Writing Python tests

  • Defining arguments in the test script
    • Define the test arguments at the top of the test script to streamline configuration and execution. This block lists the parameters and values that tell the test runner how to execute the test.
  • All test classes inherit from MatterBaseTest in matter_testing.py
    • Support for commissioning using the python controller
    • Default controller (self.default_controller) of type ChipDeviceCtrl
    • MatterBaseTest inherits from the Mobly BaseTestClass
  • Test methods (named with the test_ prefix) are all run automatically
    • To run in the test harness, the test method name must be test_TC_PICSCODE_#_#
      • More information about integration with the test harness can be found in Test Harness helpers section
    • Any tests that use async method (read / write / commands) should be decorated with the @async_test_body decorator
  • Use ChipDeviceCtrl to interact with the DUT
    • Controller API is in ChipDeviceCtrl.py (see API doc in file)
    • Some support methods in matter_testing.py
  • Use Mobly assertions for failing tests
  • self.step() along with a steps_* method to mark test plan steps for cert tests

A simple test

# See https://github.com/project-chip/connectedhomeip/blob/master/docs/testing/python.md#defining-the-ci-test-arguments
# for details about the block below.
#
# === BEGIN CI TEST ARGUMENTS ===
# test-runner-runs:
#   run1:
#     app: ${ALL_CLUSTERS_APP}
#     app-args: --discriminator 1234 --KVS kvs1 --trace-to json:${TRACE_APP}.json
#     script-args: >
#       --storage-path admin_storage.json
#       --commissioning-method on-network
#       --discriminator 1234
#       --passcode 20202021
#       --trace-to json:${TRACE_TEST_JSON}.json
#       --trace-to perfetto:${TRACE_TEST_PERFETTO}.perfetto
#     factory-reset: true
#     quiet: true
# === END CI TEST ARGUMENTS ===

class TC_MYTEST_1_1(MatterBaseTest):

    @async_test_body
    async def test_TC_MYTEST_1_1(self):

        vendor_name = await self.read_single_attribute_check_success(
            dev_ctrl=self.default_controller,  # defaults to self.default_controller
            node_id=self.dut_node_id,  # defaults to self.dut_node_id
            cluster=Clusters.BasicInformation,
            attribute=Clusters.BasicInformation.Attributes.VendorName,
            endpoint=0,  # defaults to 0
        )
        asserts.assert_equal(vendor_name, "Test vendor name", "Unexpected vendor name")

if __name__ == "__main__":
    default_matter_test_main()

In this test, asserts.assert_equal is used to fail the test on equality assertion failure (throws an exception).

Because the test uses the async method read_single_attribute_check_success, it is decorated with the @async_test_body decorator.

The default_matter_test_main() function is used to run the test on the command line. These two lines should appear verbatim at the bottom of every python test file.

The structured comments above the class definition are used to set up the CI for the tests. Please see Running tests in CI.

Cluster Codegen

Common import used in test files: import matter.clusters as Clusters

Each cluster is defined in the Clusters.<ClusterName> namespace and always contains:

  • id
  • descriptor

Each Clusters.<ClusterName> will include the appropriate sub-classes (if defined for the cluster):

  • Enums
  • Bitmaps
  • Structs
  • Attributes
  • Commands
  • Events

Attributes

Attributes derive from ClusterAttributeDescriptor

Each Clusters.<ClusterName>.Attributes.<AttributeName> class has:

  • cluster_id
  • attribute_id
  • attribute_type
  • value

Example:

  • class - Clusters.OnOff.Attributes.OnTime
    • Used for Read commands
  • instance - Clusters.OnOff.Attributes.OnTime(5)
    • Sets the value to 5
    • Pass the instance to Write method to write the value

Commands

Commands derive from ClusterCommand.

Each Clusters.<ClusterName>.Commands.<CommandName> class has:

  • cluster_id
  • command_id
  • is_client
  • response_type (None for status response)
  • descriptor
  • data members (if required)

Example:

  • Clusters.OnOff.Commands.OnWithTimedOff(onOffControl=0, onTime=5, offWaitTime=8)
  • Clusters.OnOff.Commands.OnWithTimedOff()
    • Command with no fields

Events

Events derive from ClusterEvent.

Each Clusters.<ClusterName>.Events.<EventName> class has:

  • cluster_id
  • event_id
  • descriptor
  • Other data members if required

Example:

  • Clusters.AccessControl.Events.AccessControlEntryChanged.adminNodeID

Enums

Enums derive from MatterIntEnum.

Each Clusters.<ClusterName>.Enums.<EnumName> has:

  • k<value> constants
  • kUnknownEnumValue (used for testing, do not transmit)

Example:

  • Clusters.AccessControl.Enums.AccessControlEntryPrivilegeEnum.kAdminister

Bitmaps

Bitmaps derive from IntFlag

Each Clusters.<ClusterName>.Bitmaps.<BitmapName> has:

  • k<value> constants

Special class:

  • class Feature(IntFlag) - contains the feature map bitmaps

Example:

  • Clusters.LaundryWasherControls.Bitmaps.Feature.kSpin

Structs

Structs derive from ClusterObject.

Each Clusters.<ClusterName>.Structs.<StructName> has:

  • A "descriptor"
  • Data members

Example:

Clusters.BasicInformation.Structs.ProductAppearanceStruct(
   finish=Clusters.BasicInformation.Enums.ProductFinishEnum.kFabric,
   primaryColor=Clusters.BasicInformation.Enums.ColorEnum.kBlack)

Accessing Clusters and Cluster Elements by ID

ClusterObjects.py has a set of objects that map ID to the code generated object.

matter.clusters.ClusterObjects.ALL_CLUSTERS

  • dict[int, Cluster] - maps cluster ID to Cluster class
    • cluster = matter.clusters.ClusterObjects.ALL_CLUSTERS[cluster_id]

matter.clusters.ClusterObjects.ALL_ATTRIBUTES

  • dict[int, dict[int, ClusterAttributeDescriptor]] - maps cluster ID to a dict of attribute ID to attribute class
    • attr = matter.clusters.ClusterObjects.ALL_ATTRIBUTES[cluster_id][attribute_id]

matter.clusters.ClusterObjects.ALL_ACCEPTED_COMMANDS/ALL_GENERATED_COMMANDS

  • dict[int, dict[int, ClusterCommand]]
  • cmd = matter.clusters.ClusterObjects.ALL_ACCEPTED_COMMANDS[cluster_id][cmd_id]

ChipDeviceCtrl API

The ChipDeviceCtrl API is implemented in ChipDeviceCtrl.py.

The ChipDeviceCtrl implements a python-based controller that can be used to commission and control devices. The API is documented in the ChipDeviceCtrl API documentation.

The API doc gives full descriptions of the APIs being used. The most commonly used methods are linked below.

Read

  • Read both attributes and events
  • Can handle wildcard or concrete path

ReadAttribute

  • Convenience wrapper for Read for attributes

Examples: Wildcard read (all clusters, all endpoints):

await dev_ctrl.ReadAttribute(node_id, [()])

Wildcard read (single endpoint 0)

await dev_ctrl.ReadAttribute(node_id, [(0)])

Wildcard read (single cluster on endpoint 1)

await dev_ctrl.ReadAttribute(node_id, [(1, Clusters.OnOff)])

Single attribute

await dev_ctrl.ReadAttribute(node_id, [(1, Clusters.OnOff.Attributes.OnTime)])

Multi-path

await dev_ctrl.ReadAttribute(node_id, [(1, Clusters.OnOff.Attributes.OnTime),(1, Clusters.OnOff.Attributes.OnOff)])

ReadEvent

  • Convenience wrapper for Read
  • Similar to ReadAttribute, but the tuple includes urgency as the last argument

Example:

urgent = 1

await dev_ctrl.ReadEvent(node_id, [(1,
Clusters.TimeSynchronization.Events.MissingTrustedTimeSource, urgent)])

Subscriptions

Subscriptions are handled in the Read / ReadAttribute / ReadEvent APIs. To initiate a subscription, set the reportInterval tuple argument to specify the floor and ceiling (in seconds). The keepSubscriptions and autoResubscribe arguments also apply to subscriptions.

Subscriptions return a ClusterAttribute.SubscriptionTransaction, which can be used to set callbacks. The object is returned after the priming data read is complete, and the values there are used to populate the cache. The attribute callbacks are called on update.

  • SetAttributeUpdateCallback
    • Callable[[TypedAttributePath, SubscriptionTransaction], None]
  • SetEventUpdateCallback
    • Callable[[EventReadResult, SubscriptionTransaction], None]
  • await changes in the main loop using a trigger mechanism from the callback.
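The trigger mechanism in the last bullet can be sketched with a plain queue, independent of the SDK: the callback pushes each update into a queue, and the main loop blocks on that queue. This is a minimal stand-alone sketch; the path string below is a hypothetical stand-in for a real TypedAttributePath.

```python
import queue

# Sketch of the callback-to-main-loop trigger pattern (no SDK required).
# The real attribute callback receives (TypedAttributePath, SubscriptionTransaction).
updates = queue.Queue()

def attribute_update_callback(path, transaction):
    # Called by the subscription machinery on each report; hand the
    # update to the main loop via the queue.
    updates.put(path)

# Simulate a report arriving (in a real test the SDK invokes the callback):
attribute_update_callback("OnOff.OnTime", None)

# Main loop: block until an update arrives or time out (raises queue.Empty).
received = updates.get(block=True, timeout=5)
```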

Example for setting callbacks:

cb = EventSubscriptionHandler(cluster, cluster_id, event_id)

urgent = 1
subscription = await dev_ctrl.ReadEvent(nodeid=1, events=[(1, event, urgent)], reportInterval=[1, 3])
subscription.SetEventUpdateCallback(callback=cb)

try:
    cb.get_event_from_queue(block=True, timeout=timeout)
except queue.Empty:
    asserts.fail("Timeout waiting for event")

WriteAttribute

Handles concrete paths only (per the spec) and accepts a list of paths. Returns a list of PyChipError.

  • Instantiate the ClusterAttributeDescriptor class with the value you want to send; each tuple is (endpoint, attribute instance)
    • use timedRequestTimeoutMs for timed request actions

Example:

res = await dev_ctrl.WriteAttribute(nodeid=0, attributes=[(0, Clusters.BasicInformation.Attributes.NodeLabel("Test"))])
asserts.assert_equal(res[0].status, Status.Success, "write failed")

SendCommand

  • Instantiate the command object with the values you need to populate
  • If there is a non-status return, it’s returned from the command
  • If there is a pure status return it will return nothing
  • Raises InteractionModelError on failure

Example:

pai = await dev_ctrl.SendCommand(nodeid, 0, Clusters.OperationalCredentials.Commands.CertificateChainRequest(2))

MatterBaseTest helpers

  • Because we tend to do a lot of single read / single commands in tests, we added a couple of helpers in MatterBaseTest that use some of the default values
    • read_single_attribute_check_success()
    • read_single_attribute_expect_error()
    • send_single_cmd()
  • step() method to mark step progress for the test harness
  • skip() / skip_step() / skip_remaining_steps() methods for test harness integration
  • check_pics() / pics_guard() to handle pics

Mobly helpers

The test system is based on Mobly, and the matter_testing.py class provides some helpers for Mobly integration.

  • default_matter_test_main
    • Sets up commissioning and finds all tests, parses command-line arguments

use as:

if __name__ == "__main__":
    default_matter_test_main()
  • Mobly will run all methods starting with test_ prefix by default
    • use the --tests command line argument to specify exact names
  • Setup and teardown methods
    • setup_class / teardown_class
    • setup_test / teardown_test
    • Don’t forget to call the super() if you override these
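The super() reminder in the last bullet can be illustrated with stand-in classes; the real base class is MatterBaseTest, whose setup_class performs commissioning and other framework work that a subclass must not skip.

```python
# Hypothetical stand-ins for MatterBaseTest and a test subclass.
class BaseTest:
    def setup_class(self):
        self.base_ready = True  # framework setup the subclass relies on

class MyTest(BaseTest):
    def setup_class(self):
        super().setup_class()  # always chain to the base class first
        self.my_fixture = "configured"

t = MyTest()
t.setup_class()
```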

Test harness helpers

The python testing system also includes several methods for integration with the test harness. To integrate with the test harness, define the following methods on your class so the test harness UI can properly work through your tests.

All of these methods are demonstrated in the hello_example.py reference.

  • Steps enumeration:
    • Define a method called steps_<YourTestMethodName> to allow the test harness to display the steps
    • Use the self.step(<stepnum>) method to walk through the steps
  • Test description:
    • Define a method called desc_<YourTestMethodName> to send back a string with the test description
  • Top-level PICS:
    • To guard your test on a top level PICS, define a method called pics_<YourTestMethodName> to send back a list of PICS. If this method is omitted, the test will be run for every endpoint on every device.
  • Overriding the default timeout:
    • If the test is exceptionally long running, define a property getter method default_timeout to adjust the timeout. The default is 90 seconds.
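The naming convention above can be sketched as follows. This shows the method shapes only: in a real test these methods live on a MatterBaseTest subclass, and steps_* returns TestStep objects from matter_testing; plain strings are used here as hypothetical stand-ins.

```python
# Shape sketch of the test-harness helper methods (names follow the
# desc_/pics_/steps_ + <YourTestMethodName> convention).
class TC_MYTEST_1_1:
    def desc_TC_MYTEST_1_1(self) -> str:
        return "[TC-MYTEST-1.1] Example test description"

    def pics_TC_MYTEST_1_1(self) -> list:
        return ["MYTEST.S"]  # top-level PICS guarding the whole test

    def steps_TC_MYTEST_1_1(self) -> list:
        return ["1: Commission DUT", "2: Read VendorName and verify"]

tc = TC_MYTEST_1_1()
```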

Deferred failures: For some tests, it makes sense to perform the entire test before failing, collecting all the errors so that developers can address every failure without re-running the test multiple times. For example, tests that look at every attribute on a cluster and perform independent operations on each.

For such tests, use the ProblemNotice format and the convenience methods:

  • self.record_error
  • self.record_warning

These methods keep track of the problems, and will print them at the end of the test. The test will not be failed until an assert is called.

A good example of this type of test can be found in the device basic composition tests, where all the test steps are independent and performed on a single read. See Device Basic Composition tests
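The deferred-failure pattern can be sketched without the framework. The real helpers are self.record_error / self.record_warning on MatterBaseTest, which take additional fields; the attribute names and range check below are hypothetical.

```python
# Minimal stand-in for the deferred-failure pattern.
problems = []

def record_error(location, problem):
    # The real self.record_error records a full ProblemNotice.
    problems.append((location, problem))

# Check every item independently instead of failing on the first error:
attribute_values = {"MinLevel": 1, "MaxLevel": 254, "OnLevel": -3}  # hypothetical
for name, value in attribute_values.items():
    if not 0 <= value <= 254:
        record_error(location=name, problem=f"{name}={value} out of range")

# Summarize at the end; a real test would then call a single assert.
summary = f"Found {len(problems)} problem(s)"
```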

Command line arguments

  • Use --help to get a full list
  • --storage-path
    • Used to set a local storage file path for persisted data to avoid clashing files. It is suggested to always provide this argument. Default value is admin_storage.json in current directory.
  • --commissioning-method
    • The device needs to be (re-)commissioned by the python controller, as chip-tool and the python controller do not share credentials
  • --discriminator, --passcode, --qr-code, --manual-code
  • --tests to select tests
  • --PICS
  • --int-arg, --bool-arg, --float-arg, --string-arg, --json-arg, --hex-arg
    • Specify as key:value, e.g. --bool-arg pixit_name:False
    • Used for custom arguments to scripts (PIXITs)

PICS and PIXITS

  • PICS
    • use --PICS on the command line to specify the PICS file
    • use check_pics to gate steps in a file
  • have_whatever = check_pics("PICS.S.WHATEVER")
  • PIXITs
    • use --int-arg, --bool-arg etc on the command line to specify PIXITs
    • Warn users if they don’t set required values, add instructions in the comments
  • pixit_value = self.user_params.get("pixit_name", default)
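A PIXIT lookup that warns the user about a missing value might look like the following sketch; the PIXIT name and default are hypothetical, and self.user_params is emulated with a plain dict.

```python
import logging

# Stand-in for self.user_params, which the framework populates from
# --int-arg / --bool-arg / etc. on the command line.
user_params = {}

pixit_name = "PIXIT.MYCLUSTER.THRESHOLD"  # hypothetical PIXIT
threshold = user_params.get(pixit_name)
if threshold is None:
    # Warn loudly and explain how to set the value, per the guidance above.
    logging.warning("%s not set; pass it with --int-arg %s:<value>. Using default 1.",
                    pixit_name, pixit_name)
    threshold = 1
```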

Support functionality

To create a controller on a new fabric:

new_CA = self.certificate_authority_manager.NewCertificateAuthority()

new_fabric_admin = new_CA.NewFabricAdmin(vendorId=0xFFF1,
    fabricId=self.matter_test_config.fabric_id + 1)

TH2 = new_fabric_admin.NewController(nodeId=112233)

Open a commissioning window (ECW):

params = self.OpenCommissioningWindow(dev_ctrl=self.default_controller, node_id=self.dut_node_id)

To create a new controller on the SAME fabric, allocate a new controller from the fabric admin.

Fabric admin for default controller:

  fa = self.certificate_authority_manager.activeCaList[0].adminList[0]
  second_ctrl = fa.NewController(nodeId=node_id)

Automating manual steps

Some test plans have manual steps that require the tester to manually change the state of the DUT. To run these tests in a CI environment, specific example apps can be built such that these manual steps can be achieved by Matter or named-pipe commands.

In the case that all the manual steps in a test script can be achieved just using Matter commands, you can check if the PICS_SDK_CI_ONLY PICS is set to decide if the test script should send the required Matter commands to perform the manual step.

self.is_ci = self.check_pics("PICS_SDK_CI_ONLY")

In the case that a test script requires named-pipe commands to achieve the manual steps, you can use the write_to_app_pipe(command, app_pipe) method to send them. If app_pipe is not provided in the method call, the value of the --app-pipe command-line argument is used instead; it must contain the path of the pipe, which depends on how --app-pipe was set up for the app.

Note: The name of the pipe can be anything, as long as it is a valid file path.
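The mechanism behind write_to_app_pipe can be sketched with the stdlib: the app reads newline-delimited JSON commands from a named pipe, and the test writes a command into it. The pipe path and command name below are hypothetical, and this sketch is POSIX-only (os.mkfifo).

```python
import json
import os
import tempfile
import threading

# Create a named pipe at a hypothetical path.
pipe_path = os.path.join(tempfile.mkdtemp(), "app_pipe")
os.mkfifo(pipe_path)

received = []

def app_side():
    # The app side blocks reading newline-delimited JSON commands.
    with open(pipe_path) as f:
        received.append(json.loads(f.readline()))

reader = threading.Thread(target=app_side)
reader.start()

# The test side: send one command, as write_to_app_pipe would.
with open(pipe_path, "w") as f:
    f.write(json.dumps({"Name": "SetRefrigeratorAlarm"}) + "\n")
reader.join()
```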

Example of usage:

First run the app with the desired app-pipe path:
./out/darwin-arm64-all-clusters/chip-all-clusters-app --app-pipe  /tmp/ref_alm_2_2

Then execute the test with the app-pipe argument with the value defined while running the app.
python3 src/python_testing/TC_REFALM_2_2.py --commissioning-method on-network --qr-code MT:-24J0AFN00KA0648G00  --PICS src/app/tests/suites/certification/ci-pics-values --app-pipe /tmp/ref_alm_2_2  --int-arg PIXIT.REFALM.AlarmThreshold:1

Running on separate machines

If the DUT and test script are running on different machines, the write_to_app_pipe method can send named-pipe commands to the DUT via ssh. This requires two additional environment variables:

  • LINUX_DUT_IP sets the DUT's IP address
  • LINUX_DUT_UNAME sets the DUT's ssh username. If not set, this is assumed to be root.

The write_to_app_pipe method also requires that ssh keys are set up so the machine running the test script can access the DUT without a password. You can follow these steps to set this up:

  1. If you do not have a key, create one using ssh-keygen.
  2. Authorize this key on the remote host: run ssh-copy-id user@ip once, using your password.
  3. From now on ssh user@ip will no longer ask for your password.

Other support utilities

  • basic_composition
    • wildcard read, whole device analysis
  • CommissioningFlowBlocks
    • various commissioning support for core tests
  • spec_parsing
    • parsing data model XML into python readable format

Running tests locally

Setup

The scripts require the python wheel to be compiled and installed before running. To compile and install the wheel, do the following:

First activate the matter environment using either

. ./scripts/bootstrap.sh

or

. ./scripts/activate.sh

bootstrap.sh should be used for the first setup; activate.sh may be used for subsequent setups as it is faster.

Next build the python wheels and create / activate a venv

./scripts/build_python.sh -i out/python_env
source out/python_env/bin/activate

Running tests

  • Note that devices must be commissioned by the python test harness to run tests. chip-tool and the python test harness DO NOT share a fabric.

Once the wheel is installed, you can run the python script as a normal python file for local testing against an already-running DUT. This can be an example app on the host computer (running in a different terminal), or a separate device that will be commissioned either over BLE or WiFi.

For example, to run the TC-ACE-1.2 tests against an un-commissioned DUT:

python3 src/python_testing/TC_ACE_1_2.py --commissioning-method on-network --qr-code MT:-24J0AFN00KA0648G00

Some tests require additional arguments (e.g. PIXITs or configuration variables for the CI). These arguments can be passed as sets of key/value pairs using the --<type>-arg <key>:<value> command line arguments. For example:

--int-arg PIXIT.ACE.APPENDPOINT:1 --int-arg PIXIT.ACE.APPDEVTYPEID:0x0100 --string-arg PIXIT.ACE.APPCLUSTER:OnOff --string-arg PIXIT.ACE.APPATTRIBUTE:OnOff

IDM Conformance tests

The conformance tests are whole-node tests that ensure the device under test meets some of the basic interaction model, data model, and system model requirements in the specification. These tests are some of the most commonly failed tests as they often turn up real bugs on devices. The recommendation from the certification testing team is to run these tests before any other application-specific tests as failures on these tests normally result in device configuration changes that would necessitate re-test in other areas.

The tests in https://github.com/project-chip/connectedhomeip/blob/master/src/python_testing/TC_DeviceBasicComposition.py cover the following checks:

  • Root node checks for commissionable devices
  • Base device checks for all endpoints
  • Base cluster checks for all clusters (checking global attribute conformance)
  • Basic utf-8 string format verification for string type attributes
  • Tag list conformance for sibling endpoints
  • Endpoint list verification for power source clusters
  • Various Descriptor cluster checks
  • A "test" to create device description files in MatterTlvJson and text formats

The tests in https://github.com/project-chip/connectedhomeip/blob/master/src/python_testing/TC_DeviceConformance.py cover the following checks:

  • Cluster conformance for every cluster is correct with respect to implemented features, attributes and commands
  • Cluster revisions are correct for the stated specification revision
  • Device types implement all the required cluster and element overrides
  • Device type revisions are correct for the stated specification revision
  • Device types respect superset rules

The tests in https://github.com/project-chip/connectedhomeip/blob/master/src/python_testing/TC_DefaultWarnings.py cover the following checks:

  • Checks for common unintentional default values in clusters that may have been missed when implementing devices from the SDK

Running Conformance Tests

There are a number of options for running conformance tests. Conformance tests can be run on devices that have been commissioned into the test framework fabric, similar to other certification tests. Because conformance tests operate entirely on a single wildcard read of the device attributes, they can be run over PASE on an uncommissioned device, or run against a device that has been commissioned into the test fabric. They can also be run against a MatterTlvJson file with a representation of this information. These files can be generated from the TC_DeviceBasicComposition.py tests, as described below.

To run any of the python certification tests, you will first need to set up your environment as described in Setup.

Running conformance tests against an uncommissioned device over PASE

First ensure the device is in commissioning mode. Then run the test against the device by supplying the QR code, the manual pairing code, or the discriminator / passcode pair.

python3 TC_DeviceConformance.py --qr-code MT:-24J0AFN00KA0648G00
python3 TC_DeviceConformance.py --manual-code 34970112332
python3 TC_DeviceConformance.py --discriminator 3840 --passcode 20202021

Running conformance tests against a commissioned device over CASE

If the device has already been commissioned into the python testing fabric, you can run the test directly

python3 TC_DeviceConformance.py

This uses the default fabric storage and node ID. These can also be specified when running the tests.

You can commission the device into the test fabric before running the test by using the --commissioning-method flag and the --qr-code, --manual-code or --discriminator and --passcode flags.

For example:

python3 TC_DeviceConformance.py --discriminator 3840 --passcode 20202021 --commissioning-method on-network

You may also need to provide parameters for the commissioning method.

If the commissioning-method is ble-thread, you will also need to provide the thread operational dataset via the --thread-dataset-hex parameter.

If the commissioning method is ble-wifi, you will also need to provide the wifi SSID and password via the --wifi-ssid and --wifi-passphrase parameters.

By default, the test stores fabric information in admin_storage.json in the current directory and uses a node ID of 0x12344321 for the device being tested. You can supply these directly by using the --storage-path and --dut-node-id flags.

Running conformance tests against MatterTlvJson files

Because the conformance tests are run against the attribute wildcard read from the device, they can also be run against a MatterTlvJson machine readable file without requiring the device to be physically present. To run against a previously generated MatterTlvJson device dump file:

python3 TC_DeviceConformance.py --string-arg test_from_file:device_dump_0xFFF1_0x8001_1.json

You can generate a MatterTlvJson file for a device by leveraging the certification test that generates these (TC-IDM-12.1, implemented in TC_DeviceBasicComposition.py).

python3 TC_DeviceBasicComposition.py --qr-code MT:-24J0AFN00KA0648G00 --tests test_TC_IDM_12_1

Local host app testing

./scripts/tests/run_python_test.py is a convenient script that starts an example DUT on the host and includes factory reset support

./scripts/tests/run_python_test.py --factory-reset --app <your_app> --app-args "whatever" --script <your_script> --script-args "whatever"

For example, to run TC-ACE-1.2 tests against the linux chip-lighting-app:

./scripts/tests/run_python_test.py --factory-reset --app ./out/linux-x64-light-no-ble/chip-lighting-app --app-args "--trace-to json:log" --script src/python_testing/TC_ACE_1_2.py --script-args "--commissioning-method on-network --qr-code MT:-24J0AFN00KA0648G00"

Running tests in CI

  • Add test to the repl_tests_linux section of .github/workflows/tests.yaml
  • Don’t forget to set the PICS file to the ci-pics-values
  • If there are steps in your test that will fail on CI (e.g. test vendor checks), gate them on the PICS_SDK_CI_ONLY
    • if not self.is_pics_sdk_ci_only:
          ...  # Step that will fail on CI
      

The CI test runner uses a structured environment setup that can be declared using structured comments at the top of the test file. To use this structured format, use the --load-from-env flag with the run_python_test.py runner.

Ex: scripts/run_in_python_env.sh out/venv './scripts/tests/run_python_test.py --load-from-env /tmp/test_env.yaml --script src/python_testing/TC_ICDM_2_1.py'

Running ALL or a subset of tests when changing application code

scripts/tests/local.py is a wrapper that is able to build and run tests in a single command.

Example to compile all prerequisites and then running all python tests:

./scripts/tests/local.py build         # will compile python in out/pyenv and ALL application prerequisites
./scripts/tests/local.py python-tests  # Runs all python tests that are runnable in CI

Defining the CI test arguments

Arguments required to run a test can be defined in the comment block at the top of the test script. The section with the arguments should be placed between the # === BEGIN CI TEST ARGUMENTS === and # === END CI TEST ARGUMENTS === markers. Arguments should be structured as a valid YAML dictionary with a root key test-runner-runs, followed by the run identifier, and then the parameters for that run, e.g.:

# See https://github.com/project-chip/connectedhomeip/blob/master/docs/testing/python.md#defining-the-ci-test-arguments
# for details about the block below.
#
# === BEGIN CI TEST ARGUMENTS ===
# test-runner-runs:
#   run1:
#     app: ${TYPE_OF_APP}
#     app-args: <app_arguments>
#     script-args: <script_arguments>
#     factory-reset: <true|false>
#     quiet: <true|false>
# === END CI TEST ARGUMENTS ===

Description of Parameters

  • app: Indicates the application to be used in the test. The available app types are referenced in the [name: Generate an argument environment file] section of .github/workflows/tests.yaml

    • Example: ${TYPE_OF_APP}
  • factory-reset: Determines whether a factory reset should be performed before the test.

    • Example: true
  • quiet: Sets the verbosity level of the test run. When set to true, the test run will be quieter.

    • Example: true
  • app-args: Specifies the arguments to be passed to the application during the test.

    • Example: --discriminator 1234 --KVS kvs1 --trace-to json:${TRACE_APP}.json
  • app-ready-pattern: Regular expression pattern to match against the output of the application to determine when the application is ready. If this parameter is specified, the test runner will not run the test script until the pattern is found.

    • Example: "Manual pairing code: \\[\\d+\\]"
  • app-stdin-pipe: Specifies the path to the named pipe that the test runner might use to send input to the application.

    • Example: /tmp/app-fifo
  • script-args: Specifies the arguments to be passed to the test script.

    • Example: --storage-path admin_storage.json --commissioning-method on-network --discriminator 1234 --passcode 20202021 --trace-to json:${TRACE_TEST_JSON}.json --trace-to perfetto:${TRACE_TEST_PERFETTO}.perfetto

This structured format ensures that all necessary configurations are clearly defined and easily understood, allowing for consistent and reliable test execution.