The python test framework is built on top of the ChipDeviceCtrl.py python controller API and the Mobly test framework. Python tests are interaction tests and can be used for certification testing and/or integration testing in the CI.

Python tests are located in `src/python_testing`.
All test classes inherit from `MatterBaseTest` in `matter_testing_support.py`:

-   The default controller (`self.default_controller`) is of type `ChipDeviceCtrl`
-   `MatterBaseTest` inherits from the Mobly `BaseTestClass`
-   To run in the test harness, test methods are named `test_TC_PICSCODE_#_#`
-   Use the `ChipDeviceCtrl` to interact with the DUT (see API doc in `ChipDeviceCtrl.py`) and the test helpers in `matter_testing_support.py`
-   Use `self.step()` along with a `steps_*` method to mark test plan steps for cert tests

A simple test looks like this:

```python
# See https://github.com/project-chip/connectedhomeip/blob/master/docs/testing/python.md#defining-the-ci-test-arguments
# for details about the block below.
#
# === BEGIN CI TEST ARGUMENTS ===
# test-runner-runs: run1
# test-runner-run/run1/app: ${ALL_CLUSTERS_APP}
# test-runner-run/run1/factoryreset: True
# test-runner-run/run1/quiet: True
# test-runner-run/run1/app-args: --discriminator 1234 --KVS kvs1 --trace-to json:${TRACE_APP}.json
# test-runner-run/run1/script-args: --storage-path admin_storage.json --commissioning-method on-network --discriminator 1234 --passcode 20202021 --trace-to json:${TRACE_TEST_JSON}.json --trace-to perfetto:${TRACE_TEST_PERFETTO}.perfetto
# === END CI TEST ARGUMENTS ===


class TC_MYTEST_1_1(MatterBaseTest):

    @async_test_body
    async def test_TC_MYTEST_1_1(self):
        vendor_name = await self.read_single_attribute_check_success(
            dev_ctrl=self.default_controller,  # defaults to self.default_controller
            node_id=self.dut_node_id,          # defaults to self.dut_node_id
            cluster=Clusters.BasicInformation,
            attribute=Clusters.BasicInformation.Attributes.VendorName,
            endpoint=0,                        # defaults to 0
        )
        asserts.assert_equal(vendor_name, "Test vendor name", "Unexpected vendor name")


if __name__ == "__main__":
    default_matter_test_main()
```
In this test, `asserts.assert_equal` is used to fail the test on an equality assertion failure (throws an exception).

Because the test requires the use of the async method `read_single_attribute_check_success`, the test is decorated with the `@async_test_body` decorator.

The `default_matter_test_main()` function is used to run the test on the command line. These two lines should appear verbatim at the bottom of every python test file.
The structured comments above the class definition are used to set up the CI for the tests. Please see Running tests in CI.
Common import used in test files:

```python
import chip.clusters as Clusters
```

Each cluster is defined in the `Clusters.<ClusterName>` namespace and always contains the cluster `id` and `descriptor`.

Each `Clusters.<ClusterName>` will include the appropriate sub-classes (if defined for the cluster):

-   Enums
-   Bitmaps
-   Structs
-   Attributes
-   Commands
-   Events
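As a quick orientation, the hedged sketch below only assumes the standard `chip.clusters` codegen import shown above; the On/Off cluster objects are used purely as illustrations:

```python
import chip.clusters as Clusters

# The cluster class itself carries its cluster ID.
print(hex(Clusters.OnOff.id))             # 0x6

# Generated sub-classes live in per-category namespaces.
print(Clusters.OnOff.Attributes.OnTime)   # attribute class
print(Clusters.OnOff.Commands.Toggle())   # command instance
```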
Attributes derive from `ClusterAttributeDescriptor`.

Each `Clusters.<ClusterName>.Attributes.<AttributeName>` class has:

-   `cluster_id`
-   `attribute_id`
-   `attribute_type`
-   `value`

Example:

-   `Clusters.OnOff.Attributes.OnTime` - the attribute class, used when reading
-   `Clusters.OnOff.Attributes.OnTime(5)` - an instance with the value set to `5`, used when writing
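To make the class-vs-instance distinction concrete, here is a hedged sketch (it assumes a commissioned DUT with On/Off on endpoint 1 and reuses the `MatterBaseTest` helpers and `self.default_controller` / `self.dut_node_id`):

```python
# Read: pass the attribute *class* to select what to read.
on_time = await self.read_single_attribute_check_success(
    cluster=Clusters.OnOff,
    attribute=Clusters.OnOff.Attributes.OnTime,
    endpoint=1)

# Write: pass an attribute *instance* carrying the new value.
await self.default_controller.WriteAttribute(
    self.dut_node_id,
    [(1, Clusters.OnOff.Attributes.OnTime(5))])
```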
Commands derive from `ClusterCommand`.

Each `Clusters.<ClusterName>.Commands.<CommandName>` class has:

-   `cluster_id`
-   `command_id`
-   `is_client`
-   `response_type` (None for status response)
-   `descriptor`

Example:

-   `Clusters.OnOff.Commands.OnWithTimedOff(onOffControl=0, onTime=5, offWaitTime=8)`
-   `Clusters.OnOff.Commands.OnWithTimedOff()` - field values default when omitted
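As an illustration of how a command instance is used, here is a hedged sketch; it assumes the DUT implements On/Off on endpoint 1 and reuses `self.default_controller` / `self.dut_node_id` from `MatterBaseTest`:

```python
# Build the command instance with its fields, then send it to the DUT.
cmd = Clusters.OnOff.Commands.OnWithTimedOff(onOffControl=0, onTime=5, offWaitTime=8)
await self.default_controller.SendCommand(nodeid=self.dut_node_id, endpoint=1, payload=cmd)
```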
Events derive from `ClusterEvent`.

Each `Clusters.<ClusterName>.Events.<EventName>` class has:

-   `cluster_id`
-   `event_id`
-   `descriptor`
Example (illustrative): `Clusters.AccessControl.Events.AccessControlEntryChanged`
Enums derive from `MatterIntEnum`.

Each `Clusters.<ClusterName>.Enums.<EnumName>` has:

-   `k<value>` constants
-   `kUnknownEnumValue` (used for testing, do not transmit)

Example:

-   `Clusters.AccessControl.Enums.AccessControlEntryPrivilegeEnum.kAdminister`
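Since enums are integer enums, decoded values compare directly against the generated constants; a tiny hedged sketch (the numeric value 5 for Administer comes from the Access Control cluster definition):

```python
# Enum constants behave as plain integers, so they compare directly
# against values decoded from reads.
privilege = Clusters.AccessControl.Enums.AccessControlEntryPrivilegeEnum.kAdminister
asserts.assert_equal(privilege, 5, "kAdminister is encoded as 5")
```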
Bitmaps derive from `IntFlag`.

Each `Clusters.<ClusterName>.Bitmaps.<BitmapName>` has:

-   `k<value>` constants

Special class:

-   `Feature(IntFlag)` - contains the feature map bitmaps

Example:

-   `Clusters.LaundryWasherControls.Bitmaps.Feature.kSpin`
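Feature bitmaps are commonly used to gate test steps on the DUT's FeatureMap; a hedged sketch (it assumes the LaundryWasherControls cluster on endpoint 1 and the read helper from `MatterBaseTest`):

```python
# Read FeatureMap and test a feature bit with the generated bitmap constant.
feature_map = await self.read_single_attribute_check_success(
    cluster=Clusters.LaundryWasherControls,
    attribute=Clusters.LaundryWasherControls.Attributes.FeatureMap,
    endpoint=1)
supports_spin = bool(feature_map & Clusters.LaundryWasherControls.Bitmaps.Feature.kSpin)
```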
Structs derive from `ClusterObject`.

Each `Clusters.<ClusterName>.Structs.<StructName>` has:

-   `descriptor`
-   the struct fields as data members

Example:

```python
Clusters.BasicInformation.Structs.ProductAppearanceStruct(
    finish=Clusters.BasicInformation.Enums.ProductFinishEnum.kFabric,
    primaryColor=Clusters.BasicInformation.Enums.ColorEnum.kBlack)
```
ClusterObjects.py has a set of objects that map ID to the code generated object.

-   `chip.clusters.ClusterObjects.ALL_CLUSTERS`
    -   `dict[int, Cluster]` - maps cluster ID to Cluster class
    -   `cluster = chip.clusters.ClusterObjects.ALL_CLUSTERS[cluster_id]`
-   `chip.clusters.ClusterObjects.ALL_ATTRIBUTES`
    -   `dict[int, dict[int, ClusterAttributeDescriptor]]` - maps cluster ID to a dict of attribute ID to attribute class
    -   `attr = chip.clusters.ClusterObjects.ALL_ATTRIBUTES[cluster_id][attribute_id]`
-   `chip.clusters.ClusterObjects.ALL_ACCEPTED_COMMANDS` / `ALL_GENERATED_COMMANDS`
    -   analogous maps from cluster ID to a dict of command ID to command class
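These maps are handy when a test iterates over IDs discovered at runtime rather than hard-coded cluster classes. A small hedged sketch (the On/Off and OnTime IDs are just illustrative inputs):

```python
from chip.clusters import ClusterObjects

cluster_id = 0x0006    # On/Off
attribute_id = 0x4001  # OnTime

cluster = ClusterObjects.ALL_CLUSTERS[cluster_id]
attr = ClusterObjects.ALL_ATTRIBUTES[cluster_id][attribute_id]
print(cluster, attr)   # Clusters.OnOff, Clusters.OnOff.Attributes.OnTime
```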
The `ChipDeviceCtrl` API is implemented in ChipDeviceCtrl.py.

The `ChipDeviceCtrl` implements a python-based controller that can be used to commission and control devices. The API is documented in the ChipDeviceCtrl API documentation.

The API doc gives full descriptions of the APIs being used. The most commonly used methods are shown below.
Examples:

Wildcard read (all clusters, all endpoints):

`await dev_ctrl.ReadAttribute(node_id, [()])`

Wildcard read (single endpoint 0):

`await dev_ctrl.ReadAttribute(node_id, [(0)])`

Wildcard read (single cluster on endpoint 1):

`await dev_ctrl.ReadAttribute(node_id, [(1, Clusters.OnOff)])`

Single attribute:

`await dev_ctrl.ReadAttribute(node_id, [(1, Clusters.OnOff.Attributes.OnTime)])`

Multi-path:

`await dev_ctrl.ReadAttribute(node_id, [(1, Clusters.OnOff.Attributes.OnTime), (1, Clusters.OnOff.Attributes.OnOff)])`

`ReadEvent` is similar to `Read` / `ReadAttribute`, but the tuple includes the urgency as the last argument.

Example:

```python
urgent = 1
await dev_ctrl.ReadEvent(node_id, [(1, Clusters.TimeSynchronization.Events.MissingTrustedTimeSource, urgent)])
```
Subscriptions are handled in the `Read` / `ReadAttribute` / `ReadEvent` APIs. To initiate a subscription, set the `reportInterval` tuple argument to set the floor and ceiling. The `keepSubscriptions` and `autoResubscribe` arguments also apply to subscriptions.

Subscriptions return a `ClusterAttribute.SubscriptionTransaction`. This can be used to set callbacks. The object is returned after the priming data read is complete, and the values there are used to populate the cache. The attribute callbacks are called on update.

-   `SetAttributeUpdateCallback`
-   `SetEventUpdateCallback`

Example for setting callbacks:

```python
q = queue.Queue()
cb = SimpleEventCallback("cb", cluster_id, event_id, q)
urgent = 1
subscription = await dev_ctrl.ReadEvent(nodeid=1, events=[(1, event, urgent)], reportInterval=[1, 3])
subscription.SetEventUpdateCallback(callback=cb)
try:
    q.get(block=True, timeout=timeout)
except queue.Empty:
    asserts.fail("Timeout on event")
```
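For attribute subscriptions the pattern is analogous; below is a hedged sketch (the callback shape - a typed attribute path plus the transaction - reflects `SetAttributeUpdateCallback`, and the On/Off attribute, endpoint, and report interval are illustrative):

```python
def on_attribute_update(path, transaction):
    # Called on every report after the priming read completes.
    print(f"Attribute update: {path}")

sub = await dev_ctrl.ReadAttribute(
    node_id,
    [(1, Clusters.OnOff.Attributes.OnOff)],
    reportInterval=(1, 5),   # (floor, ceiling) in seconds
    keepSubscriptions=False)
sub.SetAttributeUpdateCallback(on_attribute_update)
```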
`WriteAttribute` handles concrete paths only (per the spec), and can handle lists of paths. It returns a list of `PyChipError`. Pass an instance of the `ClusterAttributeDescriptor` class holding the value you want to send; each tuple is (endpoint, attribute).

Example:

```python
res = await dev_ctrl.WriteAttribute(nodeid=0, attributes=[(0, Clusters.BasicInformation.Attributes.NodeLabel("Test"))])
asserts.assert_equal(res[0].Status, Status.Success, "write failed")
```
`SendCommand` takes the node ID, the endpoint, and a command instance. Example:

```python
pai = await dev_ctrl.SendCommand(nodeid, 0, Clusters.OperationalCredentials.Commands.CertificateChainRequest(2))
```
`MatterBaseTest` provides several convenience helpers:

-   `read_single_attribute_check_success()`
-   `read_single_attribute_expect_error()`
-   `send_single_cmd()`
-   `step()` method to mark step progress for the test harness
-   `skip()` / `skip_step()` / `skip_remaining_steps()` methods for test harness integration
-   `check_pics()` / `pics_guard()` to handle PICS

The test system is based on Mobly, and the `matter_testing_support.py` module provides some helpers for Mobly integration.
-   `default_matter_test_main` - use as:

    ```python
    if __name__ == "__main__":
        default_matter_test_main()
    ```

-   Test methods are discovered by the `test_` prefix by default; use the `--tests` command line argument to specify exact names
-   `setup_class` / `teardown_class` and `setup_test` / `teardown_test` are available; call `super()` if you override these

The python testing system also includes several methods for integration with the test harness. To integrate with the test harness, you can define the following methods on your class to allow the test harness UI to properly work through your tests.
All of these methods are demonstrated in the hello_example.py reference.
-   `steps_<YourTestMethodName>` to allow the test harness to display the steps
-   `self.step(<stepnum>)` method to walk through the steps
-   `desc_<YourTestMethodName>` to send back a string with the test description
-   `pics_<YourTestMethodName>` to send back a list of PICS. If this method is omitted, the test will be run for every endpoint on every device.
-   `default_timeout` to adjust the timeout. The default is 90 seconds.

Deferred failures: For some tests, it makes sense to perform the entire test before failing, collecting all the errors so the developers can address all the failures without needing to re-run the test multiple times - for example, tests that look at every attribute on the cluster and perform independent operations on each.

For such tests, use the ProblemNotice format and the convenience methods:

-   `self.record_error`
-   `self.record_warning`

These methods keep track of the problems and will print them at the end of the test. The test will not be failed until an assert is called. A combined sketch of the harness hooks and deferred-failure helpers is shown below.
A good example of this type of test can be found in the device basic composition tests, where all the test steps are independent and performed on a single read. See Device Basic Composition tests
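A minimal hedged sketch of these hooks on a single test class; the test name, step text, PICS string, cluster/attribute choices, and the end-of-test check on `self.problems` are illustrative only, and `TestStep` / `AttributePathLocation` are assumed to be imported from `matter_testing_support`:

```python
class TC_EXAMPLE_2_1(MatterBaseTest):

    def desc_TC_EXAMPLE_2_1(self) -> str:
        return "[TC-EXAMPLE-2.1] Illustrative attribute checks"

    def pics_TC_EXAMPLE_2_1(self) -> list[str]:
        return ["OO.S"]

    def steps_TC_EXAMPLE_2_1(self) -> list[TestStep]:
        return [TestStep(1, "Commissioning, already done", is_commissioning=True),
                TestStep(2, "Read OnTime and record any problem")]

    @async_test_body
    async def test_TC_EXAMPLE_2_1(self):
        self.step(1)

        self.step(2)
        on_time = await self.read_single_attribute_check_success(
            cluster=Clusters.OnOff, attribute=Clusters.OnOff.Attributes.OnTime, endpoint=1)
        if on_time != 0:
            # Deferred failure: record the problem now, keep running, fail at the end.
            self.record_error(test_name="TC_EXAMPLE_2_1",
                              location=AttributePathLocation(endpoint_id=1),
                              problem="Expected OnTime to be 0 at test start")

        # Fail the test only once all checks have run.
        asserts.assert_false(self.problems, "One or more problems recorded, see log")
```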
The test scripts support a number of command line arguments:

-   `--help` to get a full list
-   `--storage-path` - defaults to `admin_storage.json` in the current directory
-   `--commissioning-method`
-   `--discriminator`, `--passcode`, `--qr-code`, `--manual-code`
-   `--tests` to select tests
-   `--PICS`
-   `--int-arg`, `--bool-arg`, `--float-arg`, `--string-arg`, `--json-arg`, `--hex-arg`
To create a controller on a new fabric:

```python
new_CA = self.certificate_authority_manager.NewCertificateAuthority()
new_fabric_admin = new_CA.NewFabricAdmin(vendorId=0xFFF1, fabricId=self.matter_test_config.fabric_id + 1)
TH2 = new_fabric_admin.NewController(nodeId=112233)
```
Open a commissioning window (ECW):

```python
params = self.OpenCommissioningWindow(dev_ctrl=self.default_controller, node_id=self.dut_node_id)
```
To create a new controller on the SAME fabric, allocate a new controller from the fabric admin.
Fabric admin for default controller:
```python
fa = self.certificate_authority_manager.activeCaList[0].adminList[0]
second_ctrl = fa.NewController(nodeId=node_id)
```
Other support modules and classes available to tests:

-   `basic_composition_support`
-   `CommissioningFlowBlocks`
-   `spec_parsing_support`
The scripts require the python wheel to be compiled and installed before running. To compile and install the wheel, do the following:

First activate the matter environment using either

```shell
. ./scripts/bootstrap.sh
```

or

```shell
. ./scripts/activate.sh
```

bootstrap.sh should be used for the first setup; activate.sh may be used for subsequent setups as it is faster.

Next build the python wheels and create / activate a venv (called `pyenv` here, but any name may be used):

```shell
./scripts/build_python.sh -i pyenv
source pyenv/bin/activate
```
Once the wheel is installed, you can run the python script as a normal python file for local testing against an already-running DUT. This can be an example app on the host computer (running in a different terminal), or a separate device that will be commissioned either over BLE or WiFi.
For example, to run the TC-ACE-1.2 tests against an un-commissioned DUT:
```shell
python3 src/python_testing/TC_ACE_1_2.py --commissioning-method on-network --qr-code MT:-24J0AFN00KA0648G00
```
Some tests require additional arguments (ex. PIXITs or configuration variables for the CI). These arguments can be passed as sets of key/value pairs using the `--<type>-arg <name>:<value>` command line arguments. For example:

```shell
--int-arg PIXIT.ACE.APPENDPOINT:1 --int-arg PIXIT.ACE.APPDEVTYPEID:0x0100 --string-arg PIXIT.ACE.APPCLUSTER:OnOff --string-arg PIXIT.ACE.APPATTRIBUTE:OnOff
```
`./scripts/tests/run_python_test.py` is a convenient script that starts an example DUT on the host and includes factory reset support:

```shell
./scripts/tests/run_python_test.py --factoryreset --app <your_app> --app-args "whatever" --script <your_script> --script-args "whatever"
```
Tests that run in CI are listed in the `repl_tests_linux` section of `.github/workflows/tests.yaml`.

Test steps that should only run in CI can be gated on the `PICS_SDK_CI_ONLY` PICS value:

`is_ci = self.check_pics('PICS_SDK_CI_ONLY')`
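A hedged sketch of using this flag to skip a manual step when running in CI (the prompt text is illustrative, and `wait_for_user_input` is the operator-prompt helper from `matter_testing_support`):

```python
is_ci = self.check_pics('PICS_SDK_CI_ONLY')

if not is_ci:
    # Manual interaction cannot run in CI, so only prompt a human operator.
    self.wait_for_user_input(prompt_msg="Power cycle the DUT, then continue")
```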
The CI test runner uses a structured environment setup that can be declared using structured comments at the top of the test file. To use this structured format, use the `--load-from-env` flag with the `run_python_test.py` runner.

Example:

```shell
scripts/run_in_python_env.sh out/venv './scripts/tests/run_python_test.py --load-from-env /tmp/test_env.yaml --script src/python_testing/TC_ICDM_2_1.py'
```

Below is the format of the structured environment definition comments:
```
# See https://github.com/project-chip/connectedhomeip/blob/master/docs/testing/python.md#defining-the-ci-test-arguments
# for details about the block below.
#
# === BEGIN CI TEST ARGUMENTS ===
# test-runner-runs: <run_identifier>
# test-runner-run/<run_identifier>/app: ${TYPE_OF_APP}
# test-runner-run/<run_identifier>/factoryreset: <True|False>
# test-runner-run/<run_identifier>/quiet: <True|False>
# test-runner-run/<run_identifier>/app-args: <app_arguments>
# test-runner-run/<run_identifier>/script-args: <script_arguments>
# === END CI TEST ARGUMENTS ===
```
NOTE: The `=== BEGIN CI TEST ARGUMENTS ===` and `=== END CI TEST ARGUMENTS ===` markers must be present.
-   `test-runner-runs`: Specifies the identifier for the run. This can be any unique identifier.
    -   Example: `run1`
-   `test-runner-run/<run_identifier>/app`: Indicates the application to be used in the test. Different app types as needed could be referenced from the section [name: Generate an argument environment file] of the file `.github/workflows/tests.yaml`.
    -   Example: `${TYPE_OF_APP}`
-   `test-runner-run/<run_identifier>/factoryreset`: Determines whether a factory reset should be performed before the test.
    -   Example: `True`
-   `test-runner-run/<run_identifier>/quiet`: Sets the verbosity level of the test run. When set to True, the test run will be quieter.
    -   Example: `True`
-   `test-runner-run/<run_identifier>/app-args`: Specifies the arguments to be passed to the application during the test.
    -   Example: `--discriminator 1234 --KVS kvs1 --trace-to json:${TRACE_APP}.json`
-   `test-runner-run/<run_identifier>/script-args`: Specifies the arguments to be passed to the test script.
    -   Example: `--storage-path admin_storage.json --commissioning-method on-network --discriminator 1234 --passcode 20202021 --trace-to json:${TRACE_TEST_JSON}.json --trace-to perfetto:${TRACE_TEST_PERFETTO}.perfetto`
This structured format ensures that all necessary configurations are clearly defined and easily understood, allowing for consistent and reliable test execution.