This guide provides a comprehensive walkthrough for creating a new Matter cluster implementation, referred to as a “code-driven” cluster.
Writing a new cluster involves the following key stages:

1. Define the cluster XML and run code generation.
2. Create the cluster directory and build integration files.
3. Implement the cluster logic and its `ServerClusterInterface`.
4. Handle attribute reporting and persistence.
5. Write unit tests.
6. Integrate with the ZAP/Ember code generation and validate in an example application.
Clusters are defined based on the Matter specification. The C++ code for them is generated from XML definitions located in src/app/zap-templates/zcl/data-model/chip.
1. Generate XML: To create or update a cluster XML, use Alchemy to parse the specification's asciidoc. Manual editing of XML is discouraged, as it is error-prone.

2. Run Code Generation: Once the XML is ready, run the code generation script. It is often sufficient to run:

   ```shell
   ./scripts/run_in_build_env.sh 'scripts/tools/zap_regen_all.py'
   ```

   For more details, see the code generation guide.
Create a new directory for your cluster at src/app/clusters/<cluster-directory>/. This directory will house the cluster implementation and its unit tests.
For zap-based support, the directory mapping is defined in src/app/zap_cluster_list.json under the ServerDirectories key. This maps the UPPER_SNAKE_CASE define of the cluster to the directory name under src/app/clusters.
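For illustration, an entry for a hypothetical `MY_CLUSTER` might look like this (mirror the exact shape of the existing entries in the file):

```json
{
  "ServerDirectories": {
    "MY_CLUSTER": ["my-cluster-server"]
  }
}
```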
Names vary; however, to be consistent with most of the existing code, use:

- `cluster-name-server` for the cluster directory name
- `ClusterNameCluster.h/cpp` for the `ServerClusterInterface` implementation
- `ClusterNameLogic.h/cpp` for the Logic implementation, if applicable

For better testability and maintainability, we recommend splitting the implementation into logical components. The Software Diagnostics cluster is a good example of this pattern.
The recommended components are:

- `ClusterLogic`: Contains the cluster's business logic and state, independent of the Matter data model.
- `ClusterImplementation`: Implements the `ServerClusterInterface` (often by deriving from `DefaultServerCluster`) and translates data model interactions into calls on the `ClusterLogic`.
- `ClusterDriver` (or Delegate): Abstracts the interaction with the underlying platform or application. Prefer the term Driver to avoid confusion with the overloaded term Delegate.

When implementing a cluster, you have two primary architectural choices: a combined implementation and a modular implementation. The best choice depends on the cluster's complexity and the constraints of the target device, particularly flash and RAM usage.
- Combined Implementation (Logic and Data in One Class): The business logic, the attribute storage, and the `ServerClusterInterface` implementation are all contained within a single class.
- Modular Implementation (Logic Separated from Data Model): The business logic lives in a `ClusterLogic` class, while the `ClusterImplementation` class handles the translation between the data model and the logic. The `ClusterLogic` can be unit-tested in isolation, and this layout is more maintainable for complex clusters.

Recommendation: Start with a combined implementation for simpler clusters. If the cluster's logic is complex or if you need to maximize testability, choose the modular approach (sketched below).
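As a rough sketch of the modular split, assuming a hypothetical Foo cluster (class names and signatures are schematic; mirror `ServerClusterInterface` from your checkout):

```cpp
// FooClusterLogic.h: pure business logic, no data model dependencies.
// Unit tests can exercise this class directly, under any feature configuration.
class FooClusterLogic
{
public:
    // Returns an error if the mode is not allowed by the enabled features.
    CHIP_ERROR SetMode(uint8_t mode);
    uint8_t GetMode() const { return mMode; }

private:
    uint8_t mMode = 0;
};

// FooCluster.h: the ServerClusterInterface implementation. It only translates
// data model requests (paths, TLV encoding/decoding) into FooClusterLogic calls.
class FooCluster : public chip::app::DefaultServerCluster
{
public:
    chip::app::DataModel::ActionReturnStatus
    ReadAttribute(const chip::app::DataModel::ReadAttributeRequest & request,
                  chip::app::AttributeValueEncoder & encoder) override;

private:
    FooClusterLogic mLogic;
};
```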
The description below covers the build files under `src/app/clusters/<cluster-directory>/`. You are expected to provide the following items:
- `BUILD.gn`: This file contains a target named `<cluster-directory>`, usually a `source_set`. It is referenced from `src/app/chip_data_model.gni` by adding a dependency as `deps += [ "${_app_root}/clusters/${cluster}" ]`, so the default target name is important.
- `app_config_dependent_sources`: There are two code generation integration support files, one for GN and one for CMake. `chip_data_model.gni`/`chip_data_model.cmake` include these files and bundle ALL referenced sources into ONE SINGLE SOURCE SET, together with ember code-generated settings (e.g. `endpoint_config.h` and similar application-specific files).

As a result, there will be a difference between `.gni` and `.cmake`:

- `app_config_dependent_sources.gni` will typically just contain `CodegenIntegration.cpp` and any other helper/compatibility layers (e.g. `CodegenIntegration.h` if applicable).
- `app_config_dependent_sources.cmake` will contain all the files that the `.gni` file contains PLUS any dependencies that `BUILD.gn` would pull in but CMake would not (i.e. dependencies not in the libCHIP builds). These extra files are often the `*.h`/`*.cpp` files that were in the `BUILD.gn` source set.

Example taken from `src/app/clusters/basic-information`:
```gn
# BUILD.gn
import("//build_overrides/build.gni")
import("//build_overrides/chip.gni")

source_set("basic-information") {
  sources = [ ... ]
  public_deps = [ ... ]
}
```
```gn
# app_config_dependent_sources.gni
app_config_dependent_sources = [ "CodegenIntegration.cpp" ]
```
```cmake
# app_config_dependent_sources.cmake

# This block adds the codegen integration sources, similar to
# app_config_dependent_sources.gni
target_sources(
  ${APP_TARGET}
  PRIVATE
    "${CLUSTER_DIR}/CodegenIntegration.cpp"
)

# These are the things that BUILD.gn dependencies would pull
target_sources(
  ${APP_TARGET}
  PRIVATE
    "${CLUSTER_DIR}/BasicInformationCluster.cpp"
    "${CLUSTER_DIR}/BasicInformationCluster.h"
)
```
Your implementation must correctly report which attributes and commands are available based on the enabled features and optional items.
Use the enabled feature set to gate feature-dependent elements, and `BitFlags` for purely optional elements; consult both when building the attribute and command lists (see the sketch below).
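As a hedged sketch of feature-gated attribute reporting (`MyCluster`, `mFeatures`, `Feature::kFancyMode`, and the attribute-entry constants are hypothetical; mirror the exact builder API from `ServerClusterInterface` in your checkout):

```cpp
CHIP_ERROR MyCluster::Attributes(const chip::app::ConcreteClusterPath & path,
                                 chip::ReadOnlyBufferBuilder<chip::app::DataModel::AttributeEntry> & builder)
{
    // The DefaultServerCluster base appends the global attribute entries.
    ReturnErrorOnFailure(DefaultServerCluster::Attributes(path, builder));

    // Mandatory attributes are always advertised.
    ReturnErrorOnFailure(builder.AppendElements(kMandatoryAttributeEntries));

    // Feature-gated attribute: advertised only when the feature bit is enabled.
    if (mFeatures.Has(Feature::kFancyMode))
    {
        ReturnErrorOnFailure(builder.Append(kFancyModeAttributeEntry));
    }
    return CHIP_NO_ERROR;
}
```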
For subscriptions to work correctly, you must notify the system whenever an attribute's value changes. The `Startup` method of your cluster receives a `ServerClusterContext`; use the context to call `interactionContext->dataModelChangeListener->MarkDirty(path)`. A `NotifyAttributeChanged` helper exists for paths managed by this cluster.

For write implementations, you can use `NotifyAttributeChangedIfSuccess` together with a separate `WriteImpl` method, so that every successful attribute write triggers a notification.
Canonical example code would look like:

```cpp
DataModel::ActionReturnStatus SomeCluster::WriteAttribute(const DataModel::WriteAttributeRequest & request,
                                                          AttributeValueDecoder & decoder)
{
    // Delegate everything to WriteImpl. If the write succeeds, notify that the attribute changed.
    return NotifyAttributeChangedIfSuccess(request.path.mAttributeId, WriteImpl(request, decoder));
}
```
When using `NotifyAttributeChangedIfSuccess`, ensure that `WriteImpl` returns `ActionReturnStatus::FixedStatus::kWriteSuccessNoOp` when no notification should be sent (e.g. the write was a no-op because the existing value was already the same).
Canonical example is:

```cpp
VerifyOrReturnValue(mValue != value, ActionReturnStatus::FixedStatus::kWriteSuccessNoOp);
```
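In context, a hedged sketch of a `WriteImpl` honoring this convention (the attribute name `SomeValue` and member `mValue` are hypothetical):

```cpp
DataModel::ActionReturnStatus SomeCluster::WriteImpl(const DataModel::WriteAttributeRequest & request,
                                                     AttributeValueDecoder & decoder)
{
    switch (request.path.mAttributeId)
    {
    case Attributes::SomeValue::Id: {
        uint16_t value;
        ReturnErrorOnFailure(decoder.Decode(value));
        // Unchanged value: succeed, but suppress the change notification.
        VerifyOrReturnValue(mValue != value, ActionReturnStatus::FixedStatus::kWriteSuccessNoOp);
        mValue = value;
        return CHIP_NO_ERROR;
    }
    default:
        return Protocols::InteractionModel::Status::UnsupportedAttribute;
    }
}
```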
For persistent attributes, use `AttributePersistence` from `src/app/persistence/AttributePersistence.h`; the `ServerClusterContext` provides an `AttributePersistenceProvider`. For storage needs that go beyond simple attribute values, fall back to the raw `PersistentStorageDelegate`.

For common or large clusters, you may need to optimize for resource usage. Consider using C++ templates to compile-time select features and attributes, which can significantly reduce flash and RAM footprint.
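As an illustration of that template approach (a minimal sketch only; all names hypothetical):

```cpp
#include <cstdint>
#include <type_traits>

// A cluster logic templated on a compile-time feature flag. Instantiating
// MyClusterLogic<false> carries no storage for the optional attribute, so the
// related code and RAM can be dropped entirely on constrained devices.
template <bool kHasFancyMode>
class MyClusterLogic
{
public:
    // Accessor only compiles when the feature is enabled.
    template <bool B = kHasFancyMode, typename = std::enable_if_t<B>>
    uint32_t GetFancyModeValue() const { return mFancyMode.value; }

private:
    struct Empty {};
    struct Storage { uint32_t value = 0; };
    // Empty placeholder when the feature is compiled out.
    std::conditional_t<kHasFancyMode, Storage, Empty> mFancyMode;
};
```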
### ServerClusterInterface Details

While `ReadAttribute`, `WriteAttribute`, and `InvokeCommand` are the most commonly implemented methods, the `ServerClusterInterface` has other methods for more advanced use cases.
- List attribute writes (`ListAttributeWriteNotification`): This method is an advanced callback for handling large list attributes that may require special handling, such as persisting them to storage in chunks. A typical example of a cluster that might use this is the Binding cluster. For most clusters, the default implementation is sufficient.
- Event access (`EventInfo`): You must implement the `EventInfo` method if your cluster emits any events that require non-default permissions to be read. For example, an event might require Administrator privileges. While not common, this should be verified for every new cluster implementation and checked during code reviews to ensure event access is correctly restricted.
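A hedged sketch of such an override (hypothetical cluster; verify the exact `EventEntry` fields against the data model interfaces in your checkout):

```cpp
CHIP_ERROR MyCluster::EventInfo(const chip::app::ConcreteEventPath & path,
                                chip::app::DataModel::EventEntry & eventInfo)
{
    // Require Administer privilege to read any event of this cluster.
    eventInfo.readPrivilege = chip::Access::Privilege::kAdminister;
    return CHIP_NO_ERROR;
}
```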
The distinction between AcceptedCommands and GeneratedCommands can be understood using a REST API analogy:
- `AcceptedCommands`: These are the "requests" that the server cluster can process. In the Matter specification, these are commands sent from the client to the server (client => server).
- `GeneratedCommands`: These are the "responses" that the server cluster can generate after processing an accepted command. In the spec, these are commands sent from the server back to the client (server => client).

These lists are built based on the cluster's definition in the Matter specification.
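For a hypothetical cluster with one request/response pair (command names invented; mirror the builder API from `ServerClusterInterface` in your checkout), the two lists might be populated like this:

```cpp
CHIP_ERROR MyCluster::AcceptedCommands(const chip::app::ConcreteClusterPath & path,
                                       chip::ReadOnlyBufferBuilder<chip::app::DataModel::AcceptedCommandEntry> & builder)
{
    // Client => server: the "requests" this cluster processes.
    return builder.Append(chip::app::DataModel::AcceptedCommandEntry{ Commands::DoSomething::Id });
}

CHIP_ERROR MyCluster::GeneratedCommands(const chip::app::ConcreteClusterPath & path,
                                        chip::ReadOnlyBufferBuilder<chip::CommandId> & builder)
{
    // Server => client: the "responses" emitted after processing DoSomething.
    return builder.Append(Commands::DoSomethingResponse::Id);
}
```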
Unit tests live in `src/app/clusters/<cluster-name>/tests/`. The `ClusterLogic` should be fully tested, including its behavior with different feature configurations. The `ClusterImplementation` can also be unit-tested if its logic is complex; otherwise, integration tests should provide sufficient coverage.

The build system maps cluster names to their source directories. Add your new cluster to this mapping:
Edit `src/app/zap_cluster_list.json` and add an entry for your cluster, pointing to the directory you created.

### Codegen Integration (CodegenIntegration.cpp)

To integrate your cluster with an application's `.zap` file configuration, you need to bridge the gap between the statically generated code and your C++ implementation.
1. Create `CodegenIntegration.cpp`: This file will contain the integration logic.
2. Create Build Files: Add `app_config_dependent_sources.gni` and `app_config_dependent_sources.cmake` to your cluster directory. These files should list `CodegenIntegration.cpp` and its dependencies. See existing clusters for examples.
3. Use Generated Configuration: The code generator creates a header file at `<app/static-cluster-config/<cluster-name>.h>` that provides static, application-specific configuration. Use this to initialize your cluster correctly for each endpoint.
4. Implement Callbacks: Implement `Matter<Cluster>ClusterInitCallback(EndpointId)` and `Matter<Cluster>ClusterShutdownCallback(EndpointId)` in your `CodegenIntegration.cpp` (see the sketch after this list).
5. Update `config-data.yaml`: To enable these callbacks, add your cluster to the `CodeDrivenClusters` array in `src/app/common/templates/config-data.yaml`.
6. Update ZAP Configuration: To prevent the Ember framework from allocating memory for your cluster's attributes (which are now managed by your `ClusterLogic`), you must:
   - In `src/app/common/templates/config-data.yaml`, consider adding your cluster to `CommandHandlerInterfaceOnlyClusters` if it does not need Ember command dispatch.
   - In `src/app/zap-templates/zcl/zcl.json` and `zcl-with-test-extensions.json`, add all non-list attributes of your cluster to `attributeAccessInterfaceAttributes`. This marks them as externally handled.

Once `config-data.yaml` and `zcl.json`/`zcl-with-test-extensions.json` are updated, run the ZAP regeneration command:
```shell
./scripts/run_in_build_env.sh 'scripts/tools/zap_regen_all.py'
```
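Putting steps 1 and 4 together, a schematic `CodegenIntegration.cpp` could look like the following; the instance management and registration details are placeholders, so copy the actual pattern from an existing cluster such as `basic-information`:

```cpp
#include <lib/core/DataModelTypes.h>

using namespace chip;

namespace {
// Placeholder: real integrations typically size a static instance array from the
// generated <app/static-cluster-config/<cluster-name>.h> configuration.
} // namespace

void MatterMyClusterClusterInitCallback(EndpointId endpointId)
{
    // Construct the cluster instance for this endpoint using the generated static
    // configuration, then register it with the data model provider.
}

void MatterMyClusterClusterShutdownCallback(EndpointId endpointId)
{
    // Unregister and destroy the instance created by the init callback.
}
```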
Finally, validate the cluster end to end:

- Enable your cluster in an example application, such as `all-clusters-app`, to test it in a real-world scenario.
- Use `chip-tool` or `matter-repl` to manually validate the cluster.
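For example, using the Software Diagnostics cluster mentioned earlier (chip-tool uses kebab-case cluster and attribute names; adjust the node id and endpoint to your setup):

```shell
# Read an attribute, then verify change reporting through a subscription
# (min interval 5 s, max interval 60 s), assuming node id 1 and endpoint 0.
./chip-tool softwarediagnostics read current-heap-free 1 0
./chip-tool softwarediagnostics subscribe current-heap-free 5 60 1 0
```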