Checking out the Matter code:
To check out the Matter repository:
git clone --recurse-submodules git@github.com:project-chip/connectedhomeip.git
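If you do not have SSH access to GitHub configured, cloning over HTTPS works as well:
git clone --recurse-submodules https://github.com/project-chip/connectedhomeip.git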
If you already have a checkout, run the following command to sync submodules:
git submodule update --init
Before building, you'll need to install a few OS-specific dependencies.
On Debian-based Linux distributions such as Ubuntu, these dependencies can be satisfied with the following:
sudo apt-get install git gcc g++ pkg-config libssl-dev libdbus-1-dev \
  libglib2.0-dev libavahi-client-dev ninja-build python3-venv python3-dev \
  python3-pip unzip libgirepository1.0-dev libcairo2-dev libreadline-dev
On macOS, first install Xcode from the Mac App Store. The remaining dependencies can be installed and satisfied using Brew:
brew install openssl pkg-config
However, that does not expose the package to pkg-config. To fix that, one needs to run something like the following:
cd /usr/local/lib/pkgconfig
ln -s ../../Cellar/openssl@1.1/1.1.1g/lib/pkgconfig/* .
openssl@1.1/1.1.1g may need to be replaced with the actual version of OpenSSL installed by Brew.
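Alternatively (a sketch, assuming a standard Homebrew layout), you can point pkg-config at Brew's OpenSSL without creating symlinks:
export PKG_CONFIG_PATH="$(brew --prefix openssl)/lib/pkgconfig:$PKG_CONFIG_PATH"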
Note: If using MacPorts, port install openssl is sufficient to satisfy this dependency.
Installing prerequisites on Raspberry Pi 4:
Using rpi-imager, install the Ubuntu 22.04 64-bit server OS for arm64 architectures on a micro SD card.
Boot the SD card, login with the default user account “ubuntu” and password “ubuntu”, then proceed with Installing prerequisites on Linux.
Finally, install some Raspberry Pi specific dependencies:
sudo apt-get install pi-bluetooth avahi-utils
You need to reboot your RPi after the install.
By default, wpa_supplicant is not allowed to update (overwrite) its configuration. If you want the Matter app to be able to store configuration changes permanently, make the following changes.
sudo nano /etc/systemd/system/dbus-fi.w1.wpa_supplicant1.service
Change the wpa_supplicant start parameters to:
ExecStart=/sbin/wpa_supplicant -u -s -i wlan0 -c /etc/wpa_supplicant/wpa_supplicant.conf
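After changing the unit file, reload systemd so it picks up the new definition (the reboot at the end of this section also accomplishes this):
sudo systemctl daemon-reload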
sudo nano /etc/wpa_supplicant/wpa_supplicant.conf
And add the following content to the file:
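ctrl_interface=DIR=/run/wpa_supplicant
update_config=1
Here update_config=1 is what allows wpa_supplicant to persist configuration changes; ctrl_interface declares the directory for wpa_supplicant's control sockets.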
Finally, reboot your RPi.
Installing ZAP:
ZAP generation and tooling relies on zap-cli being available on the current system. You can install it from the zap project Releases. Note that zap-cli is already installed in pre-built docker images for chip-build, such as chip-build-vscode.
You should install a compatible release version, generally checking against the release set in integrations/docker/images/chip-build/Dockerfile.
On Linux, installation from zap-linux.zip is recommended, as it pulls in fewer dependencies than the .deb package.
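For example, to fetch and unpack a release on Linux (the release tag below is a placeholder; substitute the version pinned in the Dockerfile mentioned above):
wget https://github.com/project-chip/zap/releases/download/vYYYY.MM.DD/zap-linux.zip
unzip zap-linux.zip -d ~/zap
export PATH="$HOME/zap:$PATH"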
ZAP scripting uses the following detection, in order:
- $ZAP_DEVELOPMENT_PATH points to a zap checkout. Use this if you are developing zap locally and would like to run zap with your changes.
- $ZAP_INSTALL_PATH points to where zap-linux.zip or zap-mac.zip was unpacked. This allows you to not need to place zap/zap-cli in $PATH.
- Otherwise, scripts assume zap is available in $PATH.
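For example, to use an unpacked release (the path is an assumption based on where you extracted it):
export ZAP_INSTALL_PATH=~/zap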
Before running any other build command, the scripts/activate.sh environment setup script should be sourced at the top level. This script takes care of downloading GN, ninja, and setting up a Python environment with libraries used to build and test.
If this script says the environment is out of date, it can be updated by running:
source scripts/bootstrap.sh
The scripts/bootstrap.sh script re-creates the environment from scratch, which is expensive, so avoid running it unless the environment is out of date.
This will build all sources, libraries, and tests for the host platform:
source scripts/activate.sh
gn gen out/host
ninja -C out/host
This generates a configuration suitable for debugging. To configure an optimized build, specify is_debug=false:
gn gen out/host --args='is_debug=false'
ninja -C out/host
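To check how an existing output directory is configured, gn can print an argument's current value and documentation, for example:
gn args out/host --list=is_debug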
The directory name out/host can be any directory, although it's conventional to build within the out directory. This example uses host to emphasize that we're building for the host system. Different build directories can be used for different configurations, or a single directory can be used and reconfigured as necessary via gn args.
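For example, running the following opens the directory's args.gn in an editor and regenerates the build on save:
gn args out/host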
To run all tests, run:
ninja -C out/host check
To run only the tests in src/inet/tests, you can run:
ninja -C out/host src/inet/tests:tests_run
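To discover which test targets exist under a directory, gn can list them (the label pattern here is illustrative):
gn ls out/host '//src/inet/*'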
Note that the build system caches passing tests, so if you see
ninja: no work to do
that means that the tests passed in a previous build.
The build is configured by setting build arguments. These are set by passing the --args option to gn gen, by running gn args on the output directory, or by hand-editing args.gn in the output directory. To configure a new build or edit the arguments to an existing build, run:
source scripts/activate.sh
gn args out/custom
ninja -C out/custom
Two key builtin build arguments are target_os and target_cpu, which control the OS & CPU of the build.
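As an illustration, an args.gn for an optimized cross-build might contain the following (the values are assumptions; choose ones matching your target):
target_os = "linux"
target_cpu = "arm64"
is_debug = false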
To see help for all available build arguments:
gn gen out/custom
gn args --list out/custom
Examples can be built in two ways, as separate projects that add Matter in the third_party directory, or in the top level Matter project.
To build the chip-shell example as a separate project:
cd examples/shell
gn gen out/debug
ninja -C out/debug
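Assuming the default output name, the freshly built shell can then be run from the example directory:
./out/debug/chip-shell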
To build it at the top level, see below under “Unified Builds”.
To build a unified configuration that approximates the set of continuous builds:
source scripts/activate.sh
gn gen out/unified --args='is_debug=true target_os="all"'
ninja -C out/unified all
This can be used prior to change submission to configure, build, and test the gcc, clang, mbedtls, & examples configurations all together in one parallel build. Each configuration has a separate subdirectory in the output dir.
This unified build can be used for day-to-day development, although it's more expensive to build everything for every edit. To save time, you can name the configuration to build:
ninja -C out/unified host_gcc
ninja -C out/unified check_host_gcc
Replace host_gcc with the name of the configuration, which is found in the root BUILD.gn.
You can also fine tune the configurations generated via arguments such as:
gn gen out/unified --args='is_debug=true target_os="all" enable_host_clang_build=false'
For a full list, see the root BUILD.gn.
Note that in the unified build, targets have multiple instances and need to be disambiguated by adding a (toolchain) suffix. Use gn ls out/unified to list all of the target instances. For example:
gn desc out/unified '//src/controller(//build/toolchain/host:linux_x64_clang)'
Note: Some platforms that can be built as part of the unified build require downloading additional tools. To add these to the build, the location must be provided as a build argument. For example, to add the Simplelink cc13x2_26x2 examples to the unified build, install SysConfig and add the following build arguments:
gn gen out/unified --args="target_os=\"all\" enable_ti_simplelink_builds=true ti_sysconfig_root=\"/path/to/sysconfig\""
GN has builtin help via gn help. Some recommended topics:
gn help execution
gn help grammar
gn help toolchain
Also see the quick start guide.
GN has various introspection tools to help examine the build configuration.
To show all of the targets in an output directory:
gn ls out/host
To show all of the files that will be built:
gn outputs out/host '*'
To show the GN representation of a configured target:
gn desc out/host //src/inet --all
To dump the GN representation of the entire build as JSON:
gn desc out/host/ '*' --all --format=json
To show the dependency tree:
gn desc out/host //:all deps --tree --all
To find dependency paths:
gn path out/host //src/transport/tests:tests //src/system
To list useful information for linking against libCHIP:
gn desc out/host //src/lib include_dirs
gn desc out/host //src/lib defines
gn desc out/host //src/lib outputs
# everything as JSON
gn desc out/host //src/lib --format=json
If you make any change to the GN build system, the next build will regenerate the ninja files automatically; there is no need to do anything else.