IDT v0.0.2 (#29931)

* Squashed commit of the following:

commit 29d46b1e1a49ea4952edb5767f057600e9a3f9be
Author: bozowski <bozowski@google.com>
Date:   Mon Oct 23 21:48:48 2023 -0700

    Nit

commit b43293773d2a3efbbe640b3c56889ee557b77bcd
Author: bozowski <bozowski@google.com>
Date:   Mon Oct 23 21:11:22 2023 -0700

    README and error reporting and controller restructure

commit 4ac762d9ca34d7e603921a0721a1bcd3b17223d8
Author: bozowski <bozowski@google.com>
Date:   Mon Oct 23 19:15:59 2023 -0700

    Logging

commit 5b6b3254b5afc975989d70df3bc60c501ee8a045
Author: bozowski <bozowski@google.com>
Date:   Mon Oct 23 18:38:25 2023 -0700

    Move logging

commit 9df3206fe6e87c801eef085ba3aded65b92d2fdb
Author: bozowski <bozowski@google.com>
Date:   Mon Oct 23 18:13:57 2023 -0700

    Prober

commit 4683b34a3ef3e4e63decc5ad1f73cb7543d0620f
Author: bozowski <bozowski@google.com>
Date:   Mon Oct 23 17:55:03 2023 -0700

    Make logcat resilient

commit 12bb816fddee9261167c7fcfd0a5a0c67c5a01ad
Author: bozowski <bozowski@google.com>
Date:   Mon Oct 23 17:30:37 2023 -0700

    Warn when file is not growing

commit 1451a027ae738cdffbea011d75a1358f953a4b9f
Author: bozowski <bozowski@google.com>
Date:   Mon Oct 23 16:03:36 2023 -0700

    Real time analysis

commit 8c8ac7f348935de39bab08e7a4c5521359a8a7bb
Author: bozowski <bozowski@google.com>
Date:   Mon Oct 23 12:22:00 2023 -0700

    Fix capabilities, screen

commit 6ccfcc7b00d9b32dbb1f71c0e604e2215f902ad5
Author: bozowski <bozowski@google.com>
Date:   Mon Oct 23 11:27:38 2023 -0700

    Snoop

commit 29d8969c363146f8e10c5b6ba550c7f6b01c9500
Author: bozowski <bozowski@google.com>
Date:   Mon Oct 23 11:12:26 2023 -0700

    HCI logs

commit a856bf67f37826de6594161e44a6547425686814
Author: bozowski <bozowski@google.com>
Date:   Fri Oct 20 15:04:32 2023 -0700

    Fix pull

commit 1239a9066afb0f980e10611f8bc877e5834372e1
Author: bozowski <bozowski@google.com>
Date:   Fri Oct 20 14:51:22 2023 -0700

    OK

commit 2b3db4dfb2935513b0995bdea8afb211671c8390
Author: bozowski <bozowski@google.com>
Date:   Fri Oct 20 14:09:16 2023 -0700

    Screen recording

commit 5d02aa61f9439bcbabffcbf6ce57d75636093a1a
Author: bozowski <bozowski@google.com>
Date:   Fri Oct 20 13:34:05 2023 -0700

    Nits

commit 85517ff7901854b229a10914a5234ea58072ecaa
Author: bozowski <bozowski@google.com>
Date:   Fri Oct 20 13:07:40 2023 -0700

    Fix prober, color config

commit c5d668c40067a1615dc19ebd329f14cb00659b47
Author: bozowski <bozowski@google.com>
Date:   Fri Oct 20 12:45:20 2023 -0700

    Screen

commit 63525818453d0190df0f9b06f29efa200ca928d4
Author: bozowski <bozowski@google.com>
Date:   Thu Oct 19 21:39:05 2023 -0700

    Nit

commit 1c8a45a7a7301031f904aa20ad4049a93a85bbde
Author: bozowski <bozowski@google.com>
Date:   Thu Oct 19 21:22:14 2023 -0700

    Nit

commit 51c187f13301386cca0a14a2c0a32b61c38a7cc7
Author: bozowski <bozowski@google.com>
Date:   Thu Oct 19 21:10:07 2023 -0700

    Splash

commit 0091588772a0dd29fce448a6cda1c19c655574b5
Author: bozowski <bozowski@google.com>
Date:   Thu Oct 19 20:47:48 2023 -0700

    Refactor

commit c480a023875e807d7ccb5780ab3846730211239c
Author: bozowski <bozowski@google.com>
Date:   Thu Oct 19 15:41:39 2023 -0700

    Refactor

commit 024eeb6f979a400f2315a68baa7a6a5ba094edfa
Author: bozowski <bozowski@google.com>
Date:   Thu Oct 19 15:27:05 2023 -0700

    Bugreport

commit efcc621f5c772f60405ed4b0035102d71a97f4cd
Author: bozowski <bozowski@google.com>
Date:   Thu Oct 19 14:51:30 2023 -0700

    Nit

commit d5474b7128607256972424bc3dd7b16012cc979c
Author: bozowski <bozowski@google.com>
Date:   Thu Oct 19 14:33:18 2023 -0700

    Configs

commit 0fcbb101db0f38c41f01aaf54a29830a5e5527ab
Author: bozowski <bozowski@google.com>
Date:   Thu Oct 19 04:26:54 2023 -0700

    TODO

commit b1c7a304c5a204fa43e12111cf4612471e6e5e4e
Author: bozowski <bozowski@google.com>
Date:   Thu Oct 19 04:20:18 2023 -0700

    Clean BUILD

commit d86778a5adecfc4b50f330a6dd0af2cfe651e82d
Author: bozowski <bozowski@google.com>
Date:   Thu Oct 19 04:16:19 2023 -0700

    Logging bug

commit 6ebf7c06cd45df1bfee44126e5707121d5149a22
Author: bozowski <bozowski@google.com>
Date:   Thu Oct 19 03:03:48 2023 -0700

    Branch

* Restyled by prettier-markdown

* Restyled by shellharden

* Restyled by autopep8

* Restyled by isort

* Spelling

* Lints

* Support RPi install

* Restyled by prettier-markdown

* Unneeded sudo

* Readme clarification

* Remove dead comment

* Remove dead comment

* Squashed commit of the following:

commit 36f3cebc8bead66b65bcfa1cc31892da05699a40
Author: Austin Bozowski <bozowski@google.com>
Date:   Wed Oct 25 20:07:16 2023 -0700

    Multiproc to async and target macOS

* Fix stopping procs on mac

* Replace multiproc with async
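
  For reference, a minimal sketch of the multiprocessing-to-asyncio move (the `stream_logs` name and the fixed capture window are placeholders for illustration, not the real `idt` classes):

  ```
  import asyncio


  async def stream_logs(name: str) -> None:
      # Formerly a multiprocessing.Process target; now a cooperative task that
      # yields to the event loop between reads instead of running in its own process.
      while True:
          await asyncio.sleep(0.5)  # stand-in for reading a log source


  async def main() -> None:
      tasks = [asyncio.create_task(stream_logs(n)) for n in ("logcat", "pcap")]
      await asyncio.sleep(2)  # stand-in for the capture window
      for t in tasks:
          t.cancel()  # replaces Process.terminate()
      await asyncio.gather(*tasks, return_exceptions=True)


  asyncio.run(main())
  ```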

* Fix sudo kill issue on macOS

* Note

* Fix stopping pcap issue on macOS

* Cleanup

* BLE scanning on macOS

* Final cleanup for macOS and README

* Squashed commit of the following:

commit 3da0875bd3d1c1c8b43f9d1985a30ec570fcefb3
Author: Austin Bozowski <bozowski@google.com>
Date:   Fri Nov 3 01:49:45 2023 -0700

    README

commit b7d115d58fe7a0d8189e1bf48ab8397f47f6044a
Author: Austin Bozowski <bozowski@google.com>
Date:   Fri Nov 3 01:44:58 2023 -0700

    Refactor

commit b9dca9db08ff7a675a3bc33d4cfaba1040e0e648
Author: Austin Bozowski <bozowski@google.com>
Date:   Fri Nov 3 01:16:11 2023 -0700

    Refactor

commit d960b41e2bc5a697da7addd3bafdd8e5e0b3e97e
Author: Austin Bozowski <bozowski@google.com>
Date:   Fri Nov 3 00:43:21 2023 -0700

    Logging

commit 4f6644cfe1db756ff2ab337bb32c559523834d97
Author: Austin Bozowski <bozowski@google.com>
Date:   Fri Nov 3 00:18:11 2023 -0700

    Discovery

commit 6900a196f752b30b1539cf393d00c46f6066a5fa
Author: Austin Bozowski <bozowski@google.com>
Date:   Thu Nov 2 23:55:17 2023 -0700

    README

commit f2e9a8cd705f2cc7a1490fd0ef51de24865b1c47
Author: Austin Bozowski <bozowski@google.com>
Date:   Thu Nov 2 23:44:15 2023 -0700

    README

commit dc98974c4c1eece8f7a820b30df734a1c1787732
Author: Austin Bozowski <bozowski@google.com>
Date:   Thu Nov 2 23:27:02 2023 -0700

    Write dnssd log

commit 1fe3f98116407d59a2fd392d507ea3bb96705297
Author: Austin Bozowski <bozowski@google.com>
Date:   Thu Nov 2 22:34:32 2023 -0700

    Cleanup probers

commit 5e48bc0f95a2e0b218bf00f31c6b1bb7f0baeedf
Author: Austin Bozowski <bozowski@google.com>
Date:   Thu Nov 2 14:55:03 2023 -0700

    Probes

commit c9d14ed1ac32a9107bd2ff63ac9aaeaa32812384
Author: Austin Bozowski <bozowski@google.com>
Date:   Thu Nov 2 13:22:21 2023 -0700

    Probers

commit 12307e422bb206e9d6dd868698c58674648cb3b2
Author: Austin Bozowski <bozowski@google.com>
Date:   Wed Nov 1 15:43:31 2023 -0700

    Earlier prober

* Nits

* Remove temp tests

* Prevent re-run of Bash proc and nits

* Restyled by prettier-markdown

* Restyled by autopep8

* Restyled by isort

* Spelling

* Lint

* Respond to comment.

* Comment and rename

* Nits

* Clarify

* Add info on interfaces to README

* Document failure mode of not allowing bt perms in macOS

* Restyled by prettier-markdown

* Restyled by isort

* Verify py version and fix non py dep check

* Move host dependencies back to main

* Fix py version check
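
  The version gate can be as simple as the sketch below; the 3.11 floor is an assumption based on the README and the use of `asyncio.timeout`, and the real check lives in `utils/host_platform.py`, which is not shown in this diff:

  ```
  import sys

  MIN_PY = (3, 11)  # assumed minimum; confirm against utils/host_platform.py


  def verify_py_version() -> None:
      if sys.version_info[:2] < MIN_PY:
          sys.exit(f"idt requires Python {MIN_PY[0]}.{MIN_PY[1]}+, "
                   f"found {sys.version_info.major}.{sys.version_info.minor}")


  verify_py_version()  # exits early on an unsupported interpreter
  ```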

* Make task cleanup more graceful
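
  The graceful-cleanup pattern mirrors what the analyzers in this change do: catch `asyncio.CancelledError` inside the task and release resources there, rather than letting cancellation kill the task mid-read. A minimal sketch (`feed_analyzer` is a placeholder, not the real hook):

  ```
  import asyncio


  def feed_analyzer(line: str) -> None:
      pass  # placeholder for the real analysis hook


  async def analyze(path: str) -> None:
      fd = open(path, "r")
      try:
          while True:
              for line in fd.readlines():
                  feed_analyzer(line)
              await asyncio.sleep(0.5)  # release the event loop between batches
      except asyncio.CancelledError:
          # The controller cancels this task at shutdown; close the handle
          # cleanly instead of leaking it.
          fd.close()
  ```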

* Nit

* Respond to comments

* Ensure `which` is available in host_platform

* Add mac tcpdump build script thanks to James
Rename timeout config and explain it in comments
Slight change to where timeouts are in controller
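
  The renamed timeout config is used roughly like this; the constant and the `asyncio.timeout` usage match `capture/config.py` and `capture/controller.py` in the diff below, while `run_step` itself is illustrative:

  ```
  import asyncio

  orchestrator_async_step_timeout_seconds = 240  # see capture/config.py


  async def run_step(step) -> None:
      try:
          # Only effective if `step` actually awaits; a call that blocks the event
          # loop never yields, so the timeout cannot fire (see capture/config.py).
          async with asyncio.timeout(orchestrator_async_step_timeout_seconds):
              await step()
      except TimeoutError:
          pass  # log the error and continue with the remaining steps


  async def _slow() -> None:
      await asyncio.sleep(0.1)


  asyncio.run(run_step(_slow))
  ```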

* README

* Logging

* Explain all DNS-SD content to the user (parse TXT etc.)
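
  For context on the TXT parsing: DNS-SD TXT rdata (RFC 6763) is a sequence of length-prefixed `key=value` strings. A minimal decoder looks like the sketch below; the Matter keys in the example (`VP`, `D`, `CM`) are illustrative values, not taken from a real capture:

  ```
  def decode_txt_rdata(rdata: bytes) -> dict:
      """Decode DNS-SD TXT rdata: length-prefixed key=value entries (RFC 6763)."""
      out, i = {}, 0
      while i < len(rdata):
          length = rdata[i]
          entry = rdata[i + 1:i + 1 + length]
          i += 1 + length
          key, _, value = entry.partition(b"=")
          if key:
              out[key.decode("ascii", "replace")] = value.decode("utf-8", "replace")
      return out


  print(decode_txt_rdata(b"\x0eVP=65521+32769\x06D=3840\x04CM=1"))
  # {'VP': '65521+32769', 'D': '3840', 'CM': '1'}
  ```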

* Remove TODO

* TTL and nits

* Logging, limit tracert, nits

* Restyled by prettier-markdown

* Restyled by shellharden

* Restyled by shfmt

* Restyled by autopep8

* Restyled by isort

* Integrate mac android tcp and bump restyle

* Use termcolor for colored logs
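
  Colorized logging with `termcolor` boils down to wrapping messages with `colored()`; the level-to-color mapping below is an assumed example and not necessarily what `utils/log.py` uses:

  ```
  from termcolor import colored  # third-party dependency


  def colorize(level: str, message: str) -> str:
      colors = {"INFO": "green", "WARNING": "yellow", "ERROR": "red"}  # assumed mapping
      return colored(message, colors.get(level, "white"))


  print(colorize("ERROR", "Failed to start pcap!"))
  ```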

* Timeout config documentation

* Remove unnecessary abstractions from controller

* Cleanup py version check

* Cleanup error reporting in controller

* Remove unneeded instance vars in playservices

* Fix logcat file scope issue introduced in last commit

* Fix dedent

* Add bison to Dockerfile

* Make screen-on check in Android explicit

* Restyled by autopep8

* Restyled by isort

---------

Co-authored-by: Restyled.io <commits@restyled.io>
diff --git a/src/tools/interop/idt/.gitignore b/src/tools/interop/idt/.gitignore
index d221600..55da2a8 100644
--- a/src/tools/interop/idt/.gitignore
+++ b/src/tools/interop/idt/.gitignore
@@ -7,3 +7,4 @@
 pycache/
 venv/
 .zip
+BUILD
diff --git a/src/tools/interop/idt/Dockerfile b/src/tools/interop/idt/Dockerfile
index 6eeea77..29036c3 100644
--- a/src/tools/interop/idt/Dockerfile
+++ b/src/tools/interop/idt/Dockerfile
@@ -21,6 +21,12 @@
     adb \
     aircrack-ng \
     apt-utils \
+    bison \
+    byacc \
+    dnsutils \
+    flex \
+    gcc-aarch64-linux-gnu \
+    gcc-arm-linux-gnueabi \
     git \
     glib-2.0 \
     kmod \
diff --git a/src/tools/interop/idt/README.md b/src/tools/interop/idt/README.md
index dc60067..acc0c25 100644
--- a/src/tools/interop/idt/README.md
+++ b/src/tools/interop/idt/README.md
@@ -29,7 +29,32 @@
 that analyzes capture data, displays info to the user, probes the local
 environment and generates additional artifacts.
 
-## Getting started
+## Single host installation (no Raspberry Pi)
+
+All features of `idt` are available on macOS and Linux (tested with Debian based
+systems).  
+If you would prefer to execute capture and discovery from a Raspberry Pi, read
+the next section instead.
+
+The machine running `idt` should be connected to the same Wi-Fi network used for
+testing.  
+Follow the steps below to execute capture and discovery without a Raspberry Pi:
+
+-   From the parent directory of `idt`, run `source idt/scripts/alias.sh`.
+-   Optionally, run `source idt/scripts/setup_shell.sh` to install aliases
+    permanently.
+-   After `idt` aliases are available in your environment, calling any `idt`
+    command will automatically create a new virtual environment and install
+    python dependencies.
+    -   If you're missing non-Python dependencies, you'll be prompted to install
+        them until they're available.
+-   Bluetooth discovery on macOS requires granting the program from which
+    `idt` is run (e.g. your terminal emulator or IDE) permission to access
+    Bluetooth in macOS settings.
+    -   Failure to do so may result in any of the following:
+        -   A single `abort` message and no further output in the terminal.
+        -   Failure with a relevant stack trace in the terminal.
+        -   A prompt to allow the application access to bluetooth.
 
 ## Raspberry Pi installation
 
@@ -137,7 +162,7 @@
 ```
 
 NOTE the idt artifacts directory is contained in idt, so running this will
-delete any artifacts ([TODO] change).
+delete any artifacts.
 
 Then from the admin computer:
 
@@ -145,54 +170,13 @@
 idt_push
 ```
 
-## Single host installation (no Raspberry Pi)
-
-Follow the steps below to execute capture and discovery without a Raspberry Pi.
-
-### Linux installation
-
-#### Requirements
-
--   This package should work on most Debian (/based) systems.
--   `idt` is currently tested on `Python 3.11`.
--   `adb` and `tcpdump` are required.
--   The machine running `idt` should be connected to the same Wi-Fi network used
-    for testing.
-
-#### Setup
-
--   From the parent directory of `idt`, run `source idt/scripts/alias.sh`.
--   Optionally, run `source idt/scripts/setup_shell.sh` to install aliases
-    permanently.
-
-> You may use `idt` in a Python virtual environment OR using a container from
-> the idt image.
-
-#### Python virtual environment
-
--   After `idt` aliases are available in your environment, calling any `idt`
-    command will automatically create a new virtual environment and install
-    dependencies.
-
-#### Docker
-
--   Run `idt_build` and `idt_activate` to enter the `idt` container.
-
-[TODO] Podman
-
-### macOS installation
-
-Most features other than BLE should work on macOS.
-
-Follow the Linux installation steps above, but do not use Docker.
-
-[TODO] macOS BLE support
-
 ## User guide
 
 > **_IMPORTANT_**  
 > `idt_` commands are shell aliases helpful for administrative commands.  
-> `idt` invokes the `idt` python package.
+> `idt` invokes the `idt` python package.  
+> Output from `idt` is generally colorized, while output from subprocesses is
+> not.
 
 RPi users, as needed:
 
@@ -208,45 +192,38 @@
 
 ### Capture
 
+> **_IMPORTANT_**  
+> Ensure you've made it to the log line "Starting real time analysis, press
+> enter to stop!" before launching the app under test.
+
 ```
 idt capture -h
 
-usage: idt capture [-h] [--platform {Android}]
-                   [--ecosystem {PlayServices,PlayServicesUser,ALL}]
-                   [--pcap {t,f}]
-                   [--interface {wlp0s20f3,docker0,lo}]
-                   [--additional {t,f}]
+usage: idt capture [-h] [--platform {Android}] [--ecosystem {PlayServicesUser,PlayServices,ALL}] [--pcap {t,f}] [--interface {wlp0s20f3,lo,docker0,any}]
 
 options:
   -h, --help            show this help message and exit
   --platform {Android}, -p {Android}
-                        Run capture for a particular platform
-                        (default Android)
-  --ecosystem {PlayServices,PlayServicesUser,ALL}, -e {PlayServices,PlayServicesUser,ALL}
-                        Run capture for a particular ecosystem or ALL
-                        ecosystems (default ALL)
+                        Run capture for a particular platform (default Android)
+  --ecosystem {PlayServicesUser,PlayServices,ALL}, -e {PlayServicesUser,PlayServices,ALL}
+                        Run capture for a particular ecosystem or ALL ecosystems (default ALL)
   --pcap {t,f}, -c {t,f}
                         Run packet capture (default t)
-  --interface {wlp0s20f3,docker0,lo}, -i {wlp0s20f3,docker0,lo}
-                        Run packet capture against a specified
-                        interface (default wlp0s20f3)
-  --additional {t,f}, -a {t,f}
-                        Run ble and mdns scanners in the background
-                        while capturing (default t)
+  --interface {wlp0s20f3,lo,docker0,any}, -i {wlp0s20f3,lo,docker0,any}
+                        Specify packet capture interface (default any)
 ```
 
+For the packet capture interface (`-i`/`--interface`):
+
+-   On macOS, the only available interface is `any`.
+-   On Linux, `idt` checks available interfaces from `/sys/class/net/` as well
+    as allowing `any`.
+
 #### Artifacts
 
 Each ecosystem and platform involved in the capture will have their own
 subdirectory in the root artifact dir.
 
-To download your artifacts, run these commands from your admin computer:
-
-`idt_fetch_artifacts`
-
-On windows admin computers, you may use `FileZilla` to pull the archive listed
-at the end of output.
-
 ### Discovery
 
 ```
@@ -260,7 +237,7 @@
                         Specify the type of discovery to execute
 ```
 
-#### ble
+#### BLE
 
 ```
 idt discover -t b
@@ -274,13 +251,90 @@
 
 #### Artifacts
 
-There is a per device log for ble scanning in `ble` subdirectory of the root
-artifact dir.
+There is a per-device log in the `ble` and `dnssd` subdirectories of the root
+artifact dir.
 
-[TODO] dnssd per device log
+### Probe
+
+```
+usage: idt probe [-h]
+
+options:
+  -h, --help  show this help message and exit
+```
+
+Collect contextually relevant networking info from the local environment and
+provide artifacts.
+
+## Troubleshooting
+
+-   Wireless `adb` may fail to connect indefinitely depending on network
+    configuration. Use a wired connection if wireless fails repeatedly.
+-   Change log level from `INFO` to `DEBUG` in root `config.py` for additional
+    logging.
+-   Compiling `tcpdump` for android may require additional dependencies.
+    -   If the build script fails for you, try
+        `idt_go && source idt/scripts/compilers.sh`.
+-   You may disable colors and splash by setting `enable_color` in `config.py`
+    to `False`.
+-   `idt_clean_child` will kill any stray `tcpdump` and `adb` commands.
+    -   `idt_check_child` will look for leftover processes.
+    -   Not expected to be needed outside of development scenarios.
+
+## Project overview
+
+-   The entry point is in `idt.py` which contains simple CLI parsing with
+    `argparse`.
+
+### `capture`
+
+-   `base` contains the base classes for ecosystems and platforms.
+-   `controller` contains the ecosystem and platform producer and controller.
+-   `loader` is a generic class loader that dynamically imports classes matching
+    a given super class from a given directory.
+-   `/platform` and `/ecosystem` contain one package for each platform and
+    ecosystem, which should each contain one implementation of the respective
+    base class.
+
+### `discovery`
+
+-   `matter_ble` provides a simple ble scanner that shows matter devices being
+    discovered and lost, as well as their VID/PID, RSSI, etc.
+-   `matter_dnssd` provides a simple DNS-SD browser that searches for matter
+    devices and thread border routers.
+
+### `probe`
+
+-   `probe` contains the base class for `idt`'s host-platform-specific
+    implementations.
+    -   Reuses the dnssd discovery implementation to build probe targets.
+    -   Calls platform + addr type specific probe methods for each target.
+-   `linux` and `mac` contain `probe` implementations for each host platform.
+
+### `utils`
+
+-   `log` contains logging utilities used by everything in the project.
+-   `artifact` contains helper functions for managing artifacts.
+-   `shell` contains a simple helper class for background and foreground Bash
+    commands.
+-   `host_platform` contains helper functions for interacting with the host
+    running `idt`.
+
+### Conventions
+
+-   `config.py` should be used to hold development configs within the directory
+    where they are needed.
+    -   It may also hold configs for flaky/cumbersome features that might need
+        to be disabled in an emergency.
+    -   `config.py` **should not** be used for everyday operation.
+-   When needed, execute builds in a folder called `BUILD` within the source
+    tree.
+    -   `idt_clean_all` deletes all `BUILD` dirs and `BUILD` is in `.gitignore`.
 
 ## Extending functionality
 
+### Capture
+
 Ecosystem and Platform implementations are dynamically loaded.
 
 For each package in `capture/ecosystem`, the ecosystem loader expects a module
@@ -299,14 +353,13 @@
 usage: idt capture [-h] [--platform {Android}] [--ecosystem {DemoExtEcosystem...
 ```
 
+> **_IMPORTANT_**  
+> Note the following runtime expectations of ecosystems: `analyze_capture()`
+> must not block the async event loop excessively and must not interact with
+> standard input.
+
 The platform loader functions the same as `capture/ecosystem`.
 
 For each package in `capture/platform`, the platform loader expects a module
 name matching the package name.  
 This module must contain a single class which is a subclass of
 `capture.base.PlatformLogStreamer`.
-
-Note the following runtime expectations of platforms:
-
--   Start should be able to be called repeatedly without restarting streaming.
--   Stop should not cause an error even if the stream is not running.
diff --git a/src/tools/interop/idt/__main__.py b/src/tools/interop/idt/__main__.py
index c354801..b927d05 100644
--- a/src/tools/interop/idt/__main__.py
+++ b/src/tools/interop/idt/__main__.py
@@ -16,6 +16,8 @@
 #
 
 from idt import InteropDebuggingTool
+from utils.host_platform import verify_py_version
 
 if __name__ == "__main__":
+    verify_py_version()
     InteropDebuggingTool()
diff --git a/src/tools/interop/idt/capture/__init__.py b/src/tools/interop/idt/capture/__init__.py
index d4e9bae..18069e7 100644
--- a/src/tools/interop/idt/capture/__init__.py
+++ b/src/tools/interop/idt/capture/__init__.py
@@ -17,16 +17,14 @@
 
 from capture import ecosystem, platform
 
-from .factory import EcosystemCapture, EcosystemController, EcosystemFactory, PlatformFactory, PlatformLogStreamer
+from .controller import EcosystemCapture, PlatformLogStreamer
 from .pcap import PacketCaptureRunner
 
 __all__ = [
     'ecosystem',
     'platform',
+    'controller',
     'EcosystemCapture',
-    'EcosystemController',
-    'EcosystemFactory',
     'PacketCaptureRunner',
-    'PlatformFactory',
     'PlatformLogStreamer',
 ]
diff --git a/src/tools/interop/idt/capture/base.py b/src/tools/interop/idt/capture/base.py
index 6c1ab6f..b905710 100644
--- a/src/tools/interop/idt/capture/base.py
+++ b/src/tools/interop/idt/capture/base.py
@@ -31,18 +31,32 @@
         raise NotImplementedError
 
     @abstractmethod
+    async def connect(self) -> None:
+        """
+        Establish connections to log targets for this platform
+        """
+        raise NotImplementedError
+
+    @abstractmethod
     async def start_streaming(self) -> None:
         """
         Begin streaming logs
-        Start should be able to be called repeatedly without restarting streaming
+        """
+        raise NotImplementedError
+
+    @abstractmethod
+    async def run_observers(self) -> None:
+        """
+        Observe log procs and restart as needed
+        Must be async aware and not interact with stdin
         """
         raise NotImplementedError
 
     @abstractmethod
     async def stop_streaming(self) -> None:
         """
-        Stop streaming logs
-        Stop should not cause an error even if the stream is not running
+        Stop the capture and pull any artifacts from remote devices
+        Write artifacts to artifact_dir passed on instantiation
         """
         raise NotImplementedError
 
@@ -71,13 +85,16 @@
     async def start_capture(self) -> None:
         """
         Start the capture
+        Platform is already started
         """
         raise NotImplementedError
 
     @abstractmethod
     async def stop_capture(self) -> None:
         """
-        Stop the capture
+        Stop the capture and pull any artifacts from remote devices
+        Write artifacts to artifact_dir passed on instantiation
+        Platform is already stopped
         """
         raise NotImplementedError
 
@@ -85,6 +102,13 @@
     async def analyze_capture(self) -> None:
         """
         Parse the capture and create + display helpful analysis artifacts that are unique to the ecosystem
-        Write analysis artifacts to artifact_dir
+        Must be async aware and not interact with stdin
+        """
+        raise NotImplementedError
+
+    @abstractmethod
+    async def probe_capture(self) -> None:
+        """
+        Probe the local environment, e.g. ping relevant remote services and write respective artifacts
         """
         raise NotImplementedError
diff --git a/src/tools/interop/idt/capture/config.py b/src/tools/interop/idt/capture/config.py
new file mode 100644
index 0000000..12ce985
--- /dev/null
+++ b/src/tools/interop/idt/capture/config.py
@@ -0,0 +1,50 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+"""
+The timeout time used by Orchestrator in capture/controller
+
+Used when calling:
+- Platform.connect() - if timeout, then halt
+- Ecosystem.start(), .stop(), .probe() - if timeout, then continue execution and log error
+
+This is an async timeout, so dependent on event loop being released to work.
+To illustrate, consider this example where no timeout is thrown despite the awaited task running for twice the timeout:
+----
+sleep_time = 2
+
+async def not_actually_async():
+    time.sleep(sleep_time * 2)  # Blocking the EL!
+
+async def try_timeout():
+    async with asyncio.timeout(sleep_time):
+        await not_actually_async()
+    print("Timeout was NOT thrown!")
+
+asyncio.run(try_timeout())
+----
+Result: Timeout was NOT thrown!
+
+Update the example
+----
+async def not_actually_async():  # Now it is_actually_async because we
+    await asyncio.sleep(sleep_time * 2)  # change to something that isn't blocking the EL
+----
+Result: The timeout error will be raised.
+
+"""
+orchestrator_async_step_timeout_seconds = 240
diff --git a/src/tools/interop/idt/capture/controller.py b/src/tools/interop/idt/capture/controller.py
new file mode 100644
index 0000000..624a6f2
--- /dev/null
+++ b/src/tools/interop/idt/capture/controller.py
@@ -0,0 +1,183 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+import asyncio
+import copy
+import os
+import sys
+import traceback
+import typing
+from dataclasses import dataclass
+
+import capture
+from capture.base import EcosystemCapture, PlatformLogStreamer, UnsupportedCapturePlatformException
+from utils.artifact import create_standard_log_name, log, safe_mkdir
+from utils.log import add_border, border_print
+
+from . import config
+
+
+@dataclass(repr=True)
+class ErrorRecord:
+    ecosystem: str
+    help_message: str
+    stack_trace: str
+
+
+_PLATFORM_MAP: typing.Dict[str, PlatformLogStreamer] = {}
+_ECOSYSTEM_MAP: typing.Dict[str, EcosystemCapture] = {}
+_ERROR_REPORT: typing.Dict[str, typing.List[ErrorRecord]] = {}
+
+logger = log.get_logger(__file__)
+
+
+def track_error(ecosystem: str, help_message: str) -> None:
+    if ecosystem not in _ERROR_REPORT:
+        _ERROR_REPORT[ecosystem] = []
+    record = ErrorRecord(ecosystem, help_message, traceback.format_exc())
+    logger.error(record)
+    _ERROR_REPORT[ecosystem].append(record)
+
+
+def list_available_platforms() -> typing.List[str]:
+    return copy.deepcopy(capture.platform.__all__)
+
+
+async def get_platform_impl(
+        platform: str,
+        artifact_dir: str) -> PlatformLogStreamer:
+    if platform in _PLATFORM_MAP:
+        return _PLATFORM_MAP[platform]
+    border_print(f"Initializing platform {platform}")
+    platform_class = getattr(capture.platform, platform)
+    platform_artifact_dir = os.path.join(artifact_dir, platform)
+    safe_mkdir(platform_artifact_dir)
+    platform_inst = platform_class(platform_artifact_dir)
+    _PLATFORM_MAP[platform] = platform_inst
+    async with asyncio.timeout(config.orchestrator_async_step_timeout_seconds):
+        await platform_inst.connect()
+    return platform_inst
+
+
+def list_available_ecosystems() -> typing.List[str]:
+    return copy.deepcopy(capture.ecosystem.__all__)
+
+
+async def get_ecosystem_impl(
+        ecosystem: str,
+        platform: PlatformLogStreamer,
+        artifact_dir: str) -> EcosystemCapture:
+    if ecosystem in _ECOSYSTEM_MAP:
+        return _ECOSYSTEM_MAP[ecosystem]
+    ecosystem_class = getattr(capture.ecosystem, ecosystem)
+    ecosystem_artifact_dir = os.path.join(artifact_dir, ecosystem)
+    safe_mkdir(ecosystem_artifact_dir)
+    ecosystem_instance = ecosystem_class(platform, ecosystem_artifact_dir)
+    _ECOSYSTEM_MAP[ecosystem] = ecosystem_instance
+    return ecosystem_instance
+
+
+async def init_ecosystems(platform, ecosystem, artifact_dir):
+    platform = await get_platform_impl(platform, artifact_dir)
+    ecosystems_to_load = list_available_ecosystems() \
+        if ecosystem == 'ALL' \
+        else [ecosystem]
+    for ecosystem in ecosystems_to_load:
+        try:
+            await get_ecosystem_impl(
+                ecosystem, platform, artifact_dir)
+        except UnsupportedCapturePlatformException:
+            help_message = f"Unsupported platform {ecosystem} {platform}"
+            logger.error(help_message)
+            track_error(ecosystem, help_message)
+        except Exception:
+            help_message = f"Unknown error instantiating ecosystem {ecosystem} {platform}"
+            logger.error(help_message)
+            track_error(ecosystem, help_message)
+
+
+async def handle_capture(attr):
+    attr = f"{attr}_capture"
+    for ecosystem in _ECOSYSTEM_MAP:
+        try:
+            border_print(f"{attr} for {ecosystem}")
+            async with asyncio.timeout(config.orchestrator_async_step_timeout_seconds):
+                await getattr(_ECOSYSTEM_MAP[ecosystem], attr)()
+        except TimeoutError:
+            help_message = f"Timeout after {config.orchestrator_async_step_timeout_seconds} seconds {attr} {ecosystem}"
+            logger.error(help_message)
+            track_error(ecosystem, help_message)
+        except Exception:
+            help_message = f"Unexpected error {attr} {ecosystem}"
+            logger.error(help_message)
+            track_error(ecosystem, help_message)
+
+
+async def start():
+    for platform_name, platform, in _PLATFORM_MAP.items():
+        # TODO: Write error log if halt here
+        border_print(f"Starting streaming for platform {platform_name}")
+        await platform.start_streaming()
+    await handle_capture("start")
+
+
+async def stop():
+    for platform_name, platform, in _PLATFORM_MAP.items():
+        # TODO: Write error log if halt here
+        border_print(f"Stopping streaming for platform {platform_name}")
+        await platform.stop_streaming()
+    await handle_capture("stop")
+
+
+async def run_analyzers():
+    border_print("Starting real time analysis, press enter to stop!", important=True)
+    analysis_tasks = []
+    monitor_tasks = []
+    for platform_name, platform in _PLATFORM_MAP.items():
+        logger.info(f"Creating monitor task for {platform_name}")
+        monitor_tasks.append(asyncio.create_task(platform.run_observers()))
+    for ecosystem_name, ecosystem in _ECOSYSTEM_MAP.items():
+        logger.info(f"Creating analysis task for {ecosystem_name}")
+        analysis_tasks.append(asyncio.create_task(ecosystem.analyze_capture()))
+    logger.info("Done creating analysis tasks")
+    await asyncio.get_event_loop().run_in_executor(
+        None, sys.stdin.readline)
+    border_print("Cancelling monitor tasks")
+    for task in monitor_tasks:
+        task.cancel()
+    logger.info("Done cancelling monitor tasks")
+    border_print("Cancelling analysis tasks")
+    for task in analysis_tasks:
+        task.cancel()
+    logger.info("Done cancelling analysis tasks")
+
+
+async def probe():
+    await handle_capture("probe")
+
+
+def write_error_report(artifact_dir: str):
+    if _ERROR_REPORT:
+        logger.critical("DETECTED ERRORS THIS RUN!")
+        error_report_file_name = create_standard_log_name("error_report", "txt", parent=artifact_dir)
+        with open(error_report_file_name, "a+") as error_report_file:
+            for ecosystem in _ERROR_REPORT:
+                log.print_and_write(add_border(f"Errors for {ecosystem}"), error_report_file)
+                for record in _ERROR_REPORT[ecosystem]:
+                    log.print_and_write(str(record), error_report_file)
+    else:
+        logger.info("No errors seen this run!")
diff --git a/src/tools/interop/idt/capture/ecosystem/play_services/config.py b/src/tools/interop/idt/capture/ecosystem/play_services/config.py
new file mode 100644
index 0000000..2daacb3
--- /dev/null
+++ b/src/tools/interop/idt/capture/ecosystem/play_services/config.py
@@ -0,0 +1,19 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+enable_foyer_probers = True
+foyer_prober_traceroute_limit = 32
diff --git a/src/tools/interop/idt/capture/ecosystem/play_services/play_services.py b/src/tools/interop/idt/capture/ecosystem/play_services/play_services.py
index aa92768..e94a10f 100644
--- a/src/tools/interop/idt/capture/ecosystem/play_services/play_services.py
+++ b/src/tools/interop/idt/capture/ecosystem/play_services/play_services.py
@@ -15,16 +15,22 @@
 #    limitations under the License.
 #
 
+import asyncio
 import json
 import os
-from typing import Dict
+from typing import IO, Dict
 
 from capture.base import EcosystemCapture, UnsupportedCapturePlatformException
-from capture.file_utils import create_standard_log_name
 from capture.platform.android import Android
+from capture.platform.android.streams.logcat import LogcatStreamer
+from utils.artifact import create_standard_log_name, log
 
-from .analysis import PlayServicesAnalysis
+from . import config
 from .command_map import dumpsys, getprop
+from .play_services_analysis import PlayServicesAnalysis
+from .prober import PlayServicesProber
+
+logger = log.get_logger(__file__)
 
 
 class PlayServices(EcosystemCapture):
@@ -33,7 +39,6 @@
     """
 
     def __init__(self, platform: Android, artifact_dir: str) -> None:
-
         self.artifact_dir = artifact_dir
 
         if not isinstance(platform, Android):
@@ -52,10 +57,12 @@
                             '305',  # Thread
                             '168',  # mDNS
                             ]
+        self.logcat_stream: LogcatStreamer = self.platform.streams["LogcatStreamer"]
+        self.logcat_file: IO = None
 
     def _write_standard_info_file(self) -> None:
         for k, v in self.standard_info_data.items():
-            print(f"{k}: {v}")
+            logger.info(f"{k}: {v}")
         standard_info_data_json = json.dumps(self.standard_info_data, indent=2)
         with open(self.standard_info_file_path, mode='w+') as standard_info_file:
             standard_info_file.write(standard_info_data_json)
@@ -87,10 +94,24 @@
             verbose_command = f"shell setprop log.tag.gms_svc_id:{service_id} VERBOSE"
             self.platform.run_adb_command(verbose_command)
         self._get_standard_info()
-        await self.platform.start_streaming()
+
+    async def analyze_capture(self):
+        try:
+            self.logcat_file = open(self.logcat_stream.logcat_artifact, "r")
+            while True:
+                self.analysis.do_analysis(self.logcat_file.readlines())
+                # Releasing async event loop for other analysis / monitor topics
+                await asyncio.sleep(0.5)
+        except asyncio.CancelledError:
+            logger.info("Closing logcat stream")
+            if self.logcat_file:
+                self.logcat_file.close()
 
     async def stop_capture(self) -> None:
-        await self.platform.stop_streaming()
+        self.analysis.show_analysis()
 
-    async def analyze_capture(self) -> None:
-        self.analysis.do_analysis()
+    async def probe_capture(self) -> None:
+        if config.enable_foyer_probers:
+            await PlayServicesProber(self.platform, self.artifact_dir).probe_services()
+        else:
+            logger.critical("Foyer probers disabled in config!")
diff --git a/src/tools/interop/idt/capture/ecosystem/play_services/analysis.py b/src/tools/interop/idt/capture/ecosystem/play_services/play_services_analysis.py
similarity index 85%
rename from src/tools/interop/idt/capture/ecosystem/play_services/analysis.py
rename to src/tools/interop/idt/capture/ecosystem/play_services/play_services_analysis.py
index 46fcfc1..e6ebb8f 100644
--- a/src/tools/interop/idt/capture/ecosystem/play_services/analysis.py
+++ b/src/tools/interop/idt/capture/ecosystem/play_services/play_services_analysis.py
@@ -17,13 +17,17 @@
 
 import os
 
-from capture.file_utils import add_border, create_standard_log_name, print_and_write
 from capture.platform.android import Android
+from utils.artifact import create_standard_log_name, log
+from utils.log import add_border, print_and_write
+
+logger = log.get_logger(__file__)
 
 
 class PlayServicesAnalysis:
 
     def __init__(self, platform: Android, artifact_dir: str) -> None:
+        self.logger = logger
         self.artifact_dir = artifact_dir
         self.analysis_file_name = os.path.join(
             self.artifact_dir, create_standard_log_name(
@@ -41,6 +45,7 @@
     def _log_proc_matter_commissioner(self, line: str) -> None:
         """Core commissioning flow"""
         if 'MatterCommissioner' in line:
+            self.logger.info(line)
             self.matter_commissioner_logs += line
 
     def _log_proc_commissioning_failed(self, line: str) -> None:
@@ -51,24 +56,28 @@
             self.failure_stack_trace += line
             self.fail_trace_line_counter += 1
         if 'SetupDeviceView' and 'Commissioning failed' in line:
+            self.logger.info(line)
             self.fail_trace_line_counter = 0
             self.failure_stack_trace += line
 
     def _log_proc_pake(self, line: str) -> None:
         """Three logs for pake 1-3 expected"""
         if "Pake" in line and "chip_logging" in line:
+            self.logger.info(line)
             self.pake_logs += line
 
     def _log_proc_mdns(self, line: str) -> None:
         if "_matter" in line and "ServiceResolverAdapter" in line:
+            self.logger.info(line)
             self.resolver_logs += line
 
     def _log_proc_sigma(self, line: str) -> None:
         """Three logs expected for sigma 1-3"""
         if "Sigma" in line and "chip_logging" in line:
+            self.logger.info(line)
             self.sigma_logs += line
 
-    def _show_analysis(self) -> None:
+    def show_analysis(self) -> None:
         analysis_file = open(self.analysis_file_name, mode="w+")
         print_and_write(add_border('Matter commissioner logs'), analysis_file)
         print_and_write(self.matter_commissioner_logs, analysis_file)
@@ -85,11 +94,9 @@
         analysis_file.close()
 
     def process_line(self, line: str) -> None:
-        for line_func in filter(lambda s: s.startswith('_log'), dir(self)):
+        for line_func in [s for s in dir(self) if s.startswith('_log')]:
             getattr(self, line_func)(line)
 
-    def do_analysis(self) -> None:
-        with open(self.platform.logcat_output_path, mode='r') as logcat_file:
-            for line in logcat_file:
-                self.process_line(line)
-        self._show_analysis()
+    def do_analysis(self, batch: [str]) -> None:
+        for line in batch:
+            self.process_line(line)
diff --git a/src/tools/interop/idt/capture/ecosystem/play_services/prober.py b/src/tools/interop/idt/capture/ecosystem/play_services/prober.py
new file mode 100644
index 0000000..8dcda80
--- /dev/null
+++ b/src/tools/interop/idt/capture/ecosystem/play_services/prober.py
@@ -0,0 +1,71 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+import os
+
+from utils.shell import Bash, log
+
+from . import config
+
+logger = log.get_logger(__file__)
+
+
+class PlayServicesProber:
+
+    def __init__(self, platform, artifact_dir):
+        # TODO: Handle all resolved addresses
+        self.platform = platform
+        self.artifact_dir = artifact_dir
+        self.logger = logger
+        self.probe_artifact = os.path.join(self.artifact_dir, "net_probes.txt")
+        self.command_suffix = f" 2>&1  | tee -a {self.probe_artifact}"
+        self.target = "googlehomefoyer-pa.googleapis.com"
+        self.tracert_limit = config.foyer_prober_traceroute_limit
+
+    def run_command(self, command):
+        Bash(f"{command} {self.command_suffix}", sync=True).start_command()
+
+    async def _probe_tracert_icmp_foyer(self) -> None:
+        self.logger.info(f"icmp traceroute to {self.target}")
+        self.run_command(f"traceroute -m {self.tracert_limit} {self.target}")
+
+    async def _probe_tracert_udp_foyer(self) -> None:
+        # TODO: Per-host-platform impl
+        self.logger.info(f"udp traceroute to {self.target}")
+        self.run_command(f"traceroute -m {self.tracert_limit} -U -p 443 {self.target}")
+
+    async def _probe_tracert_tcp_foyer(self) -> None:
+        # TODO: Per-host-platform impl
+        self.logger.info(f"tcp traceroute to {self.target}")
+        self.run_command(f"traceroute -m {self.tracert_limit} -T -p 443 {self.target}")
+
+    async def _probe_ping_foyer(self) -> None:
+        self.logger.info(f"ping {self.target}")
+        self.run_command(f"ping -c 4 {self.target}")
+
+    async def _probe_dns_foyer(self) -> None:
+        self.logger.info(f"dig {self.target}")
+        self.run_command(f"dig {self.target}")
+
+    async def _probe_from_phone_ping_foyer(self) -> None:
+        self.logger.info(f"ping {self.target} from phone")
+        self.platform.run_adb_command(f"shell ping -c 4 {self.target} {self.command_suffix}")
+
+    async def probe_services(self) -> None:
+        self.logger.info(f"Probing {self.target}")
+        for probe_func in [s for s in dir(self) if s.startswith('_probe')]:
+            await getattr(self, probe_func)()
diff --git a/src/tools/interop/idt/capture/ecosystem/play_services_user/play_services_user.py b/src/tools/interop/idt/capture/ecosystem/play_services_user/play_services_user.py
index 5044842..2fdac62 100644
--- a/src/tools/interop/idt/capture/ecosystem/play_services_user/play_services_user.py
+++ b/src/tools/interop/idt/capture/ecosystem/play_services_user/play_services_user.py
@@ -15,11 +15,16 @@
 #    limitations under the License.
 #
 
+import asyncio
 import os
 
 from capture.base import EcosystemCapture, UnsupportedCapturePlatformException
-from capture.file_utils import create_standard_log_name, print_and_write
 from capture.platform.android.android import Android
+from capture.platform.android.streams.logcat import LogcatStreamer
+from utils.artifact import create_standard_log_name
+from utils.log import get_logger, print_and_write
+
+logger = get_logger(__file__)
 
 
 class PlayServicesUser(EcosystemCapture):
@@ -28,7 +33,7 @@
     """
 
     def __init__(self, platform: Android, artifact_dir: str) -> None:
-
+        self.logger = logger
         self.artifact_dir = artifact_dir
         self.analysis_file = os.path.join(
             self.artifact_dir, create_standard_log_name(
@@ -39,28 +44,47 @@
                 'only platform=android is supported for '
                 'ecosystem=PlayServicesUser')
         self.platform = platform
+        self.logcat_fd = None
+        self.output = ""
+        self.logcat_stream: LogcatStreamer = self.platform.streams["LogcatStreamer"]
 
     async def start_capture(self) -> None:
-        await self.platform.start_streaming()
+        pass
 
     async def stop_capture(self) -> None:
-        await self.platform.stop_streaming()
+        self.show_analysis()
+
+    def proc_line(self, line) -> None:
+        if "CommissioningServiceBin: Binding to service" in line:
+            s = f"3P commissioner initiated Play Services commissioning\n{line}"
+            logger.info(s)
+            self.output += f"{s}\n"
+        elif "CommissioningServiceBin: Sending commissioning request to bound service" in line:
+            s = f"Play Services commissioning complete; passing back to 3P\n{line}"
+            logger.info(s)
+            self.output += f"{s}\n"
+        elif "CommissioningServiceBin: Received commissioning complete from bound service" in line:
+            s = f"3P commissioning complete!\n{line}"
+            logger.info(s)
+            self.output += f"{s}\n"
 
     async def analyze_capture(self) -> None:
         """"Show the start and end times of commissioning boundaries"""
-        analysis_file = open(self.analysis_file, mode='w+')
-        with open(self.platform.logcat_output_path, mode='r') as logcat_file:
-            for line in logcat_file:
-                if "CommissioningServiceBin: Binding to service" in line:
-                    print_and_write(
-                        f"3P commissioner initiated Play Services commissioning\n{line}",
-                        analysis_file)
-                elif "CommissioningServiceBin: Sending commissioning request to bound service" in line:
-                    print_and_write(
-                        f"Play Services commissioning complete; passing back to 3P\n{line}",
-                        analysis_file)
-                elif "CommissioningServiceBin: Received commissioning complete from bound service" in line:
-                    print_and_write(
-                        f"3P commissioning complete!\n{line}",
-                        analysis_file)
-        analysis_file.close()
+        try:
+            self.logcat_fd = open(self.logcat_stream.logcat_artifact, "r")
+            while True:
+                for line in self.logcat_fd.readlines():
+                    self.proc_line(line)
+                # Releasing async event loop for other analysis / monitor tasks
+                await asyncio.sleep(0.5)
+        except asyncio.CancelledError:
+            self.logger.info("Closing logcat stream")
+            if self.logcat_fd is not None:
+                self.logcat_fd.close()
+
+    def show_analysis(self) -> None:
+        with open(self.analysis_file, "w") as analysis_file:
+            print_and_write(self.output, analysis_file)
+
+    async def probe_capture(self) -> None:
+        pass
diff --git a/src/tools/interop/idt/capture/factory.py b/src/tools/interop/idt/capture/factory.py
deleted file mode 100644
index 3c86045..0000000
--- a/src/tools/interop/idt/capture/factory.py
+++ /dev/null
@@ -1,124 +0,0 @@
-#
-#    Copyright (c) 2023 Project CHIP Authors
-#    All rights reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License");
-#    you may not use this file except in compliance with the License.
-#    You may obtain a copy of the License at
-#
-#        http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS,
-#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#    See the License for the specific language governing permissions and
-#    limitations under the License.
-#
-
-import asyncio
-import copy
-import os
-import traceback
-import typing
-
-import capture
-from capture.base import EcosystemCapture, PlatformLogStreamer, UnsupportedCapturePlatformException
-from capture.file_utils import border_print, safe_mkdir
-
-_CONFIG_TIMEOUT = 45.0
-_PLATFORM_MAP: typing.Dict[str, PlatformLogStreamer] = {}
-_ECOSYSTEM_MAP: typing.Dict[str, PlatformLogStreamer] = {}
-
-
-def _get_timeout():
-    return asyncio.get_running_loop().time() + _CONFIG_TIMEOUT
-
-
-class PlatformFactory:
-
-    @staticmethod
-    def list_available_platforms() -> typing.List[str]:
-        return copy.deepcopy(capture.platform.__all__)
-
-    @staticmethod
-    def get_platform_impl(
-            platform: str,
-            artifact_dir: str) -> PlatformLogStreamer:
-        if platform in _PLATFORM_MAP:
-            return _PLATFORM_MAP[platform]
-        platform_class = getattr(capture.platform, platform)
-        platform_artifact_dir = os.path.join(artifact_dir, platform)
-        safe_mkdir(platform_artifact_dir)
-        platform_inst = platform_class(platform_artifact_dir)
-        _PLATFORM_MAP[platform] = platform_inst
-        return platform_inst
-
-
-class EcosystemFactory:
-
-    @staticmethod
-    def list_available_ecosystems() -> typing.List[str]:
-        return copy.deepcopy(capture.ecosystem.__all__)
-
-    @staticmethod
-    async def get_ecosystem_impl(
-            ecosystem: str,
-            platform: str,
-            artifact_dir: str) -> EcosystemCapture:
-        if ecosystem in _ECOSYSTEM_MAP:
-            return _ECOSYSTEM_MAP[ecosystem]
-        ecosystem_class = getattr(capture.ecosystem, ecosystem)
-        ecosystem_artifact_dir = os.path.join(artifact_dir, ecosystem)
-        safe_mkdir(ecosystem_artifact_dir)
-        platform_instance = PlatformFactory.get_platform_impl(
-            platform, artifact_dir)
-        ecosystem_instance = ecosystem_class(platform_instance, ecosystem_artifact_dir)
-        _ECOSYSTEM_MAP[ecosystem] = ecosystem_instance
-        return ecosystem_instance
-
-    @staticmethod
-    async def init_ecosystems(platform, ecosystem, artifact_dir):
-        ecosystems_to_load = EcosystemFactory.list_available_ecosystems() \
-            if ecosystem == 'ALL' \
-            else [ecosystem]
-        for ecosystem in ecosystems_to_load:
-            try:
-                async with asyncio.timeout_at(_get_timeout()):
-                    await EcosystemFactory.get_ecosystem_impl(
-                        ecosystem, platform, artifact_dir)
-            except UnsupportedCapturePlatformException:
-                print(f"ERROR unsupported platform {ecosystem} {platform}")
-            except TimeoutError:
-                print(f"ERROR timeout starting ecosystem {ecosystem} {platform}")
-            except Exception:
-                print("ERROR unknown error instantiating ecosystem")
-                print(traceback.format_exc())
-
-
-class EcosystemController:
-
-    @staticmethod
-    async def handle_capture(attr):
-        attr = f"{attr}_capture"
-        for ecosystem in _ECOSYSTEM_MAP:
-            try:
-                border_print(f"{attr} capture for {ecosystem}")
-                async with asyncio.timeout_at(_get_timeout()):
-                    await getattr(_ECOSYSTEM_MAP[ecosystem], attr)()
-            except TimeoutError:
-                print(f"ERROR timeout {attr} {ecosystem}")
-            except Exception:
-                print(f"ERROR unexpected error {attr} {ecosystem}")
-                print(traceback.format_exc())
-
-    @staticmethod
-    async def start():
-        await EcosystemController.handle_capture("start")
-
-    @staticmethod
-    async def stop():
-        await EcosystemController.handle_capture("stop")
-
-    @staticmethod
-    async def analyze():
-        await EcosystemController.handle_capture("analyze")
diff --git a/src/tools/interop/idt/capture/loader.py b/src/tools/interop/idt/capture/loader.py
index 27fd7c1..8e02e9d 100644
--- a/src/tools/interop/idt/capture/loader.py
+++ b/src/tools/interop/idt/capture/loader.py
@@ -18,12 +18,18 @@
 import importlib
 import inspect
 import os
+import traceback
 from typing import Any
 
+from utils import log
+
+logger = log.get_logger(__file__)
+
 
 class CaptureImplsLoader:
 
     def __init__(self, root_dir: str, root_package: str, search_type: type):
+        self.logger = logger
         self.root_dir = root_dir
         self.root_package = root_package
         self.search_type = search_type
@@ -38,25 +44,35 @@
         return os.path.exists(init_path)
 
     def verify_coroutines(self, subclass) -> bool:
+        # ABC does not verify coroutines on subclass instantiation, it merely checks the presence of methods
         for item in dir(self.search_type):
             item_attr = getattr(self.search_type, item)
             if inspect.iscoroutinefunction(item_attr):
                 if not hasattr(subclass, item):
+                    self.logger.warning(f"Missing coroutine in {subclass}")
                     return False
                 if not inspect.iscoroutinefunction(getattr(subclass, item)):
+                    self.logger.warning(f"Missing coroutine in {subclass}")
+                    return False
+        for item in dir(subclass):
+            item_attr = getattr(subclass, item)
+            if inspect.iscoroutinefunction(item_attr) and hasattr(self.search_type, item):
+                if not inspect.iscoroutinefunction(getattr(self.search_type, item)):
+                    self.logger.warning(f"Unexpected coroutine in {subclass}")
                     return False
         return True
 
     def is_type_match(self, potential_class_match: Any) -> bool:
         if inspect.isclass(potential_class_match):
+            self.logger.debug(f"Checking {self.search_type} match against {potential_class_match}")
             if issubclass(potential_class_match, self.search_type):
+                self.logger.debug(f"Found type match search: {self.search_type} match: {potential_class_match}")
                 if self.verify_coroutines(potential_class_match):
                     return True
-                else:
-                    print(f"WARNING missing coroutine {potential_class_match}")
         return False
 
     def load_module(self, to_load):
+        self.logger.debug(f"Loading module {to_load}")
         saw_more_than_one_impl = False
         saw_one_impl = False
         found_class = None
@@ -73,14 +89,16 @@
             self.impl_names.append(found_class)
             self.impls[found_class] = found_impl
         elif saw_more_than_one_impl:
-            print(f"WARNING more than one impl in {module_item}")
+            self.logger.warning(f"more than one impl in {module_item}")
 
     def fetch_impls(self):
+        self.logger.debug(f"Searching for implementations in {self.root_dir}")
         for item in os.listdir(self.root_dir):
             dir_content = os.path.join(self.root_dir, item)
             if self.is_package(dir_content):
+                self.logger.debug(f"Found package in {dir_content}")
                 try:
                     module = importlib.import_module("." + item, self.root_package)
                     self.load_module(module)
                 except ModuleNotFoundError:
-                    print(f"WARNING no module matching package name for {item}")
+                    self.logger.warning(f"No module matching package name for {item}\n{traceback.format_exc()}")
diff --git a/src/tools/interop/idt/capture/pcap/pcap.py b/src/tools/interop/idt/capture/pcap/pcap.py
index 8208d59..2cb2285 100644
--- a/src/tools/interop/idt/capture/pcap/pcap.py
+++ b/src/tools/interop/idt/capture/pcap/pcap.py
@@ -18,21 +18,23 @@
 import os
 import time
 
-from capture.file_utils import create_standard_log_name
-from capture.shell_utils import Bash
+from utils.artifact import create_standard_log_name, log
+from utils.shell import Bash
+
+logger = log.get_logger(__file__)
 
 
 class PacketCaptureRunner:
 
     def __init__(self, artifact_dir: str, interface: str) -> None:
-
+        self.logger = logger
         self.artifact_dir = artifact_dir
         self.output_path = str(
             os.path.join(
                 self.artifact_dir,
                 create_standard_log_name(
                     "pcap",
-                    "cap")))
+                    "pcap")))
         self.start_delay_seconds = 2
         self.interface = interface
         self.pcap_command = f"tcpdump -i {self.interface} -n -w {self.output_path}"
@@ -40,22 +42,22 @@
 
     def start_pcap(self) -> None:
         self.pcap_proc.start_command()
-        print("Pausing to check if pcap started...")
+        self.logger.info("Pausing to check if pcap started...")
         time.sleep(self.start_delay_seconds)
         if not self.pcap_proc.command_is_running():
-            print(
+            self.logger.error(
                 "Pcap did not start, you might need root; please authorize if prompted.")
-            Bash("sudo echo \"\"", sync=True)
-            print("Retrying pcap with sudo...")
+            Bash("sudo echo \"\"", sync=True).start_command()
+            self.logger.warning("Retrying pcap with sudo...")
             self.pcap_command = f"sudo {self.pcap_command}"
             self.pcap_proc = Bash(self.pcap_command)
             self.pcap_proc.start_command()
             time.sleep(self.start_delay_seconds)
         if not self.pcap_proc.command_is_running():
-            print("WARNING Failed to start pcap!")
+            self.logger.error("Failed to start pcap!")
         else:
-            print(f"Pcap output path {self.output_path}")
+            self.logger.info(f"Pcap output path {self.output_path}")
 
     def stop_pcap(self) -> None:
-        self.pcap_proc.stop_command(soft=True)
-        print("Pcap stopped")
+        self.logger.info("Stopping pcap proc")
+        self.pcap_proc.stop_command()
diff --git a/src/tools/interop/idt/capture/platform/android/__init__.py b/src/tools/interop/idt/capture/platform/android/__init__.py
index 7ec319f..021ec15 100644
--- a/src/tools/interop/idt/capture/platform/android/__init__.py
+++ b/src/tools/interop/idt/capture/platform/android/__init__.py
@@ -18,5 +18,5 @@
 from .android import Android
 
 __all__ = [
-    'Android'
+    'Android',
 ]
diff --git a/src/tools/interop/idt/capture/platform/android/android.py b/src/tools/interop/idt/capture/platform/android/android.py
index 2a39449..4ca8b26 100644
--- a/src/tools/interop/idt/capture/platform/android/android.py
+++ b/src/tools/interop/idt/capture/platform/android/android.py
@@ -14,65 +14,60 @@
 #    See the License for the specific language governing permissions and
 #    limitations under the License.
 #
-
 import asyncio
 import ipaddress
 import os
+import traceback
 import typing
+from asyncio import Task
 
 from capture.base import PlatformLogStreamer
-from capture.file_utils import create_standard_log_name
-from capture.shell_utils import Bash
+from utils.shell import Bash, log
+
+from . import config, streams
+from .capabilities import Capabilities
+
+logger = log.get_logger(__file__)
 
 
 class Android(PlatformLogStreamer):
-    """
-    Class that supports:
-    - Running synchronous adb commands
-    - Maintaining a singleton logcat stream
-    - Maintaining a singleton screen recording
-    """
 
     def __init__(self, artifact_dir: str) -> None:
-
+        self.logger = logger
         self.artifact_dir = artifact_dir
-
         self.device_id: str | None = None
         self.adb_devices: typing.Dict[str, bool] = {}
-        self._authorize_adb()
-
-        self.logcat_output_path = os.path.join(
-            self.artifact_dir, create_standard_log_name(
-                'logcat', 'txt'))
-        self.logcat_command = f'adb -s {self.device_id} logcat -T 1 >> {self.logcat_output_path}'
-        self.logcat_proc = Bash(self.logcat_command)
-
-        screen_cast_name = create_standard_log_name('screencast', 'mp4')
-        self.screen_cap_output_path = os.path.join(
-            self.artifact_dir, screen_cast_name)
-        self.check_screen_command = "shell dumpsys deviceidle | grep mScreenOn"
-        self.screen_path = f'/sdcard/Movies/{screen_cast_name}'
-        self.screen_command = f'adb -s {self.device_id} shell screenrecord --bugreport {self.screen_path}'
-        self.screen_proc = Bash(self.screen_command)
-        self.pull_screen = False
-        self.screen_pull_command = f'pull {self.screen_path} {self.screen_cap_output_path}'
+        self.capabilities: None | Capabilities = None
+        self.streams = {}
+        self.connected = False
 
     def run_adb_command(
             self,
             command: str,
-            capture_output: bool = False) -> Bash:
+            capture_output: bool = False,
+            cwd=None) -> Bash:
         """
         Run an adb command synchronously
         Capture_output must be true to call get_captured_output() later
         """
-        return Bash(
+        bash_command = Bash(
             f'adb -s {self.device_id} {command}',
             sync=True,
-            capture_output=capture_output)
+            capture_output=capture_output,
+            cwd=cwd)
+        bash_command.start_command()
+        return bash_command
+
+    def get_adb_background_command(
+            self,
+            command: str,
+            cwd=None) -> Bash:
+        return Bash(f'adb -s {self.device_id} {command}', cwd=cwd)
 
     def get_adb_devices(self) -> typing.Dict[str, bool]:
         """Returns a dict of device ids and whether they are authorized"""
         adb_devices = Bash('adb devices', sync=True, capture_output=True)
+        adb_devices.start_command()
         adb_devices_output = adb_devices.get_captured_output().split('\n')
         devices_auth = {}
         header_done = False
@@ -83,11 +78,11 @@
                 device_is_auth = line_parsed[1] == "device"
                 if line_parsed[1] == "offline":
                     disconnect_command = f"adb disconnect {device_id}"
-                    print(f"Device {device_id} is offline, trying disconnect!")
+                    self.logger.warning(f"Device {device_id} is offline, trying disconnect!")
                     Bash(
                         disconnect_command,
                         sync=True,
-                        capture_output=False)
+                        capture_output=False).start_command()
                 else:
                     devices_auth[device_id] = device_is_auth
             header_done = True
@@ -103,11 +98,11 @@
     def _set_device_if_only_one_connected(self) -> None:
         if self._only_one_device_connected():
             self.device_id = self._get_first_connected_device()
-            print(f'Only one device detected; using {self.device_id}')
+            self.logger.warning(f'Only one device detected; using {self.device_id}')
 
     def _log_adb_devices(self) -> None:
         for dev in self.adb_devices:
-            print(dev)
+            self.logger.info(dev)
 
     @staticmethod
     def _is_connection_str(adb_input_str: str) -> bool:
@@ -134,22 +129,23 @@
     def _check_connect_wireless_adb(self, temp_device_id: str) -> None:
         if Android._is_connection_str(temp_device_id):
             connect_command = f"adb connect {temp_device_id}"
-            print(
+            self.logger.warning(
                 f"Detected connection string; attempting to connect: {connect_command}")
-            Bash(connect_command, sync=True, capture_output=False)
+            Bash(connect_command, sync=True, capture_output=False).start_command()
             self.get_adb_devices()
 
     def _device_id_user_input(self) -> None:
-        print('If there is no output below, press enter after connecting your phone under test OR')
-        print('Enter (copy paste) the target device id from the list of available devices below OR')
-        print('Enter $IP4:$PORT to connect wireless debugging.')
+        self.logger.error('Connect additional Android devices via USB and press enter OR')
+        self.logger.error('Enter (copy paste) the target device id from the list of available devices below OR')
+        self.logger.error('Enter $IP4:$PORT to connect wireless debugging.')
         self._log_adb_devices()
         temp_device_id = input('').strip()
         self._check_connect_wireless_adb(temp_device_id)
+        self.get_adb_devices()
         if self._only_one_device_connected():
             self._set_device_if_only_one_connected()
         elif temp_device_id not in self.adb_devices:
-            print('Entered device not in adb devices!')
+            self.logger.warning('Entered device not in adb devices!')
         else:
             self.device_id = temp_device_id
 
@@ -161,7 +157,7 @@
         self._set_device_if_only_one_connected()
         while self.device_id not in self.get_adb_devices():
             self._device_id_user_input()
-        print(f'Selected device {self.device_id}')
+        self.logger.info(f'Selected device {self.device_id}')
 
     def _authorize_adb(self) -> None:
         """
@@ -170,48 +166,61 @@
         self.get_adb_devices()
         self._choose_device_id()
         while not self.get_adb_devices()[self.device_id]:
-            print('Confirming authorization, press enter after auth')
+            self.logger.info('Confirming authorization, press enter after auth')
             input('')
-        print(f'Target android device ID is authorized: {self.device_id}')
+        self.logger.info(f'Target android device ID is authorized: {self.device_id}')
 
-    def check_screen(self) -> bool:
-        screen_cmd_output = self.run_adb_command(
-            self.check_screen_command, capture_output=True)
-        return "true" in screen_cmd_output.get_captured_output()
+    async def connect(self) -> None:
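+        # Lazily set up the device: authorize adb, probe capabilities, then instantiate each discovered stream impl.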
+        if not self.connected:
+            self._authorize_adb()
+            self.capabilities = Capabilities(self)
+            self.capabilities.check_capabilities()
+            for stream in streams.__all__:
+                self.streams[stream] = getattr(streams, stream)(self)
+            self.connected = True
 
-    async def prepare_screen_recording(self) -> None:
-        if self.screen_proc.command_is_running():
-            return
-        try:
-            async with asyncio.timeout_at(asyncio.get_running_loop().time() + 20.0):
-                screen_on = self.check_screen()
-                print("Please turn the screen on so screen recording can start!")
-                while not screen_on:
-                    await asyncio.sleep(2)
-                    screen_on = self.check_screen()
-                    if not screen_on:
-                        print("Screen is still not on for recording!")
-        except TimeoutError:
-            print("WARNING screen recording timeout")
-            return
+    async def handle_stream_action(self, action: str) -> None:
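+        # Invoke the named coroutine ("start" or "stop") on every stream; collect errors so one failing stream does not skip the rest.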
+        had_error = False
+        for stream_name, stream in self.streams.items():
+            self.logger.info(f"Doing {action} for {stream_name}!")
+            try:
+                await getattr(stream, action)()
+            except Exception:
+                self.logger.error(traceback.format_exc())
+                had_error = True
+        if had_error:
+            raise Exception("Propagating to controller!")
 
     async def start_streaming(self) -> None:
-        await self.prepare_screen_recording()
-        if self.check_screen():
-            self.pull_screen = True
-            self.screen_proc.start_command()
-        self.logcat_proc.start_command()
+        await self.handle_stream_action("start")
 
-    async def pull_screen_recording(self) -> None:
-        if self.pull_screen:
-            self.screen_proc.stop_command()
-            print("screen proc stopped")
-            await asyncio.sleep(3)
-            self.run_adb_command(self.screen_pull_command)
-            print("screen recording pull attempted")
-            self.pull_screen = False
+    async def run_observers(self) -> None:
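+        # Supervise the per-stream observer tasks and log any that die, until this task itself is cancelled.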
+        try:
+            observer_tasks: list[Task] = []
+            for stream_name, stream in self.streams.items():
+                observer_tasks.append(asyncio.create_task(stream.run_observer()))
+            while True:
+                self.logger.info("Android root observer task checking sub tasks")
+                for task in observer_tasks:
+                    if task.done() or task.cancelled():
+                        self.logger.error(f"An android monitoring task has died, consider restarting! {task.__str__()}")
+                await asyncio.sleep(30)
+        except asyncio.CancelledError:
+            self.logger.info("Cancelling observer tasks")
+            for observer_task in observer_tasks:
+                observer_task.cancel()
 
     async def stop_streaming(self) -> None:
-        await self.pull_screen_recording()
-        self.logcat_proc.stop_command()
-        print("logcat stopped")
+        await self.handle_stream_action("stop")
+        if config.enable_bug_report:
+            found = False
+            for item in os.listdir(self.artifact_dir):
+                if "bugreport" in item and ".zip" in item:
+                    found = True
+            if not found:
+                self.logger.info("Taking bugreport")
+                self.run_adb_command("bugreport", cwd=self.artifact_dir)
+            else:
+                self.logger.warning("bugreport already taken")
+        else:
+            self.logger.critical("bugreport disabled in settings!")
diff --git a/src/tools/interop/idt/capture/platform/android/capabilities.py b/src/tools/interop/idt/capture/platform/android/capabilities.py
new file mode 100644
index 0000000..cd2f7f6
--- /dev/null
+++ b/src/tools/interop/idt/capture/platform/android/capabilities.py
@@ -0,0 +1,60 @@
+from typing import TYPE_CHECKING
+
+from utils.artifact import create_standard_log_name, log
+from utils.shell import Bash
+
+if TYPE_CHECKING:
+    from capture.platform.android import Android
+
+from . import config
+
+logger = log.get_logger(__file__)
+
+
+class Capabilities:
+
+    def __init__(self, platform: "Android"):
+        self.logger = logger
+        self.platform = platform
+        self.c_has_tcpdump = False
+        self.c_has_root = False
+        self.c_is_64 = False
+        self.c_hci_snoop_enabled = False
+        self.artifact = create_standard_log_name("capabilities", "txt", parent=platform.artifact_dir)
+
+    def __repr__(self):
+        s = "Detected capabilities:\n"
+        for item in [x for x in dir(self) if x.startswith("c_")]:
+            s += f"{item}: {getattr(self, item)}\n"
+        return s
+
+    def check_snoop_log(self) -> bool:
+        return config.hci_log_level in self.platform.run_adb_command("shell getprop persist.bluetooth.btsnooplogmode",
+                                                                     capture_output=True).get_captured_output()
+
+    def check_capabilities(self):
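+        # Probe the device for root, tcpdump, a 64-bit CPU, and HCI snoop logging, then record the results in an artifact.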
+        self.logger.info("Checking if device has root")
+        self.c_has_root = self.platform.run_adb_command(
+            "shell which su", capture_output=True).finished_success()
+        if self.c_has_root:
+            self.logger.warning("adb root!")
+            Bash("adb root", sync=True).start_command()
+        self.logger.info("Checking if device has tcpdump")
+        self.c_has_tcpdump = self.platform.run_adb_command(
+            "shell which tcpdump", capture_output=True).finished_success()
+        self.logger.info("Checking device CPU arch")
+        self.c_is_64 = "8" in self.platform.run_adb_command("shell cat /proc/cpuinfo | grep rch",
+                                                            capture_output=True).get_captured_output()
+        self.c_hci_snoop_enabled = self.check_snoop_log()
+        if not self.c_hci_snoop_enabled:
+            self.logger.info("HCI not enabled, attempting to enable!")
+            self.platform.run_adb_command(
+                f"shell setprop persist.bluetooth.btsnooplogmode {config.hci_log_level}")
+            self.platform.run_adb_command("shell svc bluetooth disable")
+            self.platform.run_adb_command("shell svc bluetooth enable")
+            self.c_hci_snoop_enabled = self.check_snoop_log()
+            if not self.c_hci_snoop_enabled:
+                self.logger.error("Failed to enabled HCI snoop log")
+        self.logger.info(self)
+        with open(self.artifact, "w") as artifact:
+            artifact.write(str(self))
diff --git a/src/tools/interop/idt/capture/platform/android/config.py b/src/tools/interop/idt/capture/platform/android/config.py
new file mode 100644
index 0000000..a0b05cd
--- /dev/null
+++ b/src/tools/interop/idt/capture/platform/android/config.py
@@ -0,0 +1,20 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+enable_build_push_tcpdump = True
+enable_bug_report = True
+hci_log_level = "full"
diff --git a/src/tools/interop/idt/capture/platform/android/streams/__init__.py b/src/tools/interop/idt/capture/platform/android/streams/__init__.py
new file mode 100644
index 0000000..9f48b1f
--- /dev/null
+++ b/src/tools/interop/idt/capture/platform/android/streams/__init__.py
@@ -0,0 +1,14 @@
+from capture.loader import CaptureImplsLoader
+
+from .base import AndroidStream
+
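+# Dynamically discover AndroidStream implementations in this package's sub-packages and re-export them via __all__.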
+impl_loader = CaptureImplsLoader(
+    __path__[0],
+    "capture.platform.android.streams",
+    AndroidStream
+)
+
+for impl_name, impl in impl_loader.impls.items():
+    globals()[impl_name] = impl
+
+__all__ = impl_loader.impl_names
diff --git a/src/tools/interop/idt/capture/platform/android/streams/base.py b/src/tools/interop/idt/capture/platform/android/streams/base.py
new file mode 100644
index 0000000..6468131
--- /dev/null
+++ b/src/tools/interop/idt/capture/platform/android/streams/base.py
@@ -0,0 +1,33 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+from abc import ABC, abstractmethod
+
+
+class AndroidStream(ABC):
+
+    @abstractmethod
+    async def start(self) -> None:
+        raise NotImplementedError
+
+    @abstractmethod
+    async def run_observer(self) -> None:
+        raise NotImplementedError
+
+    @abstractmethod
+    async def stop(self) -> None:
+        raise NotImplementedError
diff --git a/src/tools/interop/idt/capture/platform/android/streams/logcat/__init__.py b/src/tools/interop/idt/capture/platform/android/streams/logcat/__init__.py
new file mode 100644
index 0000000..eebc071
--- /dev/null
+++ b/src/tools/interop/idt/capture/platform/android/streams/logcat/__init__.py
@@ -0,0 +1,20 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+from .logcat import LogcatStreamer
+
+__all__ = ["LogcatStreamer"]
diff --git a/src/tools/interop/idt/capture/platform/android/streams/logcat/logcat.py b/src/tools/interop/idt/capture/platform/android/streams/logcat/logcat.py
new file mode 100644
index 0000000..78670bf
--- /dev/null
+++ b/src/tools/interop/idt/capture/platform/android/streams/logcat/logcat.py
@@ -0,0 +1,65 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+import asyncio
+import os
+from typing import TYPE_CHECKING
+
+from utils.artifact import create_standard_log_name, log
+
+from ..base import AndroidStream
+
+logger = log.get_logger(__file__)
+
+if TYPE_CHECKING:
+    from capture.platform.android import Android
+
+
+class LogcatStreamer(AndroidStream):
+
+    def __init__(self, platform: "Android"):
+        self.logger = logger
+        self.platform = platform
+        self.logcat_artifact = create_standard_log_name("logcat", "txt", parent=platform.artifact_dir)
+        self.logcat_command = f"logcat -T 1 >> {self.logcat_artifact}"
+        self.logcat_proc = platform.get_adb_background_command(self.logcat_command)
+        self.was_ever_running = False
+
+    async def run_observer(self) -> None:
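+        # Watch the logcat artifact: warn when it stops growing and restart the logcat process if it exits.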
+        last_size = 0
+        if not os.path.exists(self.logcat_artifact):
+            self.logger.warning("Logcat artifact does not exist yes, this might be normal at the start of execution")
+            asyncio.sleep(15)
+        while True:
+            try:
+                new_size = os.path.getsize(self.logcat_artifact)
+                if not (new_size > last_size):
+                    self.logger.warning(f"Logcat file not growing for {self.platform.device_id}, check connection!")
+                last_size = new_size
+            except OSError:
+                self.logger.error(f"Logcat file does not exist for {self.platfrom.device_id}, check connection!")
+            if not self.logcat_proc.command_is_running():
+                self.logger.error("Logcat proc is not running, trying to restart!")
+                self.logcat_proc = self.platform.get_adb_background_command(self.logcat_command)
+                self.logcat_proc.start_command()
+            await asyncio.sleep(4)
+
+    async def start(self):
+        self.logcat_proc.start_command()
+
+    async def stop(self):
+        self.logcat_proc.stop_command()
diff --git a/src/tools/interop/idt/capture/platform/android/streams/pcap/__init__.py b/src/tools/interop/idt/capture/platform/android/streams/pcap/__init__.py
new file mode 100644
index 0000000..8462107
--- /dev/null
+++ b/src/tools/interop/idt/capture/platform/android/streams/pcap/__init__.py
@@ -0,0 +1,20 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+from .pcap import AndroidPcap
+
+__all__ = ["AndroidPcap"]
diff --git a/src/tools/interop/idt/capture/platform/android/streams/pcap/linux_build_tcpdump_64.sh b/src/tools/interop/idt/capture/platform/android/streams/pcap/linux_build_tcpdump_64.sh
new file mode 100755
index 0000000..c82141b
--- /dev/null
+++ b/src/tools/interop/idt/capture/platform/android/streams/pcap/linux_build_tcpdump_64.sh
@@ -0,0 +1,49 @@
+#!/bin/bash
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+set -e
+export TCPDUMP=4.99.4
+export LIBPCAP=1.10.4
+
+wget https://www.tcpdump.org/release/tcpdump-"$TCPDUMP".tar.gz
+wget https://www.tcpdump.org/release/libpcap-"$LIBPCAP".tar.gz
+
+tar zxvf tcpdump-"$TCPDUMP".tar.gz
+tar zxvf libpcap-"$LIBPCAP".tar.gz
+export CC=aarch64-linux-gnu-gcc
+cd libpcap-"$LIBPCAP"
+./configure --host=arm-linux --with-pcap=linux
+make
+cd ..
+
+cd tcpdump-"$TCPDUMP"
+export ac_cv_linux_vers=2
+export CFLAGS=-static
+export CPPFLAGS=-static
+export LDFLAGS=-static
+
+./configure --host=arm-linux
+make
+
+aarch64-linux-gnu-strip tcpdump
+cp tcpdump ..
+cd ..
+rm -R libpcap-"$LIBPCAP"
+rm -R tcpdump-"$TCPDUMP"
+rm libpcap-"$LIBPCAP".tar.gz
+rm tcpdump-"$TCPDUMP".tar.gz
diff --git a/src/tools/interop/idt/capture/platform/android/streams/pcap/mac_build_tcpdump_64.sh b/src/tools/interop/idt/capture/platform/android/streams/pcap/mac_build_tcpdump_64.sh
new file mode 100644
index 0000000..15d9119
--- /dev/null
+++ b/src/tools/interop/idt/capture/platform/android/streams/pcap/mac_build_tcpdump_64.sh
@@ -0,0 +1,60 @@
+#!/usr/bin/env bash
+
+# This script cross-compiles libpcap and tcpdump for a specified architecture (default: ARM64)
+
+# Set bash script to exit immediately if any commands fail, any variables are unexpanded, or any commands in a pipeline fail.
+set -o errexit
+set -o nounset
+set -o pipefail
+
+# Check for optional target architecture argument, default to aarch64-linux-gnu if not provided
+TARGET_ARCH="${1:-aarch64-linux-gnu}"
+
+# Create a random temporary directory using mktemp
+TMP_DIR=$(mktemp -d)
+OUT_DIR=$TMP_DIR/out
+
+# Function to download and extract archives
+download_and_extract() {
+    local url="$1"
+    local filepath="$2"
+    local tar_dir="$3"
+
+    wget -O "$filepath" "$url"
+    tar -C "$tar_dir" -zxvf "$filepath"
+}
+
+# Function to clean up downloaded and extracted files
+cleanup() {
+    local filepath="$1"
+    local dirpath="$2"
+
+    rm -rf "$filepath" "$dirpath"
+}
+
+# Cross-compile libpcap
+LIBPCAP_VERSION=1.10.4
+LIBPCAP_DIR=$TMP_DIR/libpcap-$LIBPCAP_VERSION
+LIBPCAP_ARCHIVE=$TMP_DIR/libpcap-$LIBPCAP_VERSION.tar.gz
+
+download_and_extract "https://www.tcpdump.org/release/libpcap-$LIBPCAP_VERSION.tar.gz" "$LIBPCAP_ARCHIVE" "$TMP_DIR"
+(cd "$LIBPCAP_DIR" && ./configure --prefix="$OUT_DIR" --host="$TARGET_ARCH" --with-pcap=linux)
+make -C "$LIBPCAP_DIR" -j"$(nproc)"
+make -C "$LIBPCAP_DIR" install
+cleanup "$LIBPCAP_ARCHIVE" "$LIBPCAP_DIR"
+
+# Cross-compile tcpdump
+TCPDUMP_VERSION=4.99.4
+TCPDUMP_DIR=$TMP_DIR/tcpdump-$TCPDUMP_VERSION
+TCPDUMP_ARCHIVE=$TMP_DIR/tcpdump-$TCPDUMP_VERSION.tar.gz
+
+download_and_extract "https://www.tcpdump.org/release/tcpdump-$TCPDUMP_VERSION.tar.gz" "$TCPDUMP_ARCHIVE" "$TMP_DIR"
+(cd "$TCPDUMP_DIR" && CFLAGS="-static -I$OUT_DIR/include" CPPFLAGS="-static" LDFLAGS="-static -L$OUT_DIR/lib" ./configure --prefix="$OUT_DIR" --host="$TARGET_ARCH")
+make -C "$TCPDUMP_DIR"
+make -C "$TCPDUMP_DIR" install
+cleanup "$TCPDUMP_ARCHIVE" "$TCPDUMP_DIR"
+
+# Prepare the artifact
+strip "$OUT_DIR/bin/tcpdump"
+mv "$OUT_DIR/bin/tcpdump" .
+rm -rf "$TMP_DIR"
diff --git a/src/tools/interop/idt/capture/platform/android/streams/pcap/pcap.py b/src/tools/interop/idt/capture/platform/android/streams/pcap/pcap.py
new file mode 100644
index 0000000..8eab53b
--- /dev/null
+++ b/src/tools/interop/idt/capture/platform/android/streams/pcap/pcap.py
@@ -0,0 +1,99 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+import asyncio
+import os
+from typing import TYPE_CHECKING
+
+from utils.artifact import create_standard_log_name, log, safe_mkdir
+from utils.host_platform import is_mac
+from utils.shell import Bash
+
+from ... import config
+from ..base import AndroidStream
+
+if TYPE_CHECKING:
+    from capture.platform.android import Android
+
+logger = log.get_logger(__file__)
+
+
+class AndroidPcap(AndroidStream):
+
+    def __init__(self, platform: "Android"):
+        self.logger = logger
+        self.platform = platform
+        self.target_dir = "/sdcard/Download"
+        self.pcap_artifact = create_standard_log_name("android_tcpdump", "pcap", parent=self.platform.artifact_dir)
+        self.pcap_phone_out_path = f"{self.target_dir}/{os.path.basename(self.pcap_artifact)}"
+        self.pcap_phone_bin_location = "tcpdump" if platform.capabilities.c_has_tcpdump \
+            else f"{self.target_dir}/tcpdump"
+        self.pcap_command = f"shell {self.pcap_phone_bin_location} -w {self.pcap_phone_out_path}"
+        self.pcap_proc = platform.get_adb_background_command(self.pcap_command)
+        self.pcap_pull = False
+        self.pcap_pull_command = f"pull {self.pcap_phone_out_path} {self.pcap_artifact}"
+        self.build_dir = os.path.join(os.path.dirname(__file__), "BUILD")
+
+    async def pull_packet_capture(self) -> None:
+        if self.pcap_pull:
+            self.logger.info("Attempting to pull android pcap")
+            await asyncio.sleep(3)
+            self.platform.run_adb_command(self.pcap_pull_command)
+            self.pcap_pull = False
+
+    async def start(self):
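+        # Requires root; prefer the device's own tcpdump, otherwise build (when enabled in config) and push a binary.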
+        if not self.platform.capabilities.c_has_root:
+            self.logger.warning("Phone is not rooted, cannot take pcap!")
+            return
+        if self.platform.capabilities.c_has_tcpdump:
+            self.logger.info("tcpdump already available; using!")
+            self.pcap_proc.start_command()
+            self.pcap_pull = True
+            return
+        if not config.enable_build_push_tcpdump:
+            self.logger.critical("Android TCP Dump build and push disabled in configs!")
+            return
+        if not os.path.exists(os.path.join(self.build_dir, "tcpdump")):
+            self.logger.warning("tcpdump bin not found, attempting to build, please wait a few moments!")
+            safe_mkdir(self.build_dir)
+            if is_mac():
+                build_script = os.path.join(os.path.dirname(__file__), "mac_build_tcpdump_64.sh")
+                Bash(f"{build_script} 2>&1 >> BUILD_LOG.txt", sync=True, cwd=self.build_dir).start_command()
+            else:
+                build_script = os.path.join(os.path.dirname(__file__), "linux_build_tcpdump_64.sh")
+                Bash(f"{build_script} 2>&1 >> BUILD_LOG.txt", sync=True, cwd=self.build_dir).start_command()
+        else:
+            self.logger.warning("Reusing existing tcpdump build")
+        if not self.platform.run_adb_command(f"shell ls {self.target_dir}/tcpdump").finished_success():
+            self.logger.warning("Pushing tcpdump to device")
+            self.platform.run_adb_command(f"push {os.path.join(self.build_dir, 'tcpdump')} f{self.target_dir}")
+            self.platform.run_adb_command(f"chmod +x {self.target_dir}/tcpdump")
+        else:
+            self.logger.info("tcpdump already in the expected location, not pushing!")
+        self.logger.info("Starting Android pcap command")
+        self.pcap_proc.start_command()
+        self.pcap_pull = True
+
+    async def run_observer(self) -> None:
+        while True:
+            # TODO: Implement, need to restart w/ new out file (no append) and keep pull manifest, much like `screen`
+            await asyncio.sleep(120)
+
+    async def stop(self):
+        self.logger.info("Stopping android pcap proc")
+        self.pcap_proc.stop_command()
+        await self.pull_packet_capture()
diff --git a/src/tools/interop/idt/capture/platform/android/streams/screen/__init__.py b/src/tools/interop/idt/capture/platform/android/streams/screen/__init__.py
new file mode 100644
index 0000000..1170a57
--- /dev/null
+++ b/src/tools/interop/idt/capture/platform/android/streams/screen/__init__.py
@@ -0,0 +1,20 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+from .screen import ScreenRecorder
+
+__all__ = ["ScreenRecorder"]
diff --git a/src/tools/interop/idt/capture/platform/android/streams/screen/screen.py b/src/tools/interop/idt/capture/platform/android/streams/screen/screen.py
new file mode 100644
index 0000000..e190e7c
--- /dev/null
+++ b/src/tools/interop/idt/capture/platform/android/streams/screen/screen.py
@@ -0,0 +1,100 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+import asyncio
+import os
+from typing import TYPE_CHECKING
+
+from utils.artifact import create_standard_log_name, log
+
+from ..base import AndroidStream
+
+if TYPE_CHECKING:
+    from capture.platform.android import Android
+
+logger = log.get_logger(__file__)
+
+
+class ScreenRecorder(AndroidStream):
+
+    def __init__(self, platform: "Android"):
+        self.screen_artifact = None
+        self.screen_phone_out_path = None
+        self.screen_command = None
+        self.screen_proc = None
+        self.logger = logger
+        self.platform = platform
+        self.screen_check_command = "shell dumpsys deviceidle | grep mScreenOn"
+        self.screen_pull = False
+        self.file_counter = 0
+        self.pull_commands: list[str] = []
+        self.manifest_file = os.path.join(platform.artifact_dir, "screen_manifest.txt")
+
+    def check_screen(self) -> bool:
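+        # Screen recording only produces output while the screen is on; parse dumpsys deviceidle for mScreenOn.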
+        screen_cmd_output = self.platform.run_adb_command(
+            self.screen_check_command, capture_output=True)
+        return "mScreenOn=true" == screen_cmd_output.get_captured_output().strip()
+
+    async def prepare_screen_recording(self) -> None:
+        screen_on = self.check_screen()
+        while not screen_on:
+            await asyncio.sleep(3)
+            screen_on = self.check_screen()
+            if not screen_on:
+                self.logger.error("Please turn the screen on so screen recording can start or check connection!")
+
+    def update_commands(self) -> None:
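+        # Each recording segment gets a fresh file name; the matching pull command is appended to the manifest for later retrieval.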
+        self.screen_artifact = create_standard_log_name("screencast" + str(self.file_counter),
+                                                        "mp4",
+                                                        parent=self.platform.artifact_dir)
+        self.screen_phone_out_path = f"/sdcard/Movies/{os.path.basename(self.screen_artifact)}"
+        self.screen_command = f"shell screenrecord --bugreport {self.screen_phone_out_path}"
+        screen_pull_command = f"pull {self.screen_phone_out_path} {self.screen_artifact}\n"
+        self.pull_commands.append(screen_pull_command)
+        with open(self.manifest_file, "a+") as manifest:
+            manifest.write(screen_pull_command)
+        self.file_counter += 1
+
+    async def start(self):
+        await self.prepare_screen_recording()
+        if self.check_screen():
+            self.screen_pull = True
+            self.update_commands()
+            self.screen_proc = self.platform.get_adb_background_command(self.screen_command)
+            self.screen_proc.start_command()
+            self.logger.info(f"New screen recording file started {self.screen_phone_out_path} {self.screen_artifact}")
+
+    async def run_observer(self) -> None:
+        while True:
+            if not self.screen_proc.command_is_running():
+                self.logger.warning(f"Screen recording proc needs restart (may be normal) {self.platform.device_id}")
+                await self.start()
+            await asyncio.sleep(4)
+
+    async def pull_screen_recording(self) -> None:
+        if self.screen_pull:
+            self.logger.info("Attempting to pull screen recording")
+            await asyncio.sleep(3)
+            with open(self.manifest_file) as manifest:
+                for line in manifest:
+                    self.platform.run_adb_command(line)
+            self.screen_pull = False
+
+    async def stop(self):
+        self.logger.info("Stopping screen proc")
+        self.screen_proc.stop_command()
+        await self.pull_screen_recording()
diff --git a/src/tools/interop/idt/capture/shell_utils.py b/src/tools/interop/idt/capture/shell_utils.py
deleted file mode 100644
index 92bb11c..0000000
--- a/src/tools/interop/idt/capture/shell_utils.py
+++ /dev/null
@@ -1,78 +0,0 @@
-#
-#    Copyright (c) 2023 Project CHIP Authors
-#    All rights reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License");
-#    you may not use this file except in compliance with the License.
-#    You may obtain a copy of the License at
-#
-#        http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS,
-#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#    See the License for the specific language governing permissions and
-#    limitations under the License.
-#
-
-import shlex
-import subprocess
-
-from mobly.utils import stop_standing_subprocess
-
-
-class Bash:
-    """
-    Uses subprocess to execute bash commands
-    Intended to be instantiated and then only interacted with through instance methods
-    """
-
-    def __init__(self, command: str, sync: bool = False,
-                 capture_output: bool = False) -> None:
-        """
-        Run a bash command as a sub process
-        :param command: Command to run
-        :param sync: If True, wait for command to terminate
-        :param capture_output: Only applies to sync; if True, store stdout and stderr
-        """
-        self.command: str = command
-        self.sync = sync
-        self.capture_output = capture_output
-
-        self.args: list[str] = []
-        self._init_args()
-        self.proc = subprocess.run(self.args, capture_output=capture_output) if self.sync else None
-
-    def _init_args(self) -> None:
-        """Escape quotes, call bash, and prep command for subprocess args"""
-        command_escaped = self.command.replace('"', '\"')
-        self.args = shlex.split(f'/bin/bash -c "{command_escaped}"')
-
-    def command_is_running(self) -> bool:
-        return self.proc is not None and self.proc.poll() is None
-
-    def get_captured_output(self) -> str:
-        """Return captured output when the relevant instance var is set"""
-        return "" if not self.capture_output or not self.sync \
-            else self.proc.stdout.decode().strip()
-
-    def start_command(self) -> None:
-        if not self.sync and not self.command_is_running():
-            self.proc = subprocess.Popen(self.args)
-        else:
-            print(f'INFO {self.command} start requested while running')
-
-    def stop_command(self, soft: bool = False) -> None:
-        if self.command_is_running():
-            if soft:
-                self.proc.terminate()
-                if self.proc.stdout:
-                    self.proc.stdout.close()
-                if self.proc.stderr:
-                    self.proc.stderr.close()
-                self.proc.wait()
-            else:
-                stop_standing_subprocess(self.proc)
-        else:
-            print(f'INFO {self.command} stop requested while not running')
-        self.proc = None
diff --git a/src/tools/interop/idt/config.py b/src/tools/interop/idt/config.py
new file mode 100644
index 0000000..f084d49
--- /dev/null
+++ b/src/tools/interop/idt/config.py
@@ -0,0 +1,22 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+import logging
+
+enable_color = True
+log_level = logging.INFO
+py_major_version = 3
+py_minor_version = 11
diff --git a/src/tools/interop/idt/discovery/__init__.py b/src/tools/interop/idt/discovery/__init__.py
index f9a1aa6..b83e5df 100644
--- a/src/tools/interop/idt/discovery/__init__.py
+++ b/src/tools/interop/idt/discovery/__init__.py
@@ -14,8 +14,8 @@
 #    See the License for the specific language governing permissions and
 #    limitations under the License.
 #
-from .matter_ble import MatterBleScanner
-from .matter_dnssd import MatterDnssdListener
+from .ble import MatterBleScanner
+from .dnssd import MatterDnssdListener
 
 __all__ = [
     'MatterBleScanner',
diff --git a/src/tools/interop/idt/discovery/matter_ble.py b/src/tools/interop/idt/discovery/ble.py
similarity index 72%
rename from src/tools/interop/idt/discovery/matter_ble.py
rename to src/tools/interop/idt/discovery/ble.py
index 737842b..9549c74 100644
--- a/src/tools/interop/idt/discovery/matter_ble.py
+++ b/src/tools/interop/idt/discovery/ble.py
@@ -17,19 +17,23 @@
 
 import asyncio
 import datetime
-import logging
 import os
+import sys
 import time
 
 from bleak import AdvertisementData, BleakScanner, BLEDevice
 from bleak.exc import BleakDBusError
+from utils import log
+from utils.log import border_print
+
+logger = log.get_logger(__file__)
 
 
 class MatterBleScanner:
 
     def __init__(self, artifact_dir: str):
         self.artifact_dir = artifact_dir
-        self.logger = logging.getLogger(__file__)
+        self.logger = logger
         self.devices_seen_last_time: set[str] = set()
         self.devices_seen_this_time: set[str] = set()
         self.throttle_seconds = 1
@@ -50,7 +54,7 @@
             ts = datetime.datetime.now().isoformat(sep=' ', timespec='milliseconds')
             to_write = f"{ts}\n{to_write}\n\n"
             log_file.write(to_write)
-            print(to_write)
+            self.logger.info(to_write)
 
     @staticmethod
     def is_matter_device(service_uuid: str) -> bool:
@@ -59,7 +63,7 @@
 
     def handle_device_states(self) -> None:
         for device_id in self.devices_seen_last_time - self.devices_seen_this_time:
-            to_log = f"LOST {device_id}"
+            to_log = f"LOST {device_id}\n"
             self.write_device_log(device_id, to_log)
         self.devices_seen_last_time = self.devices_seen_this_time
         self.devices_seen_this_time = set()
@@ -67,18 +71,21 @@
     def log_ble_discovery(
             self,
             name: str,
-            bin_data: bytes,
+            bin_service_data: bytes,
             ble_device: BLEDevice,
             rssi: int) -> None:
-        loggable_data = bin_data.hex()
+        hex_service_data = bin_service_data.hex()
         if self.is_matter_device(name):
             device_id = f"{ble_device.name}_{ble_device.address}"
             self.devices_seen_this_time.add(device_id)
             if device_id not in self.devices_seen_last_time:
-                to_log = f"DISCOVERED\n{ble_device.name} {ble_device.address}"
-                to_log += f"{name}\n{loggable_data}\n"
+                to_log = "DISCOVERED\n"
+                to_log += f"BLE DEVICE NAME: {ble_device.name}\n"
+                to_log += f"BLE ADDR: {ble_device.address}\n"
+                to_log += f"NAME: {name}\n"
+                to_log += f"HEX SERVICE DATA: {hex_service_data}\n"
                 to_log += f"RSSI {rssi}\n"
-                to_log += self.parse_vid_pid(loggable_data)
+                to_log += self.parse_vid_pid(hex_service_data)
                 self.write_device_log(device_id, to_log)
 
     async def browse(self, scanner: BleakScanner) -> None:
@@ -86,19 +93,26 @@
         for device in devices.values():
             ble_device = device[0]
             ad_data = device[1]
-            for name, bin_data in ad_data.service_data.items():
+            for name, bin_service_data in ad_data.service_data.items():
                 self.log_ble_discovery(
-                    name, bin_data, ble_device, ad_data.rssi)
+                    name, bin_service_data, ble_device, ad_data.rssi)
         self.handle_device_states()
 
-    def browse_interactive(self) -> None:
-        scanner = BleakScanner()
-        self.logger.warning(
-            "Scanning BLE\nDCL Lookup: https://webui.dcl.csa-iot.org/")
+    async def browser_task(self, scanner) -> None:
         while True:
             try:
-                time.sleep(self.throttle_seconds)
-                asyncio.run(self.browse(scanner))
+                await asyncio.sleep(self.throttle_seconds)
+                await self.browse(scanner)
             except BleakDBusError as e:
-                self.logger.warning(e)
+                self.logger.critical(e)
                 time.sleep(self.error_seconds)
+
+    async def browse_interactive(self) -> None:
+        scanner = BleakScanner()
+        self.logger.warning(
+            "Scanning BLE\nDCL Lookup: https://webui.dcl.csa-iot.org/\n")
+        border_print("Press enter to stop!", important=True)
+        task = asyncio.create_task(self.browser_task(scanner))
+        await asyncio.get_event_loop().run_in_executor(
+            None, sys.stdin.readline)
+        task.cancel()
diff --git a/src/tools/interop/idt/discovery/dnssd.py b/src/tools/interop/idt/discovery/dnssd.py
new file mode 100644
index 0000000..04dd8de
--- /dev/null
+++ b/src/tools/interop/idt/discovery/dnssd.py
@@ -0,0 +1,354 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+import asyncio
+import os
+import traceback
+from dataclasses import dataclass
+from textwrap import dedent
+from typing import Callable
+
+from probe.ip_utils import get_addr_type
+from utils.artifact import create_standard_log_name, log
+from utils.log import add_border, border_print
+from zeroconf import ServiceBrowser, ServiceInfo, ServiceListener, Zeroconf
+
+logger = log.get_logger(__file__)
+
+
+@dataclass()
+class MdnsTypeInfo:
+    type: str
+    description: str
+
+
+commissioner = MdnsTypeInfo(
+    "COMMISSIONER",
+    "This is a service for a Matter commissioner aka. controller"
+)
+commissionable = MdnsTypeInfo(
+    "COMMISSIONABLE / EXTENDED DISCOVERY",
+    "This is a service to be used in the commissioning process and provides more info about the device."
+)
+operational = MdnsTypeInfo(
+    "OPERATIONAL",
+    "This is a service for a commissioned Matter device. It exposes limited info about the device."
+)
+border_router = MdnsTypeInfo(
+    "THREAD BORDER ROUTER",
+    "This is a service for a thread border router; may be used for thread+Matter devices."
+)
+
+_MDNS_TYPES = {
+    "_matterd._udp.local.": commissioner,
+    "_matterc._udp.local.": commissionable,
+    "_matter._tcp.local.": operational,
+    "_meshcop._udp.local.": border_router,
+}
+
+
+@dataclass()
+class RecordParser:
+    readable_name: str
+    explanation: str
+    parse: Callable[[str], str]
+
+
+# TODO: Meshcop parser
+
+class MatterTxtRecordParser:
+
+    def __init__(self):
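+        # Map each known Matter TXT record key to a human-readable name, an explanation, and a value parser.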
+        self.parsers = {
+            "D": RecordParser("Discriminator",
+                              dedent("\
+                              Differentiates this instance of the device from others w/ same VID/PID that might be \n\
+                              in the environment."),
+                              MatterTxtRecordParser.parse_d),  # To hex
+            "VP": RecordParser("VID/PID",
+                               "The Vendor ID and Product ID (each are two bytes of hex) that identify this product.",
+                               MatterTxtRecordParser.parse_vp),  # Split + to hex
+            "CM": RecordParser("Commissioning mode",
+                               "Whether the device is in commissioning mode or not.",
+                               MatterTxtRecordParser.parse_cm),  # Decode
+            "DT": RecordParser("Device type",
+                               "Application type for this end device.",
+                               MatterTxtRecordParser.parse_dt),  # Decode
+            "DN": RecordParser("Device name",
+                               "Manufacturer provided device name. MAY match NodeLabel in Basic info cluster.",
+                               MatterTxtRecordParser.parse_pass_through),  # None
+            "RI": RecordParser("Rotating identifier",
+                               "Vendor specific, non-trackable per-device ID.",
+                               MatterTxtRecordParser.parse_pass_through),  # None
+            "PH": RecordParser("Pairing hint",
+                               dedent("\
+                               Given the current device state, follow these instructions to make the device \n\
+                               commissionable."),
+                               MatterTxtRecordParser.parse_ph),  # Decode
+            "PI": RecordParser("Pairing instructions",
+                               dedent("\
+                               Used with the Pairing hint. If the Pairing hint mentions N, this is the \n\
+                               value of N."),
+                               MatterTxtRecordParser.parse_pass_through),  # None
+            # General records
+            "SII": RecordParser("Session idle interval",
+                                "Message Reliability Protocol retry interval while the device is idle in milliseconds.",
+                                MatterTxtRecordParser.parse_pass_through),  # None
+            "SAI": RecordParser("Session active interval",
+                                dedent("\
+                                Message Reliability Protocol retry interval while the device is active \n\
+                                in milliseconds."),
+                                MatterTxtRecordParser.parse_pass_through),  # None
+            "SAT": RecordParser("Session active threshold",
+                                "Duration of time this device stays active after last activity in milliseconds.",
+                                MatterTxtRecordParser.parse_pass_through),  # None
+            "T": RecordParser("Supports TCP",
+                              "Whether this device supports TCP client and or Server.",
+                              MatterTxtRecordParser.parse_t),  # Decode
+        }
+        self.unparsed_records = ""
+        self.parsed_records = ""
+
+    def parse_single_record(self, key: str, value: str):
+        parser: RecordParser = self.parsers[key]
+        self.parsed_records += add_border(parser.readable_name + "\n")
+        self.parsed_records += parser.explanation + "\n\n"
+        try:
+            self.parsed_records += "PARSED VALUE: " + parser.parse(value) + "\n"
+        except Exception:
+            logger.error("Exception parsing TXT record, appending raw value")
+            logger.error(traceback.format_exc())
+            self.parsed_records += f"RAW VALUE: {value}\n"
+
+    def get_output(self) -> str:
+        unparsed_exp = "\nThe following TXT records were not parsed or explained:\n"
+        parsed_exp = "\nThe following was discovered about this device via TXT records:\n"
+        ret = ""
+        if self.unparsed_records:
+            ret += unparsed_exp + self.unparsed_records
+        if self.parsed_records:
+            ret += parsed_exp + self.parsed_records
+        return ret
+
+    def parse_records(self, info: ServiceInfo) -> str:
+        if info.properties is not None:
+            for name, value in info.properties.items():
+                try:
+                    name = name.decode("utf-8")
+                except UnicodeDecodeError:
+                    name = str(name)
+                try:
+                    value = value.decode("utf-8")
+                except UnicodeDecodeError:
+                    value = str(value)
+                if name not in self.parsers:
+                    self.unparsed_records += f"KEY: {name} VALUE: {value}\n"
+                else:
+                    self.parse_single_record(name, value)
+        return self.get_output()
+
+    @staticmethod
+    def parse_pass_through(txt_value: str) -> str:
+        return txt_value
+
+    @staticmethod
+    def parse_d(txt_value: str) -> str:
+        return hex(int(txt_value))
+
+    @staticmethod
+    def parse_vp(txt_value: str) -> str:
+        vid, pid = txt_value.split("+")
+        vid, pid = hex(int(vid)), hex(int(pid))
+        return f"VID: {vid}, PID: {pid}"
+
+    @staticmethod
+    def parse_cm(txt_value: str) -> str:
+        cm = int(txt_value)
+        mode_descriptions = [
+            "Not in commissioning mode",
+            "In passcode commissioning mode (standard mode)",
+            "In dynamic passcode commissioning mode",
+        ]
+        return mode_descriptions[cm]
+
+    @staticmethod
+    def parse_dt(txt_value: str) -> str:
+        application_device_types = {
+            # lighting
+            "0x100": "On/Off Light",
+            "0x101": "Dimmable Light",
+            "0x10C": "Color Temperature Light",
+            "0x10D": "Extended Color Light",
+            # smart plugs/outlets and other actuators
+            "0x10A": "On/Off Plug-in Unit",
+            "0x10B": "Dimmable Plug-In Unit",
+            "0x303": "Pump",
+            # switches and controls
+            "0x103": "On/Off Light Switch",
+            "0x104": "Dimmer Switch",
+            "0x105": "Color Dimmer Switch",
+            "0x840": "Control Bridge",
+            "0x304": "Pump Controller",
+            "0xF": "Generic Switch",
+            # sensors
+            "0x15": "Contact Sensor",
+            "0x106": "Light Sensor",
+            "0x107": "Occupancy Sensor",
+            "0x302": "Temperature Sensor",
+            "0x305": "Pressure Sensor",
+            "0x306": "Flow Sensor",
+            "0x307": "Humidity Sensor",
+            "0x850": "On/Off Sensor",
+            # closures
+            "0xA": "Door Lock",
+            "0xB": "Door Lock Controller",
+            "0x202": "Window Covering",
+            "0x203": "Window Covering Controller",
+            # HVAC
+            "0x300": "Heating/Cooling Unit",
+            "0x301": "Thermostat",
+            "0x2B": "Fan",
+            # media
+            "0x28": "Basic Video Player",
+            "0x23": "Casting Video Player",
+            "0x22": "Speaker",
+            "0x24": "Content App",
+            "0x29": "Casting Video Client",
+            "0x2A": "Video Remote Control",
+            # generic
+            "0x27": "Mode Select",
+        }
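+        # Normalize to a "0x"-prefixed key with uppercase hex digits to match the table above.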
+        return application_device_types[hex(int(txt_value)).upper().replace("0X", "0x")]
+
+    @staticmethod
+    def parse_ph(txt_value: str) -> str:
+        pairing_hints = [
+            "Power Cycle",
+            "Custom commissioning flow",
+            "Use existing administrator (already commissioned)",
+            "Use settings menu on device",
+            "Use the PI TXT record hint",
+            "Read the manual",
+            "Press the reset button",
+            "Press Reset Button with application of power",
+            "Press Reset Button for N seconds",
+            "Press Reset Button until light blinks",
+            "Press Reset Button for N seconds with application of power",
+            "Press Reset Button until light blinks with application of power",
+            "Press Reset Button N times",
+            "Press Setup Button",
+            "Press Setup Button with application of power",
+            "Press Setup Button for N seconds",
+            "Press Setup Button until light blinks",
+            "Press Setup Button for N seconds with application of power",
+            "Press Setup Button until light blinks with application of power",
+            "Press Setup Button N times",
+        ]
+        ret = "\n"
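+        # PH is a bit field: bit N set means pairing_hints[N] applies, so reverse the binary string to index from the LSB.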
+        bits_lsb_first = [int(b) for b in bin(int(txt_value))[2:]][::-1]
+        for i, bit in enumerate(bits_lsb_first):
+            if bit:
+                ret += pairing_hints[i] + "\n"
+        return ret
+
+    @staticmethod
+    def parse_t(txt_value: str) -> str:
+        return "TCP supported" if int(txt_value) else "TCP not supported"
+
+
+class MatterDnssdListener(ServiceListener):
+
+    def __init__(self, artifact_dir: str) -> None:
+        super().__init__()
+        self.artifact_dir = artifact_dir
+        self.logger = logger
+        self.discovered_matter_devices: dict[str, ServiceInfo] = {}
+
+    def write_log(self, line: str, log_name: str) -> None:
+        with open(self.create_device_log_name(log_name), "a+") as log_file:
+            log_file.write(line)
+
+    def create_device_log_name(self, device_name) -> str:
+        return os.path.join(
+            self.artifact_dir,
+            create_standard_log_name(f"{device_name}_dnssd", "txt"))
+
+    @staticmethod
+    def log_addr(info: ServiceInfo) -> str:
+        ret = add_border("This device has the following IP addresses\n")
+        for addr in info.parsed_scoped_addresses():
+            ret += f"{get_addr_type(addr)}: {addr}\n"
+        return ret
+
+    def handle_service_info(
+            self,
+            zc: Zeroconf,
+            type_: str,
+            name: str,
+            delta_type: str) -> None:
+        info = zc.get_service_info(type_, name)
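+        # Cache the resolved ServiceInfo so the prober can reuse the discovered addresses later.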
+        self.discovered_matter_devices[name] = info
+        to_log = f"{name}\n"
+        update_str = f"\nSERVICE {delta_type}\n"
+        to_log += ("*" * (len(update_str) - 2)) + update_str
+        to_log += _MDNS_TYPES[type_].type + "\n"
+        to_log += _MDNS_TYPES[type_].description + "\n"
+        to_log += f"A/SRV TTL: {info.host_ttl}\n"
+        to_log += f"PTR/TXT TTL: {info.other_ttl}\n"
+        txt_parser = MatterTxtRecordParser()
+        to_log += txt_parser.parse_records(info)
+        to_log += self.log_addr(info)
+        self.logger.info(to_log)
+        self.write_log(to_log, name)
+
+    def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
+        self.handle_service_info(zc, type_, name, "ADDED")
+
+    def update_service(self, zc: Zeroconf, type_: str, name: str) -> None:
+        self.handle_service_info(zc, type_, name, "UPDATED")
+
+    def remove_service(self, zc: Zeroconf, type_: str, name: str) -> None:
+        to_log = f"Service {name} removed\n"
+        to_log += _MDNS_TYPES[type_].type + "\n"
+        to_log += _MDNS_TYPES[type_].description
+        if name in self.discovered_matter_devices:
+            del self.discovered_matter_devices[name]
+        self.logger.warning(to_log)
+        self.write_log(to_log, name)
+
+    def browse_interactive(self) -> None:
+        zc = Zeroconf()
+        ServiceBrowser(zc, list(_MDNS_TYPES.keys()), self)
+        try:
+            self.logger.warning(
+                dedent("""
+                Browsing Matter DNS-SD
+                DCL Lookup: https://webui.dcl.csa-iot.org/
+                See spec section 4.3 for details of Matter TXT records.
+                """))
+            border_print("Press enter to stop!", important=True)
+            input("")
+        finally:
+            zc.close()
+
+    async def browse_once(self, browse_time_seconds: int) -> Zeroconf:
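+        # Browse for a fixed window, then close; used by the prober for one-shot, non-interactive discovery.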
+        zc = Zeroconf()
+        ServiceBrowser(zc, list(_MDNS_TYPES.keys()), self)
+        await asyncio.sleep(browse_time_seconds)
+        zc.close()
+        return zc
diff --git a/src/tools/interop/idt/discovery/matter_dnssd.py b/src/tools/interop/idt/discovery/matter_dnssd.py
deleted file mode 100644
index 7dfec19..0000000
--- a/src/tools/interop/idt/discovery/matter_dnssd.py
+++ /dev/null
@@ -1,89 +0,0 @@
-#
-#    Copyright (c) 2023 Project CHIP Authors
-#    All rights reserved.
-#
-#    Licensed under the Apache License, Version 2.0 (the "License");
-#    you may not use this file except in compliance with the License.
-#    You may obtain a copy of the License at
-#
-#        http://www.apache.org/licenses/LICENSE-2.0
-#
-#    Unless required by applicable law or agreed to in writing, software
-#    distributed under the License is distributed on an "AS IS" BASIS,
-#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-#    See the License for the specific language governing permissions and
-#    limitations under the License.
-#
-
-import logging
-
-from zeroconf import ServiceBrowser, ServiceInfo, ServiceListener, Zeroconf
-
-_MDNS_TYPES = {
-    "_matterd._udp.local.": "COMMISSIONER",
-    "_matterc._udp.local.": "COMMISSIONABLE",
-    "_matter._tcp.local.": "OPERATIONAL",
-    "_meshcop._udp.local.": "THREAD_BORDER_ROUTER",
-}
-
-
-class MatterDnssdListener(ServiceListener):
-
-    def __init__(self, artifact_dir: str) -> None:
-        super().__init__()
-        self.artifact_dir = artifact_dir
-        self.logger = logging.getLogger(__file__)
-
-    @staticmethod
-    def log_addr(info: ServiceInfo) -> str:
-        ret = "\n"
-        for addr in info.parsed_scoped_addresses():
-            ret += f"{addr}\n"
-        return ret
-
-    @staticmethod
-    def log_vid_pid(info: ServiceInfo) -> str:
-        if info.properties is not None and b'VP' in info.properties:
-            vid_pid = str(info.properties[b'VP'])
-            vid_pid = vid_pid[2:len(vid_pid) - 1].split('+')
-            vid = hex(int(vid_pid[0]))
-            pid = hex(int(vid_pid[1]))
-            return f"\nVID: {vid} PID: {pid}\n"
-        return ""
-
-    def handle_service_info(
-            self,
-            zc: Zeroconf,
-            type_: str,
-            name: str,
-            delta_type: str) -> None:
-        info = zc.get_service_info(type_, name)
-        to_log = f"{name}\n"
-        if info.properties is not None:
-            for name, value in info.properties.items():
-                to_log += f"{name}:\t{value}\n"
-        update_str = f"\nSERVICE {delta_type}\n"
-        to_log += ("*" * (len(update_str) - 2)) + update_str
-        to_log += _MDNS_TYPES[type_]
-        to_log += self.log_vid_pid(info)
-        to_log += self.log_addr(info)
-        self.logger.info(to_log)
-
-    def add_service(self, zc: Zeroconf, type_: str, name: str) -> None:
-        self.handle_service_info(zc, type_, name, "ADDED")
-
-    def update_service(self, zc: Zeroconf, type_: str, name: str) -> None:
-        self.handle_service_info(zc, type_, name, "UPDATED")
-
-    def remove_service(self, zc: Zeroconf, type_: str, name: str) -> None:
-        to_log = f"Service {name} removed\n"
-        to_log += _MDNS_TYPES[type_]
-        self.logger.warning(to_log)
-
-    def browse_interactive(self) -> None:
-        zc = Zeroconf()
-        ServiceBrowser(zc, list(_MDNS_TYPES.keys()), self)
-        try:
-            input("Browsing Matter mDNS, press enter to stop\n")
-        finally:
-            zc.close()
diff --git a/src/tools/interop/idt/idt.py b/src/tools/interop/idt/idt.py
index b401f88..9ddb1dd 100644
--- a/src/tools/interop/idt/idt.py
+++ b/src/tools/interop/idt/idt.py
@@ -17,25 +17,38 @@
 
 import argparse
 import asyncio
-import logging
 import os
 import shutil
 import sys
 from pathlib import Path
 
-from capture import EcosystemController, EcosystemFactory, PacketCaptureRunner, PlatformFactory
-from capture.file_utils import border_print, create_file_timestamp, safe_mkdir
+import probe.runner as probe_runner
+from capture import PacketCaptureRunner, controller
 from discovery import MatterBleScanner, MatterDnssdListener
+from utils.artifact import create_file_timestamp, safe_mkdir
+from utils.host_platform import get_available_interfaces, verify_host_dependencies
+from utils.log import border_print
 
-logging.basicConfig(
-    format='%(asctime)s.%(msecs)03d %(levelname)s {%(module)s} [%(funcName)s]\n%(message)s \n',
-    level=logging.INFO)
+import config
+
+splash = '''\x1b[0m
+\x1b[32;1m┌────────┐\x1b[33;20m▪\x1b[32;1m \x1b[34;1m┌──────┐ \x1b[33;20m• \x1b[35;1m┌──────────┐ \x1b[33;20m●
+\x1b[32;1m│░░░░░░░░│  \x1b[34;1m│░░░░░░└┐ \x1b[33;20m゚\x1b[35;1m│░░░░░░░░░░│
+\x1b[32;1m└──┐░░┌──┘\x1b[33;20m۰\x1b[32;1m \x1b[34;1m│░░┌┐░░░│  \x1b[35;1m└───┐░░┌───┘
+\x1b[32;1m   │░░│     \x1b[34;1m│░░│└┐░░│\x1b[33;20m▫ \x1b[35;1m \x1b[33;20m۰\x1b[35;1m  │░░│  \x1b[33;20m。
+\x1b[32;1m \x1b[33;20m•\x1b[32;1m │░░│  \x1b[33;20m●  \x1b[34;1m│░░│┌┘░░│  \x1b[35;1m    │░░│
+\x1b[32;1m┌──┘░░└──┐  \x1b[34;1m│░░└┘░░░│  \x1b[35;1m    │░░│ \x1b[33;20m•
+\x1b[32;1m│░░░░░░░░│  \x1b[34;1m│░░░░░░┌┘\x1b[33;20m۰ \x1b[35;1m \x1b[33;20m▪\x1b[35;1m  │░░│
+\x1b[32;1m└────────┘\x1b[33;20m•\x1b[32;1m \x1b[34;1m└──────┘\x1b[33;20m。  \x1b[35;1m    └──┘ \x1b[33;20m▫
+\x1b[32;1m✰ Interop\x1b[34;1m  ✰ Debugging\x1b[35;1m   ✰ Tool
+\x1b[0m'''
 
 
 class InteropDebuggingTool:
 
     def __init__(self) -> None:
-
+        if config.enable_color:
+            print(splash)
         self.artifact_dir = None
         create_artifact_dir = True
         if len(sys.argv) == 1:
@@ -45,6 +58,8 @@
         elif len(sys.argv) >= 3 and (sys.argv[2] == "-h" or sys.argv[2] == "--help"):
             create_artifact_dir = False
 
+        verify_host_dependencies(["adb", "tcpdump"])
+
         if not os.environ['IDT_OUTPUT_DIR']:
             print('Missing required env vars! Use /scripts!!!')
             sys.exit(1)
@@ -60,25 +75,22 @@
             safe_mkdir(self.artifact_dir)
             border_print(f"Using artifact dir {self.artifact_dir}")
 
-        self.available_platforms = PlatformFactory.list_available_platforms()
+        self.available_platforms = controller.list_available_platforms()
         self.available_platforms_default = 'Android' if 'Android' in self.available_platforms else None
         self.platform_required = self.available_platforms_default is None
 
-        self.available_ecosystems = EcosystemFactory.list_available_ecosystems()
+        self.available_ecosystems = controller.list_available_ecosystems()
         self.available_ecosystems_default = 'ALL'
         self.available_ecosystems.append(self.available_ecosystems_default)
 
-        net_interface_path = "/sys/class/net/"
-        self.available_net_interfaces = os.listdir(net_interface_path) \
-            if os.path.exists(net_interface_path) \
-            else []
-        self.available_net_interfaces.append("any")
-        self.available_net_interfaces_default = "any"
+        self.available_net_interfaces = get_available_interfaces()
+        self.available_net_interfaces_default = "any" if "any" in self.available_net_interfaces else None
         self.pcap_artifact_dir = os.path.join(self.artifact_dir, "pcap")
         self.net_interface_required = self.available_net_interfaces_default is None
 
         self.ble_artifact_dir = os.path.join(self.artifact_dir, "ble")
         self.dnssd_artifact_dir = os.path.join(self.artifact_dir, "dnssd")
+        self.prober_dir = os.path.join(self.artifact_dir, "probes")
 
         self.process_args()
 
@@ -145,6 +157,10 @@
 
         capture_parser.set_defaults(func=self.command_capture)
 
+        prober_parser = subparsers.add_parser("probe",
+                                              help="Probe the environment for Matter and general networking info")
+        prober_parser.set_defaults(func=self.command_probe)
+
         args, unknown = parser.parse_known_args()
         if not hasattr(args, 'func'):
             parser.print_help()
@@ -154,10 +170,13 @@
     def command_discover(self, args: argparse.Namespace) -> None:
         if args.type[0] == "b":
             safe_mkdir(self.ble_artifact_dir)
-            MatterBleScanner(self.ble_artifact_dir).browse_interactive()
+            scanner = MatterBleScanner(self.ble_artifact_dir)
+            asyncio.run(scanner.browse_interactive())
+            self.zip_artifacts()
         else:
             safe_mkdir(self.dnssd_artifact_dir)
             MatterDnssdListener(self.dnssd_artifact_dir).browse_interactive()
+            self.zip_artifacts()
 
     def zip_artifacts(self) -> None:
         zip_basename = os.path.basename(self.artifact_dir)
@@ -165,10 +184,9 @@
                                            'zip',
                                            root_dir=self.artifact_dir)
         output_zip = shutil.move(archive_file, self.artifact_dir_parent)
-        print(f'Output zip: {output_zip}')
+        border_print(f'Output zip: {output_zip}')
 
     def command_capture(self, args: argparse.Namespace) -> None:
-
         pcap = args.pcap == 't'
         pcap_runner = None if not pcap else PacketCaptureRunner(
             self.pcap_artifact_dir, args.interface)
@@ -176,21 +194,24 @@
             border_print("Starting pcap")
             safe_mkdir(self.pcap_artifact_dir)
             pcap_runner.start_pcap()
-
-        asyncio.run(EcosystemFactory.init_ecosystems(args.platform,
-                                                     args.ecosystem,
-                                                     self.artifact_dir))
-        asyncio.run(EcosystemController.start())
-
-        border_print("Press enter twice to stop streaming", important=True)
-        input("")
-
+        asyncio.run(controller.init_ecosystems(args.platform,
+                                               args.ecosystem,
+                                               self.artifact_dir))
+        asyncio.run(controller.start())
+        asyncio.run(controller.run_analyzers())
         if pcap:
             border_print("Stopping pcap")
             pcap_runner.stop_pcap()
+        asyncio.run(controller.stop())
+        asyncio.run(controller.probe())
+        border_print("Checking error report")
+        controller.write_error_report(self.artifact_dir)
+        border_print("Compressing artifacts...")
+        self.zip_artifacts()
 
-        asyncio.run(EcosystemController.stop())
-        asyncio.run(EcosystemController.analyze())
-
-        border_print("Compressing artifacts, this may take some time!")
+    def command_probe(self, args: argparse.Namespace) -> None:
+        border_print("Starting generic Matter prober for local environment!")
+        safe_mkdir(self.dnssd_artifact_dir)
+        safe_mkdir(self.prober_dir)
+        probe_runner.run_probes(self.prober_dir, self.dnssd_artifact_dir)
         self.zip_artifacts()
diff --git a/src/tools/interop/idt/probe/__init__.py b/src/tools/interop/idt/probe/__init__.py
new file mode 100644
index 0000000..f44dcd9
--- /dev/null
+++ b/src/tools/interop/idt/probe/__init__.py
@@ -0,0 +1,25 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+from dataclasses import dataclass
+
+
+@dataclass(repr=True)
+class ProbeTarget:
+    name: str
+    ip: str
+    port: str
diff --git a/src/tools/interop/idt/probe/config.py b/src/tools/interop/idt/probe/config.py
new file mode 100644
index 0000000..150e407
--- /dev/null
+++ b/src/tools/interop/idt/probe/config.py
@@ -0,0 +1,19 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+ping_count = 4
+dnssd_browsing_time_seconds = 4
diff --git a/src/tools/interop/idt/probe/ip_utils.py b/src/tools/interop/idt/probe/ip_utils.py
new file mode 100644
index 0000000..ca1e39e
--- /dev/null
+++ b/src/tools/interop/idt/probe/ip_utils.py
@@ -0,0 +1,50 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+import ipaddress
+
+
+def is_ipv4(ip: str) -> bool:
+    try:
+        ipaddress.IPv4Address(ip)
+        return True
+    except ipaddress.AddressValueError:
+        return False
+
+
+def is_ipv6_ll(ip: str) -> bool:
+    try:
+        return ipaddress.IPv6Address(ip).is_link_local
+    except ipaddress.AddressValueError:
+        return False
+
+
+def is_ipv6(ip: str) -> bool:
+    try:
+        ipaddress.IPv6Address(ip)
+        return True
+    except ipaddress.AddressValueError:
+        return False
+
+
+def get_addr_type(ip: str) -> str:
+    if is_ipv4(ip):
+        return "V4"
+    elif is_ipv6_ll(ip):
+        return "V6 Link Local"
+    elif is_ipv6(ip):
+        return "V6"
+    return "Unknown address type"
diff --git a/src/tools/interop/idt/probe/linux.py b/src/tools/interop/idt/probe/linux.py
new file mode 100644
index 0000000..05a8adb
--- /dev/null
+++ b/src/tools/interop/idt/probe/linux.py
@@ -0,0 +1,48 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+import probe.probe as p
+from utils.host_platform import get_ll_interface
+from utils.log import get_logger
+
+from . import ProbeTarget, config
+
+logger = get_logger(__file__)
+
+
+class ProberLinuxHost(p.GenericMatterProber):
+
+    def __init__(self, artifact_dir: str, dnssd_artifact_dir: str) -> None:
+        # TODO: Parity with macOS
+        super().__init__(artifact_dir, dnssd_artifact_dir)
+        self.logger = logger
+        self.ll_int = get_ll_interface()
+
+    def discover_targets_by_neighbor(self) -> None:
+        pass
+
+    def probe_v4(self, target: ProbeTarget) -> None:
+        self.run_command(f"ping -c {config.ping_count} {target.ip}")
+
+    def probe_v6(self, target: ProbeTarget) -> None:
+        self.run_command(f"ping -c {config.ping_count} -6 {target.ip}")
+
+    def probe_v6_ll(self, target: ProbeTarget) -> None:
+        self.run_command(f"ping -c {config.ping_count} -6 {target.ip}%{self.ll_int}")
+
+    def get_general_details(self) -> None:
+        pass
diff --git a/src/tools/interop/idt/probe/mac.py b/src/tools/interop/idt/probe/mac.py
new file mode 100644
index 0000000..b13e5d5
--- /dev/null
+++ b/src/tools/interop/idt/probe/mac.py
@@ -0,0 +1,56 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+import probe.probe as p
+from utils.host_platform import get_ll_interface
+from utils.log import get_logger
+
+from . import ProbeTarget, config
+
+logger = get_logger(__file__)
+
+
+class ProberMacHost(p.GenericMatterProber):
+
+    def __init__(self, artifact_dir: str, dnssd_artifact_dir: str) -> None:
+        # TODO: Build out additional probes
+        super().__init__(artifact_dir, dnssd_artifact_dir)
+        self.logger = logger
+        self.ll_int = get_ll_interface()
+
+    def discover_targets_by_neighbor(self) -> None:
+        pass
+
+    def probe_v4(self, target: ProbeTarget) -> None:
+        self.logger.info("Ping IPv4")
+        self.run_command(f"ping -c {config.ping_count} {target.ip}")
+
+    def probe_v6(self, target: ProbeTarget) -> None:
+        self.logger.info("Ping IPv6")
+        self.run_command(f"ping6 -c {config.ping_count} {target.ip}")
+
+    def probe_v6_ll(self, target: ProbeTarget) -> None:
+        self.logger.info("Ping IPv6 Link Local")
+        self.run_command(f"ping6 -c {config.ping_count} -I {self.ll_int} {target.ip}")
+
+    def get_general_details(self) -> None:
+        self.logger.info("Host interfaces")
+        self.run_command("ifconfig")
+        self.logger.info("v4 routes from host")
+        self.run_command("netstat -r -f inet -n")
+        self.logger.info("v6 routes from host")
+        self.run_command("netstat -r -f inet6 -n")
diff --git a/src/tools/interop/idt/probe/probe.py b/src/tools/interop/idt/probe/probe.py
new file mode 100644
index 0000000..6fcc92a
--- /dev/null
+++ b/src/tools/interop/idt/probe/probe.py
@@ -0,0 +1,100 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+import asyncio
+import os.path
+from abc import ABC, abstractmethod
+
+from discovery import MatterDnssdListener
+from discovery.dnssd import ServiceInfo
+from utils.artifact import create_standard_log_name
+from utils.log import get_logger
+from utils.shell import Bash
+
+from . import ProbeTarget, config
+from .ip_utils import is_ipv4, is_ipv6, is_ipv6_ll
+
+logger = get_logger(__file__)
+
+
+class GenericMatterProber(ABC):
+
+    def __init__(self, artifact_dir: str, dnssd_artifact_dir: str) -> None:
+        self.artifact_dir = artifact_dir
+        self.dnssd_artifact_dir = dnssd_artifact_dir
+        self.logger = logger
+        self.targets: list[ProbeTarget] = []
+        self.output = os.path.join(self.artifact_dir,
+                                   create_standard_log_name("generic_probes", "txt"))
+        self.suffix = f"2>&1 | tee -a {self.output}"
+
+    def run_command(self, cmd: str, capture_output=False) -> Bash:
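+        # Mirror each probe command's output into the shared artifact log via the tee suffix.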
+        cmd = f"{cmd} {self.suffix}"
+        self.logger.debug(cmd)
+        bash = Bash(cmd, sync=True, capture_output=capture_output)
+        bash.start_command()
+        return bash
+
+    @abstractmethod
+    def probe_v4(self, target: ProbeTarget) -> None:
+        raise NotImplementedError
+
+    @abstractmethod
+    def probe_v6(self, target: ProbeTarget) -> None:
+        raise NotImplementedError
+
+    @abstractmethod
+    def probe_v6_ll(self, target: ProbeTarget) -> None:
+        raise NotImplementedError
+
+    @abstractmethod
+    def discover_targets_by_neighbor(self) -> None:
+        raise NotImplementedError
+
+    @abstractmethod
+    def get_general_details(self) -> None:
+        raise NotImplementedError
+
+    def discover_targets_by_browsing(self) -> None:
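+        # Run a short DNS-SD browse and queue every advertised address of each discovered device as a probe target.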
+        browser = MatterDnssdListener(self.dnssd_artifact_dir)
+        asyncio.run(browser.browse_once(config.dnssd_browsing_time_seconds))
+        for name in browser.discovered_matter_devices:
+            info: ServiceInfo = browser.discovered_matter_devices[name]
+            for addr in info.parsed_scoped_addresses():
+                self.targets.append(ProbeTarget(name, addr, info.port))
+
+    def probe_single_target(self, target: ProbeTarget) -> None:
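+        # Dispatch on address family so the platform prober can use the matching ping variant.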
+        if is_ipv4(target.ip):
+            self.logger.debug(f"Probing v4 {target.ip}")
+            self.probe_v4(target)
+        elif is_ipv6_ll(target.ip):
+            self.logger.debug(f"Probing v6 ll {target.ip}")
+            self.probe_v6_ll(target)
+        elif is_ipv6(target.ip):
+            self.logger.debug(f"Probing v6 {target.ip}")
+            self.probe_v6(target)
+
+    def probe_targets(self) -> None:
+        for target in self.targets:
+            self.logger.info(f"Probing target {target}")
+            self.probe_single_target(target)
+
+    def probe(self) -> None:
+        self.discover_targets_by_browsing()
+        self.discover_targets_by_neighbor()
+        self.probe_targets()
+        self.get_general_details()
diff --git a/src/tools/interop/idt/probe/runner.py b/src/tools/interop/idt/probe/runner.py
new file mode 100644
index 0000000..ac37317
--- /dev/null
+++ b/src/tools/interop/idt/probe/runner.py
@@ -0,0 +1,28 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+from utils.host_platform import is_mac
+
+from .linux import ProberLinuxHost
+from .mac import ProberMacHost
+
+
+def run_probes(artifact_dir: str, dnssd_dir: str) -> None:
+    if is_mac():
+        ProberMacHost(artifact_dir, dnssd_dir).probe()
+    else:
+        ProberLinuxHost(artifact_dir, dnssd_dir).probe()
diff --git a/src/tools/interop/idt/requirements.txt b/src/tools/interop/idt/requirements.txt
index fa7d89f..dac2cdf 100644
--- a/src/tools/interop/idt/requirements.txt
+++ b/src/tools/interop/idt/requirements.txt
@@ -1,3 +1,4 @@
 zeroconf==0.74.0
 bleak==0.21.1
-mobly==1.12.2
+psutil==5.9.6
+termcolor==2.3.0
diff --git a/src/tools/interop/idt/scripts/alias.sh b/src/tools/interop/idt/scripts/alias.sh
index 9ecbc86..e7b598b 100644
--- a/src/tools/interop/idt/scripts/alias.sh
+++ b/src/tools/interop/idt/scripts/alias.sh
@@ -41,6 +41,8 @@
 alias idt_clean_artifacts="idt_go && source idt/scripts/clean_artifacts.sh"
 alias idt_clean_all="idt_go && source idt/scripts/clean_all.sh"
 alias idt_create_vars="idt_go && source idt/scripts/create_vars.sh"
+alias idt_check_child="idt_go && source idt/scripts/check_child.sh"
+alias idt_clean_child="idt_go && source idt/scripts/clean_child.sh"
 
 alias idt="idt_go && \
 if [ -z $PYTHONPYCACHEPREFIX ]; then export PYTHONPYCACHEPREFIX=$IDT_SRC_PARENT/idt/pycache; fi && \
diff --git a/src/tools/interop/idt/scripts/check_child.sh b/src/tools/interop/idt/scripts/check_child.sh
new file mode 100644
index 0000000..8a54cd2
--- /dev/null
+++ b/src/tools/interop/idt/scripts/check_child.sh
@@ -0,0 +1,18 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+sudo ps -axf | grep -E "(tcpd|adb)"
diff --git a/src/tools/interop/idt/scripts/clean_all.sh b/src/tools/interop/idt/scripts/clean_all.sh
index 94b9521..d7703ab 100644
--- a/src/tools/interop/idt/scripts/clean_all.sh
+++ b/src/tools/interop/idt/scripts/clean_all.sh
@@ -16,7 +16,8 @@
 #
 
 cd idt
-rm -R venv/
-rm -R pycache/
+sudo rm -R venv/
+sudo rm -R pycache/
 sudo rm -R IDT_ARTIFACTS/
+sudo find . -type d -name "BUILD" -delete
 cd ..
diff --git a/src/tools/interop/idt/scripts/clean_child.sh b/src/tools/interop/idt/scripts/clean_child.sh
new file mode 100644
index 0000000..fc4b2e2
--- /dev/null
+++ b/src/tools/interop/idt/scripts/clean_child.sh
@@ -0,0 +1,19 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+sudo killall tcpdump
+sudo killall adb
diff --git a/src/tools/interop/idt/scripts/compilers.sh b/src/tools/interop/idt/scripts/compilers.sh
new file mode 100644
index 0000000..ee50284
--- /dev/null
+++ b/src/tools/interop/idt/scripts/compilers.sh
@@ -0,0 +1,21 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+sudo apt-get install gcc-arm-linux-gnueabi
+sudo apt-get install gcc-aarch64-linux-gnu
+sudo apt-get install byacc
+sudo apt-get install flex
diff --git a/src/tools/interop/idt/utils/__init__.py b/src/tools/interop/idt/utils/__init__.py
new file mode 100644
index 0000000..43ab9ac
--- /dev/null
+++ b/src/tools/interop/idt/utils/__init__.py
@@ -0,0 +1,25 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+from . import artifact, host_platform, log, shell
+
+__all__ = [
+    'artifact',
+    'host_platform',
+    'log',
+    'shell',
+]
diff --git a/src/tools/interop/idt/capture/file_utils.py b/src/tools/interop/idt/utils/artifact.py
similarity index 61%
rename from src/tools/interop/idt/capture/file_utils.py
rename to src/tools/interop/idt/utils/artifact.py
index 29aabfe..8749b1a 100644
--- a/src/tools/interop/idt/capture/file_utils.py
+++ b/src/tools/interop/idt/utils/artifact.py
@@ -15,14 +15,13 @@
 #    limitations under the License.
 #
 
+import os
 import time
 from pathlib import Path
-from typing import TextIO
 
+from . import log
 
-def add_border(to_print: str) -> str:
-    """Add star borders to important strings"""
-    return '\n' + '*' * len(to_print) + '\n' + to_print
+logger = log.get_logger(__file__)
 
 
 def create_file_timestamp() -> str:
@@ -30,23 +29,11 @@
     return time.strftime("%Y%m%d_%H%M%S")
 
 
-def create_standard_log_name(name: str, ext: str) -> str:
+def create_standard_log_name(name: str, ext: str, parent: str = "") -> str:
     """Returns the name argument wrapped as a standard log name"""
     ts = create_file_timestamp()
-    return f'idt_{ts}_{name}.{ext}'
+    return os.path.join(parent, f'idt_{ts}_{name}.{ext}')
 
 
 def safe_mkdir(dir_name: str) -> None:
     Path(dir_name).mkdir(parents=True, exist_ok=True)
-
-
-def print_and_write(to_print: str, file: TextIO) -> None:
-    print(to_print)
-    file.write(to_print)
-
-
-def border_print(to_print: str, important: bool = False) -> None:
-    len_borders = 64
-    border = f"\n{'_' * len_borders}\n"
-    i_border = f"\n{'!' * len_borders}\n" if important else ""
-    print(f"{border}{i_border}{to_print}{i_border}{border}")
diff --git a/src/tools/interop/idt/utils/host_platform.py b/src/tools/interop/idt/utils/host_platform.py
new file mode 100644
index 0000000..fd64256
--- /dev/null
+++ b/src/tools/interop/idt/utils/host_platform.py
@@ -0,0 +1,90 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+import os
+import platform as host_platform
+import sys
+
+from utils import log
+from utils.log import border_print
+from utils.shell import Bash
+
+import config
+
+logger = log.get_logger(__file__)
+
+
+def is_mac():
+    p = host_platform.platform().lower()
+    return "darwin" in p or "mac" in p
+
+
+def get_ll_interface():
+    # TODO: Makes too many assumptions
+    if is_mac():
+        return "en0"
+    net_interface_path = "/sys/class/net/"
+    available_net_interfaces = os.listdir(net_interface_path) \
+        if os.path.exists(net_interface_path) \
+        else []
+    for interface in available_net_interfaces:
+        if "wl" in interface:
+            return interface
+
+
+def get_available_interfaces():
+    net_interface_path = "/sys/class/net/"
+    available_net_interfaces = os.listdir(net_interface_path) \
+        if os.path.exists(net_interface_path) \
+        else []
+    available_net_interfaces.append("any")
+    return available_net_interfaces
+
+
+def command_is_available(cmd_name) -> bool:
+    cmd = Bash(f"which {cmd_name}", sync=True, capture_output=True)
+    cmd.start_command()
+    return cmd.finished_success()
+
+
+def verify_host_dependencies(deps: list[str]) -> None:
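+    # Check every requested binary and report all missing dependencies at once before exiting.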
+    if not command_is_available("which"):
+        # TODO: Check $PATH explicitly as well
+        logger.critical("'which' is required to verify host dependencies; exiting because it is not available!")
+        sys.exit(1)
+    missing_deps = []
+    for dep in deps:
+        logger.info(f"Verifying host dependency {dep}")
+        if not command_is_available(dep):
+            missing_deps.append(dep)
+    if missing_deps:
+        for missing_dep in missing_deps:
+            border_print(f"Missing dependency, please install {missing_dep}!", important=True)
+        sys.exit(1)
+
+
+def verify_py_version() -> None:
+    py_version_major = sys.version_info[0]
+    py_version_minor = sys.version_info[1]
+    have = f"{py_version_major}.{py_version_minor}"
+    need = f"{config.py_major_version}.{config.py_minor_version}"
+    if not (py_version_major == config.py_major_version
+            and py_version_minor >= config.py_minor_version):
+        logger.critical(
+            f"IDT requires python >= {need} but you have {have}")
+        logger.critical("Please install the correct version, delete idt/venv, and re-run!")
+        sys.exit(1)
diff --git a/src/tools/interop/idt/utils/log.py b/src/tools/interop/idt/utils/log.py
new file mode 100644
index 0000000..73dc4e0
--- /dev/null
+++ b/src/tools/interop/idt/utils/log.py
@@ -0,0 +1,79 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+import logging
+from typing import TextIO
+
+from termcolor import colored
+
+import config
+
+_CONFIG_LEVEL = config.log_level
+
+_FORMAT_PRE_FSTRING = "%(asctime)s %(levelname)s {%(module)s} [%(funcName)s] "
+_FORMAT_PRE = colored(_FORMAT_PRE_FSTRING, "blue") if config.enable_color else _FORMAT_PRE_FSTRING
+_FORMAT_POST = "%(message)s"
+_FORMAT_NO_COLOR = _FORMAT_PRE_FSTRING + _FORMAT_POST
+
+FORMATS = {
+    logging.DEBUG: _FORMAT_PRE + colored(_FORMAT_POST, "blue"),
+    logging.INFO: _FORMAT_PRE + colored(_FORMAT_POST, "green"),
+    logging.WARNING: _FORMAT_PRE + colored(_FORMAT_POST, "yellow"),
+    logging.ERROR: _FORMAT_PRE + colored(_FORMAT_POST, "red", attrs=["bold"]),
+    logging.CRITICAL: _FORMAT_PRE + colored(_FORMAT_POST, "red", "on_yellow", attrs=["bold"]),
+}
+
+
+class LoggingFormatter(logging.Formatter):
+
+    def format(self, record):
+        log_fmt = FORMATS.get(record.levelno) if config.enable_color else _FORMAT_NO_COLOR
+        formatter = logging.Formatter(log_fmt)
+        return formatter.format(record)
+
+
+def get_logger(logger_name) -> logging.Logger:
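+    # Per-caller logger with the (optionally colored) formatter; propagate=False avoids duplicate lines via the root logger.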
+    logger = logging.getLogger(logger_name)
+    logger.setLevel(_CONFIG_LEVEL)
+    ch = logging.StreamHandler()
+    ch.setLevel(_CONFIG_LEVEL)
+    ch.setFormatter(LoggingFormatter())
+    logger.addHandler(ch)
+    logger.propagate = False
+    return logger
+
+
+def border_print(to_print: str, important: bool = False) -> None:
+    len_borders = len(to_print)
+    border = f"\n{'_' * len_borders}\n"
+    i_border = f"\n{'!' * len_borders}\n" if important else ""
+    to_print = f"{border}{i_border}{to_print}{i_border}{border}"
+    if config.enable_color:
+        to_print = colored(to_print, "magenta")
+    print(to_print)
+
+
+def print_and_write(to_print: str, file: TextIO) -> None:
+    if config.enable_color:
+        print(colored(to_print, "green"))
+    else:
+        print(to_print)
+    file.write(to_print)
+
+
+def add_border(to_print: str) -> str:
+    return '\n' + '*' * len(to_print) + '\n' + to_print
diff --git a/src/tools/interop/idt/utils/shell.py b/src/tools/interop/idt/utils/shell.py
new file mode 100644
index 0000000..e2b0d27
--- /dev/null
+++ b/src/tools/interop/idt/utils/shell.py
@@ -0,0 +1,122 @@
+#
+#    Copyright (c) 2023 Project CHIP Authors
+#    All rights reserved.
+#
+#    Licensed under the Apache License, Version 2.0 (the "License");
+#    you may not use this file except in compliance with the License.
+#    You may obtain a copy of the License at
+#
+#        http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS,
+#    WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#    See the License for the specific language governing permissions and
+#    limitations under the License.
+#
+
+import shlex
+import subprocess
+
+import psutil
+
+from . import log
+
+logger = log.get_logger(__file__)
+
+
+class Bash:
+
+    def __init__(self, command: str, sync: bool = False,
+                 capture_output: bool = False,
+                 cwd: str = None) -> None:
+        """
+        Run a bash command as a sub process
+        :param command: Command to run
+        :param sync: If True, wait for command to terminate upon start_command()
+        :param capture_output: Only applies to sync; if True, store and suppress stdout and stderr
+        :param cwd: Set working directory of command
+        """
+        self.logger = logger
+        self.command: str = command
+        self.sync = sync
+        self.capture_output = capture_output
+        self.cwd = cwd
+
+        self.args: list[str] = []
+        self._init_args()
+        self.proc: None | subprocess.CompletedProcess | subprocess.Popen = None
+
+    def _init_args(self) -> None:
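+        # Wrap the command in /bin/bash -c "...", escaping embedded double quotes so it stays a single shell argument.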
+        command_escaped = self.command.replace('"', '\\"')
+        self.args = shlex.split(f'/bin/bash -c "{command_escaped}"')
+
+    def command_is_running(self) -> bool:
+        return self.proc is not None and self.proc.poll() is None
+
+    def get_captured_output(self) -> str:
+        return "" if not self.capture_output or not self.sync \
+            else self.proc.stdout.decode().strip()
+
+    def start_command(self) -> None:
+        if self.proc is None:
+            if self.sync:
+                self.proc = subprocess.run(self.args, capture_output=self.capture_output, cwd=self.cwd)
+            else:
+                self.proc = subprocess.Popen(self.args, cwd=self.cwd, stdin=subprocess.PIPE)
+        else:
+            self.logger.warning(f'"{self.command}" start requested more than once for same Bash instance!')
+
+    def term_with_sudo(self, proc: psutil.Process) -> None:
+        self.logger.debug(f"SIGTERM {proc.pid} with sudo")
+        Bash(f"sudo kill {proc.pid}", sync=True).start_command()
+
+    def kill_with_sudo(self, proc: psutil.Process) -> None:
+        self.logger.debug(f"SIGKILL {proc.pid} with sudo")
+        Bash(f"sudo kill -9 {proc.pid}", sync=True).start_command()
+
+    def term(self, proc: psutil.Process) -> None:
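+        # Commands launched via sudo run as root, so an unprivileged parent must also use sudo to signal them.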
+        if "sudo" in self.command:
+            self.term_with_sudo(proc)
+        else:
+            proc.terminate()
+
+    def kill(self, proc: psutil.Process) -> None:
+        if "sudo" in self.command:
+            self.kill_with_sudo(proc)
+        else:
+            proc.kill()
+
+    def stop_single_proc(self, proc: psutil.Process) -> None:
+        self.logger.debug(f"Killing process {proc.pid}")
+        try:
+            self.logger.debug("Sending SIGTERM")
+            self.term(proc)
+            proc.wait(3)
+        except psutil.TimeoutExpired:
+            self.logger.error("SIGTERM timeout expired")
+            try:
+                self.logger.debug("Sending SIGKILL")
+                self.kill(proc)
+                proc.wait(3)
+            except psutil.TimeoutExpired:
+                self.logger.critical(f"SIGKILL timeout expired, could not kill pid  {proc.pid}")
+
+    def stop_command(self) -> None:
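+        # Terminate child processes first, then the root /bin/bash wrapper, so nothing is left orphaned.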
+        if self.command_is_running():
+            psutil_proc = psutil.Process(self.proc.pid)
+            suffix = f"{psutil_proc.pid} for command {self.command}"
+            self.logger.debug(f"Stopping children of {suffix}")
+            for child_proc in psutil_proc.children(recursive=True):
+                self.stop_single_proc(child_proc)
+            self.logger.debug(f"Killing root proc {suffix}")
+            self.stop_single_proc(psutil_proc)
+        else:
+            self.logger.warning(f'{self.command} stop requested while not running')
+
+    def finished_success(self) -> bool:
+        if self.proc is None:
+            return False
+        if not self.sync:
+            return not self.command_is_running() and self.proc.returncode == 0
+        return self.proc.returncode == 0