diff --git a/assets/tutorials/first-project/awaiting-setup.png b/assets/tutorials/first-project/awaiting-setup.png new file mode 100644 index 0000000000..73e40edd17 Binary files /dev/null and b/assets/tutorials/first-project/awaiting-setup.png differ diff --git a/assets/tutorials/first-project/camera-config-json.png b/assets/tutorials/first-project/camera-config-json.png new file mode 100644 index 0000000000..fee9d49932 Binary files /dev/null and b/assets/tutorials/first-project/camera-config-json.png differ diff --git a/assets/tutorials/first-project/camera-config-panel.png b/assets/tutorials/first-project/camera-config-panel.png new file mode 100644 index 0000000000..94c0a6707e Binary files /dev/null and b/assets/tutorials/first-project/camera-config-panel.png differ diff --git a/assets/tutorials/first-project/camera-test-panel.png b/assets/tutorials/first-project/camera-test-panel.png new file mode 100644 index 0000000000..69dcf2adbc Binary files /dev/null and b/assets/tutorials/first-project/camera-test-panel.png differ diff --git a/assets/tutorials/first-project/data-service-search.png b/assets/tutorials/first-project/data-service-search.png new file mode 100644 index 0000000000..847489534a Binary files /dev/null and b/assets/tutorials/first-project/data-service-search.png differ diff --git a/assets/tutorials/first-project/fleet-add-machine.png b/assets/tutorials/first-project/fleet-add-machine.png new file mode 100644 index 0000000000..1a367c295f Binary files /dev/null and b/assets/tutorials/first-project/fleet-add-machine.png differ diff --git a/assets/tutorials/first-project/model-service-config.png b/assets/tutorials/first-project/model-service-config.png new file mode 100644 index 0000000000..3a24611742 Binary files /dev/null and b/assets/tutorials/first-project/model-service-config.png differ diff --git a/assets/tutorials/first-project/select-model-dialog.png b/assets/tutorials/first-project/select-model-dialog.png new file mode 100644 index 
0000000000..3fdf4c7bf1 Binary files /dev/null and b/assets/tutorials/first-project/select-model-dialog.png differ diff --git a/assets/tutorials/first-project/sim-config-page.png b/assets/tutorials/first-project/sim-config-page.png new file mode 100644 index 0000000000..d033c96884 Binary files /dev/null and b/assets/tutorials/first-project/sim-config-page.png differ diff --git a/assets/tutorials/first-project/sim-config-running.png b/assets/tutorials/first-project/sim-config-running.png new file mode 100644 index 0000000000..a9c829574e Binary files /dev/null and b/assets/tutorials/first-project/sim-config-running.png differ diff --git a/assets/tutorials/first-project/sim-viewer-config-button.png b/assets/tutorials/first-project/sim-viewer-config-button.png new file mode 100644 index 0000000000..99588ba243 Binary files /dev/null and b/assets/tutorials/first-project/sim-viewer-config-button.png differ diff --git a/assets/tutorials/first-project/sim-viewer.png b/assets/tutorials/first-project/sim-viewer.png new file mode 100644 index 0000000000..1410640442 Binary files /dev/null and b/assets/tutorials/first-project/sim-viewer.png differ diff --git a/assets/tutorials/first-project/viam-app-live.png b/assets/tutorials/first-project/viam-app-live.png new file mode 100644 index 0000000000..3b0632e33e Binary files /dev/null and b/assets/tutorials/first-project/viam-app-live.png differ diff --git a/assets/tutorials/first-project/vision-data-capture.png b/assets/tutorials/first-project/vision-data-capture.png new file mode 100644 index 0000000000..8c6e7de446 Binary files /dev/null and b/assets/tutorials/first-project/vision-data-capture.png differ diff --git a/assets/tutorials/first-project/vision-service-config.png b/assets/tutorials/first-project/vision-service-config.png new file mode 100644 index 0000000000..19aa5373cc Binary files /dev/null and b/assets/tutorials/first-project/vision-service-config.png differ diff --git 
a/assets/tutorials/first-project/vision-service-created.png b/assets/tutorials/first-project/vision-service-created.png new file mode 100644 index 0000000000..bb81c2ffca Binary files /dev/null and b/assets/tutorials/first-project/vision-service-created.png differ diff --git a/docs/operate/hello-world/first-project/_index.md b/docs/operate/hello-world/first-project/_index.md index 3a4704cc2d..cd98053ff3 100644 --- a/docs/operate/hello-world/first-project/_index.md +++ b/docs/operate/hello-world/first-project/_index.md @@ -9,7 +9,7 @@ description: "Build a complete quality inspection system with Viam—from camera date: "2025-01-30" --- -**Time:** ~60 minutes +**Time:** ~45 minutes ## Before You Begin @@ -36,9 +36,9 @@ In this tutorial you will work through a series of tasks that are common to many | Part | Time | What You'll Do | | ---------------------------------- | ------- | ------------------------------------------------------ | -| [Part 1: Vision Pipeline](part-1/) | ~15 min | Set up camera, ML model, and vision service | -| [Part 2: Data Capture](part-2/) | ~10 min | Configure automatic image capture and cloud sync | -| [Part 3: Control Logic](part-3/) | ~15 min | Generate module, write inspection logic, test from CLI | +| [Part 1: Vision Pipeline](part-1/) | ~10 min | Set up camera, ML model, and vision service | +| [Part 2: Data Capture](part-2/) | ~5 min | Configure automatic image capture and cloud sync | +| [Part 3: Control Logic](part-3/) | ~10 min | Generate module, write inspection logic, test from CLI | | [Part 4: Deploy a Module](part-4/) | ~10 min | Deploy module, configure detection data capture | | [Part 5: Productize](part-5/) | ~10 min | Build monitoring dashboard with Teleop | @@ -46,12 +46,11 @@ In this tutorial you will work through a series of tasks that are common to many -**[Part 1: Vision Pipeline](part-1/)** (~15 min) +**[Part 1: Vision Pipeline](part-1/)** (~10 min) -- [1.1 Verify Your Machine is Online](part-1/#11-verify-your-machine-is-online) -- [1.2 Locate Your Machine
Part](part-1/#12-locate-your-machine-part) -- [1.3 Configure the Camera](part-1/#13-configure-the-camera) -- [1.4 Test the Camera](part-1/#14-test-the-camera) -- [1.5 Add an ML Model Service](part-1/#15-add-an-ml-model-service) -- [1.6 Add a Vision Service](part-1/#16-add-a-vision-service) +- [1.1 Find Your Machine Part](part-1/#11-find-your-machine-part) +- [1.2 Configure the Camera](part-1/#12-configure-the-camera) +- [1.3 Test the Camera](part-1/#13-test-the-camera) +- [1.4 Add an ML Model Service](part-1/#14-add-an-ml-model-service) +- [1.5 Add a Vision Service](part-1/#15-add-a-vision-service) -**[Part 2: Data Capture](part-2/)** (~10 min) +**[Part 2: Data Capture](part-2/)** (~5 min) diff --git a/docs/operate/hello-world/first-project/gazebo-setup.md b/docs/operate/hello-world/first-project/gazebo-setup.md index f6ca59b172..ac6a12593d 100644 --- a/docs/operate/hello-world/first-project/gazebo-setup.md +++ b/docs/operate/hello-world/first-project/gazebo-setup.md @@ -13,166 +13,66 @@ This guide walks you through setting up the Gazebo simulation used in the [Your ## Prerequisites - **Docker Desktop** installed and running -- A free [Viam account](https://app.viam.com) - ~5GB disk space for the Docker image -## Step 1: Build the Docker Image +## Step 1: Pull the Docker Image The simulation runs in a Docker container with Gazebo Harmonic and viam-server pre-installed. -**Clone the simulation repository:** - -```bash -git clone https://github.com/viamrobotics/can-inspection-simulation.git -cd can-inspection-simulation -``` - -**Build the Docker image:** - ```bash -docker build -t gz-harmonic-viam . +docker pull ghcr.io/viamrobotics/can-inspection-simulation:latest-local ``` -This takes 5-10 minutes depending on your internet connection. - -## Step 2: Create a Machine in Viam - -1. Go to [app.viam.com](https://app.viam.com) and log in -2. Click the **Locations** tab -3. Click **+ Add machine** -4. Name it `inspection-station-1` -5. Click **Add machine** - -## Step 3: Create a credentials file - -1.
Click the **Awaiting setup** button -2. Click **Machine cloud credentials** to copy your machine's credentials -3. In the `can-inspection-simulation` directory, create a file called `station1-viam.json` -4. Paste your machine's credentials into this file and save - -## Step 4: Start the Container +This downloads the pre-built image, which takes about a minute depending on your internet connection. -{{< tabs >}} -{{% tab name="Mac/Linux" %}} +## Step 2: Start the Container ```bash docker run --name gz-station1 -d \ -p 8080:8080 -p 8081:8081 -p 8443:8443 \ - -v "$(pwd)/station1-viam.json:/etc/viam.json" \ - gz-harmonic-viam + ghcr.io/viamrobotics/can-inspection-simulation:latest-local ``` -{{% /tab %}} -{{% tab name="Windows (PowerShell)" %}} - -```powershell -docker run --name gz-station1 -d ` - -p 8080:8080 -p 8081:8081 -p 8443:8443 ` - -v "${PWD}\station1-viam.json:/etc/viam.json" ` - gz-harmonic-viam -``` - -{{% /tab %}} -{{< /tabs >}} - -## Step 5: Verify the Setup - -**Check container logs:** - -```bash -docker logs gz-station1 -``` - -Look for: - -- "Can Inspection Station 1 Running!" -- viam-server startup messages - -**View the simulation:** +## Step 3: Verify the Simulation Open your browser to `http://localhost:8081` -You should see a web-based 3D view of the inspection station with: +You should see two live camera feeds from the inspection station: -- A conveyor belt -- Cans moving along the belt -- An overhead camera view +{{}} -{{}} +## Step 4: Create a Machine in Viam -**Verify machine connection:** - -1. Go to [app.viam.com](https://app.viam.com) -2. Click on `inspection-station-1` -3. 
The status indicator should show **Live** (in green) - -## Troubleshooting - -{{< expand "Container won't start" >}} -**Check if ports are in use:** - -{{< tabs >}} -{{% tab name="Mac/Linux" %}} - -```bash -lsof -i :8080 -lsof -i :8081 -``` - -{{% /tab %}} -{{% tab name="Windows (PowerShell)" %}} - -```powershell -netstat -ano | findstr :8080 -netstat -ano | findstr :8081 -``` - -{{% /tab %}} -{{< /tabs >}} +1. Go to [app.viam.com](https://app.viam.com) and create a free account or log in +2. Click the **Locations** tab +3. Click **+ Add machine**, name it `inspection-station-1`, and click **Add machine** -If something is using these ports, stop it or use different port mappings. -{{< /expand >}} + {{}} -{{< expand "Machine shows Offline in Viam" >}} +## Step 5: Configure Machine Credentials -1. Check container is running: `docker ps` -2. Check logs for errors: `docker logs gz-station1` -3. Verify credentials in your config file match the Viam app -4. Try restarting: `docker restart gz-station1` - {{< /expand >}} +1. In the Viam app, click the **Awaiting setup** button on your new machine and click **Machine cloud credentials** to copy the credentials JSON -{{< expand "Simulation viewer is blank or slow" >}} + {{}} -- The web viewer requires WebGL support -- Try a different browser (Chrome usually works best) -- Check your system has adequate resources (4GB+ RAM recommended) - {{< /expand >}} +2. In the simulation viewer, click the **Configuration** button in the upper right corner -## Container Management + {{}} -**Stop the container:** +3. Paste your machine's credentials into the **Viam Configuration (viam.json)** text area and click **Update and Restart** -```bash -docker stop gz-station1 -``` + {{}} -**Start a stopped container:** - -```bash -docker start gz-station1 -``` + A green banner will confirm the configuration was updated successfully and the status indicator will change to **Running**. 
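For reference, the machine cloud credentials you paste are a small JSON document (the machine's `viam.json`). Its exact contents come from the Viam app; this is only a rough sketch of the shape, with placeholder values:

```json
{
  "cloud": {
    "app_address": "https://app.viam.com:443",
    "id": "<machine-part-id>",
    "secret": "<machine-part-secret>"
  }
}
```

Treat the `secret` value like a password: anyone who has this file can connect to Viam as your machine.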
-**Remove the container (to recreate):** + {{}} -```bash -docker rm gz-station1 -``` ## Step 6: Verify Machine Connection -**View logs:** +Go back to your machine's page in the Viam app. +The status indicator should now show **Live**. -```bash -docker logs -f gz-station1 -``` +{{}} ## Ready to Continue diff --git a/docs/operate/hello-world/first-project/part-1.md b/docs/operate/hello-world/first-project/part-1.md index 5aa8718717..daa106d2ba 100644 --- a/docs/operate/hello-world/first-project/part-1.md +++ b/docs/operate/hello-world/first-project/part-1.md @@ -16,7 +16,8 @@ date: "2025-01-30" ## Prerequisites -Before starting this tutorial, you need the can inspection simulation running. Follow the **[Gazebo Simulation Setup Guide](../gazebo-setup/)** to: +Before starting this tutorial, you need the can inspection simulation running. +Follow the **[Gazebo Simulation Setup Guide](../gazebo-setup/)** to: -1. Build the Docker image with Gazebo Harmonic +1. Pull the pre-built Docker image with Gazebo Harmonic 2. Create a machine in Viam and get credentials @@ -25,58 +26,55 @@ Before starting this tutorial, you need the can inspection simulation running. F -Once you see "Can Inspection Simulation Running!" in the container logs and your machine shows **Live** in the Viam app, return here to continue. +Once you can see the simulation's camera feeds at `http://localhost:8081` and your machine shows **Live** in the Viam app, return here to continue. {{< alert title="What you're working with" color="info" >}} -The simulation runs Gazebo Harmonic inside a Docker container. It simulates a conveyor belt with cans (some dented) passing under an inspection camera. viam-server runs on the Linux virtual machine inside the container and connects to Viam's cloud, just like it would on a physical machine.
+Everything you configure in the Viam app applies to the simulated hardware. {{< /alert >}} -## 1.1 Verify Your Machine is Online +## 1.1 Find Your Machine Part -If you followed the [setup guide](../gazebo-setup/), your machine should already be online. - -1. Open [app.viam.com](https://app.viam.com) (the "Viam app") -2. Navigate to your machine (for example, `inspection-station-1`) -3. Verify the status indicator shows **Live** -4. Click the **Configure** tab if not already selected +In the Viam app, make sure the **Configure** tab for your machine is selected. {{}} -Ordinarily, after creating a machine in Viam, you would download and install `viam-server` together with the cloud credentials for your machine. For this tutorial, we've already installed `viam-server` and launched it in the simulation Docker container. - -## 1.2 Locate Your Machine Part - -Your machine is online but empty. To configure your machine, you will add components and services to your machine part in the Viam app. Your machine part is the compute hardware for your robot. This might be a PC, Mac, Raspberry Pi, or another computer. +Your machine is online but empty. +To configure it, you'll add components and services to your **machine part**. +A machine part is the compute hardware for your robot. +In this tutorial, your machine part is a virtual machine running Linux in the Docker container. -In the case of this tutorial, your machine part is a virtual machine running Linux in the Docker container. +Find `inspection-station-1-main` in the **Configure** tab. -Find `inspection-station-1-main` in the **Configuration** tab. - -## 1.3 Configure the Camera +## 1.2 Configure the Camera You'll now add the camera as a _component_. {{< expand "What's a component?" >}} -In Viam, a **component** is any piece of hardware: cameras, motors, arms, sensors, grippers. You configure components by declaring what they are, and Viam handles the drivers and communication. 
+In Viam, a **component** is any piece of hardware: cameras, motors, arms, sensors, grippers. +You configure components by declaring what they are, and Viam handles the drivers and communication. -**The power of Viam's component model:** All cameras expose the same API—USB webcams, Raspberry Pi camera modules, IP cameras, simulated cameras. Your application code uses the same `GetImages()` method regardless of the underlying hardware. Swap hardware by changing configuration, not code. +**The power of Viam's component model:** All cameras expose the same API—USB webcams, Raspberry Pi camera modules, IP cameras, simulated cameras. +Your application code uses the same `GetImages()` method regardless of the underlying hardware. +Swap hardware by changing configuration, not code. {{< /expand >}} ### Add a camera component To add the camera component to your machine part: -1. Click the **+** button and select **Component or service** -2. Click **Camera** -3. Search for `gz-camera` -4. Select `gz-camera:rgb-camera` -5. Click **Add module** -6. Enter `inspection-cam` for the name +1. Click the **+** button and select **Configuration block** +2. Search for `gz-camera` +3. Select `gz-camera:rgb-camera` +4. Click **Add component** +5. Enter `inspection-cam` for the name +6. Click **Add component** -{{< expand "Why were two items added to my machine part?" >}} -After adding the camera component, you will see two items appear under your machine part. One is the actual camera hardware (`inspection-cam`) that you will use through the Viam camera API. The other is the software module (`gz-camera`) that implements this API for the specific model of camera you are using. All components that are supported through modules available in the Viam registry will appear this way in the **Configuration** tab. For built-in components, such as webcams, you will not also see a module appear in the configuration. 
-{{< /expand >}} +{{}} ### Configure the camera -To configure your camera component to work with the camera in the simulation, you need to specify the correct camera ID. Most components require a few configuration parameters. +To configure your camera component to work with the camera in the simulation, you need to specify the correct camera ID. +Most components require a few configuration parameters. 1. In the **JSON Configuration** section, add: @@ -88,13 +86,20 @@ To configure your camera component to work with the camera in the simulation, yo 2. Click **Save** in the top right +{{}} + {{< alert title="What happened behind the scenes" color="info" >}} -You declared "this machine has a camera called `inspection-cam`" by editing the configuration in the Viam app. When you clicked **Save**, `viam-server` loaded the camera module, added a camera component, and made the camera available through Viam's standard camera API. Software you write, other services, and user interface components will use the API to get the images they need. Using the API as an abstraction means that everything still works if you swap cameras. +You declared "this machine has an attached camera called `inspection-cam`" by editing the configuration in the Viam app. +When you clicked **Save**, `viam-server` loaded the camera module, which implements the camera API for the specific camera model you are using. +It also added a camera component and made the camera available through Viam's standard camera API. +Software you write, other services, and user interface components will use the API to get the images they need. +Using the API as an abstraction means that everything still works if you swap cameras. {{< /alert >}} -## 1.4 Test the Camera +## 1.3 Test the Camera -Verify the camera is working. Every component in Viam has a built-in test card right in the configuration view. +Verify the camera is working. +Every component in Viam has a built-in test card right in the configuration view.
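Everything you configure through the UI is stored in your machine's JSON configuration. As a rough sketch (following Viam's component schema; exact field names may differ from what your machine shows, and the `attributes` object holds the camera ID you entered in the **JSON Configuration** step), the camera's entry looks something like:

```json
{
  "name": "inspection-cam",
  "model": "gz-camera:rgb-camera",
  "type": "camera",
  "attributes": {}
}
```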
### Open the test panel @@ -102,39 +107,36 @@ Verify the camera is working. Every component in Viam has a built-in test card r 2. Look for the **Test** section at the bottom of the camera's configuration panel 3. Click **Test** to expand the camera's test card -The camera component test card uses the camera API to add an image feed to the Viam app, enabling you to determine whether your camera is working. You should see a live video feed from the simulated camera. This is an overhead view of the conveyor/staging area. +The camera component test card uses the camera API to add an image feed to the Viam app, enabling you to determine whether your camera is working. +You should see a live video feed from the simulated camera. +This is an overhead view of the conveyor/staging area. + +{{}} {{< alert title="Checkpoint" color="success" >}} -Your camera is working. You can stream video and capture images from the simulated inspection station. +Your camera is working. +You can stream video and capture images from the simulated inspection station. {{< /alert >}} -## 1.5 Add an ML Model Service - -Now you'll add machine learning to run inference on your camera feed. You'll configure two services: - -1. **ML model service**—Loads a trained model for the inference task -2. **Vision service**—Connects the camera to the model and returns detections +## 1.4 Add an ML Model Service -{{< expand "Components versus services" >}} +Now you'll add machine learning to run inference on your camera feed. +You'll configure two services: -- **Components** are hardware: cameras, motors, arms -- **Services** are capabilities: vision (ML inference), motion (arm kinematics), custom control logic - -Services often _use_ components. A **vision service** takes images from a camera, runs them through an ML model, and returns structured results—detections with bounding boxes and labels, or classifications with confidence scores. 
-The **ML model service** loads a trained model (TensorFlow, ONNX, or PyTorch) and exposes an `Infer()` method. The vision service handles the rest: converting camera images to tensors, calling the model, and interpreting outputs into usable detections. -{{< /expand >}} +- **ML model service**—Loads a trained model for the inference task +- **Vision service**—Connects the camera to the ML model and returns detections ### Create the ML model service -1. Click the **Configure** tab -2. Click **+** next to your machine part -3. Select **Component or service** -4. Select **ML model** -5. Select `TFLite CPU` -6. Click **Add module** -7. Name it `model-service` -8. Click **Create** +1. Click **+** next to your machine part +2. Select **Configuration block** +3. Search for `tflite` +4. Select `tflite_cpu/tflite_cpu` +5. Click **Add component** +6. Name it `model-service` +7. Click **Add component** + +{{}} ### Select a model from the registry @@ -142,25 +144,32 @@ Configure the `model-service` ML model service you just included in your configu 1. In the `model-service` configuration panel, click **Select model** 2. Search for `can-defect-detection` and select it from the list (a model that classifies cans as PASS or FAIL based on defect detection) + + {{}} + 3. Click **Choose** to save the model selection 4. Click **Save** in the upper right corner to save your configuration {{< alert title="Your own models" color="tip" >}} -For a different application, you'd train a model on your specific data and upload it to the registry. The registry handles versioning and deployment of ML models across your fleet. +For a different application, you'd train a model on your specific data and upload it to the registry. +The registry handles versioning and deployment of ML models across your fleet. {{< /alert >}} -## 1.6 Add a Vision Service +## 1.5 Add a Vision Service Now add a vision service that connects your camera to the ML model service. ### Create the vision service 1.
Click **+** next to your machine part -2. Select **Component or service** +2. Select **Configuration block** 3. Search for `vision` -4. Select **vision / ML Model** -5. Name it `vision-service` -6. Click **Create** +4. Select **mlmodel** +5. Click **Add component** +6. Name it `vision-service` +7. Click **Add component** + +{{}} ### Link the camera and model in the vision service @@ -170,6 +179,8 @@ Now add a vision service that connects your camera to the ML model service. 4. Find the **Attributes** section and set **Minimum confidence threshold** to 0.75 5. Click **Save** in the upper right corner +{{}} + ### Test the vision service 1. Find the **Test** section at the bottom of the `vision-service` configuration panel @@ -180,16 +191,16 @@ Now add a vision service that connects your camera to the ML model service. {{}} -{{< alert title="What you've built" color="info" >}} -A complete ML inference pipeline. The vision service grabs an image from the camera, runs it through the TensorFlow Lite model, and returns structured detection results. This same pattern works for any ML task—object detection, classification, segmentation—you just swap the model. -{{< /alert >}} - {{< alert title="Checkpoint" color="success" >}} -You've configured a complete ML inference pipeline including a camera, model service, and vision service through the Viam app. The system can detect defective cans. Next, you'll set up continuous data capture so every detection is recorded and queryable. -{{< /alert >}} +You've configured a complete ML inference pipeline that can detect defective cans. + +The ML model service loads a trained model and exposes an `Infer()` method, while the vision service handles the rest—grabbing images from the camera, running them through the model, and returning structured detections with bounding boxes, labels, and confidence scores. + +This pattern works for any ML task. 
+Swap the model for object detection, classification, or segmentation without changing the pipeline. +You can also swap one camera for another with one configuration change. -{{< alert title="Explore the JSON configuration" color="tip" >}} -Everything you configured through the UI is stored as JSON. Click **JSON** in the upper left of the Configure tab to see the raw configuration. You'll see your camera, ML model service, and vision service defined with their attributes. As configurations grow more complex, the JSON view helps you understand how components and services connect. +Next, you'll set up continuous data capture so every detection is recorded and queryable. {{< /alert >}} **[Continue to Part 2: Data Capture →](../part-2/)** diff --git a/docs/operate/hello-world/first-project/part-2.md b/docs/operate/hello-world/first-project/part-2.md index 4d13d2044d..9cf415c985 100644 --- a/docs/operate/hello-world/first-project/part-2.md +++ b/docs/operate/hello-world/first-project/part-2.md @@ -12,7 +12,7 @@ date: "2025-01-30" **Skills:** Data capture configuration, cloud sync, browsing captured data. -**Time:** ~10 min +**Time:** ~5 min -For inspection applications such as this one, monitoring defect detection is important both to ensure production line health and product quality. You want to ensure the vision model is detecting a very high percentage of defects and quickly detect any problems. +For inspection applications such as this one, monitoring defect detection is important to ensure both production line health and product quality. You want the vision model to detect a very high percentage of defects, and you want to spot any problems quickly. @@ -28,11 +28,16 @@ Data gets buffered locally, synced to the cloud at an interval you configure, an **Include the data service in your machine configuration:** 1. Click **+** next to **inspection-station-1-main** in the **Configure** tab -2. Click **Component or service** -3. Select **data management** -4. Name it `data-service` -5. Click **Create** -6. **Save** your updated machine configuration +2. Click **Configuration block** +3. Search for `data` +4. Select **builtin** (this is the built-in **Data Manager** service) + + {{}} + +5. Click **Add component** +6.
Name it `data-service` +7. Click **Add component** +8. **Save** your updated machine configuration The default configuration options for the data service are correct for our application so we can move on to capturing data from the vision service. @@ -50,10 +55,14 @@ The default configuration options for the data service are correct for our appli 1. In the **Data capture** section of the `vision-service` configuration panel you should now see a collapsible component labeled **Latest capture** with a day and time specified 2. Click on **Latest capture** and view the most recent image captured -Your machine is now capturing detection results and images every 2 seconds and syncing them to the Viam cloud application. Once synced to the cloud, the data is removed from your machine to free up storage. +Your machine is now capturing detection results and images every 2 seconds and syncing them to the Viam cloud application. +Once synced to the cloud, the data is removed from your machine to free up storage. + +{{}} {{< alert title="Tip" color="tip" >}} -Click **JSON** in the Configure tab to see how data capture settings appear in the raw configuration. Each component and service with data capture enabled has a `service_configs` entry containing `capture_methods`. +Click **JSON** in the Configure tab to see how data capture settings appear in the raw configuration. +Each component and service with data capture enabled has a `service_configs` entry containing `capture_methods`. {{< /alert >}} ## 2.2 View Captured Data @@ -86,10 +95,13 @@ Data capture is now running in the background: - Syncs to cloud automatically - Available for visual review and filtering -This foundation records everything your vision pipeline sees. In Part 3, you'll write custom control logic to act on detections. Later, you'll configure tabular data capture to enable SQL queries on detection results. +This foundation records everything your vision pipeline sees. 
+In Part 3, you'll write custom control logic to act on detections. +Later, you'll configure tabular data capture to enable SQL queries on detection results. {{< alert title="Checkpoint" color="success" >}} -Your system captures every detection as an image. Data syncs to the cloud where you can browse, filter, and review results. +Your system captures every detection as an image. +Data syncs to the cloud where you can browse, filter, and review results. {{< /alert >}} **[Continue to Part 3: Control Logic →](../part-3/)**
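The tip in section 2.1 mentions that each component or service with data capture enabled gets a `service_configs` entry containing `capture_methods`. As a rough sketch (the method name and field layout are illustrative — check the JSON view on your own machine for the exact shape the UI generates), the entry on `vision-service` looks something like:

```json
"service_configs": [
  {
    "type": "data_manager",
    "attributes": {
      "capture_methods": [
        {
          "method": "CaptureAllFromCamera",
          "capture_frequency_hz": 0.5,
          "additional_params": { "camera_name": "inspection-cam" }
        }
      ]
    }
  }
]
```

A frequency of 0.5 Hz corresponds to the one-capture-every-2-seconds behavior configured above.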