# NUbook Page on Mocap #339

**Draft** · willburgin wants to merge 16 commits into `main` from `burgin/mocap`
**Commits** (16, all by willburgin):

- `4b9953b` Create 06-mocap.mdx
- `1b5be4b` Added images from mocap
- `be9d371` fixed image error
- `d47d236` Fixed images again.
- `148bcd8` Added ground plane image.
- `c80aa23` Change image jpg to png
- `16d34dc` Push
- `df2cd14` Update 06-mocap.mdx
- `62d9de3` Update 06-mocap.mdx
- `bf43038` Update 06-mocap.mdx
- `5e3c0c4` Merge branch 'main' into burgin/mocap
- `b58d9c3` Added prettier formatting
- `7225741` Merge branch 'burgin/mocap' of https://github.com/NUbots/NUbook into …
- `660f8f7` Updated to include IP address information and configuration of network.
- `500d97f` Update 06-mocap.mdx
- `515cdf0` Update 06-mocap.mdx
---
section: Guides
chapter: Tools
title: Setting Up Motion Capture
description: Setup instructions for Motive 2.3.1
slug: /guides/tools/mocap-setup
authors:
  - Will Burgin (@willburgin)
---

[Motive](https://optitrack.com/software/motive/) is the software package used to run our OptiTrack motion capture system. This guide explains how to set up and operate the system using **Motive 2.3.1**. By the end of this guide, you should know how to use the motion capture system to obtain ground truth data.

---

## Prerequisites

<details>

<summary>Hardware</summary>

- **Dedicated MSI laptop** (Windows 10, Motive 2.3.1 pre-installed)
- **Motive license USB dongle** (must be plugged into the MSI laptop for Motive to launch)
- **Motion capture cameras** (10 units, connected via a PoE switch/modem)
- **Calibration wand**
- **Calibration square**
- **Reflective markers**

</details>

<details>

<summary>Network</summary>

- **Dedicated modem/router** for the motion capture system (isolated network)
- Ethernet cables for connecting the cameras, modem, and laptop

</details>

---

## Connecting the Isolated Network

> _Note: This step may not need to be repeated every time. If the motion capture network is already configured, skip to the next section._

1. Connect the **motion capture modem/router** to the ethernet wall ports.
2. Connect the **MSI laptop** to this modem via ethernet.
3. At the **main lab modem**, disconnect the **black ethernet cable** leading from the wall port, and connect the **cream coloured cable** to the wall port.
4. Next, unplug the **green cable** that connects the modem to the network hub, and connect the other end of the **cream coloured cable** in its place.
> **Review comment on lines +45 to +48:** We are cooked if the cable colours ever change. I think it is better to associate the cables with the devices they connect instead of just the colour.
<Alert type="warning">

Doing this temporarily removes ethernet access from the main lab network. Double-check that no one else is relying on the main network via ethernet before switching.

</Alert>

5. Once complete, the motion capture system is on its own isolated network. The MSI laptop and all cameras should now communicate exclusively through this network.

---

## Configuring Motive

1. Open the **MSI laptop** and boot into **Windows 10**.
2. Insert the **Motive USB license dongle** (if not already inserted).
3. Launch **Motive 2.3.1** (allow a few seconds for the license to be detected).
4. In the Motive interface, check the **camera list**. You should see **10 connected cameras**.
   - If cameras are missing, revisit the **Connecting the Isolated Network** step.

The initial view of the software should look something like the figure below. You can toggle between camera view and perspective view via the highlighted red box.

_(Image: initial view of the Motive software.)_

---

## Camera Calibration

Before recording any datasets, the camera system must be calibrated to ensure accurate tracking.

<Alert type="info">

Recalibrate **whenever a camera is moved or bumped**, even slightly.
Best practice: **calibrate before every data recording session**, as the process only takes a few minutes.

</Alert>

1. Clear the capture field of all objects.
2. In Motive, go to **Camera View** and click **Mask Visible**.

   - This masks out static background reflections.

<Alert type="warning">

Ensure no reflective markers you intend to use (e.g. the calibration wand) are visible before applying the mask.

</Alert>

3. In Motive, select **all 10 cameras** in the camera pane and click **Start Wanding**.
   - The 3D viewport will highlight red.
   - A calibration table will appear, showing the number of samples collected per camera. Aim for roughly equal counts across all cameras.
4. Power on the **calibration wand** (switch on the back) once you are on the field.
5. Move the wand smoothly through the capture volume.
   - Refer to [this tutorial video (at 1:26)](https://www.youtube.com/watch?v=TZrhw9SoeEI) for a demonstration.
6. Once all cameras have adequate coverage, stop the wanding process by powering off the wand before leaving the field. This ensures the cameras don't track the wand outside the desired capture volume.
<Alert type="info">

While you are performing the calibration, look at each camera: when it detects the calibration wand, a green ring will begin to glow around its LEDs. The completeness of this ring indicates how many unique samples the camera has received, which is important for calibration. This can be used as an indicator of when to stop wanding.

</Alert>

7. Click **Calculate** to finalise the calibration.
   - Watch the result message carefully. Aim for **Exceptional** calibration quality (green status), as shown in the image below.

_(Image: calibration result dialog showing Exceptional quality.)_

<Alert type="warning">

If the calibration quality is below _Exceptional_, repeat the wanding process with slower and wider coverage.

</Alert>
### Setting the Ground Plane

Following camera calibration, you may notice that the 3D view of the cameras within Motive is skewed. This is because the system does not know where to reference our ground plane. To fix this, we use the calibration square to set the ground plane.

1. Obtain the calibration square and place it in the middle of the field, with the Z direction facing the carpark 8 side of the ES building, as shown in the image below.

_(Image: calibration square placement on the field.)_

2. Select **Set Ground Plane**. You will be prompted to save the calibration file. Ensure that the directory is `Desktop->calibration` and hit save. You should see the 3D view update so that the cameras are now referenced from this plane.

The calibration process is now complete.

---
## Creating a Rigid Body

We are now ready to create our first **rigid body**. In our case, we can define a rigid body as _a physical object with fixed markers whose relative distances remain constant, ensuring it doesn't change shape during movement_.

We need a robot and some OptiTrack reflective markers to create a rigid body.

1. Place the reflective markers on the **torso** of the robot. It is recommended you place four on the **front** of the robot's torso and two on the **back**. Ensure they are **not** symmetrically placed and are reasonably spread apart.

<Alert type="info">

The torso is our key reference frame. We express the torso's pose in the world as **H**<sup>t</sup><sub>w</sub> (torso in the world frame).
By tracking the torso, we ensure all motion data is consistently expressed in the robot's primary body frame.

</Alert>

2. Place the robot in the centre of the field and run `keyboardwalk`.
3. Once the robot is standing, we can create a rigid body in Motive. To do so, highlight the markers that are visible on the robot, right click, hover over `Rigid Body` and select `Create From Selected Markers`.

<Alert type="warning">

Because we don't have 'optimal' lighting conditions in the laboratory, you may notice some _fake_ markers displayed in Motive. These may be caused by reflections from the robot's metal components. To differentiate between _real_ and _fake_ markers, move the robot in a way that minimises the reflections, or until the _fake_ markers disappear.

</Alert>

You have now successfully created a rigid body. You can alter the settings of the rigid body by highlighting all the markers encompassing it and viewing the `General Settings` tab that appears in the bottom right of the screen. Ensure that the `Streaming ID` is set to 1, and leave the other settings as default.

---
## Recording a Ground Truth Dataset

Before we can record a ground truth dataset, we must ensure that we are receiving messages from the motion capture system.

### IP Configuration

The robot, the computer running the binary, and the mocap system must all be on the same network. The desired network is **robocup-x**.
1. Connect to the robot using the portable screen and keyboard available in the lab.
2. Run `sudo ./robocupconfiguration`.
3. Change the network IP address to ensure that the three systems are on the same subnet.
4. Change the network to `robocup-x` and configure the network.
### Recording a Dataset

1. Navigate to `NatNet.yaml` and ensure `multicast_address` is set to `239.255.42.99`. Also ensure `dump_packets: false`, otherwise every packet will be dumped onto the robot during recording **(we don't want this)**.
2. Navigate to `SensorFilter.yaml` and change `use_ground_truth: true`.
3. Configure with `natnet`, build the code, and install it onto the robot.
4. Run the `natnet` role and view the console. If everything was successful, you should see `Connected to X.X.X.X (Motive 2.3.1.1) over NatNet 3.1`.
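The relevant fragments of the two config files from steps 1 and 2 might look like the following. This is a sketch: only the keys named above are shown, and the surrounding layout of the real files may differ.

```yaml
# NatNet.yaml (fragment)
multicast_address: 239.255.42.99
dump_packets: false

# SensorFilter.yaml (fragment)
use_ground_truth: true
```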
<Alert type="warning">

If you see `No ground truth data received, but use_ground_truth is true`, ensure the previous step was completed correctly.

</Alert>

<Alert type="info">

You should also test this within NUsight. Move the robot around and verify it is moving sensibly, given that it should be using the ground truth data.

</Alert>

We are now ready to record a ground truth dataset!
1. Navigate to `DataLogging.yaml` and add `message.localisation.RobotPoseGroundTruth: true`.

<Alert type="info">

If you want to record a dataset that compares ground truth to another module, you must set the messages your module requires to `true`. You will also need to change `use_ground_truth: false` to ensure we don't override our odometry messages with ground truth data.

</Alert>

2. Navigate to the role you will be running, and add `input::NatNet`, `localisation::Mocap` and `support::logging::DataLogging`.
3. Configure your role, build, and install it on the robot.
4. Run the binary, then extract the generated nbs file and its `.idx` file from the `logging` folder using `scp`.
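For step 1 above, the added flag might look like this in `DataLogging.yaml`. This is a sketch only: the real file contains other message flags, and where the flag sits relative to any parent keys depends on the file's existing layout.

```yaml
# DataLogging.yaml (fragment) -- placement is illustrative; put the flag
# wherever the existing message flags live in the real file.
message.localisation.RobotPoseGroundTruth: true
```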
> **Review comment:** Can we add photos for these? Also an image of the USB dongle would be nice. These are things that are easily moved and can be lost; if we need to find them, a picture should help.