A CLI tool to render and execute Robot Framework and PyATS tests using Jinja templating. The framework supports two test execution engines:
- Robot Framework: Language-agnostic syntax with Jinja templating for dynamically rendered test suites
- PyATS: Cisco's Python-based test automation framework for network infrastructure validation
Both test types can be executed together (default) or independently using development flags.
$ nac-test --help
Usage: nac-test [OPTIONS]
A CLI tool to render and execute Robot Framework and PyATS tests using Jinja
templating.
Additional Robot Framework options can be passed at the end of the command to
further control test execution (e.g., --variable, --listener, --loglevel).
These are appended to the pabot invocation. Pabot-specific options and test
files/directories are not supported and will result in an error.
╭─ Options ────────────────────────────────────────────────────────────────────╮
│ * --data -d PATH Path to data YAML files. │
│ [env var: NAC_TEST_DATA] [required] │
│ * --templates -t DIRECTORY Path to test templates. │
│ [env var: NAC_TEST_TEMPLATES] │
│ [required] │
│ * --output -o DIRECTORY Path to output directory. │
│ [env var: NAC_TEST_OUTPUT] [required]│
│ --filters -f DIRECTORY Path to Jinja filters. │
│ [env var: NAC_TEST_FILTERS] │
│ --tests DIRECTORY Path to Jinja tests. │
│ [env var: NAC_TEST_TESTS] │
│ --include -i TEXT Selects test cases by tag (include). │
│ [env var: NAC_TEST_INCLUDE] │
│ --exclude -e TEXT Selects test cases by tag (exclude). │
│ [env var: NAC_TEST_EXCLUDE] │
│ --processes INTEGER Number of parallel processes. │
│ [env var: NAC_TEST_PROCESSES] │
│ --render-only Only render tests without executing. │
│ [env var: NAC_TEST_RENDER_ONLY] │
│ --dry-run Dry run flag (robot dry run mode). │
│ [env var: NAC_TEST_DRY_RUN] │
│ --pyats [DEV] Run only PyATS tests. │
│ [env var: NAC_TEST_PYATS] │
│ --robot [DEV] Run only Robot Framework tests.│
│ [env var: NAC_TEST_ROBOT] │
│ --max-parallel-devices INTEGER Max devices for parallel SSH/D2D. │
│ [env var: NAC_TEST_MAX_PARALLEL...] │
│ --minimal-reports Reduce HTML report size (80-95%). │
│ [env var: NAC_TEST_MINIMAL_REPORTS] │
│ --diagnostic Wrap execution with diagnostic │
│ collection script for troubleshooting│
│ --verbose Enables verbose output for nac-test, │
│ Robot and PyATS execution. │
│ [env var: NAC_TEST_VERBOSE] │
│ --merged-data-file… -m TEXT Filename for merged data model. │
│ [default: merged_data_model_test...] │
│ --loglevel -l [DEBUG|...] Log level. [default: WARNING] │
│ --version Display version number. │
│ --help Show this message and exit. │
╰──────────────────────────────────────────────────────────────────────────────╯
- Data Merging: All YAML files from --data paths are merged into a single data model
- Test Discovery: The framework discovers both Robot templates (.robot, .j2) and PyATS tests (.py) in the --templates directory
- Robot Rendering: Jinja templates are rendered using the merged data model
- Test Execution: Both Robot Framework and PyATS tests execute in parallel
- Report Generation: HTML reports and artifacts are generated in the --output directory
For Robot Framework tests, Pabot executes test suites in parallel. The --skiponfailure non-critical argument is used by default, meaning failed tests with a non-critical tag show up as "skipped" in the final report.
Platform Requirements:
- Linux: Python 3.10 or higher
- macOS: Python 3.12 or higher (earlier versions have known incompatibilities)
- Windows: Python 3.10 or higher, Robot tests only
Don't have the right Python version? See Python 3 Installation & Setup Guide, or install using:
brew install python@3.12
uv python install 3.12
pyenv install 3.12
nac-test can be installed in a virtual environment using pip or uv:
# Using pip
pip install nac-test
# Using uv (recommended, also for install performance reasons)
uv tool install nac-test

The following Robot libraries are included with nac-test:
- robotframework-requests
- robotframework-jmespath
- robotframework-jsonlibrary
- robotframework-pabot for parallel test execution
Any other libraries can of course be added via pip or uv.
When working with feature branches or pre-release versions that aren't yet published to PyPI, you must install packages in editable mode from local source. This is required because pip install nac-test only works for released versions on PyPI.
- Python 3.10+
- uv installed (Installation Guide)
- Local clones of the required repositories
For PyATS-based testing, you need both packages:
| Package | Purpose |
|---|---|
| nac-test | Core test orchestration framework |
| nac-test-pyats-common | Architecture-specific adapters (ACI, SD-WAN, Catalyst Center) - required for PyATS tests |
From a workspace containing both repositories:
cd /path/to/testing-for-nac # or your workspace root
# Install both packages in editable mode (order matters - nac-test first)
uv pip install -e ./nac-test -e ./nac-test-pyats-common

Or install them separately:
# 1. Install nac-test (core framework) first
cd /path/to/nac-test
uv pip install -e .
# 2. Then install nac-test-pyats-common (depends on nac-test)
cd /path/to/nac-test-pyats-common
uv pip install -e .

To include testing, linting, and type-checking tools:
cd /path/to/nac-test
uv pip install -e ".[dev]"
cd /path/to/nac-test-pyats-common
uv pip install -e ".[dev]"

The [dev] extra includes pytest, ruff, mypy, bandit, and test coverage tools.
If you're working in an architecture-specific repository (e.g., nac-sdwan-terraform, nac-catalystcenter-terraform):
cd /path/to/nac-sdwan-terraform # or nac-catalystcenter-terraform
# Install both frameworks from relative paths
uv pip install -e ../nac-test -e ../nac-test-pyats-common

- Editable mode (-e flag): Code changes take effect immediately without reinstalling
- Installation order matters: Always install nac-test before nac-test-pyats-common
- Both packages required: PyATS tests import from both nac_test and nac_test_pyats_common
- Feature branches: Use editable installs since unreleased versions aren't on PyPI
uv pip list | grep nac-test
# Should show both packages with local file paths:
# nac-test X.Y.Z /path/to/nac-test
# nac-test-pyats-common  X.Y.Z  /path/to/nac-test-pyats-common

The file paths confirm editable installations from local source.
Values in YAML files can be encrypted using Ansible Vault. This requires Ansible (ansible-vault command) to be installed and the following two environment variables to be defined:
export ANSIBLE_VAULT_ID=dev
export ANSIBLE_VAULT_PASSWORD=Password123
ANSIBLE_VAULT_ID is optional and is simply omitted if not defined.
The !env YAML tag can be used to read values from environment variables.
root:
  name: !env VAR_NAME

data.yaml located in ./data folder:
---
root:
  children:
    - name: ABC
      param: value
    - name: DEF
      param: value

test1.robot located in ./templates folder:
*** Settings ***
Documentation    Test1

*** Test Cases ***
{% for child in root.children | default([]) %}
Test {{ child.name }}
    Should Be Equal    {{ child.param }}    value
{% endfor %}
After running nac-test with the following parameters:
nac-test --data ./data --templates ./templates --output ./tests

The following rendered Robot test suite can be found in the ./tests folder:
*** Settings ***
Documentation    Test1

*** Test Cases ***
Test ABC
    Should Be Equal    value    value
Test DEF
    Should Be Equal    value    value
As well as the test results and reports:
$ tree -L 2 tests
tests
├── combined_summary.html
├── robot_results/
│   ├── log.html
│   ├── output.xml
│   ├── report.html
│   ├── summary_report.html
│   └── xunit.xml
├── log.html -> robot_results/log.html
├── output.xml -> robot_results/output.xml
├── report.html -> robot_results/report.html
├── xunit.xml -> robot_results/xunit.xml
├── pabot_results/
└── test1.robot

Note: Root-level log.html, output.xml, report.html, and xunit.xml are symbolic links to the corresponding files in robot_results/ for backward compatibility.
In addition to Robot Framework, nac-test supports PyATS-based tests for network infrastructure validation. PyATS tests are Python files that inherit from architecture-specific base classes and validate network state against the data model.
PyATS tests support multiple Cisco architectures, each requiring specific environment variables:
| Architecture | Controller | Environment Variables |
|---|---|---|
| ACI | APIC | ACI_URL, ACI_USERNAME, ACI_PASSWORD |
| SD-WAN | SD-WAN Manager | SDWAN_URL, SDWAN_USERNAME, SDWAN_PASSWORD |
| Catalyst Center | Catalyst Center | CC_URL, CC_USERNAME, CC_PASSWORD |
For D2D (Direct-to-Device) SSH tests, IOS-XE device credentials are also required:
| Test Type | Environment Variables |
|---|---|
| SD-WAN D2D | IOSXE_USERNAME, IOSXE_PASSWORD (in addition to SD-WAN Manager credentials) |
| Catalyst Center D2D | IOSXE_USERNAME, IOSXE_PASSWORD (in addition to Catalyst Center credentials) |
PyATS tests are organized into two categories:
| Type | Location | Description |
|---|---|---|
| API Tests | tests/ (not under d2d/) | Tests against controllers via REST API |
| D2D Tests | tests/d2d/ | Direct-to-Device SSH tests against network devices |
# Set environment variables for your architecture (SD-WAN example)
export SDWAN_URL=https://sdwan-manager.example.com
export SDWAN_USERNAME=admin
export SDWAN_PASSWORD=yourpassword
# For D2D/SSH tests, also set IOS-XE device credentials
export IOSXE_USERNAME=admin
export IOSXE_PASSWORD=devicepassword
# Run all tests (Robot + PyATS combined)
nac-test -d ./data -t ./tests -o ./output
# Run only PyATS tests (development mode)
nac-test -d ./data -t ./tests -o ./output --pyats
# Run only Robot Framework tests (development mode)
nac-test -d ./data -t ./tests -o ./output --robot

Note: --pyats and --robot are experimental development flags that may be removed in later versions.
PyATS tests generate:
- HTML Reports: Detailed test results with pass/fail status per verification item
- JSON Results: Machine-readable results for CI/CD integration
- Archive Files: Compressed test artifacts (.zip)
Example output structure:
$ tree -L 3 results
results
├── combined_summary.html
├── robot_results/
└── pyats_results/
    ├── api/
    │   ├── html_reports/
    │   └── results.json
    └── d2d/
        ├── html_reports/
        └── results.json

Before test execution, nac-test merges all YAML data files into a single data model. This merged file serves as the single source of truth for both Robot Framework templating and PyATS test validation.
- All files from --data paths are recursively loaded
- YAML structures are deep-merged (later files override earlier ones)
- The merged result is written to the output directory
- Both Robot and PyATS tests reference this merged data
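The deep-merge behavior can be sketched as follows. This is an illustrative model of the merge semantics, not nac-test's actual code; edge cases such as list handling may differ in the real implementation.

```python
def deep_merge(base, override):
    """Recursively merge nested dicts; scalar values from `override` win."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(merged.get(key), dict) and isinstance(value, dict):
            merged[key] = deep_merge(merged[key], value)  # recurse into maps
        else:
            merged[key] = value  # later file overrides earlier one
    return merged

# Two data files, loaded in order:
file_a = {"root": {"name": "lab", "timeout": 30}}
file_b = {"root": {"timeout": 60, "retries": 2}}
print(deep_merge(file_a, file_b))
# {'root': {'name': 'lab', 'timeout': 60, 'retries': 2}}
```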
By default, the merged file is named merged_data_model_test_variables.yaml. You can customize this:
nac-test -d ./data -t ./tests -o ./output -m my_custom_data.yaml

The merged data model is available to:
- Robot templates: Via Jinja templating during the render phase
- PyATS tests: Via the MERGED_DATA_MODEL_TEST_VARIABLES_FILEPATH environment variable
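A PyATS test can locate the merged file through that variable. The helper below is a hypothetical sketch: only the environment variable name comes from nac-test; reading and parsing the file is up to the test's own code.

```python
import os

MERGED_PATH_VAR = "MERGED_DATA_MODEL_TEST_VARIABLES_FILEPATH"

def merged_data_model_path(env=None):
    """Return the merged data model path exported by nac-test, or None."""
    env = os.environ if env is None else env
    return env.get(MERGED_PATH_VAR)

# Under nac-test the variable is set in the process environment;
# here we pass a fake environment for illustration:
print(merged_data_model_path({MERGED_PATH_VAR: "/tmp/output/merged.yaml"}))
# /tmp/output/merged.yaml
# A test would then parse the file, e.g. with yaml.safe_load() (PyYAML).
```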
For faster development cycles, you can run only one test framework at a time:
Run only PyATS tests, skipping Robot Framework:
nac-test -d ./data -t ./tests -o ./output --pyats

This is useful when:
- Developing or debugging PyATS test files
- You don't have Robot templates in your test directory
- You want faster iteration on API/D2D tests
Run only Robot Framework tests, skipping PyATS:
nac-test -d ./data -t ./tests -o ./output --robot

This is useful when:
- Developing or debugging Robot templates
- You don't have PyATS tests in your test directory
- You want faster iteration on Robot test suites
Note: Using both --pyats and --robot simultaneously is not allowed and will result in an error.
For CI/CD pipelines with artifact size constraints, use the --minimal-reports flag:
nac-test -d ./data -t ./tests -o ./output --minimal-reports

This reduces HTML report size by 80-95% by including detailed command outputs only for failed or errored tests. Passed tests show summary information without full API response bodies.
For Direct-to-Device (D2D) tests that connect to network devices via SSH, you can control parallelization:
# Automatically calculate based on system resources (default)
nac-test -d ./data -t ./tests -o ./output --pyats
# Limit to specific number of parallel device connections
nac-test -d ./data -t ./tests -o ./output --pyats --max-parallel-devices 10

The --max-parallel-devices option sets an upper limit on concurrent SSH connections to prevent overwhelming network devices or exhausting system resources.
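The effect of such a limit can be illustrated with a semaphore. This is a conceptual sketch under assumed names (`check_device`, `run_all`), not nac-test's internals: no matter how many devices are queued, at most `max_parallel` simulated connections are active at once.

```python
import asyncio

async def check_device(name, sem, state):
    """Simulated device check; the semaphore caps concurrency."""
    async with sem:
        state["active"] += 1
        state["peak"] = max(state["peak"], state["active"])
        await asyncio.sleep(0.01)  # stand-in for SSH session work
        state["active"] -= 1
        return f"{name}: ok"

async def run_all(devices, max_parallel):
    sem = asyncio.Semaphore(max_parallel)  # like --max-parallel-devices
    state = {"active": 0, "peak": 0}
    results = await asyncio.gather(*(check_device(d, sem, state) for d in devices))
    return results, state["peak"]

results, peak = asyncio.run(run_all([f"dev{i}" for i in range(8)], 3))
print(f"checked {len(results)} devices, peak concurrency {peak}")
# checked 8 devices, peak concurrency 3
```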
Custom Jinja filters can be provided as a set of Python classes, where each filter is implemented as a separate Filter class in a .py file located in the --filters path. The class must have a single attribute named name (the filter name) and a classmethod named filter that takes one or more arguments. A sample filter can be found below.
class Filter:
    name = "filter1"

    @classmethod
    def filter(cls, data):
        return str(data) + "_filtered"

Custom Jinja tests can be provided as a set of Python classes, where each test is implemented as a separate Test class in a .py file located in the --tests path. The class must have a single attribute named name (the test name) and a classmethod named test that takes one or more arguments. A sample test can be found below.
class Test:
    name = "test1"

    @classmethod
    def test(cls, data1, data2):
        return data1 == data2

Special rendering directives exist to render a single test suite per (YAML) list item. The directive can be added to the Robot template as a Jinja comment following this syntax:
{# iterate_list <YAML_PATH_TO_LIST> <LIST_ITEM_ID> <JINJA_VARIABLE_NAME> #}
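Conceptually, the directive resolves the YAML path in the merged data model and renders the template once per list item, binding the Jinja variable to each item. A rough sketch of that resolution step (illustrative only; `resolve_path` is a hypothetical helper, not nac-test's actual code):

```python
data = {"root": {"children": [{"name": "ABC"}, {"name": "DEF"}]}}

def resolve_path(dotted_path, model):
    """Walk a dotted YAML path like 'root.children' through nested dicts."""
    node = model
    for key in dotted_path.split("."):
        node = node[key]
    return node

# {# iterate_list root.children name child_name #}
for item in resolve_path("root.children", data):
    # One render per item: child_name = item["name"], output <name>/test1.robot
    print(f"{item['name']}/test1.robot")
# ABC/test1.robot
# DEF/test1.robot
```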
After running nac-test with the data from the previous example and the following template:
{# iterate_list root.children name child_name #}
*** Settings ***
Documentation    Test1

*** Test Cases ***
{% for child in root.children | default([]) %}
{% if child.name == child_name %}
Test {{ child.name }}
    Should Be Equal    {{ child.param }}    value
{% endif %}
{% endfor %}
The following test suites will be rendered:
$ tree -L 2 tests
tests
├── ABC
│   └── test1.robot
└── DEF
    └── test1.robot

A similar directive exists to place the test suites in a common folder, using a unique filename per list item.
{# iterate_list_folder <YAML_PATH_TO_LIST> <LIST_ITEM_ID> <JINJA_VARIABLE_NAME> #}
The following test suites will be rendered:
$ tree -L 2 tests
tests
└── test1
    ├── ABC.robot
    └── DEF.robot

An additional directive exists to render test suites per (YAML) list item in chunks, which is useful for handling large datasets by splitting them across multiple rendered test suites. This is a variant of iterate_list that still creates a separate folder per list item.
Note: This directive is experimental and may change in future versions. It is not subject to semantic versioning guarantees.
{# iterate_list_chunked <YAML_PATH_TO_LIST> <LIST_ITEM_ID> <JINJA_VARIABLE_NAME> <OBJECT_PATH> <CHUNK_SIZE> #}
All objects under the OBJECT_PATH are counted; if their number is greater than the specified chunk size, the list is split into multiple test suites with suffixes _002, _003, etc.
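The chunking arithmetic can be sketched as below. This is a simplified model of the splitting described above (`chunk_with_suffixes` is a hypothetical helper); nac-test's actual implementation may differ.

```python
import math

def chunk_with_suffixes(objects, chunk_size):
    """Split a list into chunks of at most chunk_size, numbered _001, _002, ..."""
    n_chunks = max(1, math.ceil(len(objects) / chunk_size))
    return [
        (f"_{i + 1:03d}", objects[i * chunk_size:(i + 1) * chunk_size])
        for i in range(n_chunks)
    ]

nested_children = ["Child1", "Child2", "Child3"]
for suffix, chunk in chunk_with_suffixes(nested_children, 2):
    print(f"test1{suffix}.robot covers {chunk}")
# test1_001.robot covers ['Child1', 'Child2']
# test1_002.robot covers ['Child3']
```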
Consider the following example:
---
root:
  children:
    - name: ABC
      param: value
      nested_children:
        - name: Child1
          param: value
        - name: Child2
          param: value
        - name: Child3
          param: value
    - name: DEF
      param: value
      nested_children:
        - name: Child1
          param: value

After running nac-test with this data and the following template:
{# iterate_list_chunked root.children name child_name nested_children 2 #}
*** Settings ***
Documentation    Test1

*** Test Cases ***
{% for child in root.children | default([]) %}
{% if child.name == child_name %}
Test {{ child.name }}
    Should Be Equal    {{ child.param }}    value
{% for nested_child in child.nested_children | default([]) %}
Test {{ child.name }} Child {{ nested_child.name }}
    Should Be Equal    {{ nested_child.param }}    value
{% endfor %}
{% endif %}
{% endfor %}
Objects from the nested_children path will be counted and if their number is greater than the specified chunk size (2), the list will be split into multiple test suites with suffix _002, _003, etc. The following test suites will be rendered:
$ tree -L 2 tests
tests
├── ABC
│   ├── test1_001.robot
│   └── test1_002.robot
└── DEF
    └── test1_001.robot

It is possible to include and exclude test cases by tag name with the --include and --exclude CLI options. These options are passed directly to the Pabot/Robot executor and are documented in the Robot Framework User Guide.
The number of parallel processes used by pabot can be controlled via the --processes option:
nac-test -d data/ -t templates/ -o output/ --processes 4

If not specified, pabot uses max(2, cpu_count) as the default number of processes. You can also set this via the NAC_TEST_PROCESSES environment variable.
This option applies to both suite-level and test-level parallelization (see next section).
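The default process count described above amounts to the following; `default_processes` is a hypothetical helper for illustration, not a nac-test or pabot API.

```python
import os

def default_processes(cpu_count=None):
    """Reproduce the documented default process count: max(2, cpu_count)."""
    cpus = cpu_count if cpu_count is not None else (os.cpu_count() or 1)
    return max(2, cpus)

print(default_processes(cpu_count=1))  # 2  (never fewer than two processes)
print(default_processes(cpu_count=8))  # 8
```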
By default, nac-test (via pabot) executes test suites (i.e., each robot file) in parallel. Each suite runs in its own process, and the --processes option controls how many suites can run simultaneously.
Suite-level parallelization may be inefficient for test suites containing multiple long-running test cases (e.g., >10 seconds each). If your test cases are independent and can run concurrently, you can enable test-level parallelization by adding the following metadata to the suite's settings:
*** Settings ***
Metadata    Test Concurrency    True

Note: This approach benefits only long-running tests. For short tests, the scheduling overhead and log collection may offset any performance gains.
Tip: The Test Concurrency metadata is case-insensitive (test concurrency, TEST CONCURRENCY, etc.).
Implementation: nac-test checks the rendered robot files for the Metadata setting and instructs pabot to run each test within the respective suite in parallel (using pabot's --testlevelsplit --ordering ordering.txt arguments). You can inspect the ordering.txt file in the output directory.
Disabling test-level parallelization: Set the environment variable NAC_TEST_DISABLE_TESTLEVELSPLIT=true to disable this feature.
You can pass additional Robot Framework options to nac-test using the -- separator. These options are forwarded to the pabot/Robot Framework execution. This enables advanced use cases like custom variables and listeners:
# Pass custom variables (note the -- separator before Robot options)
nac-test -d data/ -t templates/ -o output/ -- --variable MY_VAR:value
# Multiple variables
nac-test -d data/ -t templates/ -o output/ -- --variable VAR1:value1 --variable VAR2:value2
# Add a listener
nac-test -d data/ -t templates/ -o output/ -- --listener MyListener.py
# Combine multiple options
nac-test -d data/ -t templates/ -o output/ -- --variable ENV:prod --listener MyListener
# Override the default --skiponfailure behavior
nac-test -d data/ -t templates/ -o output/ -- --skiponfailure critical

Important:
- The -- separator is required before any Robot Framework options
- Some Robot Framework options are controlled by nac-test and cannot be passed via --:
  - --include/-i → use nac-test's -i/--include
  - --exclude/-e → use nac-test's -e/--exclude
  - --outputdir/-d → use nac-test's -o/--output
  - --output/-o, --log/-l, --report/-r, --xunit/-x → controlled internally
  - --dryrun → use nac-test's --dry-run
- Pabot-specific options (like --testlevelsplit, --pabotlib, etc.) and test file paths are not allowed and will result in an error with exit code 252
See the Robot Framework User Guide for all available options.
Breaking change in nac-test 2.0: The --loglevel argument is now a nac-test option that controls the overall logging verbosity, not a pass-through Robot Framework argument. Robot Framework's log level is automatically set to DEBUG when nac-test's --loglevel is set to DEBUG; otherwise, Robot uses its default log level.
If you need fine-grained control over Robot Framework's log level independently from nac-test's log level, use the -- separator to pass Robot's --loglevel option directly:
# nac-test at INFO, Robot Framework at TRACE
nac-test -d data/ -t templates/ -o output/ --loglevel INFO -- --loglevel TRACE
# nac-test at WARNING (default), Robot Framework at DEBUG
nac-test -d data/ -t templates/ -o output/ -- --loglevel DEBUG

nac-test mostly follows Robot Framework exit code conventions to provide meaningful feedback for CI/CD pipelines:
| Exit Code | Meaning | Description |
|---|---|---|
| 0 | Success | All tests passed, no errors |
| 1-250 | Test failures | Number of failed tests (capped at 250) |
| 2 | Invalid nac-test arguments | Invalid or conflicting nac-test CLI arguments (aligns with POSIX/Typer convention) |
| 252 | Invalid Robot Framework arguments or no tests found | Robot Framework invalid arguments or no tests executed |
| 253 | Execution interrupted | Test execution was interrupted (Ctrl+C, etc.) |
| 255 | Execution error | Framework crash or infrastructure error |
(We follow these conventions only "mostly" because we deviate by using 2 for invalid nac-test arguments and do not use 251.)
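In a CI/CD step you can branch on these codes. The helper below is an illustrative sketch mapping codes to messages per the table above (`interpret_exit_code` is hypothetical); note that nac-test's own code 2 takes precedence over the generic 1-250 failure range.

```python
def interpret_exit_code(rc):
    """Map a nac-test exit code to a human-readable summary."""
    specific = {
        0: "success: all tests passed",
        2: "invalid nac-test arguments",
        252: "invalid Robot Framework arguments or no tests found",
        253: "execution interrupted",
        255: "execution error (framework crash or infrastructure error)",
    }
    if rc in specific:
        return specific[rc]
    if 1 <= rc <= 250:
        return f"{rc} test failure(s)"
    return f"unknown exit code {rc}"

print(interpret_exit_code(0))    # success: all tests passed
print(interpret_exit_code(3))    # 3 test failure(s)
print(interpret_exit_code(253))  # execution interrupted
```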
The --verbose flag enables verbose mode for troubleshooting test execution:
nac-test -d ./data -t ./tests -o ./output --verbose

When enabled, verbose mode:
- Sets the nac-test log level to DEBUG (can be overridden by setting --loglevel)
- Enables verbose output for pabot execution (shows the Robot console output)
- Sets the Robot Framework log level to DEBUG for additional debug information in the execution (unless overridden by --loglevel)
- Shows additional progress information and console output during PyATS test execution

The pyATS log level follows the --loglevel setting, so you can reduce output via, for example, --verbose --loglevel ERROR, which limits pyATS debugging output to ERROR messages.
In addition to CLI options, nac-test supports several environment variables for advanced tuning:
| Variable | Default | Description |
|---|---|---|
| NAC_TEST_PYATS_PROCESSES | Auto (CPU-based) | Number of parallel PyATS worker processes |
| NAC_TEST_PYATS_MAX_CONNECTIONS | Auto (resource-based) | Maximum concurrent API connections |
| NAC_TEST_PYATS_API_CONCURRENCY | 10 | Concurrent API requests per worker |
| NAC_TEST_PYATS_SSH_CONCURRENCY | 5 | Concurrent SSH connections per worker |
| NAC_TEST_PYATS_OUTPUT_BUFFER_LIMIT | 10485760 | Output buffer size in bytes (10 MB) |
| NAC_TEST_PYATS_KEEP_REPORT_DATA | unset | Keep intermediate JSONL/archive files |
| NAC_TEST_PYATS_OVERFLOW_DIR | /tmp/nac_test_overflow | Directory for overflow files when memory limits are exceeded |
| Variable | Default | Description |
|---|---|---|
| NAC_TEST_DISABLE_TESTLEVELSPLIT | unset | Disable test-level parallelization for Robot |
| Variable | Default | Description |
|---|---|---|
| NAC_TEST_VERBOSE | unset | Enable verbose mode: verbose output and retain intermediate files (see NAC_TEST_PYATS_KEEP_REPORT_DATA) |
If you're experiencing issues with nac-test (crashes, unexpected errors, test failures), use the --diagnostic flag to collect comprehensive diagnostic information.
The diagnostic flag:
- Collects system information, Python environment, and package versions
- Captures error logs and crash reports (especially useful for macOS issues)
- Automatically masks credentials before generating output
- Produces a single .tar.gz file you can safely attach to GitHub issues
Simply add --diagnostic to your existing nac-test command:
# 1. Activate your virtual environment
source .venv/bin/activate
# 2. Set your environment variables (as you normally would for nac-test)
# Example for SD-WAN:
export SDWAN_URL=https://your-sdwan-manager.example.com
export SDWAN_USERNAME=admin
export SDWAN_PASSWORD=your-password
# 3. Run nac-test with the --diagnostic flag
nac-test -d ./data -t ./tests -o ./output --pyats --diagnostic

The diagnostic flag wraps your nac-test execution and generates a nac-test-diagnostics-XXXXXX.tar.gz file containing all diagnostic information, with sensitive data automatically masked.