A comprehensive Ansible collection for deploying containerized development and testing services using Podman, Quadlets, and systemd integration.
Part of the SOLTI Ansible Collections Suite - This collection provides containerized testing and development infrastructure services.
| Attribute | Value |
|---|---|
| Collection | jackaltx.solti_containers |
| Container Runtime | Podman (rootless) |
| Service Management | Systemd user services via Quadlets |
| Network | ct-net (shared container network with DNS) |
| Management Scripts | manage-svc.sh (lifecycle), svc-exec.sh (operations) |
| Deployment Targets | localhost, remote hosts via SSH |
| Configuration Files | inventory.yml, ansible.cfg |
| Orchestrator | ../mylab/ (site-specific deployment automation) |
| Related Collections | solti-monitoring, solti-ensemble |
Modern development requires lightweight, ephemeral services that can be quickly deployed, tested, and removed. Virtual machines are too heavy for rapid iteration cycles. This collection addresses the need for:
- Consistent deployment patterns across different services
- Lightweight testing environments using containers instead of VMs
- Easy service lifecycle management (prepare → deploy → verify → remove)
- Standardized configuration with security best practices
- Rapid iteration for development and testing workflows
I always use the latest version of each container image, which can sometimes be painful. These were never intended to be long-running services; they exist to support rapid development on the developer machine.
You may ask: why not a Debian distro? Podman is RHEL-focused, and the versions packaged for Debian lag behind. Debian distributions are generally better served by Docker. I prefer to run non-privileged containers.
There are two places I test: localhost and a Fedora server VM named podman.
There are two scripts that manage the container lifecycle:

- `manage-svc.sh` - Service lifecycle management:
  - `prepare`: Create data directories and apply SELinux contexts (RHEL/Fedora)
  - `deploy`: Deploy pod and containers using Podman quadlets
  - `remove`: Stop and remove containers (preserves data by default)
- `svc-exec.sh` - Task execution (`verify`, `check_upgrade`, `configure`)
```bash
# Deploy container stack
./manage-svc.sh <service> prepare && ./manage-svc.sh <service> deploy

# Verify the container stack
./svc-exec.sh <service> verify

# Check for an updated container image
./svc-exec.sh <service> check_upgrade

# Clean up (preserves data)
./manage-svc.sh <service> remove

# Full removal using environment variables
DELETE_DATA=true DELETE_IMAGES=true ./manage-svc.sh <service> remove
```

Available services:
Core Infrastructure:
- traefik - SSL reverse proxy with automatic Let's Encrypt
- hashivault - Secrets management
Data Stores:
- redis - Key-value cache and message broker
- elasticsearch - Search and analytics engine
- minio - S3-compatible object storage
- mongodb - Document database
- influxdb3 - Time-series metrics database
Applications:
- mattermost - Team communication platform
- grafana - Metrics visualization dashboards
- gitea - Lightweight Git hosting
- obsidian - Note-taking and knowledge management
Legacy/Disabled:
- wazuh - Security monitoring (disabled - container issues)
```bash
# Deploy to remote host (e.g., podma)
./manage-svc.sh -h podma -i inventory/podma.yml redis prepare
./manage-svc.sh -h podma -i inventory/podma.yml redis deploy

# Verify remote service
./svc-exec.sh -h podma -i inventory/podma.yml redis verify

# Clean up remote host
./manage-svc.sh -h podma -i inventory/podma.yml redis remove
```

Note: `manage-svc.sh` will prompt for your sudo password. This is required because containers create files with elevated ownership that your user cannot modify without privileges.
| Service | Purpose | Ports | SSL Domain | Role Path |
|---|---|---|---|---|
| Traefik | HTTP reverse proxy with SSL termination | 8080, 8443, 9999 | *.domain.com | roles/traefik |
| HashiVault | Secrets management and credential storage | 8200, 8201 | vault.domain.com | roles/hashivault |
| Service | Purpose | Ports | SSL Domain | Role Path |
|---|---|---|---|---|
| Redis | Key-value cache and message broker | 6379, 8081 | redis-ui.domain.com | roles/redis |
| Elasticsearch | Search and analytics engine for logs | 9200, 8088 | elasticsearch.domain.com | roles/elasticsearch |
| MinIO | S3-compatible object storage | 9000, 9001 | minio.domain.com | roles/minio |
| MongoDB | NoSQL document database | 27017 | mongodb.domain.com | roles/mongodb |
| InfluxDB3 | Time-series database for metrics | 8086 | influxdb.domain.com | roles/influxdb3 |
| Service | Purpose | Ports | SSL Domain | Role Path |
|---|---|---|---|---|
| Mattermost | Team communication and notifications | 8065 | mattermost.domain.com | roles/mattermost |
| Grafana | Metrics visualization and dashboards | 3001 | grafana.domain.com | roles/grafana |
| Gitea | Lightweight Git hosting service | 3000, 2222 | gitea.domain.com | roles/gitea |
| Obsidian | Note-taking and knowledge management | TBD | obsidian.domain.com | roles/obsidian |
| Service | Status | Purpose |
|---|---|---|
| Jepson | Planned | Fuzzing framework for security testing |
| Trivy | Planned | Container vulnerability scanner |
This collection is designed to work both standalone and integrated with the SOLTI orchestrator:
- Standalone Mode: Use `manage-svc.sh` and `svc-exec.sh` directly in this repository
- Orchestrator Mode: The ../mylab/ orchestrator provides:
  - Centralized inventory management across all SOLTI collections
  - Workflow automation (e.g., deploy-fleur-workflow.sh)
  - Site-specific credentials and tokens (kept out of this collection)
  - Cross-collection service coordination
Key Principle: This collection contains generic, reusable roles. Site-specific data lives in ../mylab/.
All services follow a consistent pattern based on the _base role:
```
┌─────────────────────────────────────────────────────────────────────┐
│                            Service Layer                            │
├─────────────────┬─────────────────┬─────────────────┬───────────────┤
│      Redis      │  Elasticsearch  │   Mattermost    │     MinIO     │
│    (Testing)    │   (Analytics)   │ (Communication) │   (Storage)   │
└─────────────────┴─────────────────┴─────────────────┴───────────────┘
                                   │
┌─────────────────────────────────────────────────────────────────────┐
│                         Infrastructure Layer                        │
├─────────────────┬─────────────────┬─────────────────┬───────────────┤
│     Traefik     │   HashiVault    │      _base      │    Quadlets   │
│   (SSL Proxy)   │    (Secrets)    │    (Common)     │   (systemd)   │
└─────────────────┴─────────────────┴─────────────────┴───────────────┘
                                   │
┌─────────────────────────────────────────────────────────────────────┐
│                           Platform Layer                            │
├─────────────────┬─────────────────┬─────────────────┬───────────────┤
│     Podman      │     systemd     │     SELinux     │    Network    │
│  (Containers)   │   (Services)    │   (Security)    │   (ct-net)    │
└─────────────────┴─────────────────┴─────────────────┴───────────────┘
```
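Concretely, the Quadlet layer works by dropping a `.container` unit into `~/.config/containers/systemd/`, which systemd's quadlet generator turns into a user service. A minimal sketch of such a unit (field values are illustrative, not the collection's actual templates):

```ini
# ~/.config/containers/systemd/redis.container (illustrative sketch)
[Unit]
Description=Redis container

[Container]
Image=docker.io/library/redis:latest
ContainerName=redis-svc
Network=ct-net
PublishPort=127.0.0.1:6379:6379
Volume=%h/redis-data/data:/data:Z

[Service]
Restart=always

[Install]
WantedBy=default.target
```

After `systemctl --user daemon-reload`, the generator produces a `redis.service` that can be started and stopped like any other user unit.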
```mermaid
graph TD
    A[Developer] --> B[manage-svc.sh]
    A --> C[svc-exec.sh]
    B --> D[prepare]
    B --> E[deploy]
    B --> F[remove]
    C --> G[verify]
    C --> H[configure]
    C --> I[backup]
    D --> J[_base/prepare]
    E --> K[_base/containers]
    F --> L[_base/cleanup]
    J --> M[Directories]
    J --> N[SELinux]
    K --> O[Quadlets]
    K --> P[systemd]

    subgraph "Service Layer"
        Q[Redis]
        R[Elasticsearch]
        S[Mattermost]
        T[MinIO]
    end

    subgraph "Infrastructure"
        U[Traefik SSL]
        V[HashiVault]
        W[Container Network]
    end

    O --> Q
    O --> R
    O --> S
    O --> T
    Q --> U
    R --> U
    S --> U
    T --> U
```
```bash
# System preparation (one-time per service)
./manage-svc.sh <service> prepare

# Deploy and start service
./manage-svc.sh <service> deploy

# Remove service (preserves data by default)
./manage-svc.sh <service> remove
```

Note: Requires a sudo password - containers create files with elevated ownership, so prepare/deploy/remove operations need privilege escalation.
```bash
# Execute verification tasks
./svc-exec.sh <service> verify

# Run service-specific tasks
./svc-exec.sh <service> configure
./svc-exec.sh <service> backup
./svc-exec.sh <service> initialize

# Use sudo for privileged operations
./svc-exec.sh -K <service> <task>
```

- Dynamic playbook generation - Creates Ansible playbooks on-the-fly
- Inventory integration - Uses your inventory variables and defaults
- Error handling - Preserves generated playbooks on failure for debugging
- Cleanup automation - Removes successful temporary playbooks
- Flexible task execution - Any role task file can be executed independently
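For example, a hypothetical playbook generated for `./svc-exec.sh redis verify` might look like the following (a sketch; inspect the preserved files under `tmp/` after a failure to see the real output):

```yaml
- hosts: localhost
  gather_facts: true
  tasks:
    - name: Run the requested task file from the service role
      ansible.builtin.include_role:
        name: redis
        tasks_from: verify
```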
When Traefik is deployed, all services automatically get SSL termination:
```bash
# Deploy Traefik first
./manage-svc.sh traefik prepare
./manage-svc.sh traefik deploy

# Now all other services get automatic SSL
./manage-svc.sh redis deploy       # → https://redis-ui.yourdomain.com
./manage-svc.sh mattermost deploy  # → https://mattermost.yourdomain.com
```

Point wildcard DNS to your development machine:

```
*.yourdomain.com → 192.168.1.100
```
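Under the hood, Traefik discovers containers through labels. A hedged sketch of the `Label=` lines a service's Quadlet unit might carry (router and resolver names here are assumptions, not the roles' actual values):

```ini
[Container]
Label=traefik.enable=true
Label=traefik.http.routers.redis-ui.rule=Host(`redis-ui.yourdomain.com`)
Label=traefik.http.routers.redis-ui.tls.certresolver=letsencrypt
Label=traefik.http.services.redis-ui.loadbalancer.server.port=8081
```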
All services use a common network with consistent DNS:
```yaml
service_network: "ct-net"
service_dns_servers: ["1.1.1.1", "8.8.8.8"]
service_dns_search: "yourdomain.com"
```

- OS: RHEL 9+, CentOS 9+, Debian 12+, Ubuntu 22.04+
- Podman: 4.x or later
- systemd: User services enabled (`loginctl enable-linger $USER`)
- Memory: 4GB RAM minimum, 8GB recommended
- Storage: 20GB free space for service data
- CPU: 4+ cores for multiple concurrent services
- Memory: 16GB for full service stack
- Storage: SSD storage for better I/O performance
- Network: Stable internet for Let's Encrypt certificates
```bash
# RHEL/CentOS/Rocky Linux
sudo dnf install podman ansible-core python3-pip
ansible-galaxy collection install containers.podman

# Debian/Ubuntu
sudo apt install podman ansible python3-pip
ansible-galaxy collection install containers.podman

# Enable user services
loginctl enable-linger $USER
```

```
solti-containers/
├── roles/                      # Service role definitions
│   ├── _base/                  # Common functionality
│   │   ├── tasks/
│   │   │   ├── prepare.yml     # Directory and permission setup
│   │   │   ├── networks.yml    # Container networking
│   │   │   └── cleanup.yml     # Service removal
│   │   └── defaults/main.yml   # Common defaults
│   │
│   ├── redis/                  # Redis key-value store
│   ├── elasticsearch/          # Search and analytics
│   ├── hashivault/             # Secrets management
│   ├── mattermost/             # Team communication
│   ├── traefik/                # SSL reverse proxy
│   └── minio/                  # S3-compatible storage
│
├── inventory.yml               # Service configuration
├── ansible.cfg                 # Ansible settings
├── manage-svc.sh               # Service lifecycle management
├── svc-exec.sh                 # Task execution wrapper
└── README.md                   # This file
```
Each service role follows this structure:
```
roles/<service>/
├── defaults/main.yml           # Default variables
├── handlers/main.yml           # Service restart handlers
├── meta/main.yml               # Role metadata
├── tasks/
│   ├── main.yml                # Role entry point
│   ├── prepare.yml             # System preparation
│   ├── prerequisites.yml       # Configuration setup
│   ├── quadlet_rootless.yml    # Container deployment
│   ├── verify.yml              # Health verification
│   └── <service-specific>.yml  # Custom tasks
└── templates/
    ├── <service>.conf.j2       # Service configuration
    └── <service>.env.j2        # Environment variables
```
- Rootless containers - All services run without root privileges
- SELinux integration - Proper security contexts on RHEL systems
- Network isolation - Services communicate via dedicated container network
- Resource limits - Memory and CPU constraints prevent resource exhaustion
- Localhost binding - Services bind to 127.0.0.1 by default
- Password protection - All services require authentication
- SSL/TLS encryption - Traefik provides automatic HTTPS
- API tokens - Role-based access where supported
- Volume encryption - Data stored in user directories with proper permissions
- Backup integration - Services support data backup and restore
- Secrets management - HashiVault integration for credential storage
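As an illustration of the HashiVault integration, a role could resolve a credential at deploy time with the `community.hashi_vault` lookup (a sketch; the secret path, URL, and variable names are assumptions):

```yaml
- name: Fetch the Redis password from Vault
  ansible.builtin.set_fact:
    redis_password: >-
      {{ lookup('community.hashi_vault.hashi_vault',
                'secret/data/redis:password',
                url='https://vault.yourdomain.com:8200') }}
  no_log: true
```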
```bash
# Create isolated test environment
./manage-svc.sh redis deploy
./manage-svc.sh elasticsearch deploy

# Run your tests against the services
pytest tests/ --redis-url=localhost:6379 --es-url=localhost:9200

# Analyze results
./svc-exec.sh redis verify
./svc-exec.sh elasticsearch verify

# Clean up
./manage-svc.sh redis remove
./manage-svc.sh elasticsearch remove
```

```bash
# Test single service changes
./manage-svc.sh myservice prepare
./manage-svc.sh myservice deploy
./svc-exec.sh myservice verify

# Make changes to role
vim roles/myservice/tasks/main.yml

# Redeploy with changes
./manage-svc.sh myservice deploy
./svc-exec.sh myservice verify
```

The -K flag combined with data preservation enables a rapid "iterate until you get it right" workflow:
```bash
# Initial deployment
./manage-svc.sh elasticsearch deploy

# Test and discover issues
./svc-exec.sh elasticsearch verify

# Remove container (data preserved by default)
./manage-svc.sh elasticsearch remove

# Modify role: edit tasks, templates, configuration
vim roles/elasticsearch/tasks/prerequisites.yml

# Redeploy with your changes - data still intact
./manage-svc.sh elasticsearch deploy

# Your test data, indices, and configurations persist!
# Repeat this cycle until working correctly
```

Key Benefits:
- Data persists across cycles: Elasticsearch indices, Mattermost channels, database records remain intact
- Faster iteration: No need to recreate test data after each change
- True testing: Work with realistic data throughout development
Data-Centric Services (benefit from persistence):
- elasticsearch (indices, mappings)
- mattermost (channels, messages, users)
- minio (buckets, objects)
- hashivault (secrets, policies)
Stateless Services (less critical):
- redis (just cache)
- traefik (just proxy configuration)
When to Reset Data:
```bash
# Set in inventory.yml for full cleanup
elasticsearch_delete_data: true
./manage-svc.sh elasticsearch remove   # Removes data directories
```

```bash
# Deploy full stack
for service in traefik redis elasticsearch mattermost; do
  ./manage-svc.sh $service prepare
  ./manage-svc.sh $service deploy
done

# Run integration tests
./svc-exec.sh traefik verify
for service in redis elasticsearch mattermost; do
  ./svc-exec.sh $service verify
done

# Test cross-service communication
./test-integration.sh
```

Each service supports backup operations:
```bash
# Backup service data
./svc-exec.sh <service> backup

# Backup with compression
./svc-exec.sh <service> backup --compress

# Backup to specific location
./svc-exec.sh <service> backup --dest /backup/location
```

- Data preservation - `remove` command preserves data by default
- Complete cleanup - Set `<SERVICE>_DELETE_DATA=true` to remove all data
- Volume management - Data stored in `~/service-data/` directories
- Migration support - Data directories can be moved/copied between systems
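Because the data directories are plain directories under `$HOME`, migration amounts to archive-and-copy. A minimal local sketch (the sample `redis.conf` stands in for real service data; copying the archive to the other host, e.g. with `scp`, is left implicit):

```shell
# Create stand-in data, as the prepare/deploy steps normally would
SRC="$HOME/redis-data"
mkdir -p "$SRC/config"
echo "maxmemory 256mb" > "$SRC/config/redis.conf"

# Archive the service's data directory
tar -C "$HOME" -czf /tmp/redis-data.tar.gz redis-data

# On the target system, restore under the new $HOME (simulated here)
mkdir -p /tmp/newhome
tar -C /tmp/newhome -xzf /tmp/redis-data.tar.gz
```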
```bash
# Backup critical service data
for service in hashivault mattermost; do
  ./svc-exec.sh $service backup
done

# Restore from backup
./svc-exec.sh <service> restore --from /backup/location

# Verify restored service
./svc-exec.sh <service> verify
```

```bash
# Check all service status
systemctl --user status | grep -E "(redis|elasticsearch|mattermost)"

# Resource utilization
podman stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"

# Network connectivity
./svc-exec.sh traefik verify
```

```bash
# Centralized logging via Elasticsearch
./manage-svc.sh elasticsearch deploy

# Configure log forwarding (example)
for service in redis mattermost; do
  ./svc-exec.sh $service configure-logging
done

# Search logs via Elasticvue
open https://elasticsearch.yourdomain.com:8088
```

```bash
# Service-specific metrics
./svc-exec.sh redis info
./svc-exec.sh elasticsearch stats
./svc-exec.sh mattermost metrics

# Container resource usage
podman stats --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}\t{{.BlockIO}}"
```

**Services won't start**

```bash
# Check systemd status
systemctl --user status <service>-pod

# Check container logs
podman logs <service>-svc

# Verify directory permissions
ls -la ~/<service>-data/
```

**SSL certificates not working**

```bash
# Check Traefik status
./svc-exec.sh traefik verify

# Verify DNS configuration
dig +short *.yourdomain.com

# Check certificate logs
podman logs traefik-svc | grep -i acme
```

**Network connectivity issues**

```bash
# Check container network
podman network inspect ct-net

# Test inter-service communication
podman exec redis-svc ping elasticsearch-svc

# Verify port bindings
ss -tlnp | grep -E "(6379|9200|8065)"
```

```bash
# Enable debug logging
export SOLTI_DEBUG=1

# Run with verbose output
./manage-svc.sh <service> deploy -vvv

# Check generated playbooks
ls -la tmp/<service>-*.yml
```

- Follow the Service Template
- Implement the standard task files
- Add Traefik integration labels
- Include comprehensive verification tasks
- Update management scripts as needed
- Consistency - Follow established patterns
- Documentation - Include comprehensive README
- Testing - Add verification tasks
- Security - Implement proper access controls
- Integration - Support Traefik SSL and HashiVault secrets
```bash
# Test role syntax
ansible-playbook --syntax-check roles/<service>/tasks/main.yml

# Test deployment
./manage-svc.sh <service> prepare
./manage-svc.sh <service> deploy
./svc-exec.sh <service> verify

# Test cleanup
./manage-svc.sh <service> remove
```

Test services across multiple platforms (Debian 12, Rocky 9, Ubuntu 24) using nested containers:

```bash
# Test service on all platforms
./run-podman-tests.sh --services redis

# Test specific platform
./run-podman-tests.sh --platform uut-deb12 --services redis

# Test multiple services
./run-podman-tests.sh --services "redis,traefik,hashivault"

# View test results
tail -f verify_output/latest_test.out
cat verify_output/debian/consolidated_test_report.md
```

See molecule/README.md for comprehensive testing documentation.
- Service Template - Template for creating new services
- Traefik Integration Guide - SSL setup and configuration
- Security Best Practices - Security recommendations
- Performance Tuning - Optimization guidelines
MIT License - See LICENSE file for details.
Jackaltx - Created for development testing workflows with significant assistance from Claude AI for pattern development and documentation.
This project aims to provide:
- Lightweight alternatives to heavy VM-based development environments
- Consistent patterns for containerized service deployment
- Easy-to-use management interfaces for rapid iteration
- Production-ready security and monitoring capabilities
- Educational examples of modern container orchestration
- Issues: Report bugs or request features via GitHub issues
- Documentation: Comprehensive README files in each role
- Community: Share your service implementations and improvements
This section provides structured information for AI documentation tools and code assistants.
```yaml
collection_name: jackaltx.solti_containers
collection_type: ansible_collection
namespace: jackaltx
version: 1.0.0
container_runtime: podman
service_manager: systemd
deployment_mode: rootless
network_model: shared_bridge
```

| File | Purpose | Used By |
|---|---|---|
| manage-svc.sh | Service lifecycle (prepare/deploy/remove) | Human operators, orchestrator |
| svc-exec.sh | Task execution (verify/configure) | Human operators, CI/CD |
| inventory.yml | Service configuration variables | Ansible playbooks |
| ansible.cfg | Ansible settings and vault config | Ansible engine |
| roles/_base/ | Common functionality for all services | All service roles |
| CLAUDE.md | AI assistant context and patterns | Claude Code, AI tools |
All service roles follow this standard structure:
```
roles/<service>/
├── defaults/main.yml           # Default variables
├── handlers/main.yml           # systemd restart handlers
├── tasks/
│   ├── main.yml                # Entry point (includes other tasks)
│   ├── prepare.yml             # Directory/permission setup
│   ├── prerequisites.yml       # Service-specific config
│   ├── quadlet_rootless.yml    # Container deployment
│   └── verify.yml              # Health checks
├── templates/
│   ├── <service>.conf.j2       # Configuration files
│   └── <service>.env.j2        # Environment variables
└── README.md                   # Service-specific documentation
```
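The `main.yml` entry point ties the task files together. An illustrative flow (the include order and the `service_verify` toggle are assumptions, not the roles' actual logic):

```yaml
- name: Service-specific configuration
  ansible.builtin.include_tasks: prerequisites.yml

- name: Deploy container via rootless Quadlet
  ansible.builtin.include_tasks: quadlet_rootless.yml

- name: Verify deployment
  ansible.builtin.include_tasks: verify.yml
  when: service_verify | default(true)
```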
```yaml
# Network Configuration
service_network: "ct-net"
service_dns_servers: ["1.1.1.1", "8.8.8.8"]
service_dns_search: "{{ domain }}"
domain: "example.com"

# Service Paths
<service>_data_dir: "{{ ansible_env.HOME }}/<service>-data"
<service>_config_dir: "{{ <service>_data_dir }}/config"

# Container Lifecycle
<service>_delete_data: false    # Preserve data on remove
<service>_delete_images: false  # Preserve images on remove

# Security
ansible_become: true            # Required for prepare/deploy/remove
secure_logging: true            # Hide credentials in logs
```

- Parent Project: ../README.md - SOLTI Collections Suite overview
- Orchestrator: ../mylab/README.md - Deployment automation
- Monitoring: ../solti-monitoring/README.md - Metrics and logging
- Ensemble: ../solti-ensemble/README.md - Shared infrastructure services
Happy containerizing!