Consistent, emoji-enhanced logging patterns and configuration across all Discogsography services
Back to Main | Documentation Index | Emoji Guide
Discogsography uses a standardized logging approach with emoji prefixes for visual clarity and quick issue identification. All services use consistent logging controlled by the LOG_LEVEL environment variable.
```mermaid
flowchart LR
    subgraph "Service"
        Code[Application Code]
        Logger[Logger Instance]
    end

    subgraph "Outputs"
        Console[Console Output<br/>with Emojis]
        File[Log Files<br/>/logs/*.log]
    end

    subgraph "Analysis"
        Monitor[Real-time Monitoring]
        Debug[Debug Analysis]
        Errors[Error Tracking]
    end

    Code -->|logger.info/error/warn| Logger
    Logger --> Console
    Logger --> File
    Console --> Monitor
    File --> Debug
    File --> Errors

    style Code fill:#e3f2fd,stroke:#2196f3,stroke-width:2px
    style Console fill:#e8f5e9,stroke:#4caf50,stroke-width:2px
    style File fill:#fff3e0,stroke:#ff9800,stroke-width:2px
```
All services in the Discogsography platform use the LOG_LEVEL environment variable for consistent logging control.
| Level | Description | Use Case |
|---|---|---|
| `DEBUG` | Detailed diagnostic information | Development and troubleshooting |
| `INFO` | General informational messages | Production (default) |
| `WARNING` | Warning messages for potential issues | Production monitoring |
| `ERROR` | Error messages for failures | Production alerts |
| `CRITICAL` | Critical errors requiring immediate attention | Production alerts |
Default: If LOG_LEVEL is not set, all services default to INFO.
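The default-resolution rule can be sketched as follows (`resolve_log_level` is an illustrative helper, not a function in `common/config.py`):

```python
import os

def resolve_log_level(default: str = "INFO") -> str:
    # Unset or empty LOG_LEVEL falls back to the default;
    # values are case-insensitive, so normalize to upper case.
    return os.environ.get("LOG_LEVEL", "").strip().upper() or default
```

With `LOG_LEVEL` unset this returns `"INFO"`; with `LOG_LEVEL=debug` it returns `"DEBUG"`.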
```bash
# Development with debug logging
export LOG_LEVEL=DEBUG

# Production with info logging (default)
export LOG_LEVEL=INFO

# Error-only logging
export LOG_LEVEL=ERROR
```

```yaml
services:
  my-service:
    environment:
      LOG_LEVEL: INFO
```

```bash
docker run -e LOG_LEVEL=DEBUG discogsography/service:latest
```

All Python services (api, brainzgraphinator, brainztableinator, common, dashboard, explore, graphinator, insights, mcp-server, schema-init, tableinator) use structlog configured via `setup_logging()` from `common/config.py`. Use `structlog.get_logger()`, not `logging.getLogger()`:
```python
import structlog
from pathlib import Path
from common import setup_logging

# Call once at service startup; reads LOG_LEVEL from the environment, defaults to INFO
setup_logging("service_name", log_file=Path("/logs/service.log"))

# Get a logger in any module
logger = structlog.get_logger(__name__)
logger.info("🚀 Service starting...")
```

Features:
- Structured JSON logging with emoji indicators
- Correlation IDs from contextvars
- Service-specific context (name, environment)
- File and console output
- Automatic suppression of verbose third-party logs
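The correlation-ID feature builds on Python's stdlib `contextvars`, which structlog's contextvars support wraps. The mechanism can be illustrated without structlog (a simplified sketch; `correlation_id` and `log` are illustrative names, not the structlog API):

```python
import contextvars

# Each request/task binds its own ID; log calls pick it up implicitly.
correlation_id = contextvars.ContextVar("correlation_id", default="-")

def log(event: str) -> dict:
    # A structlog contextvars processor merges bound values into the
    # event dict like this before rendering it as JSON.
    return {"event": event, "correlation_id": correlation_id.get()}

correlation_id.set("req-123")
```

In the services themselves, structlog's `structlog.contextvars` module provides this behaviour, so every log line within a task carries the same ID.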
The Rust extractor uses Rust's tracing framework and maps Python log levels to Rust equivalents:
| Python Level | Rust Level | Notes |
|---|---|---|
| DEBUG | debug | Detailed diagnostic info |
| INFO | info | General messages (default) |
| WARNING | warn | Warning messages |
| ERROR | error | Error messages |
| CRITICAL | error | Mapped to error (Rust has no critical) |
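For reference, the same mapping expressed as a table-driven Python helper (illustrative only; the authoritative mapping is the Rust `match` in main.rs shown below):

```python
RUST_LEVELS = {
    "DEBUG": "debug",
    "INFO": "info",
    "WARNING": "warn",
    "WARN": "warn",
    "ERROR": "error",
    "CRITICAL": "error",  # tracing has no critical level
}

def to_rust_level(python_level: str) -> str:
    # Unknown values fall back to info, matching the extractor's behaviour.
    return RUST_LEVELS.get(python_level.upper(), "info")
```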
Configuration:

```bash
# Debug logging
LOG_LEVEL=DEBUG cargo run

# Production logging
LOG_LEVEL=INFO cargo run
```

Implementation (main.rs):

```rust
let log_level = std::env::var("LOG_LEVEL")
    .unwrap_or_else(|_| "INFO".to_string())
    .to_uppercase();

let rust_level = match log_level.as_str() {
    "DEBUG" => "debug",
    "INFO" => "info",
    "WARNING" | "WARN" => "warn",
    "ERROR" => "error",
    "CRITICAL" => "error",
    _ => "info",
};
```

Python services (structlog):

```json
{
  "timestamp": "2024-01-15T10:30:45.123456Z",
  "level": "info",
  "logger": "graphinator",
  "event": "🚀 Service starting...",
  "service": "graphinator",
  "environment": "production",
  "lineno": 1210
}
```

Rust extractor:

```json
{
  "timestamp": "2024-01-15T10:30:45.123456Z",
  "level": "INFO",
  "target": "extractor",
  "message": "🚀 Starting Rust-based Discogs data extractor with high performance",
  "line": 59
}
```

Format: `logger.{level}("{emoji} {message}")`
- Always include exactly one space after the emoji
- Use consistent emojis for similar operations
- Choose emojis that visually represent the action
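The spacing rule can be checked mechanically; a hypothetical lint helper (a rough heuristic, not part of the codebase):

```python
def follows_convention(message: str) -> bool:
    # Heuristic check for the "{emoji} {message}" format:
    # a short non-ASCII prefix (the emoji, 1-2 code points),
    # exactly one space, then the message text.
    head, _, rest = message.partition(" ")
    return 0 < len(head) <= 2 and not head.isascii() and bool(rest) and not rest.startswith(" ")
```

This catches the common mistakes: a missing emoji, a missing space, or more than one space after the emoji.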
| Emoji | Usage | Example |
|---|---|---|
| 🚀 | Service startup | logger.info("🚀 Starting extractor service...") |
| 🛑 | Service shutdown | logger.info("🛑 Shutting down gracefully") |
| 🔧 | Configuration/Setup | logger.info("🔧 Configuring database connections") |
| 🏥 | Health check server | logger.info("🏥 Health server started on port 8000") |
| Emoji | Usage | Example |
|---|---|---|
| ✅ | Operation success | logger.info("✅ All files processed successfully") |
| 💾 | Data saved | logger.info("💾 Saved 1000 records to database") |
| 📋 | Metadata loaded | logger.info("📋 Loaded configuration from disk") |
| 🆕 | New version/data | logger.info("🆕 Found new Discogs data release") |
| Emoji | Usage | Example |
|---|---|---|
| ❌ | Error occurred | logger.error("❌ Failed to connect to database") |
| ⚠️ | Warning | logger.warning("⚠️ Retry attempt 3/5") |
| 🚨 | Critical issue | logger.critical("🚨 Out of memory") |
| ⏩ | Skipped operation | logger.info("⏩ Skipped duplicate record") |
| Emoji | Usage | Example |
|---|---|---|
| 🔄 | Processing | logger.info("🔄 Processing batch 5/10") |
| ⏳ | Waiting | logger.info("⏳ Waiting for messages...") |
| 📊 | Progress/Stats | logger.info("📊 Processed 5000/10000 records") |
| ⏰ | Scheduled task | logger.info("⏰ Running periodic check") |
| Emoji | Usage | Example |
|---|---|---|
| 📥 | Download start | logger.info("📥 Starting download of releases.xml") |
| ⬇️ | Downloading | logger.info("⬇️ Downloaded 50MB/200MB") |
| 📁 | File operation | logger.info("📁 Created output.json") |
| 🔍 | Searching/Query execution | logger.info("🔍 Checking for updates...") or logger.debug("🔍 Executing Neo4j query") |
| Emoji | Usage | Example |
|---|---|---|
| 🐰 | RabbitMQ | logger.info("🐰 Connected to RabbitMQ") |
| 🕸️ | Neo4j | logger.info("🕸️ Connected to Neo4j database") |
| 🐘 | PostgreSQL | logger.info("🐘 Connected to PostgreSQL") |
| 🌐 | Network/API | logger.info("🌐 Fetching from Discogs API") |
| 📇 | Database index setup | logger.info("📇 Neo4j indexes created/verified") |
```python
import structlog

logger = structlog.get_logger(__name__)  # Use structlog, not logging.getLogger()

async def start_service():
    logger.info("🚀 Starting dashboard service")
    try:
        logger.info("🔧 Initializing database connections")
        await init_databases()
        logger.info("✅ Database connections established")

        logger.info("🏥 Starting health check server on port 8000")
        await start_health_server()

        logger.info("⏳ Waiting for messages...")
        await process_messages()
    except Exception as e:
        logger.error(f"❌ Service startup failed: {e}")
        raise
    finally:
        logger.info("🛑 Shutting down service")
```

```python
async def process_batch(items: list[dict]) -> None:
    total = len(items)
    for i, item in enumerate(items, 1):
        if i % 1000 == 0:
            logger.info(f"📊 Processed {i}/{total} items")
        try:
            await process_item(item)
        except DuplicateError:
            logger.debug(f"⏩ Skipped duplicate item {item['id']}")
        except Exception as e:
            logger.warning(f"⚠️ Failed to process item {item['id']}: {e}")
    logger.info(f"✅ Batch processing complete: {total} items")
```

```python
async def connect_services():
    # RabbitMQ
    logger.info("🐰 Connecting to RabbitMQ...")
    try:
        await connect_rabbitmq()
        logger.info("🐰 RabbitMQ connection established")
    except Exception as e:
        logger.error(f"❌ RabbitMQ connection failed: {e}")
        raise

    # Neo4j
    logger.info("🕸️ Connecting to Neo4j...")
    try:
        await connect_neo4j()
        logger.info("🕸️ Neo4j connection established")
    except Exception as e:
        logger.error(f"❌ Neo4j connection failed: {e}")
        raise
```

```python
async def download_file(url: str, filename: str):
    logger.info(f"📥 Starting download: {filename}")
    try:
        total_size = await get_file_size(url)
        downloaded = 0
        next_log = 10  # Log every 10%
        async for chunk in download_chunks(url):
            downloaded += len(chunk)
            progress = (downloaded / total_size) * 100
            if progress >= next_log:
                logger.info(f"⬇️ Downloading {filename}: {progress:.0f}%")
                next_log += 10
        logger.info(f"✅ Download complete: {filename}")
        logger.info(f"📁 Saved to: {filename}")
    except Exception as e:
        logger.error(f"❌ Download failed: {e}")
        raise
```

```python
# DEBUG - Detailed diagnostic info
logger.debug("🔍 Checking cache for key: user_123")

# INFO - General informational messages
logger.info("🚀 Service started successfully")

# WARNING - Warning conditions
logger.warning("⚠️ Queue depth exceeding threshold")

# ERROR - Error conditions
logger.error("❌ Database connection lost")

# CRITICAL - Critical conditions
logger.critical("🚨 System out of memory")
```

```python
# Include relevant context
logger.info(f"💾 Saved artist: id={artist_id}, name={artist_name}")

# Use structured logging where appropriate; structlog takes key-value pairs directly
logger.info("📊 Processing stats", processed=1000, failed=5, duration=45.2)
```

```python
# ✅ Good: Consistent format
logger.info("🚀 Starting service")
logger.info("🔧 Loading configuration")
logger.info("✅ Service ready")

# ❌ Bad: Inconsistent format
logger.info("🚀Starting service")  # Missing space
logger.info("🔧  Loading configuration")  # Extra space
logger.info("Service ready")  # Missing emoji
```

```python
try:
    result = await risky_operation()
except SpecificError as e:
    # Include operation context
    logger.error(f"❌ Failed to process record {record_id}: {e}")
    # Re-raise or handle appropriately
    raise
except Exception as e:
    # Log unexpected errors with full context
    logger.exception(f"❌ Unexpected error in operation: {e}")
    raise
```

All services use `setup_logging()` from `common.config`, which configures structlog with JSON output, reads LOG_LEVEL from the environment, and sets up file and console handlers:
```python
import structlog
from pathlib import Path
from common import setup_logging

# Call once at service startup
setup_logging("service_name", log_file=Path("/logs/service_name.log"))

# Get a logger in each module
logger = structlog.get_logger(__name__)
```

💡 Tip: Never call `logging.basicConfig()` directly in a service; `setup_logging()` handles everything, including structlog configuration, third-party log suppression, and log file rotation.
JSON logging is handled automatically by structlog via setup_logging(). The configured JSONRenderer uses orjson for efficient serialization. No custom JSONFormatter is needed.
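Because every line is a self-contained JSON object, downstream tooling can consume the logs directly. A minimal sketch using the stdlib `json` module (the sample lines are illustrative):

```python
import json

# Illustrative structlog-style log lines
sample = [
    '{"timestamp": "2024-01-15T10:30:45Z", "level": "info", "event": "🚀 Service starting..."}',
    '{"timestamp": "2024-01-15T10:30:46Z", "level": "error", "event": "❌ Failed to connect to database"}',
]

def events_at_level(lines: list[str], level: str) -> list[str]:
    # Parse each JSON log line and return the events at the requested level.
    return [rec["event"] for line in lines if (rec := json.loads(line))["level"] == level]
```

The same approach works for the Rust extractor's output, with `message` in place of `event`.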
If changing LOG_LEVEL has no effect:

- Check the environment variable is set:

  ```bash
  docker exec <container> printenv LOG_LEVEL
  ```

- Verify service startup logs:

  ```bash
  docker logs <container> | head -20
  ```

- Check for an explicit level parameter (Python):

  ```python
  # This overrides LOG_LEVEL
  setup_logging("service", level="WARNING")
  ```

To reduce log volume:

- Set `LOG_LEVEL=WARNING` or `LOG_LEVEL=ERROR`
- Check that third-party library log levels are suppressed (handled automatically)

To get more detail:

- Set `LOG_LEVEL=DEBUG`
- Restart the service
- Monitor logs: `docker logs -f <container>`
```bash
# Using just command
just check-errors

# Manual grep
grep "❌" logs/*.log

# Count errors by type
grep -o "❌ [^:]*" logs/*.log | sort | uniq -c
```

```bash
# Monitor extractor progress (the Rust service logs to Docker only, no file-based log)
docker-compose logs extractor-discogs | grep "📊" | tail -n 10

# Check completion
grep "✅" logs/*.log | grep "complete"
```

```python
# ❌ Don't: No emoji
logger.info("Starting service")

# ❌ Don't: Wrong emoji for context
logger.error("✅ Connection failed")  # Success emoji for error

# ❌ Don't: Multiple spaces
logger.info("🚀  Starting service")

# ❌ Don't: Emoji at end
logger.info("Starting service 🚀")

# ❌ Don't: Multiple emojis
logger.info("🚀 🔧 Starting and configuring")
```

Old:

```yaml
environment:
  RUST_LOG: extractor=info,lapin=warn
```

New:

```yaml
environment:
  LOG_LEVEL: INFO
```

Old:

```bash
cargo run --verbose
```

New:

```bash
LOG_LEVEL=DEBUG cargo run
```

- Development: Use `DEBUG` for detailed diagnostic information
- Staging: Use `INFO` to match production behavior
- Production: Use `INFO` or `WARNING` depending on volume
- Incident Response: Temporarily set to `DEBUG` for affected services
- Case Insensitive: LOG_LEVEL values are case-insensitive (`debug` == `DEBUG`)
- Container Logs: All logs go to stdout/stderr for container orchestration
- File Logs: Python services also write to `/logs/<service>.log` inside containers
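The `grep`-based error count shown earlier can also be done in Python, which is handy in tests and tooling (the log lines are illustrative):

```python
from collections import Counter

error_lines = [
    "❌ Failed to connect to database: timeout",
    "❌ Failed to connect to database: connection refused",
    "❌ Download failed: HTTP 404",
]

def count_error_types(lines: list[str]) -> Counter:
    # Group error lines by the text before the first colon,
    # mirroring `grep -o "❌ [^:]*" | sort | uniq -c`.
    return Counter(line.split(":", 1)[0] for line in lines if line.startswith("❌"))
```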
Lifecycle: 🚀 Start | 🛑 Stop | 🔧 Configure | 🏥 Health
Success: ✅ Complete | 💾 Saved | 📋 Loaded | 🆕 New
Errors: ❌ Error | ⚠️ Warning | 🚨 Critical | ⏩ Skip
Progress: 🔄 Processing | ⏳ Waiting | 📊 Stats | ⏰ Scheduled
Data: 📥 Download | ⬇️ Downloading | 📁 File | 🔍 Search/Query
Services: 🐰 RabbitMQ | 🕸️ Neo4j | 🐘 PostgreSQL | 🌐 Network
💡 Tip: Set `LOG_LEVEL=DEBUG` to see detailed diagnostic logs, including database queries marked with 🔍
- Emoji Guide - Complete emoji reference for the project
- Monitoring Guide - Real-time monitoring and debugging
- Troubleshooting Guide - Common issues and solutions
- Configuration Guide - Complete environment variable reference
Last Updated: 2026-03-07
Remember: Consistent logging makes debugging easier and operations smoother! 🎯