Deterministic multi-layer audio mixing system written in C++ implementing:
- frequency-aware gain compensation
- role-based layer weighting
- LUFS loudness normalization (EBU R128 style)
- adaptive gain staging
- transparent limiter
- real-time and offline rendering
The engine is designed for procedural / generative audio systems where the spectral content of the mix changes dynamically and consistent perceived loudness must be maintained automatically.
[ procedural audio layers ]
↓
[ smart mix engine ]
↓
[ LUFS normalization ]
↓
[ limiter ]
↓
[ final output ]
Automatically compensates gain based on spectral content, reducing the perceived loudness imbalance caused by frequency differences between layers.
Uses:
- spectral analysis
- frequency-dependent gain curves
- equal-loudness inspired compensation
Each audio layer has a semantic role that affects its contribution to the final mix.
Example roles:
- tonal
- ambience
- binaural
- noise
- texture
This allows the mix engine to maintain a stable balance regardless of the number or type of layers.
Implements perceptual loudness measurement based on LUFS (Loudness Units relative to Full Scale).
Pipeline includes:
- K-weighting filter
- RMS integration
- loudness estimation
- adaptive gain normalization
This keeps the final output consistent even when input layers change.
Dynamic gain staging ensures:
- no clipping
- stable output level
- predictable mix behavior
Final stage limiter protects output from peaks while preserving audio clarity.
Features:
- soft knee
- fast attack
- controlled release
- transparent peak limiting
The mixing system uses a deterministic DSP pipeline:
Audio Layers
↓
Role Weighting
↓
Frequency Compensation
↓
Loudness Measurement (LUFS)
↓
Adaptive Gain
↓
Limiter
↓
Final Mix
This design ensures predictable output regardless of procedural input parameters.
Core modules:
src/
├── frequency_compensator    spectral gain normalization
├── layer_weighting          role-based gain control
├── loudness_meter           LUFS measurement
├── limiter                  peak protection
└── mixing_engine            multi-layer audio processing
Each component is implemented as an independent DSP module and can be integrated into existing audio engines.
Example configuration for a procedural audio session:
{
  "target_lufs": -16,
  "layers": [
    {
      "role": "tonal",
      "frequency": 432
    },
    {
      "role": "ambience",
      "preset": "forest"
    },
    {
      "role": "binaural",
      "beat_frequency": 6
    }
  ]
}

The engine processes these layers automatically and produces a balanced mix.
The system is designed for:
- procedural audio engines
- generative music systems
- meditation / wellness audio
- adaptive game audio
- automated long-form audio generation
- background soundscape generation
The engine supports:
- real-time playback
- offline rendering
- long-form audio generation (30+ minutes)
Suitable for server-side rendering or standalone audio applications.
Core stack:
- C++
- real-time DSP
- modular audio processing
- JSON-based configuration
Optional integration:
- JUCE
- custom audio engines
- server-side audio pipelines
The system was designed with the following principles:
- deterministic DSP behavior
- predictable output loudness
- modular architecture
- easy integration into existing engines
- minimal external dependencies
Potential extensions:
- multi-band loudness normalization
- adaptive EQ balancing
- dynamic layer prioritization
- spectral masking compensation
- smarter ambience weighting
Requirements:
- C++26-compatible compiler (MSVC 2022, GCC 13+, Clang 16+)
- CMake 3.20+
- vcpkg package manager
- JUCE 8.0.7 (automatically installed via vcpkg)
# Clone the repository
git clone https://github.com/revitalyr/cpp-audio-dsp-smart-mixer.git
cd cpp-audio-dsp-smart-mixer
# Configure with CMake and vcpkg
cmake -S . -B build -DCMAKE_TOOLCHAIN_FILE="d:/tools/vcpkg/scripts/buildsystems/vcpkg.cmake" -DVCPKG_TARGET_TRIPLET=x64-windows -DBUILD_EXAMPLES=1
# Build
cmake --build build
# Run examples
./build/simple_demo.exe
./build/cxx26_demo.exe
./build/juce_status_demo.exe

Example usage:

#include "audio_engine.h"
#include <iostream>
#include <memory>
#include <vector>
using namespace AudioDSP;
// Create audio engine
auto engine = std::make_unique<MixingEngine>(44100, 512);
// Create audio layer
auto layer = std::make_shared<AudioLayer>("Test Layer");
layer->m_buffer = AudioBuffer(2, 2048, 44100);
layer->m_enabled = true;
// Add layer and process
engine->add_layer(layer);
std::vector<AudioLayerPtr> layers = {layer};
AudioBuffer output(2, 4096, 44100);
engine->process_audio(layers, output);
// Get statistics
auto stats = engine->get_mixing_statistics();
std::cout << "Active layers: " << stats.num_active_layers << "\n";
std::cout << "CPU usage: " << stats.cpu_usage_percent << "%\n";

MIT License
Vitaly Reshetyuk
C++ developer focused on:
- real-time systems
- audio DSP
- performance-critical software
- procedural generation