Add KittenTTS backend (Nano 15M + Mini 82M)#409

Open
Alex-Wengg wants to merge 4 commits into main from kittentts-integration

Conversation

Alex-Wengg (Member) commented Mar 21, 2026

Summary

  • Adds KittenTTS as a third TTS backend alongside Kokoro and PocketTTS
  • Supports two variants: Nano (15M params) and Mini (82M params, with speed control)
  • Reuses Kokoro G2P pipeline for phonemization — no espeak dependency
  • CoreML models auto-download from alexwengg/kittentts-coreml on first use
  • 8 voices available: expr-voice-{2,3,4,5}-{m,f}
  • 21 unit tests (tokenizer + manager)

Usage

```shell
# Nano
swift run fluidaudiocli tts "Hello world" --backend kitten-nano --voice expr-voice-3-f -o output.wav

# Mini with speed control
swift run fluidaudiocli tts "Hello world" --backend kitten-mini --speed 0.8 -o output.wav
```

Benchmarks (M2, warm cache, longer text)

| | Nano | Mini |
|---|---|---|
| Inference | 7.48s | 2.81s |
| Audio duration | 8.35s | 8.80s |
| Inference RTFx | 1.12x | 3.13x |
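The RTFx figures follow directly from audio duration divided by inference time; a quick sanity check in Swift:

```swift
// RTFx (real-time factor) = audio duration / inference time.
// Values taken from the benchmark table above.
func rtfx(audioSeconds: Double, inferenceSeconds: Double) -> Double {
    audioSeconds / inferenceSeconds
}

let nano = rtfx(audioSeconds: 8.35, inferenceSeconds: 7.48)  // ≈ 1.12
let mini = rtfx(audioSeconds: 8.80, inferenceSeconds: 2.81)  // ≈ 3.13
print(String(format: "Nano %.2fx, Mini %.2fx", nano, mini))
```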

New files

```
Sources/FluidAudio/TTS/KittenTTS/
├── KittenTTSError.swift
├── KittenTtsConstants.swift
├── KittenTtsManager.swift
└── Pipeline/
    ├── KittenTtsModelStore.swift
    └── KittenTtsSynthesizer.swift
Tests/FluidAudioTests/TTS/KittenTTS/
├── KittenTtsManagerTests.swift
└── KittenTtsTokenizerTests.swift
```

Modified files

  • ModelNames.swift — repos, filenames, voices for KittenTTS
  • TtsBackend.swift — new .kittenTts(KittenTtsVariant) case
  • TTSCommand.swift — CLI dispatch + --speed flag
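Based on the bullets above, the backend plumbing presumably looks something like this sketch (all declarations are hypothetical; the actual code in TtsBackend.swift and TTSCommand.swift may differ):

```swift
// Hypothetical sketch of the new backend case and CLI dispatch.
enum KittenTtsVariant: String, Equatable {
    case nano  // 15M params
    case mini  // 82M params, supports --speed
}

enum TtsBackend: Equatable {
    case kokoro
    case pocketTts
    case kittenTts(KittenTtsVariant)

    // Maps a --backend flag value to a backend case.
    // The "kokoro" and "pocket" flag strings are assumptions.
    static func parse(_ raw: String) -> TtsBackend? {
        switch raw {
        case "kokoro": return .kokoro
        case "pocket": return .pocketTts
        case "kitten-nano": return .kittenTts(.nano)
        case "kitten-mini": return .kittenTts(.mini)
        default: return nil
        }
    }
}
```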

Closes #49 (requested by @Josscii)



claude bot (Contributor) commented Mar 21, 2026



KittenTTS Backend PR Review ✅

I've completed a comprehensive review of this KittenTTS integration PR and submitted my feedback through GitHub's review system.

Final Assessment

✅ APPROVED - This is a high-quality implementation that adds KittenTTS as a third TTS backend alongside Kokoro and PocketTTS.

Key Strengths

  • Clean Architecture: Proper actor-based concurrency, smart reuse of Kokoro G2P pipeline
  • Strong Performance: 1.12x RTFx (Nano), 3.13x RTFx (Mini) with good benchmarks
  • Consistent API: Follows established FluidAudio patterns with speed control for Mini variant
  • Comprehensive Testing: 21 unit tests covering tokenizer and manager functionality
  • Good Documentation: Clear Swift docs with usage examples

Minor Considerations Noted

  • Logging subsystem could be more specific (com.fluidaudio.tts.kittentts)
  • Speed parameter validation could be added for Mini variant
  • Overall very clean implementation following project standards

The PR adds valuable functionality without technical debt or breaking changes. The single-shot StyleTTS2 approach with two model variants (15M Nano, 82M Mini) provides good options for different use cases.


claude bot (Contributor) left a comment:


Overall Assessment

This is a well-architected and high-quality implementation that adds KittenTTS as a third TTS backend. The code follows FluidAudio's established patterns and maintains consistency with existing TTS backends.

Strengths

🏗️ Architecture

  • Clean separation of concerns with proper actor-based concurrency
  • Smart reuse of Kokoro's G2P pipeline (no espeak dependency)
  • Efficient model selection based on token count (5s vs 10s variants)
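The token-count-based selection could be as simple as the sketch below (hedged; the filenames match the kittentts_5s/kittentts_10s .mlmodelc artifacts mentioned in this PR, but the selection function itself is an illustrative assumption):

```swift
// Illustrative model selection: pick the 5s model when the token count
// fits its capacity, else fall back to the 10s model. Capacities of
// 70 (5s) and 140 (10s) tokens come from the review discussion in this PR.
func modelFile(forTokenCount count: Int) -> String {
    count <= 70 ? "kittentts_5s.mlmodelc" : "kittentts_10s.mlmodelc"
}
```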

🚀 Performance

  • Strong benchmarks: 1.12x RTFx (Nano), 3.13x RTFx (Mini)
  • CPU+GPU configuration avoids ANE float16 precision artifacts
  • Single-shot synthesis for low latency

🎛️ API Design

  • Consistent interface matching Kokoro/PocketTTS patterns
  • Speed control for Mini variant (nice feature addition)
  • Comprehensive error handling with descriptive messages

Code Quality

  • Proper Swift documentation with examples
  • 21 unit tests covering tokenizer and manager
  • Follows project's coding standards and patterns

Minor Considerations

Logging Subsystem:

KittenTtsModelStore.swift:10 - Consider using com.fluidaudio.tts.kittentts instead of com.fluidaudio.tts for better log filtering

Error Handling:

Speed parameter validation could be added for Mini variant to prevent extreme values
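One way such validation could look — a minimal sketch, where the function name and the 0.5...2.0 range are illustrative assumptions, not values from the PR:

```swift
// Illustrative speed validation for the Mini variant: clamp the requested
// speed into a sane range instead of passing extreme values to the model.
// The 0.5...2.0 range is an assumption, not taken from the PR.
func clampedSpeed(_ requested: Double, range: ClosedRange<Double> = 0.5...2.0) -> Double {
    min(max(requested, range.lowerBound), range.upperBound)
}
```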

CLI Integration:

The --speed flag is nicely integrated but only applies to Mini - this is clearly documented

Recommendation

✅ APPROVE - This PR is ready to merge. The implementation is solid, well-tested, and follows all established patterns. It adds valuable functionality without introducing technical debt or breaking changes.

devin-ai-integration bot (Contributor) left a comment:


Devin Review found 1 potential issue.

View 5 additional findings in Devin Review.


```swift
wordToPhonemes: lexicons.word,
caseSensitiveLexicon: lexicons.caseSensitive,
customLexicon: nil,
targetTokens: 500,
```
devin-ai-integration bot commented Mar 21, 2026:


🔴 Long text silently truncated: phonemize flattens all chunks but inference drops tokens beyond maxTokens

The phonemize function at Sources/FluidAudio/TTS/KittenTTS/Pipeline/KittenTtsSynthesizer.swift:193-210 uses KokoroChunker.chunk with targetTokens: 70 to split text into chunks, then flattens ALL chunk phonemes into a single [String] array. For any text that produces multiple chunks (e.g., a paragraph), the flattened phoneme count can easily exceed 140 tokens. However, the inference functions runNanoInference and runMiniInference allocate a fixed-size input of n = maxTokens (70 or 140) and silently drop all tokens beyond that limit via inputIdsPtr[i] = i < tokenIds.count ? tokenIds[i] : padTokenId (lines 236, 303). This means the user gets truncated audio with no error or warning. By contrast, Kokoro's synthesizer (KokoroSynthesizer.swift:499-508) correctly synthesizes each chunk independently and concatenates the results. KittenTTS should either synthesize each chunk separately and concatenate, or reject/warn when the token count exceeds the model's capacity.



github-actions bot commented Mar 21, 2026

Qwen3-ASR int8 Smoke Test ✅

| Check | Result |
|---|---|
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Transcription pipeline | ✅ |
| Decoder size | 571 MB (vs 1.1 GB f32) |

Runtime: 3m8s

Note: CI VM lacks physical GPU — CoreML MLState (macOS 15) KV cache produces degraded results on virtualized runners. On Apple Silicon: ~1.3% WER / 2.5x RTFx.


github-actions bot commented Mar 21, 2026

VAD Benchmark Results

Performance Comparison

| Dataset | Accuracy | Precision | Recall | F1-Score | RTFx | Files |
|---|---|---|---|---|---|---|
| MUSAN | 92.0% | 86.2% | 100.0% | 92.6% | 669.5x faster | 50 |
| VOiCES | 92.0% | 86.2% | 100.0% | 92.6% | 682.6x faster | 50 |

Dataset Details

  • MUSAN: Music, Speech, and Noise dataset - standard VAD evaluation
  • VOiCES: Voices Obscured in Complex Environmental Settings - tests robustness in real-world conditions

✅: Average F1-Score above 70%


github-actions bot commented Mar 21, 2026

Sortformer High-Latency Benchmark Results

ES2004a Performance (30.4s latency config)

| Metric | Value | Target | Status |
|---|---|---|---|
| DER | 33.4% | <35% | |
| Miss Rate | 24.4% | - | - |
| False Alarm | 0.2% | - | - |
| Speaker Error | 8.8% | - | - |
| RTFx | 12.4x | >1.0x | |
| Speakers | 4/4 | - | - |

Sortformer High-Latency • ES2004a • Runtime: 5m 9s • 2026-03-22T17:10:17.191Z


github-actions bot commented Mar 21, 2026

Offline VBx Pipeline Results

Speaker Diarization Performance (VBx Batch Mode)

Optimal clustering with Hungarian algorithm for maximum accuracy

| Metric | Value | Target | Status | Description |
|---|---|---|---|---|
| DER | 14.5% | <20% | | Diarization Error Rate (lower is better) |
| RTFx | 4.82x | >1.0x | | Real-Time Factor (higher is faster) |

Offline VBx Pipeline Timing Breakdown

Time spent in each stage of batch diarization

| Stage | Time (s) | % | Description |
|---|---|---|---|
| Model Download | 9.688 | 4.5 | Fetching diarization models |
| Model Compile | 4.152 | 1.9 | CoreML compilation |
| Audio Load | 0.033 | 0.0 | Loading audio file |
| Segmentation | 21.249 | 9.8 | VAD + speech detection |
| Embedding | 216.490 | 99.5 | Speaker embedding extraction |
| Clustering (VBx) | 0.883 | 0.4 | Hungarian algorithm + VBx clustering |
| Total | 217.598 | 100 | Full VBx pipeline |

Speaker Diarization Research Comparison

Offline VBx achieves competitive accuracy with batch processing

Method DER Mode Description
FluidAudio (Offline) 14.5% VBx Batch On-device CoreML with optimal clustering
FluidAudio (Streaming) 17.7% Chunk-based First-occurrence speaker mapping
Research baseline 18-30% Various Standard dataset performance

Pipeline Details:

  • Mode: Offline VBx with Hungarian algorithm for optimal speaker-to-cluster assignment
  • Segmentation: VAD-based voice activity detection
  • Embeddings: WeSpeaker-compatible speaker embeddings
  • Clustering: PowerSet with VBx refinement
  • Accuracy: Higher than streaming due to optimal post-hoc mapping

🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 238.6s processing • Test runtime: 4m 9s • 03/22/2026, 01:07 PM EST


github-actions bot commented Mar 21, 2026

PocketTTS Smoke Test ✅

| Check | Result |
|---|---|
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Synthesis pipeline | ✅ |
| Output WAV | ✅ (180.0 KB) |

Runtime: 0m24s

Note: PocketTTS uses CoreML MLState (macOS 15) KV cache + Mimi streaming state. CI VM lacks physical GPU — audio quality may differ from Apple Silicon.


github-actions bot commented Mar 21, 2026

Parakeet EOU Benchmark Results ✅

Status: Benchmark passed
Chunk Size: 320ms
Files Tested: 100/100

Performance Metrics

| Metric | Value | Description |
|---|---|---|
| WER (Avg) | 0.00% | Average Word Error Rate |
| WER (Med) | 0.00% | Median Word Error Rate |
| RTFx | 0.00x | Real-time factor (higher = faster) |
| Total Audio | 0.0s | Total audio duration processed |
| Total Time | 0.0s | Total processing time |

Streaming Metrics

| Metric | Value | Description |
|---|---|---|
| Avg Chunk Time | 0.000s | Average chunk processing time |
| Max Chunk Time | 0.000s | Maximum chunk processing time |
| EOU Detections | 0 | Total End-of-Utterance detections |

Test runtime: 0m17s • 03/22/2026, 12:56 PM EST

RTFx = Real-Time Factor (higher is better) • Processing includes: Model inference, audio preprocessing, state management, and file I/O


github-actions bot commented Mar 21, 2026

Speaker Diarization Benchmark Results

Speaker Diarization Performance

Evaluating "who spoke when" detection accuracy

| Metric | Value | Target | Status | Description |
|---|---|---|---|---|
| DER | 15.1% | <30% | | Diarization Error Rate (lower is better) |
| JER | 24.9% | <25% | | Jaccard Error Rate |
| RTFx | 24.52x | >1.0x | | Real-Time Factor (higher is faster) |

Diarization Pipeline Timing Breakdown

Time spent in each stage of speaker diarization

| Stage | Time (s) | % | Description |
|---|---|---|---|
| Model Download | 9.374 | 21.9 | Fetching diarization models |
| Model Compile | 4.017 | 9.4 | CoreML compilation |
| Audio Load | 0.047 | 0.1 | Loading audio file |
| Segmentation | 12.834 | 30.0 | Detecting speech regions |
| Embedding | 21.390 | 50.0 | Extracting speaker voices |
| Clustering | 8.556 | 20.0 | Grouping same speakers |
| Total | 42.796 | 100 | Full pipeline |

Speaker Diarization Research Comparison

Research baselines typically achieve 18-30% DER on standard datasets

| Method | DER | Notes |
|---|---|---|
| FluidAudio | 15.1% | On-device CoreML |
| Research baseline | 18-30% | Standard dataset performance |

Note: RTFx shown above is from GitHub Actions runner. On Apple Silicon with ANE:

  • M2 MacBook Air (2022): runs at ~150x RTFx
  • Performance scales with Apple Neural Engine capabilities

🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 42.8s diarization time • Test runtime: 5m 33s • 03/22/2026, 01:10 PM EST


github-actions bot commented Mar 21, 2026

ASR Benchmark Results ✅

Status: All benchmarks passed

Parakeet v3 (multilingual)

| Dataset | WER Avg | WER Med | RTFx | Status |
|---|---|---|---|---|
| test-clean | 0.57% | 0.00% | 5.90x | ✅ |
| test-other | 1.19% | 0.00% | 3.90x | ✅ |

Parakeet v2 (English-optimized)

| Dataset | WER Avg | WER Med | RTFx | Status |
|---|---|---|---|---|
| test-clean | 0.80% | 0.00% | 5.99x | ✅ |
| test-other | 1.40% | 0.00% | 3.72x | ✅ |

Streaming (v3)

| Metric | Value | Description |
|---|---|---|
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.69x | Streaming real-time factor |
| Avg Chunk Time | 1.309s | Average time to process each chunk |
| Max Chunk Time | 1.391s | Maximum chunk processing time |
| First Token | 1.581s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming (v2)

| Metric | Value | Description |
|---|---|---|
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.61x | Streaming real-time factor |
| Avg Chunk Time | 1.419s | Average time to process each chunk |
| Max Chunk Time | 1.577s | Maximum chunk processing time |
| First Token | 1.388s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming

25 files per dataset • Test runtime: 6m19s • 03/22/2026, 01:00 PM EST

RTFx = Real-Time Factor (higher is better) • Calculated as: Total audio duration ÷ Total processing time
Processing time includes: Model inference on Apple Neural Engine, audio preprocessing, state resets between files, token-to-text conversion, and file I/O
Example: RTFx of 2.0x means 10 seconds of audio processed in 5 seconds (2x faster than real-time)

Expected RTFx Performance on Physical M1 Hardware:

• M1 Mac: ~28x (clean), ~25x (other)
• CI shows ~0.5-3x due to virtualization limitations

Testing methodology follows HuggingFace Open ASR Leaderboard


Josscii commented Mar 22, 2026

Hi, I tested it, and it failed with the following:

```
Building for debugging...
[1/1] Write swift-version--3CB7CFEC50E0D141.txt
Build of product 'fluidaudiocli' complete! (0.24s)
[23:20:22.052] [INFO] [FluidAudio.Main] Host environment: macOS Version 26.3.1 (a) (Build 25D771280a), arch=arm64, chip=Apple M2 Pro, cores=10/10, mem=32 GB, rosetta=false
[23:20:22.053] [INFO] [FluidAudio.KittenTtsModelStore] KittenTTS nano models found in cache
[23:20:22.175] [INFO] [FluidAudio.KittenTtsModelStore] Loaded kittentts_5s.mlmodelc
[23:20:22.287] [INFO] [FluidAudio.KittenTtsModelStore] Loaded kittentts_10s.mlmodelc
[23:20:22.287] [INFO] [FluidAudio.KittenTtsModelStore] KittenTTS nano models loaded in 0.23s
[23:20:22.287] [NOTICE] [FluidAudio.KittenTtsManager] KittenTtsManager initialized
[23:20:22.287] [INFO] [FluidAudio.KittenTtsSynthesizer] KittenTTS synthesizing: 'Hello world'
[23:20:22.288] [INFO] [FluidAudio.KokoroVocabulary] Loaded 114 vocabulary entries from integer map
[23:20:22.291] [ERROR] [FluidAudio.TTSCommand] KittenTTS synthesis failed: processingFailed("Missing lexicon cache (expected us_lexicon_cache.json)")
[23:20:22.291] [INFO] [FluidAudio.Main] Peak memory usage (process-wide): 0.205 GB
```

Alex-Wengg added a commit that referenced this pull request Mar 22, 2026
KittenTTS reuses Kokoro's G2P pipeline for phonemization, which requires
us_lexicon_cache.json. The loadSimplePhonemeDictionary() method was
attempting to load the cache without first downloading it, causing a
"Missing lexicon cache" error on first use.

Changes:
- Add TtsResourceDownloader.ensureLexiconFile() call before loading cache
- Auto-downloads us_lexicon_cache.json from HuggingFace on first use
- Add kitten-tts-test.yml workflow to verify both Nano/Mini variants

Fixes issue reported by @Josscii in PR #409 comment
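The ensure-before-load pattern from this fix can be sketched generically with FileManager (the download closure stands in for TtsResourceDownloader.ensureLexiconFile(); all names here are illustrative, not the actual FluidAudio API):

```swift
import Foundation

// Generic "ensure, then load" helper modeled on the fix: download the
// resource only when it is not already cached, then load it from disk.
func ensureCachedFile(at url: URL, download: (URL) throws -> Void) throws -> Data {
    if !FileManager.default.fileExists(atPath: url.path) {
        try download(url)  // first run: fetch (e.g. us_lexicon_cache.json)
    }
    return try Data(contentsOf: url)  // subsequent runs: cache hit
}
```

On second and later calls the downloader is never invoked, which is what the fix restores for the lexicon cache.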
Alex-Wengg force-pushed the kittentts-integration branch from a700ec3 to 9ca3c3f on March 22, 2026 at 15:59


github-actions bot commented Mar 22, 2026

KittenTTS Smoke Test

Test Results

| Variant | Status | Output Size |
|---|---|---|
| Nano (15M) | | 55.1 KB |
| Mini (82M) | | 178.2 KB |

Dependencies

| Component | Status | Size |
|---|---|---|
| Build | | - |
| Lexicon cache (us_lexicon_cache.json) | | 10.0 MB |
| Kokoro G2P pipeline | | - |

Note: KittenTTS reuses Kokoro's G2P pipeline for phonemization. This test verifies the lexicon cache auto-downloads correctly and both Nano/Mini variants can synthesize audio.

Alex-Wengg force-pushed the kittentts-integration branch from c4806b0 to 9ca3c3f on March 22, 2026 at 16:09
Alex-Wengg added a commit that referenced this pull request Mar 22, 2026
## Summary

Fixes CI failure in `test-tts` workflow caused by missing `source_noise`
input after PR #411 merged.

PR #411 (Kokoro ANE optimization) updated the Kokoro CoreML models to
fp16, which introduced a new required input `source_noise` that the
inference code wasn't providing.

## Changes

- Add `source_noise` tensor [1, sampleRate*duration, 9] with random
Float16 values
- Update both synthesis pipeline and warm-up prediction  
- Size adapts to model variant: 5s (120k samples) or 15s (360k samples)
- Use multiarray pooling for memory efficiency

## Error Fixed

```
Feature source_noise is required but not specified.
```

## Test Plan

- [x] Cherry-picked from commit c8a5056 (originally on
feature/qwen3-tts-coreml)
- [ ] CI `test-tts` workflow should pass
- [ ] Verify Kokoro TTS synthesis completes successfully

Fixes the CI failure blocking PR #409 and other PRs.
Alex-Wengg added a commit that referenced this pull request Mar 22, 2026
KittenTTS reuses Kokoro's G2P pipeline for phonemization, which requires
us_lexicon_cache.json. The loadSimplePhonemeDictionary() method was
attempting to load the cache without first downloading it, causing a
"Missing lexicon cache" error on first use.

Changes:
- Add TtsResourceDownloader.ensureLexiconFile() call before loading cache
- Auto-downloads us_lexicon_cache.json from HuggingFace on first use
- Add kitten-tts-test.yml workflow to verify both Nano/Mini variants

Fixes issue reported by @Josscii in PR #409 comment
Alex-Wengg force-pushed the kittentts-integration branch from 9ca3c3f to 817ed87 on March 22, 2026 at 16:11


Josscii commented Mar 22, 2026

Another issue I found is that some simple words are spoken weirdly. For example, in the test phrase "Hello world", the "Hello" is wrong. Is this a model issue?

Alex-Wengg force-pushed the kittentts-integration branch from f6c4521 to 817ed87 on March 22, 2026 at 16:33
Alex-Wengg (Member, Author) commented:

> Another issue I found is that some simple words are spoken weirdly. For example, in the test phrase "Hello world", the "Hello" is wrong. Is this a model issue?

Could you share the WAV file for examination, and tell me which model variant you used? Mini has more parameters than Nano.

Integrates KittenTTS as a third TTS backend alongside Kokoro and PocketTTS.
Reuses Kokoro G2P for phonemization. CoreML models auto-download from
alexwengg/kittentts-coreml on first use.

Closes #49 (KittenTTS request from @Josscii)
KittenTTS reuses Kokoro's G2P pipeline for phonemization, which requires
us_lexicon_cache.json. The loadSimplePhonemeDictionary() method was
attempting to load the cache without first downloading it, causing a
"Missing lexicon cache" error on first use.

Changes:
- Add TtsResourceDownloader.ensureLexiconFile() call before loading cache
- Auto-downloads us_lexicon_cache.json from HuggingFace on first use
- Add kitten-tts-test.yml workflow to verify both Nano/Mini variants

Fixes issue reported by @Josscii in PR #409 comment
Add 'kitten' backend option that defaults to Mini (82M params) instead
of requiring explicit 'kitten-mini' flag. Users can still use
'kitten-nano' for the smaller 15M model.

Rationale:
- Mini has better quality (3.13x RTF vs 1.12x for Nano)
- Mini supports speed control, Nano does not
- 82M is still relatively small and runs well on Apple Silicon

Changes:
- Add 'kitten' and 'kittentts' backend options → .kittenTts(.mini)
- Update help text to show 'kitten (Mini 82M)' option
- KittenTtsManager already defaults to .mini in its initializer
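The alias mapping described in this commit, sketched as a standalone function (flag strings follow the commit message above; everything else is an assumption):

```swift
enum KittenVariant: Equatable { case nano, mini }

// 'kitten' and 'kittentts' default to Mini; 'kitten-nano' selects Nano.
func kittenVariant(forBackendFlag flag: String) -> KittenVariant? {
    switch flag {
    case "kitten", "kittentts", "kitten-mini": return .mini
    case "kitten-nano": return .nano
    default: return nil
    }
}
```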
Alex-Wengg force-pushed the kittentts-integration branch from 817ed87 to 762a3de on March 22, 2026 at 16:40

Josscii commented Mar 22, 2026

Fixes four issues identified in PR #409 review:

1. Token truncation: Reduce targetTokens from 500 to 70
   - KittenTTS models support max 70 tokens (5s) or 140 tokens (10s)
   - Using 500 caused silent audio cutoff for longer inputs
   - Now uses conservative 70 token limit to fit all variants

2. Missing exit code: Add exit(1) on synthesis failure
   - runKittenTts() was logging errors but not exiting
   - CI smoke tests were reporting PASSED even on failures
   - Now properly exits with code 1 on error

3. Cache path mismatch: Fix CI workflow cache path
   - Workflow specified 'kittentts' but models store under 'kittentts-coreml'
   - Prevented effective caching across CI runs
   - Updated to correct path: ~/.cache/fluidaudio/Models/kittentts-coreml

4. Code style: Replace nested if-statements with guard
   - tokenize() used nested if-statements violating project guidelines
   - Replaced with early-exit guard statements per style guide
   - Cleaner control flow, consistent with codebase patterns

Addresses feedback from Devin review comment #4106567814
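The guard-based early-exit style referenced in item 4 looks roughly like this (a toy tokenizer over a plain dictionary; the real tokenize() maps phonemes through the Kokoro vocabulary):

```swift
// Early-exit guards instead of nested ifs, per the project style guide.
// Returns nil for empty input or any phoneme missing from the vocabulary.
func tokenize(_ phonemes: [String], vocabulary: [String: Int]) -> [Int]? {
    guard !phonemes.isEmpty else { return nil }
    var ids: [Int] = []
    for phoneme in phonemes {
        guard let id = vocabulary[phoneme] else { return nil }  // unknown symbol
        ids.append(id)
    }
    return ids
}
```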