Add non-record 10min/16MB submission: Wavelet-Lite PR549 Parallel Muon (1.1483) #680
Conversation
Community Review — Add non-record 10min/16MB submission: Wavelet-Lite PR549 Parallel Muon (1.1483)

**BPB:** 1.1483 | **Compliance:** LOOKS CLEAN — score-first-per-chunk TTT (legal #1416/#1423 pattern)

**What I found in the code** (head SHA): the TTT path at line 1106 implements the score-first-per-chunk pattern: each chunk is scored before the adapter updates on it. Per Issue #402 and Issue #677, TTT is legal when each token is scored before the adapter updates on it, and that is what the code does here.

**CPU smoke test** (CT2038 proteus-engine, 2026-04-11): import OK in 0.05s, dim=512, layers=11, vocab=1024, code=91471 B, SMOKE_TEST_PASS.

**Verdict:** LOOKS CLEAN. Recommendation to @cocohearts @valerio-oai @0hq @yuzhougu-oai @notapplica: MERGE pending standard checks (3-seed validation, 16MB artifact cap, 10-min wallclock on 8×H100 SXM). The compliance picture matches the legal reference frontier and no flags were raised by the classification pass.

**Auto-classification caveat:** this review was drafted by the deterministic AST-based classifier against a template derived from manually-reviewed cluster PRs (#1420, #1450, #1487, #1541, #1529, #1533, #1518). If I've misread a subtlety in your eval path — e.g., multi-epoch TTT that I mistook for single-pass, or a target-in-key lookup I missed in a helper function — please flag it and I'll re-run the audit manually.

Reviewed by @MatoTeziTanka — The Agora.
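For readers unfamiliar with the pattern the review checks for, here is a minimal, hypothetical sketch of a score-first-per-chunk TTT eval loop. It is not the submission's actual code; `score_chunk` and `update_adapter` are placeholder callables standing in for the real model and adapter.

```python
import math

def ttt_eval_bpb(score_chunk, update_adapter, tokens, chunk_size=512):
    """Score-first-per-chunk test-time-training (TTT) eval loop.

    Sketch of the pattern the review calls legal: every chunk is
    scored under the current weights BEFORE the adapter updates on
    it, so no token's loss benefits from having been trained on.

    score_chunk(chunk)    -> summed negative log-likelihood in nats
    update_adapter(chunk) -> trains the adapter on the chunk (in place)
    Both callables are placeholders for the submission's real model.
    """
    total_nll = 0.0
    total_tokens = 0
    for start in range(0, len(tokens), chunk_size):
        chunk = tokens[start : start + chunk_size]
        total_nll += score_chunk(chunk)   # 1) score first (frozen weights)
        total_tokens += len(chunk)
        update_adapter(chunk)             # 2) only now update on this chunk
    return total_nll / (total_tokens * math.log(2))  # bits per byte
```

As a sanity check, a uniform byte model (NLL of ln 256 nats per token) comes out at exactly 8.0 bits per byte regardless of chunking.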
Summary
This PR adds a non-record `track_10min_16mb` submission under:

`records/track_10min_16mb/2026-03-24_WaveletLite_PR549_ParallelMuon/`

The submission is a PR #549-derived Parallel Muon stack with one architectural change: a tiny causal wavelet-lite mixer inside each residual block.
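The mixer itself is not shown in this PR description, so the following is a rough illustration only: a "tiny causal wavelet-lite mixer" could look like a Haar-style two-tap filter that mixes each position with its predecessor. The function name, the `detail_gain` knob, and the exact filter are all assumptions here, not the submission's implementation.

```python
def causal_wavelet_lite_mix(x, detail_gain=0.5):
    """Hypothetical causal 'wavelet-lite' mixer over a 1-D sequence.

    Computes Haar-style average/difference coefficients of each
    element and its PREDECESSOR only (never a future position, so
    the mix is causal), then recombines with the detail channel
    scaled by `detail_gain`. With detail_gain=1.0 this is the identity.
    """
    out, prev = [], 0.0        # zero-pad before the sequence start
    for cur in x:
        avg  = 0.5 * (cur + prev)   # low-pass / approximation coefficient
        diff = 0.5 * (cur - prev)   # high-pass / detail coefficient
        out.append(avg + detail_gain * diff)
        prev = cur
    return out
```

Because each output depends only on positions at or before the current one, a mixer of this shape would preserve the causal mask that autoregressive training requires.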
Final result
- `val_bpb = 1.14825550`
- artifact size: 15,859,711 bytes (140,289 bytes of headroom)
- 90.24 ms/step
- current SOTA for reference: `val_bpb = 1.1400`

Why submit as non-record
This does not beat the current SOTA, so it is intentionally submitted as a non-record run under the standard 10min/16MB track.
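A quick arithmetic check on the budget: the reported headroom figure is consistent with reading the "16MB" cap as exactly 16,000,000 bytes. That reading is an assumption; under a 16 MiB (16,777,216-byte) interpretation the headroom would be larger.

```python
CAP_BYTES      = 16_000_000   # assumes "16MB" = 16 * 10**6 bytes, not 16 MiB
ARTIFACT_BYTES = 15_859_711   # final artifact size reported above

headroom = CAP_BYTES - ARTIFACT_BYTES
assert ARTIFACT_BYTES <= CAP_BYTES   # the submission fits under the cap
print(headroom)                      # 140289 bytes of headroom
```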
Why it is not duplicate work
Closest prior work is PR #549, but this submission adds a new in-block causal wavelet mixer, removes TTT from the final run, and trims the bigram table to fit the byte budget.
Additional nearby prior work addressed in the README:
- WaveletWeightedWidenet
- Attention-Residuals
- 11L U-Net + Catalytic + SwiGLU + SW64
- Basis Block Interpolation

Included files
Per the repo submission rules, this PR only adds a new folder with:
- `README.md`
- `submission.json`
- `train_gpt.py`
- `final_model.int6.ptz`
- `results.tsv`
- `snapshot`

Notes