MoE-GS: Mixture of Experts for Dynamic Gaussian Splatting

ICLR 2026

arXiv Project Page

In-Hwan Jin1* · Hyeongju Mun1* · Joonsoo Kim2 · Kugjin Yun2 · Kyeongbo Kong1†
1 Pusan National University 2 Electronics and Telecommunications Research Institute
* Equal contribution     † Corresponding author



Summary: Unified Mixture-of-Experts framework for dynamic Gaussian Splatting with a volume-aware pixel router for adaptive expert blending.

Contents

  1. Setup
  2. Preprocess Datasets
  3. Stage 1: Expert Training
  4. Stage 2: Router Training
  5. BibTeX



Setup

Environment Setup

Installation through pip is recommended. First, set up your Python environment:

conda create -n MoE-GS python=3.9
conda activate MoE-GS

Make sure to install CUDA and PyTorch versions that match your CUDA environment. We have tested on an NVIDIA RTX A6000 with PyTorch 2.0.1. Please refer to https://pytorch.org/ for further information.

pip install torch
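
For example, to match the tested PyTorch 2.0.1 on a machine with CUDA 11.8 (the CUDA version here is an assumption; choose the wheel index that matches your own setup):

pip install torch==2.0.1 --index-url https://download.pytorch.org/whl/cu118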

The remaining packages can be installed with:

pip install --upgrade setuptools cython wheel
pip install -r requirements.txt
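
As an optional sanity check, you can confirm that the installed PyTorch build sees your GPU:

python -c "import torch; print(torch.__version__, torch.cuda.is_available())"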



Preprocess Datasets

For dataset preprocessing, we follow STG for both the N3V and Technicolor datasets.

Neural 3D Video Dataset

First, download the dataset from here. You will need a COLMAP environment for preprocessing. To set up the dataset preprocessing environment, run the script:

./scripts/env_setup.sh

To preprocess the dataset, run the script:

./scripts/preprocess_all_n3v.sh <path to dataset>
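
For example, if the dataset was downloaded to a hypothetical /data/N3V directory:

./scripts/preprocess_all_n3v.sh /data/N3V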

Technicolor Dataset

Download the dataset from here. To preprocess the dataset, run the script:

./scripts/preprocess_all_techni.sh <path to dataset>
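
For example, with a hypothetical /data/Technicolor download location:

./scripts/preprocess_all_techni.sh /data/Technicolor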

Please refer to STG for further information.



Stage 1: Expert Training

We use Ex4DGS, E-D3DGS, 4DGaussians, and STG as candidate experts for MoE-GS.
All experts are pretrained using their original configurations, except for STG.
For STG on the N3V dataset, we split the frames into 0–149 and 150–299 to fit within GPU memory limits,
and modify the feature splatting process to use spherical harmonics (SHs) with the Ex4DGS rasterizer.

STG Model Directory Structure

The pretrained STG models are organized as follows:

<Path to STG model>
|---<scene>/
|   |---<scene>_0to149/
|   |---<scene>_150to299/
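
For instance, for the cook_spinach scene of N3V, with a hypothetical /models/stg root, the layout would be:

/models/stg
|---cook_spinach/
|   |---cook_spinach_0to149/
|   |---cook_spinach_150to299/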

Stage 2: Router Training

N3V Dataset

You can train MoE-GS (n=3, 4) by running the following commands:

python train_E4.py --config "configs/N3V/<scene>.json" \
    --source_path <location>/<scene> \
    --model_path <path to Ex4DGS model>/<scene> \
    --emb_path <path to E-D3DGS model>/<scene> \
    --stg_path <path to STG model>/<scene> \
    --fgaussian_path <path to 4DGaussians model>/<scene> \
    --save_path <path to save model>

python train_E3.py --config "configs/N3V/<scene>.json" \
    --source_path <location>/<scene> \
    --model_path <path to Ex4DGS model>/<scene> \
    --emb_path <path to E-D3DGS model>/<scene> \
    --fgaussian_path <path to 4DGaussians model>/<scene> \
    --save_path <path to save model>
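
For example, the four-expert variant could be trained on the cook_spinach scene as follows; every path below, as well as the per-scene config name, is an illustrative placeholder:

python train_E4.py --config "configs/N3V/cook_spinach.json" \
    --source_path /data/N3V/cook_spinach \
    --model_path /models/ex4dgs/cook_spinach \
    --emb_path /models/ed3dgs/cook_spinach \
    --stg_path /models/stg/cook_spinach \
    --fgaussian_path /models/4dgaussians/cook_spinach \
    --save_path /outputs/moegs_e4/cook_spinach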

You can render MoE-GS (n=3, 4) by running the following commands:


python render_E4.py --skip_train \
    --source_path <location>/<scene> \
    --save_path <path to save model> \
    --iteration <2000|5000>

python render_E3.py --skip_train \
    --source_path <location>/<scene> \
    --save_path <path to save model> \
    --iteration <2000|5000>
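
For example, to render the four-expert model trained above at iteration 5000 (paths are again illustrative placeholders):

python render_E4.py --skip_train \
    --source_path /data/N3V/cook_spinach \
    --save_path /outputs/moegs_e4/cook_spinach \
    --iteration 5000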

Technicolor Dataset

You can train MoE-GS (n=3) by running the following command:


python train_E3_tech.py --config "configs/techni/<scene>.json" \
    --source_path <location>/<scene> \
    --model_path <path to Ex4DGS model>/<scene> \
    --emb_path <path to E-D3DGS model>/<scene> \
    --stg_path <path to STG model>/<scene> \
    --save_path <path to save model>
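
For example, for the Birthday scene (all paths and the config name are illustrative placeholders):

python train_E3_tech.py --config "configs/techni/Birthday.json" \
    --source_path /data/Technicolor/Birthday \
    --model_path /models/ex4dgs/Birthday \
    --emb_path /models/ed3dgs/Birthday \
    --stg_path /models/stg/Birthday \
    --save_path /outputs/moegs_e3_tech/Birthday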

You can render MoE-GS (n=3) by running the following command:

python render_E3_tech.py --skip_train \
    --source_path <location>/<scene> \
    --save_path <path to save model> \
    --iteration <2000|5000>

Options (see the example below):

  • --y_offset: Vertical offset for viewpoint shifting (used for N3V).
  • --focal_mm: Camera focal length in millimeters (used for Technicolor).
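
For instance, --focal_mm could be appended to the Technicolor render command as sketched below; the paths and the 35 mm value are illustrative, and --y_offset is added to the N3V render commands in the same way:

python render_E3_tech.py --skip_train \
    --source_path /data/Technicolor/Birthday \
    --save_path /outputs/moegs_e3_tech/Birthday \
    --iteration 5000 \
    --focal_mm 35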

BibTeX

@inproceedings{jinmoe,
    title={MoE-{GS}: Mixture of Experts for Dynamic Gaussian Splatting},
    author={In-Hwan Jin and Hyeongju Mun and Joonsoo Kim and Kugjin Yun and Kyeongbo Kong},
    booktitle={The Fourteenth International Conference on Learning Representations},
    year={2026},
}
