In-Hwan Jin1* · Hyeongju Mun1* · Joonsoo Kim2 · Kugjin Yun2 · Kyeongbo Kong1†
1 Pusan National University  2 Electronics and Telecommunications Research Institute
* Equal contribution  † Corresponding author
Summary: Unified Mixture-of-Experts framework for dynamic Gaussian Splatting with a volume-aware pixel router for adaptive expert blending.
Installation through pip is recommended. First, set up your Python environment:
conda create -n MoE-GS python=3.9
conda activate MoE-GS

Make sure to install CUDA and PyTorch versions that match your CUDA environment. We've tested on an NVIDIA RTX A6000 with PyTorch 2.0.1. Please refer to https://pytorch.org/ for further information.

pip install torch

The remaining packages can be installed with:

pip install --upgrade setuptools cython wheel
pip install -r requirements.txt

For dataset preprocessing, we follow STG for both the N3V and Technicolor datasets.
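Before preprocessing, it can help to sanity-check the environment. This is only a sketch (it assumes nothing beyond the Python standard library) that reports the interpreter version and whether torch is importable:

```shell
# Report the Python version and whether torch can be imported,
# without failing when torch is missing.
env_info=$(python3 - <<'EOF'
import sys, importlib.util
print("python: %d.%d" % sys.version_info[:2])   # this repo was set up with 3.9
print("torch installed:", importlib.util.find_spec("torch") is not None)
EOF
)
echo "$env_info"
```

If torch reports as not installed, or the CUDA build does not match your driver, revisit the PyTorch installation step above.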
First, download the dataset from here. You will need a COLMAP environment for preprocessing. To set up the dataset preprocessing environment, run the script:

./scripts/env_setup.sh

To preprocess the dataset, run the script:

./scripts/preprocess_all_n3v.sh <path to dataset>

For Technicolor, download the dataset from here. To preprocess the dataset, run the script:

./scripts/preprocess_all_techni.sh <path to dataset>

Please refer to STG for further information.
We use Ex4DGS, E-D3DGS, 4DGaussians, and STG as candidate experts for MoE-GS.
All experts are pretrained with their original configurations, except for STG.
For STG on the N3V dataset, we split the frames into ranges 0–149 and 150–299 to stay within GPU memory limits,
and modify the feature splatting process to use spherical harmonics (SHs) with the Ex4DGS rasterizer.
The pretrained STG models are organized as follows:
<Path to STG model>
|---<scene>/
| |---<scene>_0to149/
| |---<scene>_150to299/
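For a concrete picture of the layout above, here is a sketch that builds it for one scene. The model root and the scene name `cook_spinach` are placeholders for illustration; substitute your own paths:

```shell
# Build the expected pretrained-STG directory layout for one example scene.
STG_ROOT=./pretrained_stg   # placeholder model root
SCENE=cook_spinach          # placeholder scene name
mkdir -p "$STG_ROOT/$SCENE/${SCENE}_0to149"
mkdir -p "$STG_ROOT/$SCENE/${SCENE}_150to299"
ls "$STG_ROOT/$SCENE"
```

Each `<scene>_0to149` / `<scene>_150to299` folder holds the STG checkpoint trained on that frame range.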
You can train MoE-GS (n=4) and MoE-GS (n=3) by running the following commands, respectively:
python train_E4.py --config "configs/N3V/<scene>.json" \
--source_path <location>/<scene> \
--model_path <path to Ex4DGS model>/<scene> \
--emb_path <path to E-D3DGS model>/<scene> \
--stg_path <path to STG model>/<scene> \
--fgaussian_path <path to 4DGaussians model>/<scene> \
--save_path <path to save model>
python train_E3.py --config "configs/N3V/<scene>.json" \
--source_path <location>/<scene> \
--model_path <path to Ex4DGS model>/<scene> \
--emb_path <path to E-D3DGS model>/<scene> \
--fgaussian_path <path to 4DGaussians model>/<scene> \
--save_path <path to save model>
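Since the expert and dataset paths only differ by scene, training all scenes can be scripted. Below is a dry-run sketch that just prints one `train_E4.py` invocation per scene; the scene names and directory layout are placeholders, not the official N3V list:

```shell
# Dry run: print a train_E4.py command per scene (placeholder scenes/paths).
train_cmds=$(
  for scene in coffee_martini cook_spinach flame_steak; do
    echo "python train_E4.py --config configs/N3V/${scene}.json" \
         "--source_path ./data/N3V/${scene}" \
         "--model_path ./experts/ex4dgs/${scene}" \
         "--emb_path ./experts/ed3dgs/${scene}" \
         "--stg_path ./experts/stg/${scene}" \
         "--fgaussian_path ./experts/4dgaussians/${scene}" \
         "--save_path ./output/${scene}"
  done
)
echo "$train_cmds"
```

Drop the `echo` in front of `python` (and use your real paths) to actually launch training.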
You can render MoE-GS (n=4) and MoE-GS (n=3) by running the following commands, respectively:
python render_E4.py --skip_train \
--source_path <location>/<scene> \
--save_path <path to save model> \
--iteration <2000|5000>
python render_E3.py --skip_train \
--source_path <location>/<scene> \
--save_path <path to save model> \
--iteration <2000|5000>
You can train MoE-GS (n=3) by running the following command:
python train_E3_tech.py --config "configs/techni/<scene>.json" \
--source_path <location>/<scene> \
--model_path <path to Ex4DGS model>/<scene> \
--emb_path <path to E-D3DGS model>/<scene> \
--stg_path <path to STG model>/<scene> \
--save_path <path to save model>
You can render MoE-GS (n=3) by running the following command:
python render_E3_tech.py --skip_train \
--source_path <location>/<scene> \
--save_path <path to save model> \
--iteration <2000|5000>
Options:

--y_offset    Vertical offset for viewpoint shifting (used for N3V).
--focal_mm    Camera focal length in millimeters (used for Technicolor).
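As an illustration of how these flags attach to the render commands, here is a dry-run sketch; the scene names and numeric values are arbitrary examples, not recommended settings:

```shell
# Example render invocations with the optional flags (values are placeholders).
n3v_cmd="python render_E4.py --skip_train --source_path ./data/N3V/cook_spinach \
  --save_path ./output/cook_spinach --iteration 5000 --y_offset 0.1"
tech_cmd="python render_E3_tech.py --skip_train --source_path ./data/techni/Painter \
  --save_path ./output/Painter --iteration 5000 --focal_mm 35"
echo "$n3v_cmd"
echo "$tech_cmd"
```

Note that `--y_offset` applies to the N3V renderers and `--focal_mm` to the Technicolor one.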
@inproceedings{jinmoe,
title={MoE-{GS}: Mixture of Experts for Dynamic Gaussian Splatting},
author={In-Hwan Jin and Hyeongju Mun and Joonsoo Kim and Kugjin Yun and Kyeongbo Kong},
booktitle={The Fourteenth International Conference on Learning Representations},
year={2026},
}