PyTorch implementation of unpaired video-to-video translation.
The code was written by Bryan Adam Gunawan.
Note: The current software works well with PyTorch 1.4. Check out the older branch that supports PyTorch 0.1-0.3.
Prerequisites:
- Linux or macOS
- Python 3
- CPU or NVIDIA GPU + CUDA CuDNN
- Clone this repo:

```bash
git clone https://github.com/bryanadamg/contrastive-vid2vid
cd contrastive-vid2vid
```

- Install PyTorch and other dependencies.
- Download a dataset (e.g. maps).
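The unaligned datasets used by this codebase follow the CycleGAN/CUT convention of `trainA`/`trainB`/`testA`/`testB` subfolders under the dataset root (the exact layout required by `--dataset_mode unaligned_triplet` may add per-clip frame structure; that detail is not covered here). A minimal sketch that verifies a dataset root has the common layout before training:

```python
import os

# Expected subfolders for the standard unaligned (CycleGAN/CUT-style) layout.
# NOTE: --dataset_mode unaligned_triplet may require additional structure;
# this only checks the common trainA/trainB/testA/testB convention.
EXPECTED = ("trainA", "trainB", "testA", "testB")

def check_dataset_root(root):
    """Return the list of expected subfolders missing under `root`."""
    return [d for d in EXPECTED if not os.path.isdir(os.path.join(root, d))]

if __name__ == "__main__":
    import tempfile
    with tempfile.TemporaryDirectory() as root:
        # create only the training folders to show detection of missing ones
        for d in ("trainA", "trainB"):
            os.makedirs(os.path.join(root, d))
        print("missing:", check_dataset_root(root))
```

Running a check like this before launching a long training job catches a mistyped `--dataroot` early.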
- Train a CUT model:

```bash
python train.py --dataroot ./datasets/utopilot_sun2rain_downscaled --name utopilot_sun2rain_reduced --CUT_mode CUT --dataset_mode unaligned_triplet --load_size 270 --crop_size 256 --batch_size 2
python train.py --gpu_ids -1 --dataroot ./datasets/utopilot_sun2rain_downscaled --netG swin_unet --crop_size 224 --name test1 --CUT_mode CUT --dataset_mode unaligned_triplet --model swin_unet_cut --display_id -1 --num_threads 0
python train.py --dataroot /root/autodl-fs/utopilot_sun2rain/ --netG swin_unet --crop_size 224 --name first_test --CUT_mode CUT --dataset_mode unaligned_triplet --model swin_unet_cut
```

To see more intermediate results, check out `./checkpoints/maps_cyclegan/web/index.html`.
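The CUT objective trains the generator with a patchwise contrastive (InfoNCE) loss: an encoded patch of the output should match the patch at the same location in the input and differ from patches elsewhere. A minimal pure-Python sketch of the InfoNCE computation for one query patch (the feature vectors are illustrative; 0.07 is the temperature used in the CUT paper):

```python
import math

def infonce_loss(query, positive, negatives, tau=0.07):
    """InfoNCE: cross-entropy of the positive among positive + negatives.

    query/positive/negatives are plain feature vectors (lists of floats);
    tau is the softmax temperature.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # similarity logits: positive first, then all negatives
    logits = [dot(query, positive) / tau]
    logits += [dot(query, n) / tau for n in negatives]
    # negative log-softmax of the positive logit (index 0)
    m = max(logits)  # subtract the max for numerical stability
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[0]

if __name__ == "__main__":
    q = [1.0, 0.0]
    # loss is small when the positive aligns with the query
    print(infonce_loss(q, [1.0, 0.0], [[0.0, 1.0], [-1.0, 0.0]]))
    # and large when a negative aligns with it instead
    print(infonce_loss(q, [0.0, 1.0], [[1.0, 0.0], [-1.0, 0.0]]))
```

In the actual model this loss is computed in parallel over many patch locations and feature layers; the sketch above only shows the per-patch arithmetic.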
- Test the model:

```bash
#!./scripts/test_cyclegan.sh
python test.py --dataroot ./datasets/docker_dataset --name docker --CUT_mode CUT --dataset_mode unaligned_triplet --phase train
python test.py --dataroot ./datasets/utopilot_sun2rain_downscaled --gpu_ids -1 --netG swin_unet --name fourth_test --CUT_mode CUT --dataset_mode unaligned_triplet --model swin_unet_cut --num_threads 0 --phase test --num_test 300 --crop_size 224 --load_size 224 --preprocess resize --epoch 80
python test.py --dataroot ./datasets/utopilot_sun2rain_downscaled --gpu_ids -1 --name utopilot_sun2rain_reduced --CUT_mode CUT --dataset_mode unaligned_triplet --num_threads 0 --phase test --num_test 300
```

- The test results will be saved to an HTML file here: `./results/maps_cyclegan/latest_test/index.html`.
- To view the visdom server locally, forward port 8097 over SSH:

```bash
ssh -CNgv -L 8097:127.0.0.1:8097 root@connect.southb.gpuhub.com -p 38759
```

- To evaluate results, compute the FID between real and generated test images:

```bash
python -m pytorch_fid [path to real test images] [path to generated images]
```

If you plan to implement custom models and datasets for your new applications, we provide a dataset template and a model template as a starting point.
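`pytorch-fid` computes the Fréchet Inception Distance between Gaussian fits (mean and covariance) of Inception features extracted from the two image sets. For intuition, a sketch of the Fréchet distance restricted to diagonal covariances (the real metric uses full covariance matrices and a matrix square root; this simplification is only illustrative):

```python
import math

def frechet_distance_diag(mu1, var1, mu2, var2):
    """Frechet distance between two Gaussians with diagonal covariances.

    FID uses d^2 = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 (C1 C2)^{1/2});
    with diagonal covariances the trace term factors per dimension.
    """
    mean_term = sum((a - b) ** 2 for a, b in zip(mu1, mu2))
    cov_term = sum(v1 + v2 - 2.0 * math.sqrt(v1 * v2)
                   for v1, v2 in zip(var1, var2))
    return mean_term + cov_term

if __name__ == "__main__":
    # identical feature statistics give distance 0
    print(frechet_distance_diag([0.0, 0.0], [1.0, 1.0], [0.0, 0.0], [1.0, 1.0]))
    # a shifted mean contributes its squared distance
    print(frechet_distance_diag([0.0], [1.0], [3.0], [1.0]))
```

Lower FID means the generated images' feature statistics are closer to the real ones; comparing FID across epochs (e.g. the `--epoch 80` checkpoint above) is a common way to pick a model.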
If you use this code for your research, please cite our papers.
```bibtex
@inproceedings{CycleGAN2017,
  title={Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks},
  author={Zhu, Jun-Yan and Park, Taesung and Isola, Phillip and Efros, Alexei A},
  booktitle={Computer Vision (ICCV), 2017 IEEE International Conference on},
  year={2017}
}

@inproceedings{isola2017image,
  title={Image-to-Image Translation with Conditional Adversarial Networks},
  author={Isola, Phillip and Zhu, Jun-Yan and Zhou, Tinghui and Efros, Alexei A},
  booktitle={Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on},
  year={2017}
}
```
Code adapted from:
Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks.
Jun-Yan Zhu*, Taesung Park*, Phillip Isola, Alexei A. Efros. In ICCV 2017. (* equal contributions) [Bibtex]
Image-to-Image Translation with Conditional Adversarial Networks.
Phillip Isola, Jun-Yan Zhu, Tinghui Zhou, Alexei A. Efros. In CVPR 2017. [Bibtex]
Related projects:
contrastive-unpaired-translation (CUT) |
CycleGAN-Torch | pix2pix-Torch | pix2pixHD |
BicycleGAN | vid2vid | SPADE/GauGAN |
iGAN | GAN Dissection | GAN Paint
Our code is inspired by pytorch-DCGAN.