PyTorch implementation of Moment Matching for Multi-Source Domain Adaptation (ICCV 2019 Oral). This repository contains some code from Maximum Classifier Discrepancy for Domain Adaptation; if you find this repository useful, please also consider citing the MCD paper!
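To illustrate the core idea, here is a minimal sketch of a pairwise moment-distance measure between domains, written in plain NumPy. The function name `moment_distance` and its exact form are illustrative assumptions for exposition; the repository's actual training loss is implemented elsewhere in the code.

```python
import numpy as np

def moment_distance(features, order=2):
    """Average pairwise distance between per-dimension feature moments.

    `features` is a list of (n_i, d) arrays, one per domain (sources and
    target). For each moment order k, the k-th raw moment of every domain
    is compared against every other domain's. Illustrative sketch only,
    not the repository's exact loss.
    """
    dist = 0.0
    pairs = 0
    for k in range(1, order + 1):
        # k-th raw moment of each domain's features, per dimension
        moments = [np.mean(f ** k, axis=0) for f in features]
        for i in range(len(moments)):
            for j in range(i + 1, len(moments)):
                dist += np.linalg.norm(moments[i] - moments[j])
                pairs += 1
    return dist / pairs

# Two samples from the same distribution should give a small distance.
rng = np.random.default_rng(0)
a = rng.normal(size=(1000, 8))
b = rng.normal(size=(1000, 8))
print(moment_distance([a, b]))
```

In the paper this kind of moment alignment is applied to learned feature extractors during training, rather than to raw data as above.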
The code has been tested with Python 3.6 and PyTorch 0.3. Before running the training and testing code, set up the environment as follows:
- Install PyTorch (tested on version 0.3) and its dependencies from http://pytorch.org.
- Install torchvision from source.
- Install torchnet:
pip install git+https://github.com/pytorch/tnt.git@master
Since many researchers have emailed us asking for the Digit-Five data, we share the Digit-Five dataset used in our experiments at the following link:
https://drive.google.com/open?id=1A4RJOFj4BJkmliiEL7g9WzNIDUHLxfmm
Keep in mind that we generated the MNIST-M subset ourselves, so it may differ from the one released with the DANN paper.
If you find the Digit-Five dataset useful for your research, please cite our paper.
The DomainNet dataset can be downloaded from the following link: http://ai.bu.edu/M3SDA/
We are also organizing a TaskCV and VisDA challenge based on this dataset, in conjunction with ICCV 2019 in Seoul, Korea. See the details at the following link: http://ai.bu.edu/visda-2019/
If you use this code for your research, please cite our paper:
@inproceedings{peng2019moment,
title={Moment matching for multi-source domain adaptation},
author={Peng, Xingchao and Bai, Qinxun and Xia, Xide and Huang, Zijun and Saenko, Kate and Wang, Bo},
booktitle={Proceedings of the IEEE International Conference on Computer Vision},
pages={1406--1415},
year={2019}
}
