# ReST
## Requirements
### Installation
1. Clone the project and create a virtual environment:
```bash
git clone https://github.com/chengche6230/ReST.git
conda create --name ReST python=3.8
conda activate ReST
```
2. Install the following dependencies (follow each project's instructions):
* [torchreid](https://github.com/KaiyangZhou/deep-person-reid)
* [DGL](https://www.dgl.ai/pages/start.html) (pick a build that matches your PyTorch/CUDA versions; see the compatibility table on the DGL install page)
* [warmup_scheduler](https://github.com/ildoonet/pytorch-gradual-warmup-lr)
* [py-motmetrics](https://github.com/cheind/py-motmetrics)
* Reference commands:
```bash
# torchreid
git clone https://github.com/KaiyangZhou/deep-person-reid.git
cd deep-person-reid/
pip install -r requirements.txt
conda install pytorch torchvision cudatoolkit=9.0 -c pytorch  # match cudatoolkit to your GPU driver; the DGL build below targets CUDA 11.7
python setup.py develop
# other packages (in /ReST)
conda install -c dglteam/label/cu117 dgl
pip install git+https://github.com/ildoonet/pytorch-gradual-warmup-lr.git
pip install motmetrics
```
3. Install the other requirements:
```bash
pip install -r requirements.txt
```
4. Download the pre-trained ReID model (a quick sanity-check script is sketched after this list):
* [OSNet](https://drive.google.com/file/d/1z1l3FoEGfIon-JH1vEzUKSGfLmMnzFek/view?usp=sharing)
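
Before moving on, it can help to confirm that the packages import cleanly and that the downloaded ReID weights load. This is a minimal sanity-check sketch, not part of the ReST codebase; the model name `osnet_x1_0` and the checkpoint path are placeholders, so adjust them to the file you downloaded.

```python
# Sanity check for the installed dependencies (not part of the ReST codebase).
import torch
import torchreid
import dgl
import motmetrics
from warmup_scheduler import GradualWarmupScheduler  # pytorch-gradual-warmup-lr

print("torch      :", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("torchreid  :", torchreid.__version__)
print("dgl        :", dgl.__version__)
print("motmetrics :", getattr(motmetrics, "__version__", "imported OK"))

# Load the downloaded ReID checkpoint with torchreid's FeatureExtractor.
# 'osnet_x1_0' and the path are assumptions -- use the OSNet variant and
# location that match the weights you downloaded above.
from torchreid.utils import FeatureExtractor

extractor = FeatureExtractor(
    model_name="osnet_x1_0",
    model_path="./osnet.pth",
    device="cuda" if torch.cuda.is_available() else "cpu",
)
features = extractor(torch.rand(2, 3, 256, 128))  # two dummy person crops
print("ReID feature shape:", tuple(features.shape))
```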
### Datasets
1. Place datasets in `./datasets/` as:
```text
./datasets/
├── CAMPUS/
│   ├── Garden1/
│   │   └── view-{}.txt
│   ├── Garden2/
│   │   └── view-HC{}.txt
│   ├── Parkinglot/
│   │   └── view-GL{}.txt
│   └── metainfo.json
├── PETS09/
│   ├── S2L1/
│   │   └── View_00{}.txt
│   └── metainfo.json
├── Wildtrack/
│   ├── sequence1/
│   │   └── src/
│   │       ├── annotations_positions/
│   │       └── Image_subsets/
│   └── metainfo.json
└── {DATASET_NAME}/               # for customized dataset
    ├── {SEQUENCE_NAME}/
    │   └── {ANNOTATION_FILE}.txt
    └── metainfo.json
```
2. Prepare all `metainfo.json` files (e.g. frames, file pattern, homography)
3. Run the preprocessing script for each dataset:
```bash
python ./src/datasets/preprocess.py --dataset {DATASET_NAME}
```
Check `./datasets/{DATASET_NAME}/{SEQUENCE_NAME}/output` to make sure nothing is missing:
```text
/output/
├── gt_MOT/                # for motmetrics
│   └── c{CAM}.txt
├── gt_train.json
├── gt_eval.json
├── gt_test.json
└── {DETECTOR}_test.json   # if you use another detector, e.g. yolox_test.json
```
4. Prepare all image frames as `{FRAME}_{CAM}.jpg` in `/output/frames` (a quick check of the output layout and frame names is sketched right after this list).
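
If the preprocessing output looks incomplete, a small script can quickly flag what is missing. This is a sketch rather than part of the repository; the dataset/sequence names are example values, the expected files come from the layout above, and the frame pattern assumes numeric `{FRAME}` and `{CAM}` ids.

```python
# Check preprocess.py outputs and frame naming (not part of the ReST codebase).
import re
from pathlib import Path

DATASET, SEQUENCE = "CAMPUS", "Garden1"            # example values -- change to yours
out_dir = Path(f"./datasets/{DATASET}/{SEQUENCE}/output")

# Files listed in the expected /output/ layout above.
for name in ("gt_train.json", "gt_eval.json", "gt_test.json"):
    print(f"{name:<14}", "ok" if (out_dir / name).exists() else "MISSING")

gt_mot = sorted((out_dir / "gt_MOT").glob("c*.txt"))
print(f"gt_MOT         {len(gt_mot)} per-camera ground-truth files")

# Frames must follow {FRAME}_{CAM}.jpg (assuming numeric frame and camera ids).
frame_pat = re.compile(r"^\d+_\d+\.jpg$")
frames = list((out_dir / "frames").glob("*.jpg"))
bad = [f.name for f in frames if not frame_pat.match(f.name)]
print(f"frames         {len(frames)} images, {len(bad)} not matching the pattern")
```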
## Model Zoo
Download the trained weights if needed, and set `TEST.CKPT_FILE_SG` & `TEST.CKPT_FILE_TG` in `./configs/{DATASET_NAME}.yml` to their paths (a config-editing sketch follows the table).
| Dataset | Spatial Graph | Temporal Graph |
|-----------|------------------|------------------|
| Wildtrack | [sequence1](https://drive.google.com/file/d/1U4Qc2xHERbLUzly5gUToG2H1Rktux8mi/view?usp=sharing) | [sequence1](https://drive.google.com/file/d/17tvAeERcsy3YaB3lR2aIZYDQqKFySmVA/view?usp=sharing) |
| CAMPUS    | [Garden1](https://drive.google.com/file/d/1OCxDios5BhucUIKQSinxLIELPjfS7pXJ/view?usp=sharing) | |
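
The two checkpoint entries are ordinary config values, so they can be set by hand or scripted. A minimal sketch, assuming the dataset config is plain YAML with a `TEST` mapping (the config filename and weight paths below are placeholders):

```python
# Point TEST.CKPT_FILE_SG / TEST.CKPT_FILE_TG at the downloaded weights.
# Not part of the ReST codebase; assumes a plain-YAML config with a TEST mapping.
import yaml  # PyYAML
from pathlib import Path

cfg_path = Path("./configs/Wildtrack.yml")                  # placeholder config file
cfg = yaml.safe_load(cfg_path.read_text()) or {}

cfg.setdefault("TEST", {})
cfg["TEST"]["CKPT_FILE_SG"] = "./weights/wildtrack_sg.pth"  # placeholder paths
cfg["TEST"]["CKPT_FILE_TG"] = "./weights/wildtrack_tg.pth"

# Note: re-dumping drops YAML comments and ordering; editing by hand works too.
cfg_path.write_text(yaml.safe_dump(cfg, sort_keys=False))
print("Updated", cfg_path)
```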
## Acknowledgement
* Thanks to the re-implementation of [GNN-CCA](https://github.com/shawnh2/GNN-CCA) ([arXiv](https://arxiv.org/abs/2201.06311)) for providing the codebase.
## Citation
If you find this code useful for your research, please cite our paper:
```text
@InProceedings{Cheng_2023_ICCV,
author = {Cheng, Cheng-Che and Qiu, Min-Xuan and Chiang, Chen-Kuo and Lai, Shang-Hong},
title = {ReST: A Reconfigurable Spatial-Temporal Graph Model for Multi-Camera Multi-Object Tracking},
booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
month = {October},
year = {2023},
pages = {10051-10060}
}
```