Dependencies

Install

Prepare data

Create a directory to store reid datasets under this repo via

cd deep-person-reid/
mkdir data/

If you want to store datasets in another directory, specify --root path_to_your/data when running the training code. Please follow the instructions below to prepare each dataset. After that, you can simply pass -d the_dataset when running the training code.
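For example, assuming a training entry point named train_imgreid_xent.py (the script name here is illustrative; use whichever training script this repo provides):

python train_imgreid_xent.py -d market1501 --root /path/to/your/data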

Download the new split [14] from person-re-ranking. What you need are cuhk03_new_protocol_config_detected.mat and cuhk03_new_protocol_config_labeled.mat. Put these two mat files under data/cuhk03. Finally, the data structure would look like:
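The sketch below shows only the files named above; the rest of the CUHK03 release, omitted here, also sits under data/cuhk03/.

data/
    cuhk03/
        cuhk03_new_protocol_config_detected.mat
        cuhk03_new_protocol_config_labeled.mat
        ...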

Use -d cuhk03 when running the training code. In the default mode, we use the new split (767/700). If you want to use the original split (1367/100) created by [13], specify --cuhk03-classic-split. As [13] computes CMC differently from Market1501, you might need to specify --use-metric-cuhk03 for a fair comparison with their method. In addition, we support both labeled and detected modes. The default mode loads detected images. Specify --cuhk03-labeled if you want to train and test on labeled images.
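For instance, to train and test on labeled images with the classic split and CUHK03-style CMC (the script name is again illustrative; the flags are those described above):

python train_imgreid_xent.py -d cuhk03 --cuhk03-labeled --cuhk03-classic-split --use-metric-cuhk03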

Download the split created by iLIDS-VID from here, and put it in data/prid2011/. We follow [11] and use the 178 persons whose sequence length exceeds a threshold, so that results on this dataset can be fairly compared with other approaches. The data structure would look like:
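The layout below is a rough sketch: the directories under prid_2011/ follow the official PRID2011 release, and the split filename is an assumption.

data/
    prid2011/
        splits_prid2011.json
        prid_2011/
            multi_shot/
            single_shot/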

Test

Say you have downloaded a ResNet50 model trained with xent on market1501, and the path to this model is saved-models/resnet50_xent_market1501.pth.tar (create a directory to store model weights first: mkdir saved-models/). Then, run a command along the following lines to test.
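A sketch of the test command (the script name and the -a, --evaluate, --resume, and --save-dir flags are assumptions based on common training-script interfaces; check the script's argument list for the exact names):

python train_imgreid_xent.py -d market1501 -a resnet50 --evaluate --resume saved-models/resnet50_xent_market1501.pth.tar --save-dir log/resnet50-xent-market1501 --test-batch 100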

Note that --test-batch in video reid represents the number of tracklets. If we set this argument to 2 and sample 15 images per tracklet, the resulting number of images per batch is 2*15=30. Adjust this argument according to your GPU memory.
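For example, in a hypothetical video-reid run (the script name and the --seq-len flag, controlling the number of images sampled per tracklet, are assumptions):

python train_vidreid_xent.py -d your_video_dataset --test-batch 2 --seq-len 15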

Q&A

How do I set different learning rates to different components in my model?

A: Instead of giving model.parameters() to the optimizer, you could pass an iterable of dicts, as described here. Please see the example below:

# First, comment out the original optimizer:
# optimizer = torch.optim.Adam(model.parameters(), lr=args.lr, weight_decay=args.weight_decay)

param_groups = [
    {'params': model.base.parameters(), 'lr': 0},
    {'params': model.classifier.parameters()},
]

# With these groups, model.base is frozen (lr=0) and model.classifier is trained
# with the default learning rate, i.e. args.lr. This example only applies to a
# model with two components (base and classifier); adapt it to your own model.
optimizer = torch.optim.Adam(param_groups, lr=args.lr, weight_decay=args.weight_decay)

Of course, you can pass model.classifier.parameters() to the optimizer if you only need to train the classifier (in this case, setting requires_grad to False for the base model's parameters will be more efficient).
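A minimal sketch of that classifier-only variant, using the same two-component model as above:

# Freeze the base: no gradients are computed or stored for its parameters.
for p in model.base.parameters():
    p.requires_grad = False

# Pass only the classifier's parameters to the optimizer.
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=args.lr, weight_decay=args.weight_decay)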