Commit

Update README.md
Dong-JinKim authored Sep 6, 2019
1 parent ce23500 commit bf6ed72
Showing 1 changed file, README.md, with 6 additions and 2 deletions.
@@ -13,10 +13,14 @@ Link: **[arXiv](https://arxiv.org/pdf/1903.05942.pdf)**, **[Dataset](https://dr
We introduce “relational captioning,” a novel image captioning task that aims to generate multiple captions describing the relational information between objects in an image. The figure compares our framework with previous ones.

## Updates
(28/08/2019)
- The code has been updated from an evaluation-only version to a trainable one.
- Backpropagation code has been added to several functions.

(06/09/2019)
- Fixed a bug in the UnionSlicer code.
- Added `eval_utils_mAP.lua`.

## Installation

Parts of this code are built upon DenseCap: Fully Convolutional Localization Networks for Dense Captioning [[website]](https://cs.stanford.edu/people/karpathy/densecap/). We thank the authors for their great work.
@@ -54,7 +58,7 @@ To evaluate a model on our Relational Captioning Dataset, please follow the steps below:
3. Use the script `preprocess.py` to generate a single HDF5 file containing the entire dataset.
4. Run `script/setup_eval.sh` to download and unpack the METEOR jarfile.
5. Use the script `evaluate_model.lua` to evaluate a trained model on the validation or test data.
6. If you want to measure the mAP metric, change line 9 from `imRecall` to `mAP` and run `evaluate_model.lua` again (a command sketch follows this list).
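
Concretely, steps 3–6 might look like the minimal sketch below; the `preprocess.py` arguments, the `-checkpoint` flag, and the model path follow DenseCap-style conventions and are assumptions rather than this repository's documented interface.

```bash
# Rough sketch of evaluation steps 3-6; flag names and paths are
# assumptions borrowed from DenseCap-style conventions.
python preprocess.py                                  # step 3: build the single HDF5 dataset file
sh script/setup_eval.sh                               # step 4: download and unpack the METEOR jarfile
th evaluate_model.lua -checkpoint <path/to/model.t7>  # step 5: evaluate a trained model
# Step 6: to report mAP instead, change line 9 from `imRecall` to `mAP`
# and rerun the evaluate_model.lua command above.
```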

## Training
To train a model on our Relational Captioning Dataset, you can simply follow these steps:
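
For illustration only, a training run might look like the sketch below; the `train.lua` script name and the flags are assumptions based on the DenseCap codebase this project builds on, not this repository's documented commands.

```bash
# Hypothetical DenseCap-style training invocation; the script name and
# flag names are assumptions, not this repository's documented interface.
th train.lua -data_h5 <path/to/dataset.h5> -gpu 0
```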
