Our paper is available here.
- demo/ contains demo code, demonstrating how to run the full ParaGuide approach to transform texts from formal → informal, or to match exemplars.
- training/ contains the logic for training a paraphrase-conditioned text diffusion model.
- inference/ contains our code for generating inferences with ParaGuide (once you have preprocessed/paraphrased your data).
- data/ contains the logic for generating the Reddit and Enron (paraphrase, original text) data.
- baselines/ contains our implementations of each baseline.
- evaluations/ contains our automatic and human evaluation data and code.
In your Python environment (>=3.8), you can install dependencies via the requirements file:

```shell
pip install -r requirements.txt
```
Our models and data are available for download here.
We also provide corresponding download scripts:
- Models: models/download.sh
- Data: data/enron/download_training_dataset.sh
We recommend first checking out demo/generate_examples.py, which demonstrates ParaGuide inference logic!
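Putting the steps above together, a minimal quickstart might look like the following sketch. The script paths are taken from this README; any command-line flags for the demo script are not documented here, so it is shown invoked without arguments:

```shell
# Install dependencies (requires Python >= 3.8).
pip install -r requirements.txt

# Download pretrained models and the Enron training data via the provided scripts.
bash models/download.sh
bash data/enron/download_training_dataset.sh

# Run the demo, which demonstrates the ParaGuide inference logic.
python demo/generate_examples.py
```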
This research is supported in part by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA), via the HIATUS Program contract #2022-22072200005. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.