Rich examples are included to demonstrate the use of Texar. The implementations of cutting-edge models/algorithms also provide references for reproducibility and comparisons.
More examples are continuously added...
## Examples by Models/Algorithms

### RNN / Seq2seq

- language_model_ptb: Basic RNN language model
- distributed_gpu: Basic RNN language model with distributed training
- seq2seq_attn: Attentional seq2seq
- seq2seq_configs: Seq2seq implemented with Texar model template
- seq2seq_rl: Attentional seq2seq trained with policy gradient
- seq2seq_exposure_bias: Various algorithms tackling exposure bias in sequence generation
- hierarchical_dialog: Hierarchical recurrent encoder-decoder model for conversation response generation
- torchtext: Use of torchtext data loader
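The attentional seq2seq examples above (seq2seq_attn, seq2seq_rl) all rely on the same core operation: at each decoding step, score every encoder state against the current decoder state, and mix the encoder states by the resulting softmax weights. A minimal, framework-free sketch of that step (the function name and toy vectors are illustrative, not Texar's API):

```python
import math

def attention(query, keys, values):
    """One step of dot-product attention as used in attentional
    seq2seq decoders: score each source state against the decoder
    query, softmax the scores, and mix the values."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]  # attention distribution over source
    context = [
        sum(w * v[i] for w, v in zip(weights, values))
        for i in range(len(values[0]))
    ]
    return weights, context

# Toy example: 3 source states with 2-dim hidden vectors.
keys = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
weights, context = attention([1.0, 0.0], keys, keys)
```

The decoder feeds the resulting context vector (together with its own state) into the next prediction; sources most aligned with the query receive the largest weights.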
### Transformer (Self-attention)

- transformer: Transformer for machine translation
- bert: Pre-trained BERT model for text representation
- gpt-2: Pre-trained OpenAI GPT-2 language model
- vae_text: VAE with a transformer decoder for improved language modeling
### Variational Autoencoder (VAE)

- vae_text: VAE language model
### GANs / Discriminator-supervision

- seqGAN: GANs for text generation
- text_style_transfer: Discriminator supervision for controlled text generation
### Reinforcement Learning

- seq2seq_rl: Attentional seq2seq trained with policy gradient
- seqGAN: Policy gradient for sequence generation
- rl_gym: Various RL algorithms for games on OpenAI Gym
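The policy-gradient training used in seq2seq_rl and seqGAN follows the REINFORCE idea: sample a sequence from the model, score it with a task reward (e.g. BLEU in MT), and scale the sequence's negative log-likelihood by the baseline-subtracted reward. A minimal sketch under those assumptions (the function name and numbers are illustrative):

```python
import math

def policy_gradient_loss(log_probs, reward, baseline=0.0):
    """REINFORCE-style loss for one sampled sequence: negative
    log-likelihood of the sampled tokens, scaled by the advantage
    (reward minus baseline). Minimizing it raises the probability
    of sequences that earn above-baseline reward."""
    advantage = reward - baseline
    return -advantage * sum(log_probs)

# Toy example: per-token log-probs of a 3-token sampled sequence.
log_probs = [math.log(0.5), math.log(0.4), math.log(0.8)]
loss_good = policy_gradient_loss(log_probs, reward=1.0, baseline=0.2)
loss_bad = policy_gradient_loss(log_probs, reward=0.1, baseline=0.2)
```

A positive advantage yields a positive loss whose gradient increases the sequence's probability; a negative advantage pushes the probability down. The baseline reduces gradient variance without changing the expected update.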
### Memory Network

- memory_network_lm: End-to-end memory network for language modeling
### Classifier / Sequence Prediction

- bert: Pre-trained BERT model for text representation
- sentence_classifier: Basic CNN-based sentence classifier
- sequence_tagging: BiLSTM-CNN model for Named Entity Recognition (NER)
### Reward Augmented Maximum Likelihood (RAML)

- seq2seq_exposure_bias: RAML and other learning algorithms for sequence generation
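RAML, one of the exposure-bias remedies in seq2seq_exposure_bias, trains on reward-perturbed targets drawn from an exponentiated payoff distribution: each augmented target is weighted proportionally to exp(reward / temperature). A minimal sketch of those weights (the function name and toy rewards are illustrative, not Texar's API):

```python
import math

def raml_weights(rewards, temperature=1.0):
    """RAML's exponentiated payoff distribution: weight each
    augmented target sequence by exp(reward / temperature),
    normalized over the candidates. Lower temperature concentrates
    weight on near-reference sequences."""
    exps = [math.exp(r / temperature) for r in rewards]
    total = sum(exps)
    return [e / total for e in exps]

# Toy rewards, e.g. negative edit distance to the reference target.
w = raml_weights([0.0, -1.0, -2.0], temperature=0.5)
```

Training then maximizes the weighted log-likelihood of these sampled targets, exposing the model to its reward landscape without the variance of full policy-gradient training.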
## Examples by Tasks

### Language Modeling

- gpt-2: Pre-trained OpenAI GPT-2 language model
- language_model_ptb: Basic RNN language model
- vae_text: VAE language model
- seqGAN: GAN + policy gradient
- memory_network_lm: End-to-end memory network for language modeling
### Machine Translation

- seq2seq_attn: Attentional seq2seq
- seq2seq_configs: Seq2seq implemented with Texar model template
- seq2seq_rl: Attentional seq2seq trained with policy gradient
- seq2seq_exposure_bias: Various algorithms tackling exposure bias in sequence generation (MT and summarization as examples)
- transformer: Transformer for machine translation
### Dialog

- hierarchical_dialog: Hierarchical recurrent encoder-decoder model for conversation response generation
### Text Summarization

- seq2seq_exposure_bias: Various algorithms tackling exposure bias in sequence generation (MT and summarization as examples)
### Text Style Transfer

- text_style_transfer: Discriminator supervision for controlled text generation
### Classification

- bert: Pre-trained BERT model for text representation
- sentence_classifier: Basic CNN-based sentence classifier
### Sequence Tagging

- sequence_tagging: BiLSTM-CNN model for Named Entity Recognition (NER)
### Games

- rl_gym: Various RL algorithms for games on OpenAI Gym
### Distributed Training

- distributed_gpu: Basic example of distributed training
- bert: Distributed training of BERT