xP3 Long #823

Open
wants to merge 1 commit into eval-hackathon
27 changes: 27 additions & 0 deletions promptsource/templates/GEM/wiki_lingua/en/templates.yaml
@@ -82,3 +82,30 @@ templates:
original_task: true
name: write_abstract_en
reference: xsum 'read_below_DOC_write_abstract' template
dff7b414-7385-4855-bb90-253073a34fde: !Template
answer_choices: null
id: dff7b414-7385-4855-bb90-253073a34fde
jinja: "{{target}}\n\nGiven the above abstract, write an English article for it. ||| {{source}}"
metadata: !TemplateMetadata
choices_in_prompt: false
languages: []
metrics:
- ROUGE
- BLEU
original_task: true
name: xp3longwritearticle
reference: ''
dff7b415-7385-4855-bb90-253073a34fde: !Template
answer_choices: null
id: dff7b415-7385-4855-bb90-253073a34fde
jinja: "{{target}}\n\nI'm interested in that, but I only have a few mins.
Can you give me the first 500 characters of an article about that? ||| {{source[:500]}}"
metadata: !TemplateMetadata
choices_in_prompt: false
languages: []
metrics:
- ROUGE
- BLEU
original_task: true
name: xp3longchars
reference: ''
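The templates above lean on two promptsource conventions: the `|||` separator splits the rendered string into prompt and target, and the Jinja slice `{{source[:500]}}` truncates the field to at most 500 characters. A minimal sketch of both, where `apply_template` is a hypothetical helper written only for this illustration (the real engine renders with Jinja2):

```python
# Sketch (not the real promptsource engine) of how a template string is
# split into prompt and target on "|||", and how the Jinja slice
# {{source[:500]}} truncates to at most 500 characters.
def apply_template(template: str, example: dict) -> tuple[str, str]:
    # Substitute the two fields this sketch supports.
    rendered = (template
                .replace("{{target}}", example["target"])
                .replace("{{source[:500]}}", example["source"][:500]))
    prompt, _, completion = rendered.partition("|||")
    return prompt.strip(), completion.strip()

template = ("{{target}}\n\nI'm interested in that, but I only have a few mins. "
            "Can you give me the first 500 characters of an article about that? "
            "||| {{source[:500]}}")
prompt, completion = apply_template(
    template, {"target": "A short summary.", "source": "x" * 2000})
```

Even for a 2000-character source, the completion is capped at 500 characters, which is what makes these "long" templates safe to train on with bounded targets.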
111 changes: 111 additions & 0 deletions promptsource/templates/GEM/wiki_lingua/en_en/templates.yaml
@@ -0,0 +1,111 @@
dataset: GEM/wiki_lingua
subset: en_en
templates:
088288f3-7516-4cf7-9406-0e082053bf54: !Template
answer_choices: null
id: 088288f3-7516-4cf7-9406-0e082053bf54
jinja: '{{source}}


===

Write a summary of the previous text in {{target_language_name}}: ||| {{target}}'
metadata: !TemplateMetadata
choices_in_prompt: false
languages: []
metrics:
- ROUGE
- BLEU
original_task: true
name: summarize_above_en
reference: xsum 'DOC_write_summary_of_above' template
2038df7b-5420-4a33-87ec-09715419deef: !Template
answer_choices: null
id: 2038df7b-5420-4a33-87ec-09715419deef
jinja: 'Source in {{source_language_name}}: {{source}}


Summary in {{target_language_name}}: ||| {{target}}'
metadata: !TemplateMetadata
choices_in_prompt: false
languages: []
metrics:
- ROUGE
- BLEU
original_task: true
name: article_summary_en
reference: xsum 'article_DOC_summary' template
753f0a46-aeff-4cd2-932c-8548897cebe5: !Template
answer_choices: null
id: 753f0a46-aeff-4cd2-932c-8548897cebe5
jinja: '{{source}}


How would you rephrase that briefly using {{target_language_name}}? ||| {{target}}'
metadata: !TemplateMetadata
choices_in_prompt: false
languages: []
metrics:
- ROUGE
- BLEU
original_task: true
name: rephrase_en
reference: xsum 'DOC_how_would_you_rephrase_few_words' template
d3c5baa3-5e37-46f8-b1b2-5b834181c9da: !Template
answer_choices: null
id: d3c5baa3-5e37-46f8-b1b2-5b834181c9da
jinja: '{{source}}


TL;DR in {{target_language_name}}: ||| {{target}}'
metadata: !TemplateMetadata
choices_in_prompt: false
languages: []
metrics:
- ROUGE
- BLEU
original_task: true
name: tldr_en
reference: xsum 'DOC_tldr' template
dff7b314-7385-4855-bb90-253073a34fde: !Template
answer_choices: null
id: dff7b314-7385-4855-bb90-253073a34fde
jinja: "First, read the {{source_language_name}} text below.\n\n{{source}} \n\nNow, please write\
\ a short abstract for it in {{target_language_name}}. Abstract: ||| {{target}}"
metadata: !TemplateMetadata
choices_in_prompt: false
languages: []
metrics:
- ROUGE
- BLEU
original_task: true
name: write_abstract_en
reference: xsum 'read_below_DOC_write_abstract' template
dfa7b514-7385-4855-bb90-253073a34fde: !Template
answer_choices: null
id: dfa7b514-7385-4855-bb90-253073a34fde
jinja: "{{target}}\n\nGiven the above summary, write a detailed text in {{source_language_name}} for it. ||| {{source}}"
metadata: !TemplateMetadata
choices_in_prompt: false
languages: []
metrics:
- ROUGE
- BLEU
original_task: true
name: xp3longwritearticle
reference: ''
dff8b414-7485-4855-bb90-253073a34fde: !Template
answer_choices: null
id: dff8b414-7485-4855-bb90-253073a34fde
jinja: "{{target}}\n\nI'm interested in that, but I only have a few mins.
Can you give me at most the first 500 characters of a detailed explanation
in {{source_language_name}} about that? ||| {{source[:500]}}"
metadata: !TemplateMetadata
choices_in_prompt: false
languages: []
metrics:
- ROUGE
- BLEU
original_task: true
name: xp3longchars
reference: ''
54 changes: 54 additions & 0 deletions promptsource/templates/GEM/xlsum/english/templates.yaml
@@ -0,0 +1,54 @@
dc0096ea-e9db-4e96-85b4-0740085fee55: !Template
answer_choices: null
id: dc0096ea-e9db-4e96-85b4-0740085fee55
jinja: "Given the below title and summary of an article, generate a short article or the beginning of a long article to go along with them.\nTitle: {{title}}\nSummary: {{target}}\nArticle (Max 500 characters): ||| {{text[:500]}}"
metadata: !TemplateMetadata
choices_in_prompt: false
languages: []
metrics:
- ROUGE
- BLEU
original_task: true
name: xp3longgenarticle
reference: ''
hd0097ea-e9db-4e96-85b4-0740085fee55: !Template
answer_choices: null
id: hd0097ea-e9db-4e96-85b4-0740085fee55
jinja: "Title: {{title}}\nGiven the above title of an imaginary article, imagine the article.\n ||| {{text[:7000]}}"
metadata: !TemplateMetadata
choices_in_prompt: false
languages: []
metrics:
- ROUGE
- BLEU
original_task: true
name: xp3longimaginearticle
reference: ''
hc0099ea-e9db-4e96-85b4-0740085fee55: !Template
answer_choices: null
id: hc0099ea-e9db-4e96-85b4-0740085fee55
jinja: '{{text[:1000]}}... Continue the article for another 4000 characters max: ||| {{text[1000:5000]}}'
metadata: !TemplateMetadata
choices_in_prompt: false
languages: []
metrics:
- ROUGE
- BLEU
original_task: true
name: xp3longcontinue
reference: ''
hc0196ea-e9db-4e96-85b4-0740085fee55: !Template
answer_choices: null
id: hc0196ea-e9db-4e96-85b4-0740085fee55
jinja: '...{{text[3000:3500]}}... Write the rest of the article: |||
{{text[5000:]}}'
metadata: !TemplateMetadata
choices_in_prompt: false
languages: []
metrics:
- ROUGE
- BLEU
original_task: true
name: xp3longrest
reference: ''
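The xlsum continuation templates above window the article by character offsets: `[:1000]` seeds the prompt, `[1000:5000]` is the expected continuation, and `[5000:]` is what `xp3longrest` asks for. A quick check (with a stand-in article, not real xlsum data) that these slices tile the text with no overlap and no gap:

```python
# Stand-in 7000-character article; the letters make the window
# boundaries easy to see.
text = "A" * 1000 + "B" * 4000 + "C" * 2000

seed = text[:1000]              # shown to the model (xp3longcontinue)
continuation = text[1000:5000]  # target of xp3longcontinue
rest = text[5000:]              # target of xp3longrest

# The three slices partition the article exactly.
assert seed + continuation + rest == text
```

Because Python slices clamp out-of-range bounds, articles shorter than 5000 characters simply yield shorter (possibly empty) windows rather than raising an error.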
32 changes: 32 additions & 0 deletions promptsource/templates/adversarial_qa/dbert/templates.yaml
@@ -118,3 +118,35 @@ templates:
original_task: true
name: answer_the_following_q
reference: 'Input: QC, Output: Answer'
b64d5a15-68e2-4d1c-b30a-ca8250c860fa: !Template
answer_choices: null
id: b64d5a15-68e2-4d1c-b30a-ca8250c860fa
jinja: '{{question}} Given the previous question, write a context that contains the answer. It can be 1 - 20 sentences. Context:
|||
{{context}}'
metadata: !TemplateMetadata
choices_in_prompt: false
languages:
- en
metrics:
- Squad
original_task: true
name: xp3longwritecontext
reference: ''
c64d5a15-68e2-4d1c-b30a-ca8250c860fa: !Template
answer_choices: null
id: c64d5a15-68e2-4d1c-b30a-ca8250c860fa
jinja: '{% if metadata.split != "test" %}
Generate a few sentences of context that can be used to answer the question {{question}}.
The answer is "{{answers.text | choice}}" and should appear in the context.
Generate after this sentence. ||| {{context}}
{% endif %}'
metadata: !TemplateMetadata
choices_in_prompt: false
languages:
- en
metrics:
- Squad
original_task: true
name: xp3longgeneratecontext
reference: ''
32 changes: 32 additions & 0 deletions promptsource/templates/adversarial_qa/dbidaf/templates.yaml
@@ -118,3 +118,35 @@ templates:
original_task: true
name: question_context_answer
reference: 'Input: QC, Output: Answer (short form)'
e84d5a15-68e2-4d1c-b30a-ca8250c860fa: !Template
answer_choices: null
id: e84d5a15-68e2-4d1c-b30a-ca8250c860fa
jinja: '{{question}} Given the previous question, write a context that contains the answer. It can be 1 - 20 sentences. Context:
|||
{{context}}'
metadata: !TemplateMetadata
choices_in_prompt: false
languages:
- en
metrics:
- Squad
original_task: true
name: xp3longwritecontext
reference: ''
e65d5a15-68e2-4d1c-b30a-ca8250c860fa: !Template
answer_choices: null
id: e65d5a15-68e2-4d1c-b30a-ca8250c860fa
jinja: '{% if metadata.split != "test" %}
Generate a few sentences of context that can be used to answer the question {{question}}.
The answer is "{{answers.text | choice}}" and should appear in the context.
Generate after this sentence. ||| {{context}}
{% endif %}'
metadata: !TemplateMetadata
choices_in_prompt: false
languages:
- en
metrics:
- Squad
original_task: true
name: xp3longgeneratecontext
reference: ''
32 changes: 32 additions & 0 deletions promptsource/templates/adversarial_qa/droberta/templates.yaml
@@ -118,3 +118,35 @@ templates:
original_task: true
name: answer_the_following_q
reference: 'Input: QC, Output: Answer'
e86d5a15-68e2-4d1c-b30a-ca8250c860fa: !Template
answer_choices: null
id: e86d5a15-68e2-4d1c-b30a-ca8250c860fa
jinja: '{{question}} Given the previous question, write a context that contains the answer. It can be 1 - 20 sentences. Context:
|||
{{context}}'
metadata: !TemplateMetadata
choices_in_prompt: false
languages:
- en
metrics:
- Squad
original_task: true
name: xp3longwritecontext
reference: ''
e66d5a15-68e2-4d1c-b30a-ca8250c860fa: !Template
answer_choices: null
id: e66d5a15-68e2-4d1c-b30a-ca8250c860fa
jinja: '{% if metadata.split != "test" %}
Generate a few sentences of context that can be used to answer the question {{question}}.
The answer is "{{answers.text | choice}}" and should appear in the context.
Generate after this sentence. ||| {{context}}
{% endif %}'
metadata: !TemplateMetadata
choices_in_prompt: false
languages:
- en
metrics:
- Squad
original_task: true
name: xp3longgeneratecontext
reference: ''
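The `xp3longgeneratecontext` templates in all three adversarial_qa subsets use `{{answers.text | choice}}`, a Jinja filter promptsource registers to pick one element at random from a list field (here, the list of gold answer strings). A stand-alone approximation of that behaviour:

```python
import random

# Approximation of promptsource's "choice" Jinja filter: pick one
# element at random from a list-valued dataset field.
def choice_filter(values):
    return random.choice(values)

answers_text = ["Paris"]  # adversarial_qa stores gold answers as a list
picked = choice_filter(answers_text)
```

With a single gold answer the filter is deterministic; with several it samples one per render, so repeated renders of the same example can produce different prompts. The `{% if metadata.split != "test" %}` guard exists because test-split examples have no gold answers to sample from.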
27 changes: 27 additions & 0 deletions promptsource/templates/ag_news/templates.yaml
@@ -106,3 +106,30 @@ templates:
original_task: true
name: classify
reference: ''
cc355f33-7e8c-4455-a72b-48d315bd4f60: !Template
answer_choices: World politics ||| Sports ||| Business ||| Science and technology
id: cc355f33-7e8c-4455-a72b-48d315bd4f60
jinja: "Generate a news article on the topic of {{answer_choices[label]}}. Article: ||| {{text}}"
metadata: !TemplateMetadata
choices_in_prompt: false
languages:
- en
metrics:
- Accuracy
original_task: true
name: xp3longgenerate
reference: ''
cb355f34-7e8c-4455-a72b-48d315bd4f60: !Template
answer_choices: Politician ||| Athlete ||| Business executive ||| Scientist
id: cb355f34-7e8c-4455-a72b-48d315bd4f60
jinja: "Imagine talking to a {{answer_choices[label]}}.
Imagine a news article that would interest them: ||| {{text}}"
metadata: !TemplateMetadata
choices_in_prompt: false
languages:
- en
metrics:
- Accuracy
original_task: true
name: xp3longimagine
reference: ''
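In these classification-derived templates, `answer_choices` is a single `|||`-separated string and `{{answer_choices[label]}}` resolves the integer class label to its surface form. A plain-Python rendition of that lookup:

```python
# How an answer_choices string maps a class label to a surface form:
# entries are separated by "|||" and indexed by the integer label.
answer_choices = "World politics ||| Sports ||| Business ||| Science and technology"
choices = [c.strip() for c in answer_choices.split("|||")]

label = 2          # ag_news class index
topic = choices[label]
```

This is why the order of entries in `answer_choices` must match the dataset's label encoding exactly: swapping two entries silently relabels every generated prompt.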
27 changes: 27 additions & 0 deletions promptsource/templates/amazon_polarity/templates.yaml
@@ -190,3 +190,30 @@ templates:
original_task: true
name: flattering_or_not
reference: ''
b23369e8-0500-4e93-90d4-8e6814bfb99b: !Template
answer_choices: negative ||| positive
id: b23369e8-0500-4e93-90d4-8e6814bfb99b
jinja: 'Write a {{answer_choices[label]}} review with the title "{{title}}".
Review: ||| {{content}}'
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- en
metrics:
- Accuracy
original_task: true
name: xp3longwritereview
reference: ''
b25369e8-0500-4e93-90d4-8e6814bfb99b: !Template
answer_choices: negative ||| positive
id: b25369e8-0500-4e93-90d4-8e6814bfb99b
jinja: 'Generate an imaginary product review titled: {{title}}. Review: ||| {{content}}'
metadata: !TemplateMetadata
choices_in_prompt: true
languages:
- en
metrics:
- Accuracy
original_task: true
name: xp3longimaginereview
reference: ''
29 changes: 29 additions & 0 deletions promptsource/templates/clue/cmrc2018/templates.yaml
@@ -0,0 +1,29 @@
9fc15385-814e-419a-b862-2d4e06a58ef6: !Template
answer_choices: null
id: 9fc15385-814e-419a-b862-2d4e06a58ef6
jinja: 'Q: {{ question }}
Can you write some context to answer the question? ||| {{ context }}'
metadata: !TemplateMetadata
choices_in_prompt: false
languages:
- zh
metrics:
- Squad
original_task: true
name: xp3longctxt
reference: ''
9fc25385-814e-419a-b862-2d4e06a58ef6: !Template
answer_choices: null
id: 9fc25385-814e-419a-b862-2d4e06a58ef6
jinja: '{{ context[:answers["answer_start"][0]-5] }}... How would you
continue the prior text to answer "{{ question }}"?
||| {{ context[answers["answer_start"][0]-5:] }}'
metadata: !TemplateMetadata
choices_in_prompt: false
languages:
- zh
metrics:
- Squad
original_task: true
name: xp3longcontinue
reference: ''
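One subtlety in the `xp3longcontinue` slice above: it cuts the context at `answers["answer_start"][0] - 5`, and when `answer_start < 5` that cut point is negative, so Python slices from the *end* of the string instead of the start. A small demonstration with toy data (not real cmrc2018 examples):

```python
# Edge case in context[:answer_start - 5]: a negative cut point slices
# from the END of the string, so the "prefix" silently becomes almost
# the whole context.
context = "ABCDEFGHIJ"
answer_start = 3
cut = answer_start - 5            # -2
head, tail = context[:cut], context[cut:]

# head + tail always reassembles the context...
assert head + tail == context
# ...but head is 8 characters here, not the intended 0; a guard like
# max(answer_start - 5, 0) would avoid this.
```

Templates that slice relative to an answer offset may want that `max(..., 0)` clamp so answers near the start of the passage do not flip the prompt/target split.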