RoBERTa on SuperGLUE's 'Reading Comprehension with Commonsense Reasoning' task #4995
It looks like this can be cast as a QA model, and thus does not need any model code. It will need
I would like to work on this issue.
Let us know if you run into any trouble!
I have a couple of quick questions that arose while implementing the reader:
In general, the output of this reader has to follow the guidelines of the
I updated the description of this task to recommend the new Tango framework.
@dirkgr I actually finished this some time ago (https://github.com/gabeorlanski/allennlp-readers-development) but forgot to make a pull request. Would it be better to make the PR with what I have (obviously ported to a fork of allennlp_models) first, or should I try to make it Tango-compatible?
If you want to try doing it with Tango, please do! Tango is still very new, and you'd be the first one outside of AI2 trying it. It might be a little rough, but I would love your feedback, especially because you could bring an outside perspective.
If you don't have a ton of time though, I'd also be quite happy with your original version.
ReCoRD is one of the tasks of the SuperGLUE benchmark. The task is to re-trace the steps of Facebook's RoBERTa paper (https://arxiv.org/pdf/1907.11692.pdf) and build an AllenNLP config that reads the ReCoRD data and fine-tunes a model on it. We expect scores in the range of their entry on the SuperGLUE leaderboard.
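Since the thread suggests casting ReCoRD as a span-prediction QA task, here is a minimal sketch (not from the thread) of converting one ReCoRD example into SQuAD-style records that a QA reader could consume. The field names (`passage`, `qas`, `query`, `answers`, `idx`, `start`) follow the public SuperGLUE ReCoRD JSON release, and `record_to_squad` is a hypothetical helper name:

```python
# Sketch: cast one ReCoRD example to SQuAD-style QA records.
# Field names are assumed from the SuperGLUE ReCoRD JSON; adjust if they differ.
def record_to_squad(example):
    passage = example["passage"]["text"]
    items = []
    for qa in example["qas"]:
        for answer in qa["answers"]:
            items.append({
                "id": qa["idx"],
                # The query contains "@placeholder" where the answer entity belongs.
                "question": qa["query"],
                "context": passage,
                "answers": {
                    "text": [answer["text"]],
                    "answer_start": [answer["start"]],
                },
            })
    return items
```

Each (query, answer) pair becomes its own record, which matches how SQuAD-style readers expect one gold span per training instance.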
This is a span prediction task. You can use the existing TransformerQA model and dataset reader, or write your own in the style of TransformerClassificationTT. Or you can write an AllenNLP model that's a thin wrapper around the Huggingface SQuAD model. All of these are valid approaches. To tie your components together, we recommend the IMDB experiment config as a starting point.
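To make the "tie your components together" suggestion concrete, a config along these lines could reuse allennlp-models' existing SQuAD-style components (`transformer_squad` reader, `transformer_qa` model). This is a sketch under assumptions, not a tested recipe: the data paths and hyperparameters are placeholders, and ReCoRD would still need either its own dataset reader or a conversion step to SQuAD format first:

```jsonnet
// Sketch only: component names come from allennlp-models' SQuAD pieces;
// paths and hyperparameters below are placeholders, not tuned values.
{
  "dataset_reader": {
    "type": "transformer_squad",
    "transformer_model_name": "roberta-large",
  },
  "train_data_path": "/path/to/record/train.json",
  "validation_data_path": "/path/to/record/dev.json",
  "model": {
    "type": "transformer_qa",
    "transformer_model_name": "roberta-large",
  },
  "data_loader": {"batch_size": 8, "shuffle": true},
  "trainer": {
    "optimizer": {"type": "huggingface_adamw", "lr": 1e-5},
    "num_epochs": 2,
  },
}
```
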