
Commit

some model card fixes (#274)
* some model card fixes

* CHANGELOG
epwalsh committed Jun 7, 2021
1 parent 0a7901c commit b17d114
Showing 5 changed files with 10 additions and 18 deletions.
4 changes: 4 additions & 0 deletions CHANGELOG.md
@@ -11,6 +11,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

- Added support for NLVR2 visual entailment, including a data loader, two models, and training configs.

### Fixed

- Fixed registered model names in the `pair-classification-roberta-rte` and `vgqa-vilbert` model cards.


## [v2.5.0](https://github.com/allenai/allennlp-models/releases/tag/v2.5.0) - 2021-06-03

2 changes: 1 addition & 1 deletion allennlp_models/modelcards/pair-classification-roberta-rte.json
@@ -1,6 +1,6 @@
{
"id": "pair-classification-roberta-rte",
"registered_model_name": "roberta-rte",
"registered_model_name": "basic_classifier",
"registered_predictor_name": null,
"display_name": "RoBERTa RTE",
"task_id": "pair_classification",
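
The new value points at a name that actually exists in AllenNLP's `Model` registry: the RoBERTa RTE model is an ordinary `BasicClassifier`, registered as `basic_classifier`, whereas `roberta-rte` was never a registry entry. A minimal sketch (not part of the commit) of how that can be checked:

```python
from allennlp.models import Model
from allennlp.models.basic_classifier import BasicClassifier

# "basic_classifier" is the name BasicClassifier registers itself under in
# core AllenNLP, so the card's registered_model_name should now resolve.
assert Model.by_name("basic_classifier") is BasicClassifier
```
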
4 changes: 2 additions & 2 deletions allennlp_models/modelcards/vgqa-vilbert.json
@@ -1,7 +1,7 @@
{
"id": "vgqa-vilbert",
"registered_model_name": "vgqa_vilbert",
"registered_predictor_name": "vilbert_vgqa",
"registered_model_name": "vqa_vilbert_from_huggingface",
"registered_predictor_name": "vgqa_vilbert",
"display_name": "ViLBERT - Visual Genome Question Answering",
"task_id": "vgqa",
"model_details": {
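
Here the two names were swapped in a matching way: the card pointed at the model as `vgqa_vilbert` and the predictor as `vilbert_vgqa`, while the registered names are `vqa_vilbert_from_huggingface` (the name suggests the ViLBERT model built from a HuggingFace checkpoint) and `vgqa_vilbert`. A hedged sketch of a sanity check over all cards that would have flagged both this card and the RTE one; the loop and output format are illustrative, not code from the repo:

```python
from allennlp.common.plugins import import_plugins
from allennlp.models import Model
from allennlp.predictors import Predictor

from allennlp_models.pretrained import get_pretrained_models

# Make sure plugin packages (including allennlp_models) are imported so that
# their Model and Predictor registrations are visible.
import_plugins()

for card_id, card in get_pretrained_models().items():
    model_name = card.registered_model_name
    predictor_name = card.registered_predictor_name
    if model_name and model_name not in Model.list_available():
        print(f"{card_id}: unknown registered_model_name {model_name!r}")
    if predictor_name and predictor_name not in Predictor.list_available():
        print(f"{card_id}: unknown registered_predictor_name {predictor_name!r}")
```
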
16 changes: 2 additions & 14 deletions allennlp_models/pretrained.py
@@ -7,27 +7,14 @@

from allennlp.common.model_card import ModelCard
from allennlp.common.task_card import TaskCard

- # These imports are included so that the model cards can be filled with default information
- # obtained from the registered model classes.
- from allennlp_models.classification.models import * # noqa: F401, F403
- from allennlp_models.coref.models import * # noqa: F401, F403
- from allennlp_models.generation.models import * # noqa: F401, F403
- from allennlp_models.lm.models import * # noqa: F401, F403
- from allennlp_models.mc.models import * # noqa: F401, F403
- from allennlp_models.pair_classification.models import * # noqa: F401, F403
- from allennlp_models.rc.models import * # noqa: F401, F403
- from allennlp_models.structured_prediction.models import * # noqa: F401, F403
- from allennlp_models.tagging.models import * # noqa: F401, F403
- from allennlp_models.vision.models import * # noqa: F401, F403
+ from allennlp.common.plugins import import_plugins


def get_tasks() -> Dict[str, TaskCard]:
"""
Returns a mapping of [`TaskCard`](/models/common/task_card#taskcard)s for all
tasks.
"""

tasks = {}
task_card_paths = os.path.join(
os.path.dirname(os.path.realpath(__file__)), "taskcards", "*.json"
@@ -45,6 +32,7 @@ def get_pretrained_models() -> Dict[str, ModelCard]:
Returns a mapping of [`ModelCard`](/models/common/model_card#modelcard)s for all
available pretrained models.
"""
+ import_plugins()

pretrained_models = {}
model_card_paths = os.path.join(
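
Instead of eagerly wildcard-importing every model module just so the registry is populated when model cards are built, `get_pretrained_models()` now calls `import_plugins()`, which imports installed AllenNLP plugin packages (allennlp_models among them) and triggers the same registrations as a side effect. A rough sketch of that effect, assuming allennlp_models is installed and discoverable as a plugin:

```python
from allennlp.common.plugins import import_plugins
from allennlp.models import Model

before = set(Model.list_available())  # registrations from core allennlp only
import_plugins()                      # imports plugin packages, which register their Model subclasses
after = set(Model.list_available())

# Names contributed by plugins such as allennlp_models should appear here.
print(sorted(after - before))
```
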
2 changes: 1 addition & 1 deletion (pair-classification training config, .jsonnet; file path not shown)
@@ -20,7 +20,7 @@ local transformer_dim = 1024;
"train_data_path": "https://allennlp.s3.amazonaws.com/datasets/snli/snli_1.0_train.jsonl",
"validation_data_path": "https://allennlp.s3.amazonaws.com/datasets/snli/snli_1.0_dev.jsonl",
"model": {
"type": "allennlp.fairness.bias_mitigator_applicator.BiasMitigatorApplicator",
"type": "bias_mitigator_applicator",
"base_model": {
"_pretrained": {
"archive_file": "https://storage.googleapis.com/allennlp-public-models/snli-roberta.2021-03-11.tar.gz",
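
The model type is switched from the fully qualified class path to the short registered name; both forms resolve to the same class through the `Model` registry, so this is a readability and consistency change (assuming `BiasMitigatorApplicator` is registered under `bias_mitigator_applicator`, as the new config value implies). A small sketch of that equivalence:

```python
from allennlp.fairness.bias_mitigator_applicator import BiasMitigatorApplicator
from allennlp.models import Model

# Importing the module runs its @Model.register(...) decorator, so the short
# name used in the updated config should resolve to the same class that the
# old fully qualified path named.
print(Model.by_name("bias_mitigator_applicator") is BiasMitigatorApplicator)  # expected: True
```
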
