fix the broken test #77
Conversation
@ninginthecloud has imported this pull request. If you are a Meta employee, you can view this diff on Phabricator. |
Codecov Report
```diff
@@            Coverage Diff             @@
##             main      #77      +/-   ##
==========================================
- Coverage   94.43%   94.40%   -0.03%
==========================================
  Files         119      119
  Lines        6433     6435       +2
==========================================
  Hits         6075     6075
- Misses        358      360       +2
```
Force-pushed from ad11d92 to f10ad5f (compare)
Summary:
Pull Request resolved: #124

# TorchEval Version 0.0.6

## Change Log

- New metrics:
  - AUC
  - Binary, Multiclass, Multilabel AUPRC (also called Average Precision) #108 #109
  - Multilabel Precision Recall Curve #87
  - Recall at Fixed Precision #88 #91
  - Windowed Mean Square Error #72 #86
  - BLEU Score #93 #95
  - Perplexity #90
  - Word Error Rate #97
  - Word Information Loss #111
  - Word Information Preserved #110
- Features
  - Added Sync for Dictionaries of Metrics #98
  - Improved FLOPS counter #81
  - Improved Module Summary, added forward elapsed times #100 #103 #104 #105 #114
  - AUROC now supports weighted inputs #94
- Other
  - Improved Documentation #80 #117 #121
  - Added Module Summary to Quickstart #113
  - Updated several unit tests #77 #96 #101 #73
  - Docs Automatically Add New Metrics #118
  - Several Aggregation Metrics now Support fp64 #116 #123

### [BETA] Sync Dictionaries of Metrics

We're looking forward to building tooling for metric collections. The first important feature toward this end is collective syncing of groups of metrics. The example below shows how to sync all of your metrics at once with `sync_and_compute_collection`. This method is not just a convenience: on the backend, a single torch distributed sync collective is used for the entire group of metrics, so the overhead from repeated network calls is kept to a minimum.

```python
import torch
from torcheval.metrics import BinaryAUPRC, BinaryAUROC, BinaryAccuracy
from torcheval.metrics.toolkit import sync_and_compute_collection, reset_metrics

# Collections should be Dict[str, Metric]
train_metrics = {
    "train_auprc": BinaryAUPRC(),
    "train_auroc": BinaryAUROC(),
    "train_accuracy": BinaryAccuracy(),
}

# Hydrate metrics with some random data
preds = torch.rand(size=(100,))
targets = torch.randint(low=0, high=2, size=(100,))

for name, metric in train_metrics.items():
    metric.update(preds, targets)

# Sync the whole group with a single gather
print(sync_and_compute_collection(train_metrics))
>>> {'train_auprc': tensor(0.5913), 'train_auroc': tensor(0.5161, dtype=torch.float64), 'train_accuracy': tensor(0.5100)}

# Reset all metrics in the collection
reset_metrics(train_metrics.values())
```

Be on the lookout for more metric collection code coming in future releases.

## Contributors

We're grateful for our community, which helps us improve TorchEval by highlighting issues and contributing code. The following people contributed patches for this release:

Rohit Alekar, lindawangg, Julia Reinspach, jingchi-wang, Ekta Sardana, williamhufb, andreasfloros, Erika Lal, samiwilf

Reviewed By: ananthsub

Differential Revision: D42737308

fbshipit-source-id: dfd852345e1a9f3069ea33b056f5a60a3adde5aa
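For context, each metric in the collection above also follows the standard per-metric `update()` / `compute()` / `reset()` lifecycle that `sync_and_compute_collection` builds on. A minimal sketch using one of the metrics from the release-notes example (single process, no distributed sync):

```python
import torch
from torcheval.metrics import BinaryAUPRC

# A single metric instance; sync_and_compute_collection wraps many of these.
metric = BinaryAUPRC()

preds = torch.rand(size=(100,))
targets = torch.randint(low=0, high=2, size=(100,))

metric.update(preds, targets)   # accumulate state from a batch
print(metric.compute())         # local (unsynced) result for this process
metric.reset()                  # clear accumulated state before the next epoch
```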
Summary:
tests/metrics/test_toolkit.py has two broken test examples. Skip them for now; we will fix them soon.
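The patch itself is not shown in this conversation. Purely as an illustration, skipping a broken case in a `unittest`-based suite such as tests/metrics/test_toolkit.py typically looks like the sketch below; the test name here is hypothetical, not the one changed in this PR:

```python
import unittest

import torch


class ToolkitExampleTest(unittest.TestCase):
    # Hypothetical test; the actual skipped tests live in tests/metrics/test_toolkit.py.
    @unittest.skip("Example is currently broken; to be fixed in a follow-up PR")
    def test_sync_and_compute_collection_example(self) -> None:
        self.assertTrue(torch.equal(torch.ones(2), torch.ones(2)))
```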
Test plan:
Fixes #{issue number}