From 9d14dfd78f107eb428b49a9f6155c461b050c41a Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Aleksandar=20Toma=C5=A1evi=C4=87?= <39856297+atomashevic@users.noreply.github.com>
Date: Tue, 20 Aug 2024 15:23:27 +0200
Subject: [PATCH] Update README.md

---
 README.md | 28 ++++++++++++++++++++++++----
 1 file changed, 24 insertions(+), 4 deletions(-)

diff --git a/README.md b/README.md
index d361719..58e74bc 100644
--- a/README.md
+++ b/README.md
@@ -129,20 +129,40 @@ The rag function supports various large language models (LLMs), including TinyLL
 
 Here's an example based on the description of this package. First, we specify the text data.
 
 ```R
-text <- "With `transforEmotion` you can use cutting-edge transformer models for zero-shot emotion classification of text, image, and video in R, *all without the need for a GPU, subscriptions, paid services, or using Python. Implements sentiment analysis using [huggingface](https://huggingface.co/) transformer zero-shot classification model pipelines. The default pipeline for text is [Cross-Encoder's DistilRoBERTa](https://huggingface.co/cross-encoder/nli-distilroberta-base) trained on the [Stanford Natural Language Inference](https://huggingface.co/datasets/snli) (SNLI) and [Multi-Genre Natural Language Inference](https://huggingface.co/datasets/multi_nli) (MultiNLI) datasets. Using similar models, zero-shot classification transformers have demonstrated superior performance relative to other natural language processing models (Yin, Hay, & Roth, [2019](https://arxiv.org/abs/1909.00161)). All other zero-shot classification model pipelines can be implemented using their model name from https://huggingface.co/models?pipeline_tag=zero-shot-classification."
+text <- "With `transforEmotion` you can use cutting-edge transformer models for zero-shot emotion
+  classification of text, image, and video in R, all without the need for a GPU,
+  subscriptions, paid services, or using Python.
+  Implements sentiment analysis
+  using [huggingface](https://huggingface.co/) transformer zero-shot classification model pipelines.
+  The default pipeline for text is
+  [Cross-Encoder's DistilRoBERTa](https://huggingface.co/cross-encoder/nli-distilroberta-base)
+  trained on the [Stanford Natural Language Inference](https://huggingface.co/datasets/snli) (SNLI) and
+  [Multi-Genre Natural Language Inference](https://huggingface.co/datasets/multi_nli) (MultiNLI) datasets.
+  Using similar models, zero-shot classification transformers have demonstrated superior performance
+  relative to other natural language processing models
+  (Yin, Hay, & Roth, [2019](https://arxiv.org/abs/1909.00161)).
+  All other zero-shot classification model pipelines can be implemented using their model name
+  from https://huggingface.co/models?pipeline_tag=zero-shot-classification."
 ```
 
 And then we run the `rag` function.
 
 ```R
- rag(text, query = "What is the use case for transforEmotion package?"
-+ )
+ rag(text, query = "What is the use case for transforEmotion package?")
 ```
 
 This code will produce output similar to the following.
 
 ```
-The use case for transforEmotion package is to use cutting-edge transformer models for zero-shot emotion classification of text, image, and video in R, without the need for a GPU, subscriptions, paid services, or using Python. This package implements sentiment analysis using the Cross-Encoder's DistilRoBERTa model trained on the Stanford Natural Language Inference (SNLI) and MultiNLI datasets. Using similar models, zero-shot classification transformers have demonstrated superior performance relative to other natural language processing models (Yin, Hay, & Roth, [2019](https://arxiv.org/abs/1909.00161)). The transforEmotion package can be used to implement these models and other zero-shot classification model pipelines from the HuggingFace library.>
+The use case for transforEmotion package is to use cutting-edge transformer
+models for zero-shot emotion classification of text, image, and video in R,
+without the need for a GPU, subscriptions, paid services, or using Python.
+This package implements sentiment analysis using the Cross-Encoder's DistilRoBERTa
+model trained on the Stanford Natural Language Inference (SNLI) and MultiNLI datasets.
+Using similar models, zero-shot classification transformers have demonstrated
+superior performance relative to other natural language processing models
+(Yin, Hay, & Roth, [2019](https://arxiv.org/abs/1909.00161)).
+The transforEmotion package can be used to implement these models and other
+zero-shot classification model pipelines from the HuggingFace library.
 ```
 
 ## Image Example