ankitsharma07/fastapi-tweet-extraction
Objective

  • Given a tweet and a sentiment, predict the word or phrase in the tweet that exemplifies that sentiment.
  • The data comes from a Kaggle competition: Tweet Sentiment Extraction.
  • Data format
    • Train data has:
      • text: the text of the tweet
      • sentiment: the general sentiment of the tweet
      • selected_text: the portion of the tweet that supports its sentiment

FastAPI

  • Used FastAPI to create an API endpoint /predict which takes:
    • tweet: string
    • sentiment: string
  • and outputs:
    • the original tweet
    • the original sentiment
    • the phrase that supports the sentiment
  • There are three sentiment values:
    • positive
    • negative
    • neutral

Example Request Body

{
  "tweet": "My bike was put on hold...should have known that.... argh total bummer",
  "sentiment": "negative"
}

Response Body

{
  "tweet": "My bike was put on hold...should have known that.... argh total bummer",
  "sentiment": "negative",
  "text representing sentiment": "argh total bummer"
}

Docker Container

  • To create the Docker container, first clone the repository, then download the following folders and place them as described:
  • bert-base-uncased inside /app/input/
  • the models folder inside /app/

Directory tree after adding these folders:

├── Dockerfile
├── README.org
├── app
│   ├── __pycache__
│   │   └── api.cpython-310.pyc
│   ├── api.py
│   ├── input
│   │   ├── bert-base-uncased
│   │   │   ├── config.json
│   │   │   ├── pytorch_model.bin
│   │   │   └── vocab.txt
│   │   ├── test.csv
│   │   └── train.csv
│   ├── models
│   │   ├── model.bin
│   │   └── model_cfg.py
│   └── notebooks
│       └── tweet-sentiment-bert-and-eda.ipynb
├── requirements.txt
└── src
    ├── __init__.py
    ├── config.py
    ├── dataset.py
    ├── engine.py
    ├── inference.py
    ├── model.py
    ├── train.py
    └── utils.py

NOTE: The src folder contains the code for training and inference. It is not needed in the Docker environment, but it is kept in the repository in case we want to re-train the models.
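Given this layout, the Dockerfile might look roughly like the following. This is a hypothetical sketch (the module path `app.api:app` and the Python version are assumptions); consult the repository's actual Dockerfile:

```dockerfile
FROM python:3.10-slim

WORKDIR /code

# Install dependencies first so the layer is cached across code changes.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the app, including the model files added under app/input/ and app/models/.
COPY app/ ./app/

EXPOSE 80
CMD ["uvicorn", "app.api:app", "--host", "0.0.0.0", "--port", "80"]
```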

Build and Run

Build

After adding the files and folders to the directories mentioned above, go to the base directory (where the Dockerfile is located) and run this command to build the Docker image:

docker build -t image-name .

Run

After the image builds successfully, run the container with:

docker run -p 80:80 image-name

FastAPI Swagger URL

Once the container is running, visit http://0.0.0.0/docs, where the Swagger UI exposes a GET / method that returns the health of the API and a POST /predict method that returns the phrase which exemplifies the sentiment for the given tweet.

Request URL

http://0.0.0.0/predict

Curl request

curl -X 'POST' \
  'http://0.0.0.0/predict' \
  -H 'accept: application/json' \
  -H 'Content-Type: application/json' \
  -d '{
  "tweet": "My bike was put on hold...should have known that.... argh total bummer",
  "sentiment": "negative"
}'
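The same request can be made from Python using only the standard library. This is an illustrative helper (the function name `predict` and the default base URL are assumptions; the endpoint path and headers match the curl request above):

```python
import json
from urllib.request import Request, urlopen

def predict(tweet: str, sentiment: str, base_url: str = "http://0.0.0.0") -> dict:
    """POST a tweet and sentiment to /predict and return the parsed JSON response."""
    payload = json.dumps({"tweet": tweet, "sentiment": sentiment}).encode("utf-8")
    req = Request(
        f"{base_url}/predict",
        data=payload,
        headers={"accept": "application/json", "Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        return json.loads(resp.read())

# Example (requires the container to be running):
# predict("My bike was put on hold...should have known that.... argh total bummer", "negative")
```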
