This repository has been archived by the owner on Sep 18, 2024. It is now read-only.

Commit

docs(tuner): add loss function explain for tuner (#138)
hanxiao authored Oct 18, 2021
1 parent 84585be commit 475c1d8
Showing 25 changed files with 159 additions and 52 deletions.
59 changes: 58 additions & 1 deletion .github/workflows/cd.yml
@@ -21,8 +21,65 @@ jobs:
env:
release_token: ${{ secrets.FINETUNER_RELEASE_TOKEN }}

prerelease:

prep-testbed:
runs-on: ubuntu-latest
needs: update-doc
steps:
- uses: actions/checkout@v2
- id: set-matrix
run: |
sudo apt-get install jq
echo "::set-output name=matrix::$(bash scripts/get-all-test-paths.sh)"
outputs:
matrix: ${{ steps.set-matrix.outputs.matrix }}

core-test:
needs: prep-testbed
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
python-version: [3.7]
test-path: ${{fromJson(needs.prep-testbed.outputs.matrix)}}
steps:
- uses: actions/checkout@v2
- name: Set up Python ${{ matrix.python-version }}
uses: actions/setup-python@v2
with:
python-version: ${{ matrix.python-version }}
- name: Prepare environment
run: |
python -m pip install --upgrade pip
python -m pip install wheel
pip install -r .github/requirements-test.txt
pip install -r .github/requirements-cicd.txt
pip install --no-cache-dir .
export JINA_LOG_LEVEL="ERROR"
- name: Test
id: test
run: |
pytest --suppress-no-test-exit-code --cov=finetuner --cov-report=xml \
-v -s -m "not gpu" ${{ matrix.test-path }}
echo "::set-output name=codecov_flag::finetuner"
timeout-minutes: 30
- name: Check codecov file
id: check_files
uses: andstor/file-existence-action@v1
with:
files: "coverage.xml"
- name: Upload coverage from test to Codecov
uses: codecov/codecov-action@v1
if: steps.check_files.outputs.files_exists == 'true' && ${{ matrix.python-version }} == '3.7'
with:
file: coverage.xml
name: ${{ matrix.test-path }}-codecov
flags: ${{ steps.test.outputs.codecov_flag }}
fail_ci_if_error: false
token: ${{ secrets.CODECOV_TOKEN }} # not required for public repos

prerelease:
needs: [update-doc, core-test]
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
6 changes: 2 additions & 4 deletions .github/workflows/ci.yml
@@ -97,8 +97,6 @@ jobs:
core-test:
needs: prep-testbed
runs-on: ubuntu-latest
env:
JINA_DAEMON_BUILD: DEVEL
strategy:
fail-fast: false
matrix:
@@ -123,9 +121,8 @@
run: |
pytest --suppress-no-test-exit-code --cov=finetuner --cov-report=xml \
-v -s -m "not gpu" ${{ matrix.test-path }}
echo "flag it as jina for codeoverage"
echo "::set-output name=codecov_flag::finetuner"
timeout-minutes: 20
timeout-minutes: 30
- name: Check codecov file
id: check_files
uses: andstor/file-existence-action@v1
@@ -139,6 +136,7 @@
name: ${{ matrix.test-path }}-codecov
flags: ${{ steps.test.outputs.codecov_flag }}
fail_ci_if_error: false
token: ${{ secrets.CODECOV_TOKEN }} # not required for public repos

test-gpu:
needs: prep-testbed
1 change: 1 addition & 0 deletions README.md
@@ -10,6 +10,7 @@
<p align=center>
<a href="https://pypi.org/project/finetuner/"><img src="https://github.com/jina-ai/jina/blob/master/.github/badges/python-badge.svg?raw=true" alt="Python 3.7 3.8 3.9" title="Finetuner supports Python 3.7 and above"></a>
<a href="https://pypi.org/project/finetuner/"><img src="https://img.shields.io/pypi/v/finetuner?color=%23099cec&amp;label=PyPI&amp;logo=pypi&amp;logoColor=white" alt="PyPI"></a>
<a href="https://codecov.io/gh/jina-ai/finetuner"><img src="https://codecov.io/gh/jina-ai/finetuner/branch/main/graph/badge.svg?token=xSs4acAEaJ"/></a>
<a href="https://slack.jina.ai"><img src="https://img.shields.io/badge/Slack-1.8k%2B-blueviolet?logo=slack&amp;logoColor=white"></a>
</p>

1 change: 1 addition & 0 deletions docs/basics/data-format.md
@@ -1,3 +1,4 @@
(data-format)=
# Data Format

Finetuner uses Jina [`Document`](https://docs.jina.ai/fundamentals/document/) as the primitive data type. In
6 changes: 6 additions & 0 deletions docs/basics/fit.md
@@ -0,0 +1,6 @@
# One-liner `fit()`

```{include} ../index.md
:start-after: <!-- start fit-method -->
:end-before: <!-- end fit-method -->
```
6 changes: 2 additions & 4 deletions docs/basics/index.md
@@ -1,9 +1,7 @@
```{toctree}
:hidden:
glossary
fit
data-format
tuner
tailor
labeler
glossary
```
Binary file removed docs/basics/lstm.cosine.png
Binary file not shown.
Binary file removed docs/basics/lstm.triplet.png
Binary file not shown.
Binary file removed docs/basics/mlp.cosine.png
Binary file not shown.
Binary file removed docs/basics/mlp.triplet.png
Binary file not shown.
1 change: 1 addition & 0 deletions docs/components/finetuner-composition.svg
1 change: 1 addition & 0 deletions docs/components/four-usecases.svg
8 changes: 8 additions & 0 deletions docs/components/index.md
@@ -0,0 +1,8 @@
```{toctree}
:hidden:
overview
tuner
tailor
labeler
```
File renamed without changes.
Binary file added docs/components/lstm.png
Binary file added docs/components/mlp.png
22 changes: 22 additions & 0 deletions docs/components/overview.md
@@ -0,0 +1,22 @@
# Overview

The Finetuner project is composed of three components:
- **Tuner**: tunes any embedding model to produce better embeddings on labeled data;
- **Tailor**: converts any deep neural network into an embedding model;
- **Labeler**: a UI for interactive labeling and for conducting [active learning](https://en.wikipedia.org/wiki/Active_learning_(machine_learning)) via Tuner.

```{figure} finetuner-composition.svg
:align: center
:width: 70%
```

## Usage

The three components can be used in combination to cover different scenarios.


```{figure} four-usecases.svg
:align: center
:width: 80%
```

File renamed without changes
File renamed without changes.
84 changes: 44 additions & 40 deletions docs/basics/tuner.md → docs/components/tuner.md
@@ -7,7 +7,7 @@ Labeled data can be constructed {ref}`by following this<construct-labeled-data>`

## Fit method

Tuner can be called via `finetuner.fit()`:
Tuner can be called via `finetuner.fit()`. Its minimal form looks like the following:

@@ -22,14 +22,45 @@

```python
import finetuner

finetuner.fit(
    ...
)
```

Here, `embed_model` must be an {term}`embedding model`, and `train_data` must be {term}`labeled data`.

### Loss function

Besides, it accepts the following `**kwargs`:
By default, Tuner uses `CosineSiameseLoss` for training. You can also select any other built-in loss via `finetuner.fit(..., loss='...')`.

|Argument| Description |
|---|---|
|`eval_data` | the evaluation data (same format as `train_data`) to be used on every epoch|
|`batch_size`| the number of `Document` in each batch|
|`epochs` |the number of epochs for training |
Let $\mathbf{x}_i$ denote the predicted embedding of Document $i$; the built-in losses are summarized below:

:::{dropdown} `CosineSiameseLoss`
:open:


$$\ell_{i,j} = \big(\cos(\mathbf{x}_i, \mathbf{x}_j) - y_{i,j}\big)^2$$ where $y_{i,j}\in\{-1, 1\}$ is the label, and $y_{i,j}=1$ means Document $i$ and Document $j$ are positively related.

:::
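As a quick numerical sketch (plain Python, not Finetuner's actual implementation; the vectors and labels below are made up for illustration), the per-pair loss can be computed as:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def cosine_siamese_loss(x_i, x_j, y):
    """(cos(x_i, x_j) - y)^2, with label y in {-1, 1}."""
    return (cosine(x_i, x_j) - y) ** 2

# identical embeddings labeled positive -> zero loss
print(cosine_siamese_loss([1.0, 0.0], [1.0, 0.0], 1))   # 0.0
# identical embeddings labeled negative -> (1 - (-1))^2 = 4
print(cosine_siamese_loss([1.0, 0.0], [1.0, 0.0], -1))  # 4.0
```

The loss is minimized when positively related pairs have cosine similarity near 1 and negatively related pairs near -1.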

:::{dropdown} `EuclideanSiameseLoss`
:open:

$$\ell_{i,j}=\frac{1}{2}\big(y_{i,j}\left \| \mathbf{x}_i-\mathbf{x}_j\right \| + (1-y_{i,j})\max(0, 1-\left \| \mathbf{x}_i-\mathbf{x}_j\right \|)\big)^2$$ where $y_{i,j}\in\{-1, 1\}$ is the label, and $y_{i,j}=1$ means Document $i$ and Document $j$ are positively related.

:::

:::{dropdown} `CosineTripletLoss`
:open:

$$\ell_{i, p, n}=\max(0, \cos(\mathbf{x}_i, \mathbf{x}_n)-\cos(\mathbf{x}_i, \mathbf{x}_p)+1)$$ where Document $p$ is positively related to Document $i$, while Document $n$ is negatively related or unrelated to Document $i$.
:::

:::{dropdown} `EuclideanTripletLoss`
:open:

$$\ell_{i, p, n}=\max(0, \left \|\mathbf{x}_i - \mathbf{x}_p \right \|-\left \|\mathbf{x}_i - \mathbf{x}_n \right \|+1)$$ where Document $p$ is positively related to Document $i$, while Document $n$ is negatively related or unrelated to Document $i$.

:::
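For intuition, here is a plain-Python sketch of the Euclidean triplet loss (again, not Finetuner's internals; the example vectors are arbitrary):

```python
import math

def euclidean(u, v):
    """Euclidean distance between two equal-length vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def euclidean_triplet_loss(x_i, x_p, x_n, margin=1.0):
    """max(0, ||x_i - x_p|| - ||x_i - x_n|| + margin)."""
    return max(0.0, euclidean(x_i, x_p) - euclidean(x_i, x_n) + margin)

# anchor close to the positive and far from the negative -> loss already zero
print(euclidean_triplet_loss([0.0, 0.0], [0.1, 0.0], [3.0, 0.0]))  # 0.0
# positive and negative equally distant -> loss equals the margin
print(euclidean_triplet_loss([0.0, 0.0], [1.0, 0.0], [1.0, 0.0]))  # 1.0
```

Training pushes the anchor at least `margin` closer to the positive than to the negative, after which the triplet contributes no gradient.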

```{tip}
Although the siamese losses consume pairs and the triplet losses consume triplets, there is **no need** to worry about the input format: as long as your data is labeled according to {ref}`data-format`, you can switch freely between all losses.
```
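To see why the data format is loss-agnostic, consider this toy sketch (hypothetical document names, not Finetuner's internals): one anchor with positively and negatively labeled matches yields both pair and triplet inputs.

```python
anchor = "shirt_0"
matches = {"shirt_1": 1, "sock_0": -1}  # label 1 = positive, -1 = negative

# pairs for the siamese losses: (doc_a, doc_b, label)
pairs = [(anchor, m, y) for m, y in matches.items()]

# triplets for the triplet losses: (anchor, positive, negative)
positives = [m for m, y in matches.items() if y == 1]
negatives = [m for m, y in matches.items() if y == -1]
triplets = [(anchor, p, n) for p in positives for n in negatives]

print(pairs)     # [('shirt_0', 'shirt_1', 1), ('shirt_0', 'sock_0', -1)]
print(triplets)  # [('shirt_0', 'shirt_1', 'sock_0')]
```

Since both views are derived from the same labels, switching `loss=` requires no change to the training data.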

## Examples

Expand Down Expand Up @@ -87,23 +118,10 @@ Besides, it accepts the following `**kwargs`:
)
```

By default, `head_layer` is set to `CosineLayer`; one can also use `TripletLayer`:

````{tab} CosineLayer

```{figure} mlp.cosine.png
:align: center
```

````
````{tab} TripletLayer
```{figure} mlp.triplet.png
:align: center
```
````
```{figure} mlp.png
:align: center
```

### Tune a bidirectional LSTM on Covid QA

Expand Down Expand Up @@ -167,22 +185,8 @@ By default, `head_layer` is set to `CosineLayer`, one can also use `TripletLayer
)
```

By default, `head_layer` is set to `CosineLayer`; one can also use `TripletLayer`:

````{tab} CosineLayer

```{figure} lstm.cosine.png
:align: center
```

````
````{tab} TripletLayer
```{figure} lstm.triplet.png
:align: center
```
````
```{figure} lstm.png
:align: center
```


