#9 Create pip package and automated builds
Release PyPI package + Create GitHub workflow
casper-hansen authored Sep 1, 2023
2 parents 7fbe9bb + afcce1a commit f0eba43
Showing 17 changed files with 320 additions and 90 deletions.
95 changes: 95 additions & 0 deletions .github/workflows/build.yaml
@@ -0,0 +1,95 @@
name: Build AutoAWQ Wheels with CUDA

on:
  push:
    tags:
      - "v*"

jobs:
  release:
    # Retrieve tag and create release
    name: Create Release
    runs-on: ubuntu-latest
    outputs:
      upload_url: ${{ steps.create_release.outputs.upload_url }}
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Extract branch info
        shell: bash
        run: |
          echo "release_tag=${GITHUB_REF#refs/*/}" >> $GITHUB_ENV
      - name: Create Release
        id: create_release
        uses: "actions/github-script@v6"
        env:
          RELEASE_TAG: ${{ env.release_tag }}
        with:
          github-token: "${{ secrets.GITHUB_TOKEN }}"
          script: |
            const script = require('.github/workflows/scripts/github_create_release.js')
            await script(github, context, core)
  build_wheels:
    name: Build AWQ
    runs-on: ${{ matrix.os }}
    needs: release

    strategy:
      matrix:
        os: [ubuntu-20.04, windows-latest]
        pyver: ["3.8", "3.9", "3.10", "3.11"]
        cuda: ["11.8"]
    defaults:
      run:
        shell: pwsh
    env:
      CUDA_VERSION: ${{ matrix.cuda }}

    steps:
      - uses: actions/checkout@v3

      - uses: actions/setup-python@v3
        with:
          python-version: ${{ matrix.pyver }}

      - name: Setup Miniconda
        uses: conda-incubator/[email protected]
        with:
          activate-environment: "build"
          python-version: ${{ matrix.pyver }}
          mamba-version: "*"
          use-mamba: false
          channels: conda-forge,defaults
          channel-priority: true
          add-pip-as-python-dependency: true
          auto-activate-base: false

      - name: Install Dependencies
        run: |
          conda install cuda-toolkit -c "nvidia/label/cuda-${env:CUDA_VERSION}.0"
          conda install pytorch "pytorch-cuda=${env:CUDA_VERSION}" -c pytorch -c nvidia
          python -m pip install --upgrade build setuptools wheel ninja

          # Environment variables
          Add-Content $env:GITHUB_ENV "CUDA_PATH=$env:CONDA_PREFIX"
          Add-Content $env:GITHUB_ENV "CUDA_HOME=$env:CONDA_PREFIX"
          if ($IsLinux) {$env:LD_LIBRARY_PATH = $env:CONDA_PREFIX + '/lib:' + $env:LD_LIBRARY_PATH}

          # Print version information
          python --version
          python -c "import torch; print('PyTorch:', torch.__version__)"
          python -c "import torch; print('CUDA:', torch.version.cuda)"
          python -c "from torch.utils import cpp_extension; print (cpp_extension.CUDA_HOME)"
      - name: Build Wheel
        run: |
          python setup.py sdist bdist_wheel
      - name: Upload Assets
        uses: shogo82148/actions-upload-release-asset@v1
        with:
          upload_url: ${{ needs.release.outputs.upload_url }}
          asset_path: ./dist/*.whl
17 changes: 17 additions & 0 deletions .github/workflows/scripts/github_create_release.js
@@ -0,0 +1,17 @@
module.exports = async (github, context, core) => {
  try {
    const response = await github.rest.repos.createRelease({
      draft: false,
      generate_release_notes: true,
      name: process.env.RELEASE_TAG,
      owner: context.repo.owner,
      prerelease: false,
      repo: context.repo.repo,
      tag_name: process.env.RELEASE_TAG,
    });

    core.setOutput('upload_url', response.data.upload_url);
  } catch (error) {
    core.setFailed(error.message);
  }
}
30 changes: 25 additions & 5 deletions README.md
@@ -4,7 +4,7 @@ AutoAWQ is a package that implements the Activation-aware Weight Quantization (A

Roadmap:

- [ ] Publish pip package
- [x] Publish pip package
- [ ] Refactor quantization code
- [ ] Support more models
- [ ] Optimize the speed of models
@@ -13,15 +13,29 @@ Roadmap:

Requirements:
- Compute Capability 8.0 (sm80). Ampere and later architectures are supported.
- CUDA Toolkit 11.8 and later.

Clone this repository and install with pip.
Install:
- Use pip to install awq

```
pip install awq
```

### Build source

<details>

<summary>Build AutoAWQ from scratch</summary>

```
git clone https://github.com/casper-hansen/AutoAWQ
cd AutoAWQ
pip install -e .
```

</details>

## Supported models

The detailed support list:
@@ -36,6 +50,7 @@ The detailed support list:
| OPT | 125m/1.3B/2.7B/6.7B/13B/30B |
| Bloom | 560m/3B/7B |
| LLaVA-v0 | 13B |
| GPTJ | 6.7B |

## Usage

@@ -44,8 +59,8 @@ Below, you will find examples for how to easily quantize a model and run inferen
### Quantization

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
from awq.models.auto import AutoAWQForCausalLM

model_path = 'lmsys/vicuna-7b-v1.5'
quant_path = 'vicuna-7b-v1.5-awq'
@@ -68,8 +83,8 @@ tokenizer.save_pretrained(quant_path)
Run inference on a quantized model from Huggingface:

```python
from awq import AutoAWQForCausalLM
from transformers import AutoTokenizer
from awq.models.auto import AutoAWQForCausalLM

quant_path = "casperhansen/vicuna-7b-v1.5-awq"
quant_file = "awq_model_w4_g128.pt"
@@ -101,8 +116,11 @@ Benchmark speeds may vary from server to server and that it also depends on your
| MPT-30B | A6000 | OOM | 31.57 | -- |
| Falcon-7B | A6000 | 39.44 | 27.34 | 1.44x |

<details>

For example, here is the difference between a fast and slow CPU on MPT-7B:
<summary>Detailed benchmark (CPU vs. GPU)</summary>

Here is the difference between a fast and slow CPU on MPT-7B:

RTX 4090 + Intel i9 13900K (2 different VMs):
- CUDA 12.0, Driver 525.125.06: 134 tokens/s (7.46 ms/token)
@@ -113,6 +131,8 @@ RTX 4090 + AMD EPYC 7-Series (3 different VMs):
- CUDA 12.2, Driver 535.54.03: 56 tokens/s (17.71 ms/token)
- CUDA 12.0, Driver 525.125.06: 55 tokens/s (18.15 ms/token)

</details>

## Reference

If you find AWQ useful or relevant to your research, you can cite their [paper](https://arxiv.org/abs/2306.00978):
1 change: 1 addition & 0 deletions awq/__init__.py
@@ -0,0 +1 @@
from awq.models.auto import AutoAWQForCausalLM
4 changes: 2 additions & 2 deletions awq/entry.py
@@ -4,7 +4,7 @@
import argparse
from lm_eval import evaluator
from transformers import AutoTokenizer
from awq.models.auto import AutoAWQForCausalLM
from awq import AutoAWQForCausalLM
from awq.quantize.auto_clip import apply_clip
from awq.quantize.auto_scale import apply_scale
from awq.utils.lm_eval_adaptor import LMEvalAdaptor
@@ -152,7 +152,7 @@ def _warmup(device:str):
parser.add_argument('--tasks', type=str, default='wikitext', help='Tasks to evaluate. '
'Separate tasks by comma for multiple tasks.'
'https://github.com/EleutherAI/lm-evaluation-harness/blob/master/docs/task_table.md')
parser.add_argument("--task_use_pretrained", default=False, action=argparse.BooleanOptionalAction,
parser.add_argument("--task_use_pretrained", default=False, action='store_true',
help="Pass '--task_use_pretrained' to use a pretrained model running FP16")
parser.add_argument('--task_batch_size', type=int, default=1)
parser.add_argument('--task_n_shot', type=int, default=0)
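
A note on the `--task_use_pretrained` change above: `argparse.BooleanOptionalAction` only exists on Python 3.9+, while the wheel matrix in `build.yaml` still targets 3.8, so plain `'store_true'` is the portable choice (that motivation is an inference, not stated in the commit). A minimal sketch of the behavioral difference, using a shortened hypothetical flag name:

```python
import argparse

# 'store_true': the flag defaults to False and flips to True only when passed.
p1 = argparse.ArgumentParser()
p1.add_argument("--use_pretrained", default=False, action="store_true")
assert p1.parse_args([]).use_pretrained is False
assert p1.parse_args(["--use_pretrained"]).use_pretrained is True

# BooleanOptionalAction (Python 3.9+): additionally generates a --no-use_pretrained
# negation, which 'store_true' does not provide.
p2 = argparse.ArgumentParser()
p2.add_argument("--use_pretrained", default=False, action=argparse.BooleanOptionalAction)
assert p2.parse_args(["--no-use_pretrained"]).use_pretrained is False
```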
9 changes: 1 addition & 8 deletions awq/modules/fused_attn.py
@@ -34,8 +34,6 @@ def _set_cos_sin_cache(self, seq_len, device, dtype):
sin = freqs.sin()
cache = torch.cat((cos, sin), dim=-1)

# self.register_buffer("cos_cached", emb.cos()[None, None, :, :].to(dtype), persistent=False)
# self.register_buffer("sin_cached", emb.sin()[None, None, :, :].to(dtype), persistent=False)
self.register_buffer("cos_sin_cache", cache.half(), persistent=False)

def forward(
@@ -46,7 +44,6 @@
):
# Apply rotary embedding to the query and key before passing them
# to the attention op.
# print(positions.shape, query.shape, key.shape, self.cos_sin_cache.shape)
query = query.contiguous()
key = key.contiguous()
awq_inference_engine.rotary_embedding_neox(
@@ -146,7 +143,7 @@ def make_quant_attn(model, dev):
qweights = torch.cat([q_proj.qweight, k_proj.qweight, v_proj.qweight], dim=1)
qzeros = torch.cat([q_proj.qzeros, k_proj.qzeros, v_proj.qzeros], dim=1)
scales = torch.cat([q_proj.scales, k_proj.scales, v_proj.scales], dim=1)
# g_idx = torch.cat([q_proj.g_idx, k_proj.g_idx, v_proj.g_idx], dim=0)

g_idx = None
bias = torch.cat([q_proj.bias, k_proj.bias, v_proj.bias], dim=0) if q_proj.bias is not None else None

@@ -156,8 +153,6 @@
qkv_layer.scales = scales

qkv_layer.bias = bias
# We're dropping the rotary embedding layer m.rotary_emb here. We don't need it in the triton branch.

attn = QuantLlamaAttention(m.hidden_size, m.num_heads, qkv_layer, m.o_proj, dev)

if '.' in name:
@@ -169,6 +164,4 @@
parent = model
child_name = name

#print(f"Replacing {name} with quant_attn; parent: {parent_name}, child's name: {child_name}")

setattr(parent, child_name, attn)
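
The `make_quant_attn` fragment above packs the Q, K and V projections into a single `qkv_layer` so attention issues one fused GEMM instead of three. As a rough illustration of why concatenating along the output dimension is equivalent (plain float weights here, not the packed int4 `qweight`/`qzeros`/`scales` tensors the real code concatenates):

```python
import torch

hidden, head_dim = 16, 8
q_w = torch.randn(hidden, head_dim)
k_w = torch.randn(hidden, head_dim)
v_w = torch.randn(hidden, head_dim)
qkv_w = torch.cat([q_w, k_w, v_w], dim=1)   # (hidden, 3 * head_dim)

x = torch.randn(4, hidden)                  # a small batch of hidden states
fused = x @ qkv_w                           # one matmul for all three projections
separate = torch.cat([x @ q_w, x @ k_w, x @ v_w], dim=1)
assert torch.allclose(fused, separate, atol=1e-5)
q, k, v = fused.split(head_dim, dim=1)      # slice the fused output back apart
```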
1 change: 0 additions & 1 deletion awq/modules/fused_mlp.py
@@ -71,7 +71,6 @@ def our_llama_mlp(self, x):

def make_fused_mlp(m, parent_name=''):
if not hasattr(make_fused_mlp, "called"):
# print("[Warning] Calling a fake MLP fusion. But still faster than Huggingface Implimentation.")
make_fused_mlp.called = True
"""
Replace all LlamaMLP modules with QuantLlamaMLP modules, which fuses many of the operations.
2 changes: 0 additions & 2 deletions awq/modules/fused_norm.py
@@ -38,6 +38,4 @@ def make_quant_norm(model):
parent = model
child_name = name

#print(f"Replacing {name} with quant_attn; parent: {parent_name}, child's name: {child_name}")

setattr(parent, child_name, norm)
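
Both `make_quant_attn` and `make_quant_norm` end with the same parent-lookup-then-`setattr` pattern for swapping a fused module into the model. A compact sketch of that pattern using `nn.Module.get_submodule` (a convenience equivalent, not necessarily the exact lookup the repo performs):

```python
import torch.nn as nn

def replace_module(model: nn.Module, name: str, new_module: nn.Module) -> None:
    """Swap the submodule at a dotted path (e.g. 'model.layers.0.self_attn')."""
    if "." in name:
        parent_name, child_name = name.rsplit(".", 1)
        parent = model.get_submodule(parent_name)
    else:
        parent, child_name = model, name
    setattr(parent, child_name, new_module)

# usage sketch: replace_module(model, name, attn) or replace_module(model, name, norm)
```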
4 changes: 2 additions & 2 deletions awq/quantize/auto_scale.py
@@ -1,6 +1,7 @@
import gc
import torch
import torch.nn as nn
import logging

from transformers.models.bloom.modeling_bloom import BloomBlock, BloomGelu
from transformers.models.opt.modeling_opt import OPTDecoderLayer
@@ -154,9 +155,8 @@ def _search_module_scale(block, linears2scale: list, x, kwargs={}):
best_scales = scales
block.load_state_dict(org_sd)
if best_ratio == -1:
print(history)
logging.debug(history)
raise Exception
# print(best_ratio)
best_scales = best_scales.view(-1)

assert torch.isnan(best_scales).sum() == 0, best_scales
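
For context on the `logging.debug(history)` line: `best_ratio` and `history` come from a grid search over candidate scaling ratios inside `_search_module_scale`, and the history is only dumped when no candidate ever improved the running best. A toy sketch of that search pattern (hypothetical error function, not AWQ's actual per-block quantization loss):

```python
import logging

def toy_error(ratio: float) -> float:
    # stand-in for the real objective: quantization error of the scaled block
    return (ratio - 0.35) ** 2

history, best_error, best_ratio = [], float("inf"), -1
n_grid = 20
for i in range(n_grid):
    ratio = i / n_grid
    err = toy_error(ratio)
    history.append(err)
    if err < best_error:
        best_error, best_ratio = err, ratio

if best_ratio == -1:
    # nothing beat the initial best: log the search history before failing
    logging.debug(history)
    raise Exception
```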
3 changes: 2 additions & 1 deletion awq/utils/calib_data.py
@@ -1,4 +1,5 @@
import torch
import logging
from datasets import load_dataset

def get_calib_dataset(data="pileval", tokenizer=None, n_samples=512, block_size=512):
@@ -25,5 +26,5 @@ def get_calib_dataset(data="pileval", tokenizer=None, n_samples=512, block_size=
# now concatenate all samples and split according to block size
cat_samples = torch.cat(samples, dim=1)
n_split = cat_samples.shape[1] // block_size
print(f" * Split into {n_split} blocks")
logging.debug(f" * Split into {n_split} blocks")
return [cat_samples[:, i*block_size:(i+1)*block_size] for i in range(n_split)]
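
The `logging.debug` line reports the block-splitting step visible just above it. The same logic in isolation, with dummy token ids instead of a tokenizer and dataset download:

```python
import torch

# dummy "tokenized samples" of uneven length, standing in for the calibration rows
samples = [torch.randint(0, 32000, (1, n)) for n in (300, 450, 700)]
block_size = 512

# concatenate along the sequence dimension, then carve out as many full
# blocks as fit; the trailing remainder is dropped
cat_samples = torch.cat(samples, dim=1)           # shape (1, 1450)
n_split = cat_samples.shape[1] // block_size      # -> 2 blocks
blocks = [cat_samples[:, i * block_size:(i + 1) * block_size] for i in range(n_split)]
assert all(b.shape == (1, block_size) for b in blocks)
```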
4 changes: 2 additions & 2 deletions awq/utils/lm_eval_adaptor.py
@@ -2,7 +2,7 @@
import torch
from lm_eval.base import BaseLM
import fnmatch

import logging

class LMEvalAdaptor(BaseLM):

@@ -52,7 +52,7 @@ def max_length(self):
elif 'falcon' in self.model_name:
return 2048
else:
print(self.model.config)
logging.debug(self.model.config)
raise NotImplementedError

@property
3 changes: 2 additions & 1 deletion awq/utils/parallel.py
@@ -1,6 +1,7 @@
import os
import torch
import gc
import logging


def auto_parallel(args):
@@ -23,5 +24,5 @@ def auto_parallel(args):
cuda_visible_devices = list(range(8))
os.environ["CUDA_VISIBLE_DEVICES"] = ",".join(
[str(dev) for dev in cuda_visible_devices[:n_gpu]])
print("CUDA_VISIBLE_DEVICES: ", os.environ["CUDA_VISIBLE_DEVICES"])
logging.debug("CUDA_VISIBLE_DEVICES: ", os.environ["CUDA_VISIBLE_DEVICES"])
return cuda_visible_devices
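
One caveat on the call above: unlike `print`, the `logging` functions treat extra positional arguments as lazy %-style formatting arguments rather than items to concatenate, so the value is best passed through a `%s` placeholder. A minimal sketch of the two idiomatic spellings:

```python
import logging

logging.basicConfig(level=logging.DEBUG)
devices = "0,1,2,3"  # example value for CUDA_VISIBLE_DEVICES

# eager formatting: the string is built whether or not DEBUG is enabled
logging.debug("CUDA_VISIBLE_DEVICES: %s" % devices)

# lazy formatting: logging interpolates the argument only if the record is emitted
logging.debug("CUDA_VISIBLE_DEVICES: %s", devices)
```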
2 changes: 1 addition & 1 deletion awq_cuda/layernorm/reduction.cuh
@@ -16,7 +16,7 @@ https://github.com/NVIDIA/FasterTransformer/blob/main/src/fastertransformer/kern
#include <float.h>
#include <type_traits>

static const float HALF_FLT_MAX = 65504.F;
#define HALF_FLT_MAX 65504.F
#define FINAL_MASK 0xffffffff


Expand Down