diff --git a/Makefile b/Makefile index 23b36076..02f27d51 100644 --- a/Makefile +++ b/Makefile @@ -32,6 +32,7 @@ install: @pip install --user -U pip @pip install --user "Cython<3" @pip install --user -r requirements.txt + @pip install --user --no-deps -r requirements-without-deps.txt @./scripts/setup_on_jupyterlab.sh @pre-commit install @sudo apt-get -y install graphviz diff --git a/notebooks/vertex_genai/solutions/vertex_llm_tuning.ipynb b/notebooks/vertex_genai/solutions/vertex_llm_tuning.ipynb new file mode 100644 index 00000000..ec038808 --- /dev/null +++ b/notebooks/vertex_genai/solutions/vertex_llm_tuning.ipynb @@ -0,0 +1,612 @@ +{ + "cells": [ + { + "cell_type": "markdown", + "metadata": { + "id": "tvgnzT1CKxrO" + }, + "source": [ + "# Tuning and deploying a foundation model\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "d975e698c9a4" + }, + "source": [ + "**Learning Objectives**\n", + "\n", + "1. Learn how to generate a JSONL file for PaLM tuning\n", + "1. Learn how to launch a tuning job on Vertex AI Pipelines\n", + "1. Learn how to deploy and query a tuned LLM\n", + "1. Learn how to evaluate a tuned LLM\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "JAPoU8Sm5E6e", + "tags": [] + }, + "source": [ + "Creating an LLM requires massive amounts of data, significant computing resources, and specialized skills. In this notebook, you'll learn how tuning allows you to customize a PaLM foundation model in Vertex AI Generative AI Studio for more specific tasks or knowledge domains.\n", + "While prompt design is excellent for quick experimentation, if training data is available, you can achieve higher quality by tuning the model. Tuning a model enables you to customize the model response based on examples of the task you want the model to perform.\n", + "\n", + "For more details on tuning, have a look at the [official documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/models/tune-models).\n", + "\n", + "**Quota**: Tuning the `text-bison@001` model uses the `tpu-v3-8` training resources and the accompanying quotas from your Google Cloud project. Each project has a default quota of eight v3-8 cores, which allows for one to two concurrent tuning jobs.
If you want to run more concurrent jobs, you need to request additional quota via the [Quotas page](https://console.cloud.google.com/iam-admin/quotas).\n", + "\n", + "**Costs:** This tutorial uses a billable component of Google Cloud, `Vertex AI Generative AI Studio`.\n", + "Learn about [Vertex AI pricing](https://cloud.google.com/vertex-ai/pricing),\n", + "and use the [Pricing Calculator](https://cloud.google.com/products/calculator/)\n", + "to generate a cost estimate based on your projected usage.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Setup" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "import os\n", + "import warnings\n", + "\n", + "warnings.filterwarnings(\"ignore\")\n", + "os.environ[\"TF_CPP_MIN_LOG_LEVEL\"] = \"2\"" + ] + }, + { + "cell_type": "code", + "execution_count": 2, + "metadata": {}, + "outputs": [], + "source": [ + "import time\n", + "\n", + "import evaluate\n", + "import pandas as pd\n", + "from google.cloud import aiplatform, bigquery\n", + "from sklearn.model_selection import train_test_split\n", + "from vertexai.preview.language_models import TextGenerationModel" + ] + }, + { + "cell_type": "code", + "execution_count": 3, + "metadata": {}, + "outputs": [], + "source": [ + "REGION = \"us-central1\"\n", + "PROJECT_ID = !(gcloud config get-value project)\n", + "PROJECT_ID = PROJECT_ID[0]\n", + "BUCKET_NAME = PROJECT_ID\n", + "BUCKET_URI = f\"gs://{BUCKET_NAME}\"" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "colab": { + "base_uri": "https://localhost:8080/" + }, + "id": "NSRiXkavaalH", + "outputId": "8b752c8a-d575-4982-85f8-5a40317c8ac3" + }, + "outputs": [], + "source": [ + "!gsutil ls $BUCKET_URI || gsutil mb -l $REGION -p $PROJECT_ID $BUCKET_URI" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "WdtNETYxaalH" + }, + "source": [ + "## Training Data\n", + "\n", + "\n", + "In this notebook, we will be tuning the Vertex PaLM model using the Python SDK on a questions-and-answers dataset from StackOverflow. \n", + "Our first step will be to query the StackOverflow data on BigQuery Public Datasets, limiting the results to questions with the `python` tag whose accepted answers were created on or after 2020-01-01.
\n", + "We will limit the dataset to 1000 samples, 800 of which will be used to tune the LLM and the rest for evaluating the tuned model.\n", + "The second step will be to convert the dataset into a JSONL format, with one example per line, so that the tuning job can consume it.\n" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "1BydoFfTaalI" + }, + "source": [ + "Next, let us run the query to assemble our dataset into the DataFrame `df`:" + ] + }, + { + "cell_type": "code", + "execution_count": 4, + "metadata": {}, + "outputs": [], + "source": [ + "%%bigquery df\n", + "\n", + "SELECT CONCAT(q.title, q.body) as input_text, a.body AS output_text\n", + "FROM\n", + " `bigquery-public-data.stackoverflow.posts_questions` q\n", + "JOIN\n", + " `bigquery-public-data.stackoverflow.posts_answers` a\n", + "ON\n", + " q.accepted_answer_id = a.id\n", + "WHERE\n", + " q.accepted_answer_id IS NOT NULL AND\n", + " REGEXP_CONTAINS(q.tags, \"python\") AND\n", + " a.creation_date >= \"2020-01-01\"\n", + "LIMIT\n", + " 1000" + ] + }, + { + "cell_type": "code", + "execution_count": 5, + "metadata": { + "id": "9VTaovLtaalI" + }, + "outputs": [], + "source": [ + "df.head()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The column `input_text` corresponds to the actual questions asked by the StackOverflow users, while the `output_text` column corresponds to the accepted answers. From this dataset of 1000 question-answer pairs, we will now need to generate a JSONL file with one example per line in the format:\n", + "\n", + "```python\n", + "{'input_text': <question>, 'output_text': <answer>}\n", + "```\n", + "\n", + "This is the format we need in order to tune our LLM.\n", + "\n", + "Let's first split the data into training and evaluation. To tune PaLM for a Q&A task, we advise using 100+ training examples. In this case, you will use 800.\n" + ] + }, + { + "cell_type": "code", + "execution_count": 6, + "metadata": { + "id": "aXqBwSwaaalJ" + }, + "outputs": [], + "source": [ + "# split is set to 80/20\n", + "train, evaluation = train_test_split(df, test_size=0.2)\n", + "print(len(train))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "nf-q8TpnaalJ" + }, + "source": [ + "For tuning, the training data first needs to be converted into a JSONL format, which is very easy to do with pandas:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "training_data_filename = \"tune_data_stack_overflow_python_qa.jsonl\"\n", + "\n", + "train.to_json(training_data_filename, orient=\"records\", lines=True)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let's inspect the first line of the JSONL file we just created:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "!head -n 1 $training_data_filename" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "FV8Wxz7JaalN" + }, + "source": [ + "You can then export the local file to GCS, so that it can be used by Vertex AI for the tuning job."
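Before uploading, you can sanity-check the file you just wrote: read back the first JSONL record and confirm it has exactly the `input_text` and `output_text` keys the tuning job expects. A minimal sketch (it only assumes the `training_data_filename` written in the cell above):

```python
import json

# Read the first record of the JSONL file back and verify its structure.
with open(training_data_filename) as f:
    first_record = json.loads(f.readline())

assert set(first_record) == {"input_text", "output_text"}
print(first_record["input_text"][:200])
```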
+ ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "vDDLHac5aalN" + }, + "outputs": [], + "source": [ + "!gsutil cp $training_data_filename $BUCKET_URI" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "Ff68wmzoaalN" + }, + "source": [ + "You can check to make sure that the file successfully transferred to your Google Cloud Storage bucket:" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "2-DnKpYlaalN" + }, + "outputs": [], + "source": [ + "TRAINING_DATA_URI = f\"{BUCKET_URI}/{training_data_filename}\"\n", + "\n", + "!gsutil ls -al $TRAINING_DATA_URI" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "-mW7K57BaalN", + "tags": [] + }, + "source": [ + "### Model Tuning\n", + "Now it's time to tune a model. You will use the Vertex AI SDK to submit your tuning job.\n", + "\n", + "#### Recommended Tuning Configurations\n", + "✅ Here are some recommended configurations for tuning a foundation model based on the task, in this example Q&A. You can find more in the [documentation](https://cloud.google.com/vertex-ai/docs/generative-ai/models/tune-models).\n", + "\n", + "Question Answering task:\n", + "- Make sure that your training dataset contains 100+ examples\n", + "- Choose your training steps in the range 100-500. You can try more than one value to get the best performance on a particular dataset (e.g. 100, 200, 500)" + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": {}, + "outputs": [], + "source": [ + "aiplatform.init(project=PROJECT_ID, location=REGION)\n", + "\n", + "model = TextGenerationModel.from_pretrained(\"text-bison@001\")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "o0XNL9ojaalN" + }, + "source": [ + "Next it's time to start your tuning job. \n", + "\n", + "**Disclaimer:** Tuning and deploying an LLM takes time. \n", + "For 100 training steps, it takes around 1 hour. For 500 training steps, it takes around 3.5 hours." + ] + }, + { + "cell_type": "code", + "execution_count": null, + "metadata": { + "id": "on4baTh5aalN" + }, + "outputs": [], + "source": [ + "TRAIN_STEPS = 100\n", + "MODEL_NAME = f\"asl-palm-text-tuned-model-{time.time()}\"\n", + "\n", + "model.tune_model(\n", + " training_data=TRAINING_DATA_URI,\n", + " model_display_name=MODEL_NAME,\n", + " train_steps=TRAIN_STEPS,\n", + " # Tuning can only happen in the \"europe-west4\" location for now\n", + " tuning_job_location=\"europe-west4\",\n", + " # Model can only be deployed in the \"us-central1\" location for now\n", + " tuned_model_location=\"us-central1\",\n", + ")\n", + "\n", + "print(\"Model name:\", MODEL_NAME)" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "O6JC8XplaalO" + }, + "source": [ + "## Retrieve the tuned model from the Vertex AI Model Registry\n", + "\n", + "\n", + "When your tuning job is finished, your model will be available in the Vertex AI Model Registry. The next cell shows you how to list tuned models." + ] + }, + { + "cell_type": "code", + "execution_count": 7, + "metadata": {}, + "outputs": [], + "source": [ + "model = TextGenerationModel.from_pretrained(\"text-bison@001\")\n", + "model.list_tuned_model_names()" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "ZriyF0V-aalO" + }, + "source": [ + "You can also use the Google Cloud Console UI to view all of your models in [Vertex AI Model Registry](https://console.cloud.google.com/vertex-ai/models?). \n", + "\n",
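If you prefer to check programmatically rather than in the Console UI, here is a minimal sketch that filters the Model Registry by display name with the Vertex AI SDK (it assumes the `MODEL_NAME` display name set in the tuning cell above, and that the tuning pipeline has finished registering the model):

```python
# Look up the tuned model in the Model Registry by its display name.
# MODEL_NAME is the display name passed to tune_model() above.
for registered_model in aiplatform.Model.list(filter=f'display_name="{MODEL_NAME}"'):
    print(registered_model.display_name, registered_model.resource_name)
```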
+ "It's time to get predictions. First you need to get the latest tuned model from the Vertex AI Model Registry." + ] + }, + { + "cell_type": "code", + "execution_count": 8, + "metadata": { + "id": "j66dr12taalO" + }, + "outputs": [], + "source": [ + "tuned_model = TextGenerationModel.get_tuned_model(\n", + " model.list_tuned_model_names()[-1]\n", + ")" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "xDOueoptaalO" + }, + "source": [ + "Now you can start sending a prompt to the API. Feel free to update the following prompt:" + ] + }, + { + "cell_type": "code", + "execution_count": 9, + "metadata": { + "id": "2ERbfPJPaalO" + }, + "outputs": [], + "source": [ + "PROMPT = \"\"\"\n", + "How can I store my TensorFlow checkpoint on Google Cloud Storage?\n", + "\n", + "Python example:\n", + "\n", + "\"\"\"\n", + "\n", + "print(tuned_model.predict(PROMPT))" + ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "qtYr_KNPaalO", + "tags": [] + }, + "source": [ + "## Evaluation\n", + "\n", + "\n", + "It's essential to evaluate your model to understand its performance. Evaluation can be done in an automated way using evaluation metrics like F1, BLEU, or ROUGE. You can also leverage human evaluation methods. Human evaluation methods involve asking humans to rate the quality of the LLM's answers. This can be done through crowdsourcing or by having experts evaluate the responses. Some standard human evaluation metrics include fluency, coherence, relevance, and informativeness. Often you want to choose a mix of evaluation metrics to get a good understanding of your model's performance. \n", + "\n", + "\n", + "Among other metrics, we will compute the following two, which provide crude (albeit automated) measures of how close two texts are in meaning: \n", + "- The [BLEU](https://en.wikipedia.org/wiki/BLEU) evaluation metric is a sort of **precision** metric, measuring the proportion of $n$-grams in the generated sentence matching $n$-grams in the reference sentence. It ranges from 0 to 1, with higher scores for more similar sentences. BLEU1 considers uni-grams only, while BLEU2 considers bi-grams. \n", + "\n", + "- The [ROUGE](https://en.wikipedia.org/wiki/ROUGE_(metric)) evaluation metric is a sort of **recall** metric, measuring the proportion of $n$-grams in the reference sentence that are matched by $n$-grams in the generated sentence. It ranges from 0 to 1, with higher scores for more similar sentences. ROUGE1 considers uni-grams only, while ROUGE2 considers bi-grams.\n", + "\n", + "\n", + "We will use [evaluate](https://github.com/huggingface/evaluate/tree/main) to compute the scores.\n",
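To build some intuition for these scores before computing them on real model output, here is a small, self-contained sketch using `evaluate` on a made-up candidate/reference pair (toy sentences, not taken from our dataset):

```python
import evaluate

bleu = evaluate.load("bleu")
rouge = evaluate.load("rouge")

# A toy pair: similar meaning, different wording.
candidates = ["You can save TensorFlow checkpoints directly to a GCS bucket."]
references = ["TensorFlow checkpoints can be saved directly to a Google Cloud Storage bucket."]

# BLEU rewards candidate n-grams that appear in the reference (precision-like);
# ROUGE-1 rewards reference uni-grams recovered by the candidate (recall-like).
print("BLEU:   ", bleu.compute(predictions=candidates, references=references)["bleu"])
print("ROUGE-1:", rouge.compute(predictions=candidates, references=references)["rouge1"])
```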
+ "Earlier in the notebook, you created a train and eval dataset. Now it's time to take some of the eval data. You will use the questions to get responses from the tuned model, and the original answers as references:\n", + "- **Candidates**: Answers generated by the tuned model.\n", + "- **References**: Original answers that we will compare the candidates against.\n", + "\n" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Let us first select a sample of our evaluation set:" + ] + }, + { + "cell_type": "code", + "execution_count": 10, + "metadata": { + "id": "LKMmIH0XaalO" + }, + "outputs": [], + "source": [ + "# you can change the number of rows you want to use\n", + "EVAL_ROWS = 60\n", + "INPUT_LIMIT = 10000 # characters\n", + "evaluation = evaluation[evaluation.input_text.apply(len) <= INPUT_LIMIT]\n", + "evaluation = evaluation.head(EVAL_ROWS)\n", + "evaluation.head()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The function in the cell below will query our tuned model with the questions in `evaluation.input_text` and store the model answers in a DataFrame next to the ground-truth answers from `evaluation.output_text`:" + ] + }, + { + "cell_type": "code", + "execution_count": 11, + "metadata": {}, + "outputs": [], + "source": [ + "def create_eval_data(model, evaluation):\n", + " model_answers = []\n", + "\n", + " for prompt in evaluation.input_text:\n", + " response = model.predict(prompt)\n", + " model_answers.append(response.text)\n", + "\n", + " eval_df = pd.DataFrame(\n", + " {\"candidate\": model_answers, \"reference\": evaluation.output_text}\n", + " )\n", + " mask = eval_df.candidate == \"\"\n", + " return eval_df[~mask]" + ] + }, + { + "cell_type": "code", + "execution_count": 12, + "metadata": {}, + "outputs": [], + "source": [ + "eval_df = create_eval_data(tuned_model, evaluation)" + ] + }, + { + "cell_type": "code", + "execution_count": 13, + "metadata": {}, + "outputs": [], + "source": [ + "eval_df.head()" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "The function in the next cell computes the BLEU and uni-gram ROUGE (ROUGE-1) scores over all the reference answers and the answers generated by our tuned model, giving aggregate scores that can serve as performance metrics for our model." + ] + }, + { + "cell_type": "code", + "execution_count": 14, + "metadata": {}, + "outputs": [], + "source": [ + "def compute_scores(eval_data):\n", + " predictions = eval_data.candidate.tolist()\n", + " references = eval_data.reference.tolist()\n", + " rouge = evaluate.load(\"rouge\")\n", + " bleu = evaluate.load(\"bleu\")\n", + " rouge_value = rouge.compute(predictions=predictions, references=references)[\n", + " \"rouge1\"\n", + " ]\n", + " bleu_value = bleu.compute(predictions=predictions, references=references)[\n", + " \"bleu\"\n", + " ]\n", + " return {\"rouge\": rouge_value, \"bleu\": bleu_value}" + ] + }, + { + "cell_type": "code", + "execution_count": 15, + "metadata": {}, + "outputs": [], + "source": [ + "compute_scores(eval_df)" + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "Given two versions of the model (possibly tuned with a different amount of data or training steps), you can now compare the scores to decide which one is better. However, remember that these automated metrics are only a very crude proxy for human assessment. " + ] + }, + { + "cell_type": "markdown", + "metadata": {}, + "source": [ + "## Acknowledgement \n", + "\n", + "This notebook is adapted from a [tutorial](https://github.com/GoogleCloudPlatform/generative-ai/blob/main/language/tuning/getting_started_tuning.ipynb)\n", + "written by Polong Lin."
+ ] + }, + { + "cell_type": "markdown", + "metadata": { + "id": "ur8xi4C7S06n" + }, + "source": [ + "Copyright 2023 Google LLC\n", + "\n", + "Licensed under the Apache License, Version 2.0 (the \"License\");\n", + "you may not use this file except in compliance with the License.\n", + "You may obtain a copy of the License at\n", + "\n", + " https://www.apache.org/licenses/LICENSE-2.0\n", + "\n", + "Unless required by applicable law or agreed to in writing, software\n", + "distributed under the License is distributed on an \"AS IS\" BASIS,\n", + "WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n", + "See the License for the specific language governing permissions and\n", + "limitations under the License." + ] + } + ], + "metadata": { + "colab": { + "provenance": [], + "toc_visible": true + }, + "environment": { + "kernel": "python3", + "name": "tf2-gpu.2-12.m111", + "type": "gcloud", + "uri": "gcr.io/deeplearning-platform-release/tf2-gpu.2-12:m111" + }, + "kernelspec": { + "display_name": "Python 3", + "language": "python", + "name": "python3" + }, + "language_info": { + "codemirror_mode": { + "name": "ipython", + "version": 3 + }, + "file_extension": ".py", + "mimetype": "text/x-python", + "name": "python", + "nbconvert_exporter": "python", + "pygments_lexer": "ipython3", + "version": "3.10.12" + } + }, + "nbformat": 4, + "nbformat_minor": 4 +} diff --git a/requirements-without-deps.txt b/requirements-without-deps.txt new file mode 100644 index 00000000..65a4f59f --- /dev/null +++ b/requirements-without-deps.txt @@ -0,0 +1,10 @@ +# Frameworks to evaluate text generation +# as well as their dependencies +evaluate==0.4.0 +rouge-score==0.1.2 +nltk==3.8.1 +datasets==2.14.5 +huggingface-hub==0.17.2 +multiprocess==0.70.15 +responses==0.18.0 +xxhash==3.3.0 diff --git a/requirements.txt b/requirements.txt index f6535f62..fb1c9aa4 100644 --- a/requirements.txt +++ b/requirements.txt @@ -1,5 +1,5 @@ # Requirements for asl-ml-immersion repository -google-cloud-aiplatform==1.26.0 +google-cloud-aiplatform==1.33.1 google-cloud-pipeline-components==1.0.44 pyyaml==5.3.1 kfp==1.8.22