From 40347c5cab6a889f7ea1c47e9ae8ee9877f97cff Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?J=C3=B6rg=20Bornschein?=
Date: Fri, 28 Jun 2024 08:11:07 +0100
Subject: [PATCH 1/2] Fix documentation for Gemma2.

Model sizes and blog post URL are wrong in the documentation.
---
 docs/source/en/model_doc/gemma2.md | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/docs/source/en/model_doc/gemma2.md b/docs/source/en/model_doc/gemma2.md
index fa16dfbc4ba0fc..44d5017e16b08d 100644
--- a/docs/source/en/model_doc/gemma2.md
+++ b/docs/source/en/model_doc/gemma2.md
@@ -19,12 +19,12 @@ rendered properly in your Markdown viewer.
 
 ## Overview
 
-The Gemma2 model was proposed in [Gemma2: Open Models Based on Gemini Technology and Research](https://blog.google/technology/developers/Gemma2-open-models/) by Gemma2 Team, Google.
-Gemma2 models are trained on 6T tokens, and released with 2 versions, 2b and 7b.
+The Gemma2 model was proposed in [Gemma2: Open Models Based on Gemini Technology and Research]([https://blog.google/technology/developers/google-gemma-2/]) by Gemma2 Team, Google.
+Two Gemma2 models are released, with parameter sizes of 9 billion (9B) and 27 billion (27B).
 
-The abstract from the paper is the following:
+The abstract from the blog post is the following:
 
-*This work introduces Gemma2, a new family of open language models demonstrating strong performance across academic benchmarks for language understanding, reasoning, and safety. We release two sizes of models (2 billion and 7 billion parameters), and provide both pretrained and fine-tuned checkpoints. Gemma2 outperforms similarly sized open models on 11 out of 18 text-based tasks, and we present comprehensive evaluations of safety and responsibility aspects of the models, alongside a detailed description of our model development. We believe the responsible release of LLMs is critical for improving the safety of frontier models, and for enabling the next wave of LLM innovations*
+*Now we’re officially releasing Gemma 2 to researchers and developers globally. Available in both 9 billion (9B) and 27 billion (27B) parameter sizes, Gemma 2 is higher-performing and more efficient at inference than the first generation, with significant safety advancements built in. In fact, at 27B, it offers competitive alternatives to models more than twice its size, delivering the kind of performance that was only possible with proprietary models as recently as December.*
 
 Tips:

From eee47c5f00a4c0d41abbfcd36905c5443785acfc Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?J=C3=B6rg=20Bornschein?=
Date: Mon, 1 Jul 2024 10:02:47 +0100
Subject: [PATCH 2/2] Update docs/source/en/model_doc/gemma2.md

Co-authored-by: amyeroberts <22614925+amyeroberts@users.noreply.github.com>
---
 docs/source/en/model_doc/gemma2.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/en/model_doc/gemma2.md b/docs/source/en/model_doc/gemma2.md
index 44d5017e16b08d..5befa0b1f43777 100644
--- a/docs/source/en/model_doc/gemma2.md
+++ b/docs/source/en/model_doc/gemma2.md
@@ -19,7 +19,7 @@ rendered properly in your Markdown viewer.
 
 ## Overview
 
-The Gemma2 model was proposed in [Gemma2: Open Models Based on Gemini Technology and Research]([https://blog.google/technology/developers/google-gemma-2/]) by Gemma2 Team, Google.
+The Gemma2 model was proposed in [Gemma2: Open Models Based on Gemini Technology and Research](https://blog.google/technology/developers/google-gemma-2/) by Gemma2 Team, Google.
 Two Gemma2 models are released, with parameter sizes of 9 billion (9B) and 27 billion (27B).
 
 The abstract from the blog post is the following:
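
For context, a minimal usage sketch of the two sizes the corrected doc refers to. The checkpoint IDs `google/gemma-2-9b` and `google/gemma-2-27b` are assumptions based on the Hub's Gemma 2 naming convention; they are not part of this patch:

```python
# Sketch: loading one of the two released Gemma 2 sizes via transformers.
# The model IDs below are assumed Hub names, not introduced by this patch.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "google/gemma-2-9b"  # or "google/gemma-2-27b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Generate a short continuation to confirm the checkpoint loads and runs.
inputs = tokenizer("Gemma 2 comes in two sizes:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```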