docs: [google-ai-generativelanguage] Many small fixes (#13017)
- [ ] Regenerate this pull request now.

BEGIN_COMMIT_OVERRIDE
docs: Many small fixes
feat: Add new PromptFeedback and FinishReason entries
feat: Add model max_temperature
feat: Add new PromptFeedback and FinishReason entries for
google-gemini/generative-ai-python#476
END_COMMIT_OVERRIDE


PiperOrigin-RevId: 663936564

Source-Link:
googleapis/googleapis@21c206f

Source-Link:
googleapis/googleapis-gen@97ac6df
Copy-Tag:
eyJwIjoicGFja2FnZXMvZ29vZ2xlLWFpLWdlbmVyYXRpdmVsYW5ndWFnZS8uT3dsQm90LnlhbWwiLCJoIjoiOTdhYzZkZmNhYTc5ZWY3NmJiNzhmODYwZTc5ODZhZGNiZTIyMzA4MSJ9

docs: Many small fixes


PiperOrigin-RevId: 663936518

Source-Link:
googleapis/googleapis@5157b5f

Source-Link:
googleapis/googleapis-gen@740787c
Copy-Tag:
eyJwIjoicGFja2FnZXMvZ29vZ2xlLWFpLWdlbmVyYXRpdmVsYW5ndWFnZS8uT3dsQm90LnlhbWwiLCJoIjoiNzQwNzg3YzVlYjRmMmRjZmI5MDk0YTExODNlMDMxNGM3MjVmYjBjYSJ9

---------

Co-authored-by: Owl Bot <gcf-owl-bot[bot]@users.noreply.github.com>
gcf-owl-bot[bot] authored Aug 19, 2024
1 parent 9c54c1d commit fdebbf2
Showing 79 changed files with 1,319 additions and 1,022 deletions.
@@ -13,4 +13,4 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
-__version__ = "0.6.8" # {x-release-please-version}
+__version__ = "0.0.0" # {x-release-please-version}
@@ -13,4 +13,4 @@
# See the License for the specific language governing permissions and
# limitations under the License.
#
-__version__ = "0.6.8" # {x-release-please-version}
+__version__ = "0.0.0" # {x-release-please-version}
@@ -14,7 +14,6 @@
# limitations under the License.
#
from collections import OrderedDict
-import functools
import re
from typing import (
AsyncIterable,
@@ -194,9 +193,7 @@ def universe_domain(self) -> str:
"""
return self._client._universe_domain

-get_transport_class = functools.partial(
-    type(GenerativeServiceClient).get_transport_class, type(GenerativeServiceClient)
-)
+get_transport_class = GenerativeServiceClient.get_transport_class
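A note on the simplification above: in the generated GAPIC surface, ``get_transport_class`` is defined on the client's metaclass, so plain attribute access on ``GenerativeServiceClient`` already returns a bound method; the ``functools.partial`` construction was redundant. A minimal, self-contained sketch of the equivalence (toy class names, not this library's API):

```python
import functools

class ClientMeta(type):
    # Stand-in for the GAPIC pattern: the method lives on the metaclass,
    # so it is reachable via type(Client).
    def get_transport_class(cls, label="grpc"):
        return f"{cls.__name__}:{label}"

class Client(metaclass=ClientMeta):
    pass

# Old pattern: fetch the function from the metaclass and bind it by hand.
old = functools.partial(type(Client).get_transport_class, Client)
# New pattern: attribute access on the class already yields a bound method.
new = Client.get_transport_class

assert old("rest") == new("rest") == "Client:rest"
```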

def __init__(
self,
@@ -280,14 +277,15 @@ async def generate_content(
timeout: Union[float, object] = gapic_v1.method.DEFAULT,
metadata: Sequence[Tuple[str, str]] = (),
) -> generative_service.GenerateContentResponse:
-r"""Generates a response from the model given an input
-``GenerateContentRequest``.
-Input capabilities differ between models, including tuned
-models. See the `model
-guide <https://ai.google.dev/models/gemini>`__ and `tuning
-guide <https://ai.google.dev/docs/model_tuning_guidance>`__ for
-details.
+r"""Generates a model response given an input
+``GenerateContentRequest``. Refer to the `text generation
+guide <https://ai.google.dev/gemini-api/docs/text-generation>`__
+for detailed usage information. Input capabilities differ
+between models, including tuned models. Refer to the `model
+guide <https://ai.google.dev/gemini-api/docs/models/gemini>`__
+and `tuning
+guide <https://ai.google.dev/gemini-api/docs/model-tuning>`__
+for details.
.. code-block:: python
@@ -329,12 +327,14 @@ async def sample_generate_content():
on the ``request`` instance; if ``request`` is provided, this
should not be set.
contents (:class:`MutableSequence[google.ai.generativelanguage_v1.types.Content]`):
-Required. The content of the current
-conversation with the model.
-For single-turn queries, this is a
-single instance. For multi-turn queries,
-this is a repeated field that contains
-conversation history + latest request.
+Required. The content of the current conversation with
+the model.
+For single-turn queries, this is a single instance. For
+multi-turn queries like
+`chat <https://ai.google.dev/gemini-api/docs/text-generation#chat>`__,
+this is a repeated field that contains the conversation
+history and the latest request.
This corresponds to the ``contents`` field
on the ``request`` instance; if ``request`` is provided, this
@@ -347,18 +347,18 @@ async def sample_generate_content():
Returns:
google.ai.generativelanguage_v1.types.GenerateContentResponse:
-Response from the model supporting multiple candidates.
+Response from the model supporting multiple candidate
+responses.
-Note on safety ratings and content filtering. They
-are reported for both prompt in
+Safety ratings and content filtering are reported for
+both prompt in
GenerateContentResponse.prompt_feedback and for each
candidate in finish_reason and in safety_ratings. The
-API contract is that: - either all requested
-candidates are returned or no candidates at all - no
-candidates are returned only if there was something
-wrong with the prompt (see prompt_feedback) -
-feedback on each candidate is reported on
-finish_reason and safety_ratings.
+API: - Returns either all requested candidates or
+none of them - Returns no candidates at all only if
+there was something wrong with the prompt (check
+prompt_feedback) - Reports feedback on each candidate
+in finish_reason and safety_ratings.
"""
# Create or coerce a protobuf request object.
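For context, a minimal usage sketch of the async method documented above; the model name is a placeholder, Application Default Credentials are assumed, and the shapes follow the generated ``sample_generate_content`` pattern:

```python
from google.ai import generativelanguage_v1

async def sample_generate_content():
    # "models/gemini-1.5-flash" is a placeholder model name.
    client = generativelanguage_v1.GenerativeServiceAsyncClient()
    request = generativelanguage_v1.GenerateContentRequest(
        model="models/gemini-1.5-flash",
        contents=[
            generativelanguage_v1.Content(
                parts=[generativelanguage_v1.Part(text="Explain token counting.")]
            )
        ],
    )
    response = await client.generate_content(request=request)
    print(response)
```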
@@ -421,8 +421,9 @@ def stream_generate_content(
timeout: Union[float, object] = gapic_v1.method.DEFAULT,
metadata: Sequence[Tuple[str, str]] = (),
) -> Awaitable[AsyncIterable[generative_service.GenerateContentResponse]]:
-r"""Generates a streamed response from the model given an input
-``GenerateContentRequest``.
+r"""Generates a `streamed
+response <https://ai.google.dev/gemini-api/docs/text-generation?lang=python#generate-a-text-stream>`__
+from the model given an input ``GenerateContentRequest``.
.. code-block:: python
@@ -465,12 +466,14 @@ async def sample_stream_generate_content():
on the ``request`` instance; if ``request`` is provided, this
should not be set.
contents (:class:`MutableSequence[google.ai.generativelanguage_v1.types.Content]`):
-Required. The content of the current
-conversation with the model.
-For single-turn queries, this is a
-single instance. For multi-turn queries,
-this is a repeated field that contains
-conversation history + latest request.
+Required. The content of the current conversation with
+the model.
+For single-turn queries, this is a single instance. For
+multi-turn queries like
+`chat <https://ai.google.dev/gemini-api/docs/text-generation#chat>`__,
+this is a repeated field that contains the conversation
+history and the latest request.
This corresponds to the ``contents`` field
on the ``request`` instance; if ``request`` is provided, this
@@ -483,18 +486,18 @@ async def sample_stream_generate_content():
Returns:
AsyncIterable[google.ai.generativelanguage_v1.types.GenerateContentResponse]:
-Response from the model supporting multiple candidates.
+Response from the model supporting multiple candidate
+responses.
-Note on safety ratings and content filtering. They
-are reported for both prompt in
+Safety ratings and content filtering are reported for
+both prompt in
GenerateContentResponse.prompt_feedback and for each
candidate in finish_reason and in safety_ratings. The
-API contract is that: - either all requested
-candidates are returned or no candidates at all - no
-candidates are returned only if there was something
-wrong with the prompt (see prompt_feedback) -
-feedback on each candidate is reported on
-finish_reason and safety_ratings.
+API: - Returns either all requested candidates or
+none of them - Returns no candidates at all only if
+there was something wrong with the prompt (check
+prompt_feedback) - Reports feedback on each candidate
+in finish_reason and safety_ratings.
"""
# Create or coerce a protobuf request object.
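A streaming sketch to match: per the annotation above, the async variant returns an awaitable that resolves to an async iterable of partial ``GenerateContentResponse`` chunks. Model name is a placeholder; credentials are assumed:

```python
from google.ai import generativelanguage_v1

async def sample_stream_generate_content():
    client = generativelanguage_v1.GenerativeServiceAsyncClient()
    request = generativelanguage_v1.GenerateContentRequest(
        model="models/gemini-1.5-flash",  # placeholder
        contents=[
            generativelanguage_v1.Content(
                parts=[generativelanguage_v1.Part(text="Write a short poem.")]
            )
        ],
    )
    # Await first, then iterate the resulting async stream of chunks.
    stream = await client.stream_generate_content(request=request)
    async for chunk in stream:
        print(chunk)
```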
@@ -555,8 +558,9 @@ async def embed_content(
timeout: Union[float, object] = gapic_v1.method.DEFAULT,
metadata: Sequence[Tuple[str, str]] = (),
) -> generative_service.EmbedContentResponse:
-r"""Generates an embedding from the model given an input
-``Content``.
+r"""Generates a text embedding vector from the input ``Content``
+using the specified `Gemini Embedding
+model <https://ai.google.dev/gemini-api/docs/models/gemini#text-embedding>`__.
.. code-block:: python
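The generated sample under this ``code-block`` directive is collapsed in the rendered diff; a minimal sketch of ``embed_content`` usage (the embedding model name is a placeholder; credentials are assumed):

```python
from google.ai import generativelanguage_v1

async def sample_embed_content():
    client = generativelanguage_v1.GenerativeServiceAsyncClient()
    request = generativelanguage_v1.EmbedContentRequest(
        model="models/text-embedding-004",  # placeholder embedding model
        content=generativelanguage_v1.Content(
            parts=[generativelanguage_v1.Part(text="Hello world")]
        ),
    )
    response = await client.embed_content(request=request)
    print(response.embedding)  # a ContentEmbedding with the vector values
```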
@@ -679,8 +683,9 @@ async def batch_embed_contents(
timeout: Union[float, object] = gapic_v1.method.DEFAULT,
metadata: Sequence[Tuple[str, str]] = (),
) -> generative_service.BatchEmbedContentsResponse:
-r"""Generates multiple embeddings from the model given
-input text in a synchronous call.
+r"""Generates multiple embedding vectors from the input ``Content``
+which consists of a batch of strings represented as
+``EmbedContentRequest`` objects.
.. code-block:: python
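The batch sample is likewise collapsed; a sketch of ``batch_embed_contents``, again with a placeholder model. Each inner ``EmbedContentRequest`` repeats the parent model name, as the API requires them to match:

```python
from google.ai import generativelanguage_v1

async def sample_batch_embed_contents():
    client = generativelanguage_v1.GenerativeServiceAsyncClient()
    model = "models/text-embedding-004"  # placeholder
    requests = [
        generativelanguage_v1.EmbedContentRequest(
            model=model,
            content=generativelanguage_v1.Content(
                parts=[generativelanguage_v1.Part(text=text)]
            ),
        )
        for text in ("first string", "second string")
    ]
    request = generativelanguage_v1.BatchEmbedContentsRequest(
        model=model, requests=requests
    )
    response = await client.batch_embed_contents(request=request)
    print(response.embeddings)  # one embedding per input request
```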
@@ -804,8 +809,10 @@ async def count_tokens(
timeout: Union[float, object] = gapic_v1.method.DEFAULT,
metadata: Sequence[Tuple[str, str]] = (),
) -> generative_service.CountTokensResponse:
-r"""Runs a model's tokenizer on input content and returns
-the token count.
+r"""Runs a model's tokenizer on input ``Content`` and returns the
+token count. Refer to the `tokens
+guide <https://ai.google.dev/gemini-api/docs/tokens>`__ to learn
+more about tokens.
.. code-block:: python
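And a sketch of ``count_tokens`` to close out the async surface (placeholder model name, default credentials assumed):

```python
from google.ai import generativelanguage_v1

async def sample_count_tokens():
    client = generativelanguage_v1.GenerativeServiceAsyncClient()
    request = generativelanguage_v1.CountTokensRequest(
        model="models/gemini-1.5-flash",  # placeholder
        contents=[
            generativelanguage_v1.Content(
                parts=[generativelanguage_v1.Part(text="How many tokens is this?")]
            )
        ],
    )
    response = await client.count_tokens(request=request)
    print(response.total_tokens)
```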
@@ -664,7 +664,7 @@ def __init__(
Type[GenerativeServiceTransport],
Callable[..., GenerativeServiceTransport],
] = (
-type(self).get_transport_class(transport)
+GenerativeServiceClient.get_transport_class(transport)
if isinstance(transport, str) or transport is None
else cast(Callable[..., GenerativeServiceTransport], transport)
)
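The resolution logic above accepts either a registered transport name or anything callable that produces a transport. A hedged sketch of both forms (assuming default credentials are available and that ``"rest"`` is among this client's registered transport names alongside ``"grpc"``):

```python
from google.ai import generativelanguage_v1
from google.ai.generativelanguage_v1.services.generative_service import transports

# 1. A registered name: get_transport_class looks the class up by string.
rest_client = generativelanguage_v1.GenerativeServiceClient(transport="rest")

# 2. Anything that is not a str (and not None) is treated as a callable
#    producing a transport, e.g. a concrete transport class.
grpc_client = generativelanguage_v1.GenerativeServiceClient(
    transport=transports.GenerativeServiceGrpcTransport
)
```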
@@ -693,14 +693,15 @@ def generate_content(
timeout: Union[float, object] = gapic_v1.method.DEFAULT,
metadata: Sequence[Tuple[str, str]] = (),
) -> generative_service.GenerateContentResponse:
-r"""Generates a response from the model given an input
-``GenerateContentRequest``.
-Input capabilities differ between models, including tuned
-models. See the `model
-guide <https://ai.google.dev/models/gemini>`__ and `tuning
-guide <https://ai.google.dev/docs/model_tuning_guidance>`__ for
-details.
+r"""Generates a model response given an input
+``GenerateContentRequest``. Refer to the `text generation
+guide <https://ai.google.dev/gemini-api/docs/text-generation>`__
+for detailed usage information. Input capabilities differ
+between models, including tuned models. Refer to the `model
+guide <https://ai.google.dev/gemini-api/docs/models/gemini>`__
+and `tuning
+guide <https://ai.google.dev/gemini-api/docs/model-tuning>`__
+for details.
.. code-block:: python
@@ -742,12 +743,14 @@ def sample_generate_content():
on the ``request`` instance; if ``request`` is provided, this
should not be set.
contents (MutableSequence[google.ai.generativelanguage_v1.types.Content]):
-Required. The content of the current
-conversation with the model.
-For single-turn queries, this is a
-single instance. For multi-turn queries,
-this is a repeated field that contains
-conversation history + latest request.
+Required. The content of the current conversation with
+the model.
+For single-turn queries, this is a single instance. For
+multi-turn queries like
+`chat <https://ai.google.dev/gemini-api/docs/text-generation#chat>`__,
+this is a repeated field that contains the conversation
+history and the latest request.
This corresponds to the ``contents`` field
on the ``request`` instance; if ``request`` is provided, this
@@ -760,18 +763,18 @@ def sample_generate_content():
Returns:
google.ai.generativelanguage_v1.types.GenerateContentResponse:
-Response from the model supporting multiple candidates.
+Response from the model supporting multiple candidate
+responses.
-Note on safety ratings and content filtering. They
-are reported for both prompt in
+Safety ratings and content filtering are reported for
+both prompt in
GenerateContentResponse.prompt_feedback and for each
candidate in finish_reason and in safety_ratings. The
-API contract is that: - either all requested
-candidates are returned or no candidates at all - no
-candidates are returned only if there was something
-wrong with the prompt (see prompt_feedback) -
-feedback on each candidate is reported on
-finish_reason and safety_ratings.
+API: - Returns either all requested candidates or
+none of them - Returns no candidates at all only if
+there was something wrong with the prompt (check
+prompt_feedback) - Reports feedback on each candidate
+in finish_reason and safety_ratings.
"""
# Create or coerce a protobuf request object.
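The synchronous surface mirrors the async sketch given earlier; for completeness, the blocking counterpart (same placeholder model name, credentials assumed):

```python
from google.ai import generativelanguage_v1

def sample_generate_content():
    # Blocking client; otherwise identical shapes to the async example.
    client = generativelanguage_v1.GenerativeServiceClient()
    request = generativelanguage_v1.GenerateContentRequest(
        model="models/gemini-1.5-flash",  # placeholder
        contents=[
            generativelanguage_v1.Content(
                parts=[generativelanguage_v1.Part(text="Say hello.")]
            )
        ],
    )
    response = client.generate_content(request=request)
    print(response)
```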
@@ -831,8 +834,9 @@ def stream_generate_content(
timeout: Union[float, object] = gapic_v1.method.DEFAULT,
metadata: Sequence[Tuple[str, str]] = (),
) -> Iterable[generative_service.GenerateContentResponse]:
-r"""Generates a streamed response from the model given an input
-``GenerateContentRequest``.
+r"""Generates a `streamed
+response <https://ai.google.dev/gemini-api/docs/text-generation?lang=python#generate-a-text-stream>`__
+from the model given an input ``GenerateContentRequest``.
.. code-block:: python
@@ -875,12 +879,14 @@ def sample_stream_generate_content():
on the ``request`` instance; if ``request`` is provided, this
should not be set.
contents (MutableSequence[google.ai.generativelanguage_v1.types.Content]):
-Required. The content of the current
-conversation with the model.
-For single-turn queries, this is a
-single instance. For multi-turn queries,
-this is a repeated field that contains
-conversation history + latest request.
+Required. The content of the current conversation with
+the model.
+For single-turn queries, this is a single instance. For
+multi-turn queries like
+`chat <https://ai.google.dev/gemini-api/docs/text-generation#chat>`__,
+this is a repeated field that contains the conversation
+history and the latest request.
This corresponds to the ``contents`` field
on the ``request`` instance; if ``request`` is provided, this
@@ -893,18 +899,18 @@ def sample_stream_generate_content():
Returns:
Iterable[google.ai.generativelanguage_v1.types.GenerateContentResponse]:
-Response from the model supporting multiple candidates.
+Response from the model supporting multiple candidate
+responses.
-Note on safety ratings and content filtering. They
-are reported for both prompt in
+Safety ratings and content filtering are reported for
+both prompt in
GenerateContentResponse.prompt_feedback and for each
candidate in finish_reason and in safety_ratings. The
-API contract is that: - either all requested
-candidates are returned or no candidates at all - no
-candidates are returned only if there was something
-wrong with the prompt (see prompt_feedback) -
-feedback on each candidate is reported on
-finish_reason and safety_ratings.
+API: - Returns either all requested candidates or
+none of them - Returns no candidates at all only if
+there was something wrong with the prompt (check
+prompt_feedback) - Reports feedback on each candidate
+in finish_reason and safety_ratings.
"""
# Create or coerce a protobuf request object.
@@ -962,8 +968,9 @@ def embed_content(
timeout: Union[float, object] = gapic_v1.method.DEFAULT,
metadata: Sequence[Tuple[str, str]] = (),
) -> generative_service.EmbedContentResponse:
-r"""Generates an embedding from the model given an input
-``Content``.
+r"""Generates a text embedding vector from the input ``Content``
+using the specified `Gemini Embedding
+model <https://ai.google.dev/gemini-api/docs/models/gemini#text-embedding>`__.
.. code-block:: python
@@ -1083,8 +1090,9 @@ def batch_embed_contents(
timeout: Union[float, object] = gapic_v1.method.DEFAULT,
metadata: Sequence[Tuple[str, str]] = (),
) -> generative_service.BatchEmbedContentsResponse:
-r"""Generates multiple embeddings from the model given
-input text in a synchronous call.
+r"""Generates multiple embedding vectors from the input ``Content``
+which consists of a batch of strings represented as
+``EmbedContentRequest`` objects.
.. code-block:: python
@@ -1205,8 +1213,10 @@ def count_tokens(
timeout: Union[float, object] = gapic_v1.method.DEFAULT,
metadata: Sequence[Tuple[str, str]] = (),
) -> generative_service.CountTokensResponse:
-r"""Runs a model's tokenizer on input content and returns
-the token count.
+r"""Runs a model's tokenizer on input ``Content`` and returns the
+token count. Refer to the `tokens
+guide <https://ai.google.dev/gemini-api/docs/tokens>`__ to learn
+more about tokens.
.. code-block:: python