Commit 280f64a
Put tracing docs in main sidebar (mlflow#13081)
Authored by daniellok-db, Sep 6, 2024 (1 parent: 6bbccb4)
Signed-off-by: Daniel Lok <[email protected]>

Showing 5 changed files with 9 additions and 5 deletions.
1 change: 1 addition & 0 deletions docs/source/index.rst
@@ -355,6 +355,7 @@ Explore the guides and tutorials below to start your journey!
  getting-started/index
  new-features/index
  llms/index
+ MLflow Tracing<llms/tracing/index>
  model-evaluation/index
  deep-learning/index
  traditional-ml/index
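The sidebar entry added above uses Sphinx's explicit-title syntax for ``toctree`` entries, where ``Title <document>`` overrides the page's own heading in the navigation. As a general sketch (the ``:maxdepth:`` and ``:hidden:`` options and the sibling paths are illustrative, not taken from this commit):

.. code-block:: rst

   .. toctree::
      :maxdepth: 1
      :hidden:

      getting-started/index
      MLflow Tracing <llms/tracing/index>

Without the explicit title, the sidebar would display whatever top-level heading ``llms/tracing/index`` declares.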
3 changes: 2 additions & 1 deletion docs/source/llms/langchain/autologging.rst
@@ -22,7 +22,7 @@ MLflow LangChain flavor supports autologging, a powerful feature that allows you
Quickstart
----------

- To enable autologging for LangChain models, call :py:func:`mlflow.langchain.autolog()` at the beginning of your script or notebook. This will automatically log the traces by default as well as other artifacts such as models, input examples, and model signatures if you explicitly enable them. For more information about the configuration, please refer to the `Configure Autologging <#configure-autologging>`_ section.
+ To enable autologging for LangChain models, call :py:func:`mlflow.langchain.autolog()` at the beginning of your script or notebook. This will automatically log the traces by default as well as other artifacts such as models, input examples, and model signatures if you explicitly enable them. For more information about the configuration, please refer to the :ref:`Configure Autologging <configure-lc-autologging>` section.

.. code-block::
@@ -43,6 +43,7 @@ Once you have invoked the chain, you can view the logged traces and artifacts in
:width: 100%
:align: center

+ .. _configure-lc-autologging:

Configure Autologging
---------------------
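The hunk above swaps a raw HTML-anchor link for a Sphinx cross-reference, which keeps working even if the target heading is renamed (only the explicit label must stay stable). The general pattern looks like this (the label and heading names here are illustrative, not from this commit):

.. code-block:: rst

   .. _configure-lc-autologging:

   Configure Autologging
   ---------------------

   Elsewhere in the docs: see the :ref:`Configure Autologging <configure-lc-autologging>` section.

Sphinx resolves ``:ref:`` targets project-wide at build time and warns on broken labels, whereas ``<#anchor>`` links fail silently.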
2 changes: 1 addition & 1 deletion docs/source/llms/llama-index/index.rst
@@ -145,7 +145,7 @@ The following code is an example of logging an index to MLflow with the ``chat``
The above code snippet passes the index object directly to the ``log_model`` function.
This method only works with the default ``SimpleVectorStore`` vector store, which
simply keeps the embedded documents in memory. If your index uses **external vector stores** such as ``QdrantVectorStore`` or ``DatabricksVectorSearch``, you can use the Model-from-Code
- logging method. See the `How to log an index with external vector stores <#how-to-log-an-index-with-external-vector-stores>`_ for more details.
+ logging method. See the `How to log an index with external vector stores <#how-to-log-and-load-an-index-with-external-vector-stores>`_ for more details.

.. figure:: ../../_static/images/llms/llama-index/llama-index-artifacts.png
:alt: MLflow artifacts for the LlamaIndex index
6 changes: 4 additions & 2 deletions docs/source/llms/tracing/index.rst
@@ -1,9 +1,11 @@
- Tracing in MLflow
- =================
+ Introduction to MLflow Tracing
+ ==============================

.. note::
    MLflow Tracing is currently in **Experimental Status** and is subject to change without deprecation warning or notification.

MLflow Tracing is a feature that enhances LLM observability in your Generative AI (GenAI) applications by capturing detailed information about the execution of your application's services.
Tracing provides a way to record the inputs, outputs, and metadata associated with each intermediate step of a request, enabling you to easily pinpoint the source of bugs and unexpected behaviors.

MLflow offers a number of different options to enable tracing of your GenAI applications.

2 changes: 1 addition & 1 deletion docs/source/llms/transformers/guide/index.rst
@@ -198,7 +198,7 @@ template will be used to format user inputs before passing them into the pipeline
set to ``False`` by default. This is to prevent the template from being shown to the users,
which could potentially cause confusion as it was not part of their original input. To
override this behaviour, either set ``return_full_text`` to ``True`` via ``params``, or by
- including it in a ``model_config`` dict in ``log_model()``. See `this section <#using-model-config-and-model-signature-params-for-transformers-inference>`_
+ including it in a ``model_config`` dict in ``log_model()``. See `this section <#using-model-config-and-model-signature-params-for-inference>`_
for more details on how to do this.

For a more in-depth guide, check out the `Prompt Templating notebook <../tutorials/prompt-templating/prompt-templating.ipynb>`_!
