Add documentation for the Logfire Query API #405

Merged 15 commits on Aug 30, 2024
4 changes: 2 additions & 2 deletions docs/guides/advanced/creating_write_tokens.md
@@ -7,8 +7,8 @@ You can create a write token by following these steps:

1. Open the **Logfire** web interface at [logfire.pydantic.dev](https://logfire.pydantic.dev).
2. Select your project from the **Projects** section on the left hand side of the page.
-3. Click on the ⚙️ **Settings** tab on the top right corner of the page.
-4. Select the **{} Write tokens** tab on the left hand menu.
+3. Click on the ⚙️ **Settings** tab in the top right corner of the page.
+4. Select the **{} Write tokens** tab from the left hand menu.
5. Click on the **Create write token** button.

After creating the write token, you'll see a dialog with the token value.
1 change: 1 addition & 0 deletions docs/guides/advanced/index.md
@@ -3,3 +3,4 @@
* **[Testing](testing.md):** Verify your application's logging and span tracking with Logfire's testing utilities, ensuring accurate data capture and observability.
* **[Backfill](backfill.md):** Recover lost data and bulk load historical data into Logfire with the `logfire backfill` command, ensuring data continuity.
* **[Creating Write Tokens](creating_write_tokens.md):** Generate and manage multiple write tokens for different services.
+* **[Using Read Tokens](using_read_tokens.md):** Generate and manage read tokens for programmatic querying of your Logfire data.
196 changes: 196 additions & 0 deletions docs/guides/advanced/using_read_tokens.md
@@ -0,0 +1,196 @@
In addition to [write tokens](./creating_write_tokens.md), Logfire also supports **read tokens**.
A read token allows you to query data from your Logfire project using our API.

## What Are Read Tokens?

Read tokens provide secure, programmatic access to the data in your Logfire project. They can be used to run arbitrary
SQL queries against your data, offering flexibility and power for data analysis and integration with other tools.

If you've set up Logfire following the [first steps guide](../first_steps/index.md), you can generate read tokens from
the Logfire web interface to start querying your data.

## How to Create a Read Token

1. Open the **Logfire** web interface at [logfire.pydantic.dev](https://logfire.pydantic.dev).
2. Select your project from the **Projects** section on the left-hand side of the page.
3. Click on the ⚙️ **Settings** tab in the top right corner of the page.
4. Select the **Read tokens** tab from the left-hand menu.
5. Click on the **Create read token** button.

After creating the read token, you'll see a dialog with the token value.
**Copy this value and store it securely; it will not be shown again.**
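
Once you've stored the token, a quick way to confirm it works is to run a trivial query. Here's a minimal sketch using the sync client introduced in the next section:

```python
from logfire.experimental.read_client import LogfireSyncReadClient

# Run a trivial query to confirm the token is valid and the API is reachable
with LogfireSyncReadClient(read_token='<your_read_token_here>') as client:
    print(client.query_json(sql='SELECT start_timestamp FROM records LIMIT 1'))
```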

## Using the Read Clients

Logfire provides both synchronous and asynchronous clients to interact with the API.
These clients are currently experimental, meaning we might introduce breaking changes in the future.
To use these clients, you can import them from the `experimental` namespace:

```python
from logfire.experimental.read_client import LogfireAsyncReadClient, LogfireSyncReadClient
```

### Async Client Example

The `LogfireAsyncReadClient` allows for asynchronous interaction with the Logfire API. Here's an example of how to use it:

```python
import asyncio
from io import StringIO

import polars as pl

from logfire.experimental.read_client import LogfireAsyncReadClient


async def main():
    query = """
    SELECT start_timestamp
    FROM records
    LIMIT 1
    """

    async with LogfireAsyncReadClient(read_token='<your_read_token_here>') as client:
        # Load data as JSON, in column-oriented format
        json_cols = await client.query_json(sql=query)
        print(json_cols)

        # Load data as JSON, in row-oriented format
        json_rows = await client.query_json_rows(sql=query)
        print(json_rows)

        # Retrieve data in Apache Arrow format, and load it into a polars DataFrame.
        # Note that JSON columns such as `attributes` will be returned as JSON-serialized strings.
        df_from_arrow = pl.from_arrow(await client.query_arrow(sql=query))
        print(df_from_arrow)

        # Retrieve data in CSV format, and load it into a polars DataFrame.
        # Note that JSON columns such as `attributes` will be returned as JSON-serialized strings.
        df_from_csv = pl.read_csv(StringIO(await client.query_csv(sql=query)))
        print(df_from_csv)


asyncio.run(main())
```

### Sync Client Example

If you prefer a synchronous approach, use the `LogfireSyncReadClient`.
This is ideal when blocking I/O is acceptable and you want to avoid the complexities of asynchronous programming:

```python
from io import StringIO

import polars as pl

from logfire.experimental.read_client import LogfireSyncReadClient


def main():
    query = """
    SELECT start_timestamp
    FROM records
    LIMIT 1
    """

    with LogfireSyncReadClient(read_token='<your_read_token_here>') as client:
        # Load data as JSON, in column-oriented format
        json_cols = client.query_json(sql=query)
        print(json_cols)

        # Load data as JSON, in row-oriented format
        json_rows = client.query_json_rows(sql=query)
        print(json_rows)

        # Retrieve data in Apache Arrow format, and load it into a polars DataFrame.
        # Note that JSON columns such as `attributes` will be returned as JSON-serialized strings.
        df_from_arrow = pl.from_arrow(client.query_arrow(sql=query))  # type: ignore
        print(df_from_arrow)

        # Retrieve data in CSV format, and load it into a polars DataFrame.
        # Note that JSON columns such as `attributes` will be returned as JSON-serialized strings.
        df_from_csv = pl.read_csv(StringIO(client.query_csv(sql=query)))
        print(df_from_csv)


if __name__ == '__main__':
    main()
```

## Making Direct HTTP Requests

If you prefer not to use the provided clients, you can make direct HTTP requests to the Logfire API using any HTTP
client library, such as `requests` in Python. Below are the general steps and an example to guide you:

### General Steps to Make a Direct HTTP Request

1. **Set the Endpoint URL**: The base URL for the Logfire API is `https://logfire-api.pydantic.dev`.

2. **Add Authentication**: Include the read token in your request headers to authenticate.
The header key should be `Authorization` with the value `Bearer <your_read_token_here>`.

3. **Define the SQL Query**: Write the SQL query you want to execute.

4. **Send the Request**: Use an HTTP POST request to the `/v1/query` endpoint with the SQL query in the request body.

**Note:** You can provide additional query parameters to control the behavior of your requests.
You can also use the `Accept` header to specify the desired format for the response data (JSON, Arrow, or CSV).

### Example: Using Python `requests` Library

```python
import requests

# Define the base URL and your read token
base_url = 'https://logfire-api.pydantic.dev'
read_token = '<your_read_token_here>'

# Set the headers for authentication
headers = {
    'Authorization': f'Bearer {read_token}',
    'Content-Type': 'application/json',
}

# Define your SQL query
query = """
SELECT start_timestamp
FROM records
LIMIT 1
"""

# Prepare the payload for the POST request
payload = {'sql': query}

# Send the POST request to the Logfire API
response = requests.post(f'{base_url}/v1/query', json=payload, headers=headers)

# Check the response status
if response.status_code == 200:
    print('Query Successful!')
    print(response.json())
else:
    print(f'Failed to execute query. Status code: {response.status_code}')
    print(response.text)
```

### Additional Configuration

The Logfire API supports various query parameters and response formats to give you flexibility in how you retrieve your data:

- **Response Format**: Use the `Accept` header to specify the response format. Supported values include:
    - `application/json`: Returns the data in JSON format. By default, this will be column-oriented unless you request row-oriented data with the `row_oriented` parameter.
    - `application/vnd.apache.arrow.stream`: Returns the data in Apache Arrow format, suitable for high-performance data processing.
    - `text/csv`: Returns the data in CSV format, which is easy to use with many data tools.

  If no `Accept` header is provided, the default response format is JSON.

- **Query Parameters**:
    - **`min_timestamp`**: An optional ISO-format timestamp used as a lower bound: only rows with `start_timestamp` (in the `records` table) or `recorded_timestamp` (in the `metrics` table) greater than this value are returned. The same filtering can also be done within the query itself.
    - **`max_timestamp`**: Like `min_timestamp`, but an upper bound on `start_timestamp` in the `records` table or `recorded_timestamp` in the `metrics` table. The same filtering can also be done within the query itself.
    - **`limit`**: An optional parameter to limit the number of rows returned by the query. If not specified, the default limit is 500. The maximum allowed value is 10,000.
    - **`row_oriented`**: Only affects JSON responses. If set to `true`, the JSON response will be row-oriented; otherwise, it will be column-oriented.

All query parameters are optional and can be used in any combination to tailor the API response to your needs.
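
As a concrete illustration, here's a minimal sketch that combines these options, assuming the parameters above are sent as URL query parameters alongside the POST request; the timestamp and limit values are illustrative:

```python
import requests

base_url = 'https://logfire-api.pydantic.dev'
read_token = '<your_read_token_here>'

# Request CSV output via the Accept header, and bound the result set
# with the optional query parameters described above
response = requests.post(
    f'{base_url}/v1/query',
    headers={
        'Authorization': f'Bearer {read_token}',
        'Accept': 'text/csv',  # JSON is the default if omitted
    },
    params={
        'min_timestamp': '2024-08-01T00:00:00Z',  # illustrative lower bound
        'limit': 100,  # cap the number of rows returned
    },
    json={'sql': 'SELECT start_timestamp FROM records'},
)
print(response.text)  # CSV-formatted results
```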

### Important Notes

- **Experimental Feature**: The read clients are under the `experimental` namespace, indicating that the API may change in future versions.
- **Environment Configuration**: For production use, store your read token in an environment variable or a secure vault rather than hard-coding it (see the sketch below).
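
For the environment-variable approach, a minimal sketch might look like the following; the variable name `LOGFIRE_READ_TOKEN` is an illustrative choice for this example, not one the SDK reads automatically:

```python
import os

from logfire.experimental.read_client import LogfireSyncReadClient

# Read the token from the environment rather than hard-coding it;
# the variable name is an arbitrary convention for this example
read_token = os.environ['LOGFIRE_READ_TOKEN']

with LogfireSyncReadClient(read_token=read_token) as client:
    print(client.query_json(sql='SELECT start_timestamp FROM records LIMIT 1'))
```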

With read tokens, you have the flexibility to integrate Logfire into your workflow, whether using Python scripts, data analysis tools, or other systems.