Merge pull request #70 from Sage-Bionetworks/SNOW-103-validation-feedback

[SNOW-103] Streamlit App Validation Feedback
jaymedina authored Aug 14, 2024
2 parents 1e72767 + 8b07afe commit 9833eda
Showing 5 changed files with 318 additions and 169 deletions.
30 changes: 30 additions & 0 deletions .vscode/launch.json
@@ -0,0 +1,30 @@
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "debug streamlit",
            "type": "debugpy",
            "request": "launch",
            "module": "streamlit",
            "args": [
                "run",
                "${workspaceFolder}/streamlit_template/app.py"
            ],
            "cwd": "${workspaceFolder}/streamlit_template",
            "console": "integratedTerminal",
            "justMyCode": true
        },
        {
            "name": "pytest for Streamlit app",
            "type": "debugpy",
            "request": "launch",
            "module": "pytest",
            "args": [
                "${workspaceFolder}/streamlit_template/tests/test_app.py"
            ],
            "cwd": "${workspaceFolder}/streamlit_template",
            "console": "integratedTerminal",
            "justMyCode": false
        }
    ]
}
135 changes: 116 additions & 19 deletions streamlit_template/README.md
@@ -32,7 +32,7 @@ streamlit_template/
- Create a fork of this repository under your GitHub user account.
- Within the `.streamlit` folder, you will need a file called `secrets.toml` which will be read by Streamlit before making communications with Snowflake.
Use the contents of `example_secrets.toml` as a syntax guide for how `secrets.toml` should be set up. See the [Snowflake documentation](https://docs.snowflake.com/en/user-guide/admin-account-identifier#using-an-account-name-as-an-identifier) for how to find your
account name. **Note:** If you use the `Copy account identifier` button, it will copy the identifier in the format `orgname.account_name`; update it to `orgname-account_name`.
- Test your connection to Snowflake by running the example Streamlit app at the base of this directory. This will launch the application on port 8501, the default port for Streamlit applications.

@@ -46,10 +46,12 @@ account name.
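If you would like a quicker sanity check of your `secrets.toml` than launching the full app, a minimal scratch page can reuse the template's own helper. This is only a sketch: `scratch_check.py` and the one-line query are illustrative, and `get_data_from_snowflake` is assumed to accept any SQL string, as it does in `app.py`.

```
# scratch_check.py -- minimal sketch for sanity-checking your Snowflake secrets.
# get_data_from_snowflake is the same helper that app.py imports from toolkit/utils.py;
# the query below is just an illustrative one-liner.
import streamlit as st

from toolkit.utils import get_data_from_snowflake

version_df = get_data_from_snowflake("select current_version() as snowflake_version;")
st.dataframe(version_df)
```

Run it from the `streamlit_template/` directory with `streamlit run scratch_check.py`; if the table renders, your `secrets.toml` is being picked up correctly.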

Once you've completed the setup above, you can begin working on your SQL queries.
- Navigate to `queries.py` under the `toolkit/` folder.
- Your queries can be either string objects or functions that return string objects. Assign each an easy-to-remember variable or function name, as they will be imported into `app.py` later on. See below for two examples of how you can write your queries, depending on your needs.
- It is encouraged that you test these queries in a SQL Worksheet on Snowflake's Snowsight before running them on your application.

**Example of a string object query**: <br>
You may assign your string objects to global variables if you do not intend for the queries to be modified in any way. Below is a simple example for
a use case where only the number of files for Project `syn53214489` is calculated.
```
QUERY_NUMBER_OF_FILES = """
select
    count(*) as number_of_files
from
    synapse_data_warehouse.synapse.node_latest
where
    project_id = 53214489
and
    node_type = 'file';
"""
```

**Example of a function query**:<br>
We encourage the use of function queries if you plan to make your application, and therefore your queries, interactive. For example, let's say you want to give users the option to input the `project_id` they want to query the number of files for. In that case, your query would look like the following:
```
def query_number_of_files(pid):
    """Returns the total number of files for a given project (pid)."""
    return f"""
    select
        count(*) as number_of_files
    from
        synapse_data_warehouse.synapse.node_latest
    where
        project_id = {pid}
    and
        node_type = 'file';
    """
```
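To see how a function query plugs into an interactive app, the pattern used in `app.py` is to read a value from a Streamlit input widget and pass it into the query function. The sketch below assumes you have added `query_number_of_files` to `queries.py` as above; the widget choice, label, and default value are illustrative only.

```
import streamlit as st

from toolkit.queries import query_number_of_files
from toolkit.utils import get_data_from_snowflake

# Illustrative input widget; app.py uses st.sidebar.slider the same way for its lookback range.
pid = st.number_input("Project ID (numeric part of the Synapse ID)", value=53214489)
number_of_files_df = get_data_from_snowflake(query_number_of_files(pid))
st.dataframe(number_of_files_df)
```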

### 3. Build your Widgets

Your widgets will be the main visual component of your Streamlit application.
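As a hedged illustration of what such a widget can look like (the real implementations in `toolkit/widgets.py` may differ, and the column names here are assumptions), a widget can be as small as a function that takes a dataframe and renders a chart:

```
import pandas as pd
import streamlit as st


def plot_unique_users_trend(unique_users_df: pd.DataFrame) -> None:
    """Sketch of a widget: render a monthly unique-users trend as a line chart."""
    st.subheader("Unique Users Over Time")
    # Column names "month" and "unique_users" are assumed for illustration.
    st.line_chart(unique_users_df, x="month", y="unique_users")
```

`app.py` would then call this with the dataframe returned by the corresponding query.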
@@ -87,13 +107,17 @@ Here is where all your work on `queries.py` and `widgets.py` comes together.
> Example:
>
> ```
> from toolkit.queries import (
>     query_entity_distribution,
>     query_project_downloads,
>     query_project_sizes,
>     query_unique_users,
> )
>
> entity_distribution_df = get_data_from_snowflake(query_entity_distribution())
> project_sizes_df = get_data_from_snowflake(query_project_sizes())
> project_downloads_df = get_data_from_snowflake(query_project_downloads())
> unique_users_df = get_data_from_snowflake(query_unique_users(my_param))
> ```
### 5. Test your Application
@@ -110,11 +134,39 @@ as you see fit.
> to avoid import issues.
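For reference, the included `tests/test_app.py` drives the app with Streamlit's built-in `AppTest` harness. A stripped-down sketch of that pattern looks like the following; the timeout value is an assumption, since the real `DEFAULT_TIMEOUT` constant lives in `test_app.py`:

```
from streamlit.testing.v1 import AppTest

DEFAULT_TIMEOUT = 120  # seconds; assumed here, defined in tests/test_app.py in the template


def test_app_runs_without_exceptions():
    """Smoke test: the app script executes end to end without raising."""
    at = AppTest.from_file("app.py", default_timeout=DEFAULT_TIMEOUT).run()
    assert not at.exception
```

Run it the same way as the other tests, with `pytest` from the `streamlit_template/` directory (or via the VSCode configuration described below).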
### 6. Dockerize your Application

- **Update the requirements file** <br>
Ensure that the `requirements.txt` file is up to date with all the necessary Python packages that are used in your scripts.
- **Push all relevant changes** <br>
Ensure you have pushed all your changes to your branch of the forked repository that you are working in (remember not to commit your `secrets.toml` file).

You can choose to build and push a Docker image to the GitHub Container Registry and pull it directly from the registry when you are ready to deploy in Step 7. Keep in mind that your Docker image will be at _least_ around 800 MB, due to the Python libraries
required for a basic application to run, so be mindful of this when choosing to upload your image.

If you do not wish to publish a Docker image to the container registry, you can skip to the next section. Otherwise, follow the instructions below.

- **Build the Docker image** <br>
Run the following command in your terminal from the root of your project directory where the `Dockerfile` is located:
```
docker build -t ghcr.io/<your-username>/<your-docker-image-name>:<tag> .
```
Replace `<your-username>` with the user that owns the forked repository, `<your-docker-image-name>` with a name for your Docker image, and `<tag>` with a version tag (e.g., v1.0.0).

- **Login to GitHub Container Registry** <br>
Before pushing your image, you need to authenticate with the GitHub Container Registry. Use the following command:
```
echo "<your-token>" | docker login ghcr.io -u <your-github-username> --password-stdin
```
Replace `<your-token>` with a GitHub token that has appropriate permissions, and `<your-github-username>` with your GitHub username.

- **Push the Docker Image** <br>
Once authenticated, push your Docker image to the GitHub Container Registry with the following command:
```
docker push ghcr.io/<your-github-username>/<your-docker-image-name>:<tag>
```
Replace the placeholders with your relevant details.

For further instructions on how to deploy your Docker image to the GitHub Container Registry, [see here](https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry).

### 7. Launch your Application on AWS EC2

@@ -123,14 +175,59 @@ as you see fit.
- Once your EC2 product's `status` is set to `Available`, click it and navigate to the _Events_ tab.
- Click the URL next to `ConnectionURI` to launch a shell session in your instance.
- Navigate to your home directory (`cd ~`).
- Clone your repository in your desired working directory. Example:
  ```
  git clone https://github.com/<your-username>/snowflake.git
  ```
  Replace `<your-username>` with the user that the forked repository is under.
- Create your `secrets.toml` file again. The Docker image of your Streamlit application will not include `secrets.toml` for security reasons. By default, the instance should already have `vi` available to use as an editor.
- Build your Docker image, either from the `Dockerfile` in the repository, or by pulling down your image from the GitHub Container Registry.
- Run your Docker container from the image, and make sure to have your `secrets.toml` (in the current working directory) mounted and the 8501 port specified, like so:
  ```
  docker run \
    -p 8501:8501 \
    -v $PWD/secrets.toml:/.streamlit/secrets.toml \
    <image name>
  ```
- Now your Streamlit application should be sharable via the private IP address of the EC2 instance. To find the private IP address, navigate back to the _Events_ tab when viewing your provisioned EC2: Linux Docker product, and scroll down to `EC2InstancePrivateIpAddress`. Let's say your EC2 instance's private IP address is 22.22.22.222. The URL you can share with users to access the Streamlit app would be http://22.22.22.222:8501/. **Remember that this is a private IP address, therefore your Streamlit app can only be viewed by those connected to Sage's internal network.**

> [!TIP]
> If you would like to leave the app running after you close your shell session, be sure to run the container detached (i.e., include the `-d` flag in the `docker run` command).
## Additional Tips: Leveraging VSCode for Development
If you would like to leverage VSCode to debug and test your application, rather than working with `streamlit` and `pytest` on the command line, follow the instructions below:

There is a `.vscode/launch.json` file located at the root of the `snowflake` repository. This file is used to define configurations for debugging and testing within VSCode. The `launch.json` file in this repository contains two key configurations: one for debugging the Streamlit app and another for running tests with the pytest library. Here’s how you can set it up and use it:

### 1. Set Up VSCode on Your Machine

Make sure you have Visual Studio Code (VSCode) installed on your machine. You can download it [here](https://code.visualstudio.com/).

### 2. Create An Active Workspace on VSCode

* Open up VSCode.
* Click _File_ > _Open Folder..._ and navigate to the root directory of the `snowflake` repository.
* Select the folder and click _Open_.

### 3. Review The `launch.json` Configurations

* Open the `launch.json` file in the VSCode editor.
* The launch.json file in this project contains two configurations: <br>

**Debugging the Streamlit App:**<br>
This configuration is named "debug streamlit". When you run this, it will start the Streamlit app in a debug session. This allows you to place breakpoints in your code, step through the execution, and inspect variables as the app runs. <br>
<br>
**Running Pytest for the Streamlit App:**<br>
The second configuration is named "pytest for Streamlit app". This is used to run the tests in the project using the pytest framework. It’s designed to execute the test file associated with the Streamlit app and allows you to debug your tests if they fail.

### 4. Running the Configurations

* **Open the Run and Debug Sidebar**<br>
Click on the "Run and Debug" icon in the Activity Bar on the left side of the VSCode window. It looks like a play button with a bug on it. Alternatively, you can open it by pressing `Ctrl+Shift+D` (Windows/Linux) or `Cmd+Shift+D` (Mac).
* **Select a Configuration**<br>
At the top of the "Run and Debug" sidebar, you’ll see a dropdown menu where you can select one of the configurations defined in the launch.json file.
Select "debug streamlit" to start debugging the Streamlit app, or "pytest for Streamlit app" to run and debug your tests.
* **Start Debugging or Testing**<br>
Once you’ve selected the desired configuration, click the green play button (Start Debugging) at the top of the sidebar, or simply press F5.
The debugger will start, and you can place breakpoints in your code by clicking in the left margin next to the line numbers.
If you’re running the tests, the pytest module will execute the specified test file, and you can debug any test failures using the same tools.
30 changes: 21 additions & 9 deletions streamlit_template/app.py
@@ -1,14 +1,20 @@
import numpy as np
import streamlit as st
from toolkit.queries import (
    query_entity_distribution,
    query_project_downloads,
    query_project_sizes,
    query_unique_users,
)
from toolkit.utils import get_data_from_snowflake
from toolkit.widgets import plot_download_sizes, plot_unique_users_trend

# Configure the layout of the Streamlit app page
st.set_page_config(layout="wide",
                   page_title="HTAN Analytics",
                   page_icon=":bar_chart:",
                   initial_sidebar_state="expanded")

# Custom CSS for styling
with open("style.css") as f:
st.markdown(f"<style>{f.read()}</style>", unsafe_allow_html=True)
@@ -17,10 +23,16 @@
def main():

    # 1. Retrieve the data using your queries in queries.py
    entity_distribution_df = get_data_from_snowflake(query_entity_distribution())
    project_sizes_df = get_data_from_snowflake(query_project_sizes())
    project_downloads_df = get_data_from_snowflake(query_project_downloads())
    # User input for the number of months
    months_back = st.sidebar.slider("Lookback Range (how many months back to display trends)",
                                    min_value=1,
                                    max_value=24,
                                    value=12)
    # Use the selected months_back in the unique users query
    unique_users_df = get_data_from_snowflake(query_unique_users(months_back))

    # 2. Transform the data as needed
    convert_to_gib = 1024 * 1024 * 1024
@@ -39,7 +51,7 @@ def main():
    # -------------------------------------------------------------------------
    # Row 1 -------------------------------------------------------------------
    st.markdown("### Monthly Overview :calendar:")
    col1, col2, col3 = st.columns([1, 1, 5])
    col1.metric("Total Storage Occupied", f"{total_data_size} GB", "7.2 GB")
    col2.metric("Avg. Project Size", f"{average_project_size} GB", "8.0 GB")
    col3.metric("Annual Cost", "102,000 USD", "10,000 USD")
3 changes: 1 addition & 2 deletions streamlit_template/tests/test_app.py
@@ -29,7 +29,7 @@
def app():
    return AppTest.from_file(
        "app.py", default_timeout=DEFAULT_TIMEOUT
    ).run()


def test_monthly_overview(app):
@@ -62,6 +62,5 @@ def test_dataframe(app):
"""Ensure that the dataframe is being displayed."""

dataframe = app.dataframe

assert dataframe is not None
assert len(dataframe) == 1
