This repo contains sample code for using Promptflow as part of operationalizing LLMs for humanitarian response, as referenced in the blog post A Humanitarian Crises Situation Report AI Assistant: Exploring LLMOps with Prompt Flow.
The example flow does the following ...
- Extracts entities from user input and converts them to a query on ReliefWeb
- Runs the query against the ReliefWeb API to get situation reports for the user's request (a sketch of this call follows the list)
- Summarizes the response
- Answers the user question
- Extracts references
- Presents results to the user
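For illustration, here is a minimal sketch of the kind of ReliefWeb query the flow builds. The `appname` value, filter, and field names are assumptions for this example; see the ReliefWeb API documentation for the actual parameters the flow uses.

```python
import requests


def get_situation_reports(query: str, limit: int = 5) -> list:
    """Fetch recent situation reports from the ReliefWeb API."""
    payload = {
        "query": {"value": query, "operator": "AND"},
        # Restrict results to situation reports
        "filter": {"field": "format.name", "value": "Situation Report"},
        "fields": {"include": ["title", "body", "url", "date.created"]},
        "sort": ["date.created:desc"],
        "limit": limit,
    }
    # "appname" is a required identifier; "rweb-demo" is a placeholder
    response = requests.post(
        "https://api.reliefweb.int/v1/reports?appname=rweb-demo",
        json=payload,
        timeout=30,
    )
    response.raise_for_status()
    return response.json().get("data", [])
```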
It is a very basic app that hasn't been tuned for production use; in particular, more work would be needed to make the ReliefWeb API integration robust. It is meant to demonstrate various things to consider when operationalizing LLM solutions.
The flow also includes:
- Content safety filtering
- Prompt variants
- Groundedness checks
- Dynamic grounding using deepeval (see the sketch after this list)
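As a rough illustration of the groundedness idea, here is a minimal sketch using deepeval's hallucination metric, which scores an answer against the retrieved context. deepeval's API has changed across versions, so the repo's `deep_eval.py` may use different classes; the inputs below are invented examples.

```python
from deepeval.metrics import HallucinationMetric
from deepeval.test_case import LLMTestCase

# The retrieved ReliefWeb content acts as the grounding context
test_case = LLMTestCase(
    input="How many people were displaced by the floods?",
    actual_output="Around 200,000 people were displaced.",
    context=["The situation report states that 200,000 people were displaced."],
)

# Lower hallucination scores are better; threshold is the maximum allowed
metric = HallucinationMetric(threshold=0.5)
metric.measure(test_case)
print(metric.score, metric.reason)
```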
The repo also includes GitHub Actions to run ...
- Promptflow automated groundedness tests
- Code quality tests
For setup with screenshots, see also the blog post.
- Install Miniconda by selecting the installer that fits your OS version. Once it is installed, you may need to restart your terminal (close your terminal and open it again)
- Open a terminal in this directory and run ...

```
conda env create -f environment.yml
conda activate promptflow-serve
```
The repo supports both OpenAI and Azure OpenAI, depending on the variables set in the `.env` file. If you want to test content safety, you will need to set up an Azure Content Safety instance, or disconnect that connection in the flow and implement your own custom solution. A sketch of what a content safety check looks like follows.
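For reference, here is a minimal sketch of screening text with the Azure AI Content Safety Python SDK. The environment variable names are assumptions; use whatever your `.env` defines for the endpoint and key.

```python
import os

from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

# Endpoint/key variable names are illustrative placeholders
client = ContentSafetyClient(
    endpoint=os.environ["CONTENT_SAFETY_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["CONTENT_SAFETY_KEY"]),
)

result = client.analyze_text(AnalyzeTextOptions(text="User input to screen"))
for item in result.categories_analysis:
    # Severity 0 means safe; higher values indicate harmful content
    print(item.category, item.severity)
```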
Promptflow can be run from the command line (see the documentation for further information), but a nice way to use it is through VS Code, which has a user interface for managing flows. To use this ...
- Download VS Code
- Install the promptflow extension
- Install the conda environment (see above)
- Open a `flow.dag.yaml` file
- At the top of the file, click 'install dependencies'
- Select the conda environment `promptflow-serve`
- Re-open `flow.dag.yaml` and select 'visual editor' at the top to see the lovely user interface
- To run the flows, click the play icon at the top of the promptflow user interface (or run them from Python, as sketched below)
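As a rough sketch, the flow can also be exercised from Python with the promptflow SDK. The flow path and input name here are assumptions; match them to this repo's `flow.dag.yaml`.

```python
from promptflow import PFClient

pf = PFClient()

# Run a single test invocation of the flow in the current directory;
# "question" is a placeholder for whatever input the flow defines
result = pf.test(
    flow=".",
    inputs={"question": "What is the current humanitarian situation in Sudan?"},
)
print(result)
```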
You will also need to configure LLM keys. The demo assumes Azure OpenAI, but the scripts also support OpenAI directly. To configure your LLM environment ...
- Copy `.env.example` to `.env`
- Set keys appropriately
The code is configured to run with Azure OpenAI. You can also run with OpenAI directly as follows:
- In promptflow, create a new OpenAI connection (in VS Code, select the 'P' promptflow icon on the left and click '+' under connections; for command-line creation, see `.github/workflows/test_deploy.yml`, or see the SDK sketch after this list)
- Set the connection in all LLM nodes in the flow using VS Code (click on each node and change its connection)
- In `deep_eval.py`, adjust the code to use OpenAIChat instead of AzureOpenAI. At some point this will be a setting.
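For completeness, here is a sketch of creating the connection programmatically with the promptflow SDK rather than the VS Code UI. The connection name is an assumption; it must match the name referenced by the LLM nodes in `flow.dag.yaml`.

```python
from promptflow import PFClient
from promptflow.entities import OpenAIConnection

pf = PFClient()

# "open_ai_connection" is a placeholder name
connection = OpenAIConnection(
    name="open_ai_connection",
    api_key="<your-openai-api-key>",
)
pf.connections.create_or_update(connection)
```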
Note: if using promptflow in Azure ML, you can explore other model connections. After creating an appropriate deployment, follow the steps above.
The repo has been set up with black and flake8 pre-commit hooks. These can be configured in the `.pre-commit-config.yaml` file and initialized with `pre-commit autoupdate`.

On a new repo, you must run `pre-commit install` to add the pre-commit hooks.

To run code quality tests, run `pre-commit run --all-files`.
Automatic tests are run using GitHub Actions, which create a promptflow connection and execute a promptflow evaluation run. The output is monitored by a script, and the setup can be used as a template for adding promptflow tests as part of DevOps.

See `.github/workflows/test_deploy.yml` for more details, and 'Actions' in the repo.
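As a rough sketch of the kind of gate such a monitoring script can apply, assuming the evaluation writes a JSON metrics file (the file name, metric key, and threshold below are all assumptions, not the repo's actual values):

```python
import json
import sys

THRESHOLD = 4.0  # illustrative minimum acceptable groundedness score

# "eval_output/metrics.json" and "groundedness" are placeholder names
with open("eval_output/metrics.json") as f:
    metrics = json.load(f)

score = metrics.get("groundedness", 0.0)
print(f"Groundedness score: {score}")
if score < THRESHOLD:
    sys.exit(f"Groundedness {score} is below threshold {THRESHOLD}")
```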