feat: Improve LFQA Web Example #5504
Conversation
Pull Request Test Coverage Report for Build 5761247664
💛 - Coveralls
I don't think we need a release note file for this PR, as we don't ship the examples directory with Haystack. I'd say let's remove it and add the ignore-for-release-notes label instead.
examples/web_lfqa_improved.py (Outdated)

```diff
@@ -39,12 +47,25 @@
 pipeline.add_node(component=litm_ranker, name="LostInTheMiddleRanker", inputs=["DiversityRanker"])
 pipeline.add_node(component=prompt_node, name="PromptNode", inputs=["LostInTheMiddleRanker"])

 logger = logging.getLogger("boilerpy3")
 logger.setLevel(logging.CRITICAL)
 logging.basicConfig(level=logging.CRITICAL)
```
Pylint check is failing due to the use of `logging.basicConfig`.
I turned it off for that line. Otherwise, we'll get HTML-stripping logs, retrieval-failure logs, and so on. LMK if there is an alternative solution.
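The alternative hinted at above can be sketched without touching `logging.basicConfig` at all: raise the level of the noisy third-party logger only. A minimal sketch, assuming the goal is just to mute `boilerpy3` (the logger name comes from the diff above; the rest is illustrative):

```python
import logging

# Silence only the noisy third-party library instead of reconfiguring
# the root logger via logging.basicConfig (which trips the Pylint check).
logging.getLogger("boilerpy3").setLevel(logging.CRITICAL)

# Messages below CRITICAL from boilerpy3 are now dropped, while other
# loggers keep their configured levels.
noisy = logging.getLogger("boilerpy3")
print(noisy.isEnabledFor(logging.ERROR))  # False
```

This keeps the example's own log output intact, at the cost of one line per library that needs muting.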
@bogdankostic 6ed2ff8 seems to work
Question: Please rate the readiness of this Pull Request (PR) for integration on a scale of 1 to 10, with 1 indicating it is completely unready and 10 signifying it is fully prepared for merging into the main branch. First elaborate, then provide your rating.

Answer: The code changes are well-documented, and the PR includes a release note detailing the enhancements. The PR also includes error handling for missing environment variables, which is a good practice. However, it would be beneficial if the PR included some unit tests to verify the new functionality. This would help ensure that the changes work as expected and do not introduce any regressions. Taking all these factors into consideration, I would rate this PR as an 8 out of 10. It's mostly ready for integration, but the addition of unit tests could further improve its readiness.
Updated from 6ed2ff8 to 86c0aee (compare)
LGTM
@vblagoje @bogdankostic - I would suggest:
Ok, perhaps we can call it `web_lfqa_with_rankers.py`. Why change the questions? I was personally interested in these questions... which ones should be changed?
* Improve web_lfqa example
* Turn off pylint for logging setup
* Another way to turn off logging
What

This PR improves the `examples/web_lfqa_improved.py` example by tweaking the prompt for longer, more elaborate answers. It also adds functionality to easily switch between the different LLMs used in the PromptNode, and adds more questions requiring complex, elaborate answers.

Why

The changes enhance the flexibility and usability of the LFQA web example. Allowing easy switching between different LLMs lets users experiment with various models and observe their performance, and tweaking the prompt for longer answers helps generate more detailed, informative responses.
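The "easy switching between different LLMs" could take many forms; one common pattern is a small registry that maps friendly names to constructor settings. A hedged sketch under that assumption (the model names, kwargs, and `resolve_model` helper are illustrative, not the PR's actual code):

```python
# Hypothetical registry mapping a friendly name to the kwargs one might
# pass to a PromptNode-style constructor. Values here are assumptions.
MODELS = {
    "gpt-3.5-turbo": {"model_name_or_path": "gpt-3.5-turbo", "max_length": 768},
    "claude-instant-1": {"model_name_or_path": "claude-instant-1", "max_length": 768},
}

def resolve_model(name: str) -> dict:
    """Return constructor kwargs for the chosen LLM, failing loudly
    on unknown names so typos don't fall back silently."""
    try:
        return MODELS[name]
    except KeyError as err:
        raise ValueError(f"Unknown model {name!r}; choose from {sorted(MODELS)}") from err

print(resolve_model("gpt-3.5-turbo")["max_length"])  # 768
```

With this shape, switching models is a one-string change at the call site rather than an edit to the pipeline wiring.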
How

The changes were implemented by modifying the `examples/web_lfqa_improved.py` script. The prompt was adjusted to encourage longer answers, and a mechanism was added to make switching between the different LLMs in the PromptNode straightforward. Finally, we added more questions requiring complex, elaborate answers.

How to test
To test the changes, run the `examples/web_lfqa_improved.py` script and observe the generated answers. Try switching between different LLMs and note the differences in the responses.

Notes to the reviewer
Please check the modifications in the `examples/web_lfqa_improved.py` script, run it yourself, and experiment with other models.