[Fireworks] Updates to Default model and support for function calling #15046

Merged: logan-markewich merged 3 commits into run-llama:main from aravindputrevu:firefunction-v2 on Jul 30, 2024.

Commits:
- Adding support for Llama 3 and Mixtral 22B
- Changes to default model, adding support for Firefunction v2
- Fixing merge conflict
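Fireworks serves an OpenAI-compatible chat completions API, and function calling works by sending JSON-schema tool specs with the request and dispatching the tool call the model returns. The sketch below is a minimal, offline illustration of that round trip: the `get_weather` function, the tool schema, and the simulated `tool_call` response are all hypothetical stand-ins, not code from this PR.

```python
import json

# Hypothetical local function the model may choose to call.
def get_weather(city: str) -> dict:
    # Stub: a real implementation would query a weather service.
    return {"city": city, "temp_c": 21}

# OpenAI-style tool schema, the shape accepted by OpenAI-compatible
# chat completions endpoints such as Fireworks'.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

# A function-calling model replies with a tool_call instead of plain text;
# this dict mimics the shape of such a response message (simulated here,
# no API call is made).
tool_call = {
    "function": {"name": "get_weather", "arguments": json.dumps({"city": "Paris"})}
}

# Dispatch the tool call back to the matching local function.
registry = {"get_weather": get_weather}
fn = registry[tool_call["function"]["name"]]
args = json.loads(tool_call["function"]["arguments"])
result = fn(**args)
print(result)
```

In a real client the `tool_call` dict would come from the API response, and `result` would be sent back to the model in a follow-up `tool` role message.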
Conversation
dosubot (bot) added the size:S label (this PR changes 10-29 lines, ignoring generated files) on Jul 30, 2024.
Reviewed file (comment thread resolved): llama-index-integrations/llms/llama-index-llms-fireworks/llama_index/llms/fireworks/utils.py
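An integration's utils module of this kind typically gates function-calling support on the model name, so the client knows whether to send tool schemas. The following is a hypothetical sketch of such a check, assuming the Firefunction v2 model id; the function name and model list are illustrative and not the actual contents of the reviewed utils.py.

```python
# Illustrative model-capability check; names and the model set are
# assumptions, not code from the reviewed file.
FUNCTION_CALLING_MODELS = {
    "accounts/fireworks/models/firefunction-v2",
}

def is_function_calling_model(model: str) -> bool:
    # Treat explicitly listed models, or any firefunction variant,
    # as supporting tool/function calling.
    return model in FUNCTION_CALLING_MODELS or "firefunction" in model

print(is_function_calling_model("accounts/fireworks/models/firefunction-v2"))   # True
print(is_function_calling_model("accounts/fireworks/models/llama-v3-70b-instruct"))  # False
```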
logan-markewich approved these changes on Jul 30, 2024.
barduinor added a commit to box-community/llama_index that referenced this pull request on Jul 31, 2024.
<[email protected]> Co-authored-by: Wassim Chegham <[email protected]> Co-authored-by: Shashank Gowda V <[email protected]> Co-authored-by: Koufax <[email protected]> Co-authored-by: Chetan Choudhary <[email protected]> Co-authored-by: Brandon Max <[email protected]> Co-authored-by: Kanav <[email protected]> Co-authored-by: Nathan Voxland (Activeloop) <[email protected]> Co-authored-by: Nathan Voxland <[email protected]> Co-authored-by: Jimmy Longley <[email protected]> Co-authored-by: Jimmy Longley <[email protected]> Co-authored-by: Javier Martinez <[email protected]> Co-authored-by: Diicell <[email protected]> Co-authored-by: Rainy Guo <[email protected]> Co-authored-by: Joel Barmettler <[email protected]> Co-authored-by: Sourabh Desai <[email protected]> Co-authored-by: Aaryan Kaushik <[email protected]> Co-authored-by: yaqiang.sun <[email protected]> Co-authored-by: Francisco Aguilera <[email protected]> Co-authored-by: Aaron Pham <[email protected]> Co-authored-by: Redouane Achouri <[email protected]> Co-authored-by: Rohit Amarnath <[email protected]> Co-authored-by: keval dekivadiya <[email protected]> Co-authored-by: Keval Dekivadiya <[email protected]> Co-authored-by: sykp241095 <[email protected]> Co-authored-by: Tibor Reiss <[email protected]> Co-authored-by: saipjkai <[email protected]> Co-authored-by: Avi Avni <[email protected]> Co-authored-by: Henry LeCompte <[email protected]> Co-authored-by: Michael Berk <[email protected]> Co-authored-by: Yuki Watanabe <[email protected]> Co-authored-by: rohans30 <[email protected]> Co-authored-by: Damien BUTY <[email protected]> Co-authored-by: sahan ruwantha <[email protected]> Co-authored-by: Julien Bouquillon <[email protected]> Co-authored-by: Christophe Bornet <[email protected]> Co-authored-by: Jiacheng Zhang <[email protected]> Co-authored-by: Aravind Putrevu <[email protected]> Co-authored-by: Di Wang <[email protected]> Co-authored-by: Shubham <[email protected]>
Description
Adds support for firefunction-v2 and updates the default model.
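With function-calling support, the Fireworks LLM integration can drive tools directly. A minimal sketch of how this might look, assuming the `llama_index.llms.fireworks.Fireworks` class, the `accounts/fireworks/models/firefunction-v2` model ID, and the `predict_and_call` API (these names are illustrative assumptions, not taken verbatim from this PR):

```python
# Sketch: function calling with the Fireworks integration.
# Model ID, import paths, and predict_and_call usage are assumptions.
import os


def multiply(a: int, b: int) -> int:
    """Multiply two integers."""
    return a * b


# The tool function itself is plain Python and can be exercised locally:
result = multiply(6, 7)

# The remote call is guarded so the sketch runs without credentials.
if os.environ.get("FIREWORKS_API_KEY"):
    from llama_index.core.tools import FunctionTool
    from llama_index.llms.fireworks import Fireworks

    llm = Fireworks(model="accounts/fireworks/models/firefunction-v2")
    tool = FunctionTool.from_defaults(fn=multiply)
    # The LLM decides to invoke the tool and returns its result.
    response = llm.predict_and_call([tool], "What is 6 times 7?")
    print(response)
```

In practice the model selection could also be left to the new default set by this PR, in which case the `model` argument would be omitted.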
New Package?
Did I fill in the `tool.llamahub` section in the `pyproject.toml` and provide a detailed README.md for my new integration or package?

Version Bump?
Did I bump the version in the `pyproject.toml` file of the package I am updating? (Except for the `llama-index-core` package)

Type of Change
Please delete options that are not relevant.
How Has This Been Tested?
Please describe the tests that you ran to verify your changes, and provide instructions so we can reproduce them. Please also list any relevant details of your test configuration.
Suggested Checklist:
- I ran `make format; make lint` to appease the lint gods