chore(deps): update container image docker.io/localai/localai to v2.20.1 by renovate #328
This PR contains the following updates:
- `v2.19.4-aio-cpu` -> `v2.20.1-aio-cpu`
- `v2.19.4-aio-gpu-nvidia-cuda-11` -> `v2.20.1-aio-gpu-nvidia-cuda-11`
- `v2.19.4-aio-gpu-nvidia-cuda-12` -> `v2.20.1-aio-gpu-nvidia-cuda-12`
- `v2.19.4-cublas-cuda11-ffmpeg-core` -> `v2.20.1-cublas-cuda11-ffmpeg-core`
- `v2.19.4-cublas-cuda11-core` -> `v2.20.1-cublas-cuda11-core`
- `v2.19.4-cublas-cuda12-ffmpeg-core` -> `v2.20.1-cublas-cuda12-ffmpeg-core`
- `v2.19.4-cublas-cuda12-core` -> `v2.20.1-cublas-cuda12-core`
- `v2.19.4-ffmpeg-core` -> `v2.20.1-ffmpeg-core`
- `v2.19.4` -> `v2.20.1`
Warning
Some dependencies could not be looked up. Check the Dependency Dashboard for more information.
Release Notes
mudler/LocalAI (docker.io/localai/localai)
v2.20.1
Compare Source
It's that time again—I’m excited (and honestly, a bit proud) to announce the release of LocalAI v2.20! This one’s a biggie, with some of the most requested features and enhancements, all designed to make your self-hosted AI journey even smoother and more powerful.
TL;DR

- `gpt4all.cpp` and `petals` backends deprecated

🌍 Explorer and Global Community Pools
Now you can share your LocalAI instance with the global community or explore available instances by visiting explorer.localai.io. This decentralized network powers our demo instance, creating a truly collaborative AI experience.
How It Works
Using the Explorer, you can easily share or connect to clusters. For detailed instructions on creating new clusters or connecting to existing ones, check out our documentation.
👀 Demo Instance Now Available
Curious about what LocalAI can do? Dive right in with our live demo at demo.localai.io! Thanks to our generous sponsors, this instance is publicly available and configured via peer-to-peer (P2P) networks. If you'd like to connect, follow the instructions here.
🤗 Hugging Face Integration
I am excited to announce that LocalAI is now integrated within Hugging Face’s local apps! This means you can select LocalAI directly within Hugging Face to build and deploy models with the power and flexibility of our platform. Experience seamless integration with a single click!
This integration was made possible through this PR.
🎨 FLUX-1 Image Generation Support
FLUX-1 lands in LocalAI! With this update, LocalAI can now generate stunning images using FLUX-1, even in federated mode. Whether you're experimenting with new designs or creating production-quality visuals, FLUX-1 has you covered.
Try it out at demo.localai.io and see what LocalAI + FLUX-1 can do!
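As a sketch, an image-generation request body for an OpenAI-compatible endpoint might be assembled like this. The model identifier `flux.1-dev` and the field names are illustrative assumptions, not taken from these notes; consult the LocalAI docs for the actual model names available:

```python
import json

# Hypothetical request body for an OpenAI-compatible image endpoint.
# Model name and fields are illustrative assumptions.
payload = {
    "model": "flux.1-dev",  # assumed FLUX-1 model identifier
    "prompt": "a watercolor fox in a snowy forest",
    "size": "1024x1024",
}

# The serialized body would be POSTed to the image generation endpoint
# of a running LocalAI instance.
body = json.dumps(payload)
print(body)
```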
🐛 Diffusers and hipblas Fixes
Great news for AMD users! If you’ve encountered issues with the Diffusers backend or hipblas, those bugs have been resolved. We’ve transitioned to
uv
for managing Python dependencies, ensuring a smoother experience. For more details, check out Issue #1592.🏎️ Strict Mode for API Compliance
To stay up to date with OpenAI’s latest changes, LocalAI now also supports Strict Mode (https://openai.com/index/introducing-structured-outputs-in-the-api/). This new feature ensures compatibility with the most recent API updates, enforcing stricter JSON outputs using BNF grammar rules.
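For illustration, a chat request opting into strict structured output might be shaped like this. This is a sketch following OpenAI's structured-outputs field layout; the placeholder model name and the exact fields LocalAI accepts are assumptions, so check the LocalAI documentation before relying on them:

```python
import json

# Hypothetical chat-completion body enforcing a JSON schema with strict mode.
# Field layout follows OpenAI's structured-outputs format; model name is a placeholder.
payload = {
    "model": "my-local-model",  # placeholder, not a real LocalAI model name
    "messages": [{"role": "user", "content": "Extract the event name and date."}],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "event",
            "strict": True,  # opt into grammar-enforced output
            "schema": {
                "type": "object",
                "properties": {
                    "name": {"type": "string"},
                    "date": {"type": "string"},
                },
                "required": ["name", "date"],
                "additionalProperties": False,
            },
        },
    },
}

# The serialized body would be POSTed to the chat completions endpoint.
body = json.dumps(payload, indent=2)
print(body)
```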
To activate, simply set `strict: true` in your API calls, even if it’s disabled in your configuration.

Key Notes:

- `strict: true` enables grammar enforcement, even if disabled in your config.
- When `format_type` is set to `json_schema`, BNF grammars will be automatically generated from the schema.

🛑 Disable Gallery
Need to streamline your setup? You can now disable the gallery endpoint using `LOCALAI_DISABLE_GALLERY_ENDPOINT`. For more options, check out the full list of commands with `--help`.

🌞 P2P and Federation Enhancements
Several enhancements have been made to improve your experience with P2P and federated clusters:
- Requests can be offloaded to a randomly chosen worker in the cluster (set `LOCALAI_RANDOM_WORKER` if needed).
- A specific worker can be targeted with `LOCALAI_TARGET_WORKER`.

💪 Run Multiple P2P Clusters in the Same Network
You can now run multiple clusters within the same network by specifying a network ID via CLI. This allows you to logically separate clusters while using the same shared token. Just set `LOCALAI_P2P_NETWORK_ID` to a UUID that matches across instances.

Please note, while this offers segmentation, it’s not fully secure: anyone with the network token can view available services within the network.
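The network-ID setup above can be sketched as follows: generate one UUID and hand the same value to every instance via the environment variable. This is a minimal sketch; any UUID string works as long as it matches across nodes:

```python
import uuid

# Generate a single network ID once, then distribute the SAME value
# to every LocalAI instance that should join this logical cluster.
network_id = str(uuid.uuid4())

# Each instance would be started with this variable set, e.g. in its env file.
print(f"LOCALAI_P2P_NETWORK_ID={network_id}")
```

Instances with different `LOCALAI_P2P_NETWORK_ID` values will not see each other's services, even on the same shared token.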
🧪 Deprecation Notice: `gpt4all.cpp` and `petals` Backends

As we continue to evolve, we are officially deprecating the `gpt4all.cpp` and `petals` backends. The newer `llama.cpp` offers a superior set of features and better performance, making it the preferred choice moving forward.

From this release onward, `gpt4all` models in `ggml` format are no longer compatible. Additionally, the `petals` backend has been deprecated as well. LocalAI’s new P2P capabilities now offer a comprehensive replacement for these features.

What's Changed
Breaking Changes 🛠

Bug fixes 🐛

Exciting New Features 🎉

- `json_schema` format type and strict mode by @mudler in https://github.com/mudler/LocalAI/pull/3193

🧠 Models
📖 Documentation and examples
👒 Dependencies
- Bump from `7aec99b` to `8b14837` by @dependabot in https://github.com/mudler/LocalAI/pull/3142
- Update to `1e6f6554aa11fa10160a5fda689e736c3c34169f` by @mudler in https://github.com/mudler/LocalAI/pull/3189
- Update to `15fa07a5c564d3ed7e7eb64b73272cedb27e73ec` by @localai-bot in https://github.com/mudler/LocalAI/pull/3197
- Update to `6eac06759b87b50132a01be019e9250a3ffc8969` by @localai-bot in https://github.com/mudler/LocalAI/pull/3203
- Update to `3a14e00366399040a139c67dd5951177a8cb5695` by @localai-bot in https://github.com/mudler/LocalAI/pull/3204
- Update to `b72942fac998672a79a1ae3c03b340f7e629980b` by @localai-bot in https://github.com/mudler/LocalAI/pull/3208
- Update to `81c999fe0a25c4ebbfef10ed8a1a96df9cfc10fd` by @localai-bot in https://github.com/mudler/LocalAI/pull/3209
- Update to `6e02327e8b7837358e0406bf90a4632e18e27846` by @localai-bot in https://github.com/mudler/LocalAI/pull/3212
- Update to `4134999e01f31256b15342b41c4de9e2477c4a6c` by @localai-bot in https://github.com/mudler/LocalAI/pull/3218
- Update to `fc4ca27b25464a11b3b86c9dbb5b6ed6065965c2` by @localai-bot in https://github.com/mudler/LocalAI/pull/3240
- Update to `22fcd5fd110ba1ff592b4e23013d870831756259` by @localai-bot in https://github.com/mudler/LocalAI/pull/3239
- Update to `06943a69f678fb32829ff06d9c18367b17d4b361` by @localai-bot in https://github.com/mudler/LocalAI/pull/3245
- Update to `5fd89a70ead34d1a17015ddecad05aaa2490ca46` by @localai-bot in https://github.com/mudler/LocalAI/pull/3248
- `llama_add_bos_token` by @mudler in https://github.com/mudler/LocalAI/pull/3253
- Update to `8b3befc0e2ed8fb18b903735831496b8b0c80949` by @localai-bot in https://github.com/mudler/LocalAI/pull/3257
- Update to `2fb9267887d24a431892ce4dccc75c7095b0d54d` by @localai-bot in https://github.com/mudler/LocalAI/pull/3260
- Update to `554b049068de24201d19dde2fa83e35389d4585d` by @localai-bot in https://github.com/mudler/LocalAI/pull/3263
- Bump from `8b14837` to `82a5e98` by @dependabot in https://github.com/mudler/LocalAI/pull/3274
- Update to `cfac111e2b3953cdb6b0126e67a2487687646971` by @localai-bot in https://github.com/mudler/LocalAI/pull/3315
- Update to `d65786ea540a5aef21f67cacfa6f134097727780` by @localai-bot in https://github.com/mudler/LocalAI/pull/3344
- Update to `2f3c1466ff46a2413b0e363a5005c46538186ee6` by @localai-bot in https://github.com/mudler/LocalAI/pull/3345
- Update to `fc54ef0d1c138133a01933296d50a36a1ab64735` by @localai-bot in https://github.com/mudler/LocalAI/pull/3356
- Update to `9e3c5345cd46ea718209db53464e426c3fe7a25e` by @localai-bot in https://github.com/mudler/LocalAI/pull/3357

Other Changes
New Contributors
Full Changelog: mudler/LocalAI@v2.19.4...v2.20.0
v2.20.0
Compare Source
(Release notes identical to v2.20.1 above.)
Configuration
📅 Schedule: Branch creation - At any time (no schedule defined), Automerge - At any time (no schedule defined).
🚦 Automerge: Enabled.
♻ Rebasing: Whenever PR becomes conflicted, or you tick the rebase/retry checkbox.
🔕 Ignore: Close this PR and you won't be reminded about these updates again.
This PR was generated by Mend Renovate. View the repository job log.