BionicGPT is an on-premise replacement for ChatGPT, offering the advantages of Generative AI while maintaining strict data confidentiality.
BionicGPT can run on your laptop or scale into the data center.
Try our Docker Compose installation. Ideal for running AI locally and for small pilots.
- Intuitive Interface: Our chat interface is inspired by ChatGPT to ensure a user-friendly experience.
- Theme Customization: Bionic's theme is fully customizable, allowing you to brand Bionic as you like.
- Ultra-Fast UI: Enjoy fast, responsive performance from our Rust-based UI.
- Chat History: Effortlessly access and manage your conversation history.
- AI Assistants: Users can create assistants that work with their own data to enhance the AI.
- Share Assistants with Team Members: Create and share assistants seamlessly between users, enhancing collaboration and communication.
- RAG Pipelines: Assistants are full-scale, enterprise-ready RAG pipelines that can be launched in minutes.
- Any Document: 80% of enterprise data exists in difficult-to-use formats like HTML, PDF, CSV, PNG, PPTX and more. We support all of them.
- No Code: Configure the embeddings engine and chunking algorithms entirely through our UI.
- System Prompts: Configure system prompts to get the LLM to reply the way you want.
- Teams: Your company is made up of teams of people, and Bionic utilises this structure for maximum effect.
- Invite Team Members: Teams can self-manage in a controlled environment.
- Manage Teams: Control who has access to Bionic with your SSO system.
- Virtual Teams: Create teams within teams.
- Switch Teams: Switch between teams whilst still keeping data isolated.
- RBAC: Use your SSO system to configure which features users have access to.
- SAST: Our CI/CD pipeline runs Static Application Security Testing so we can identify risks before the code is built.
- Authorization with RLS: We use Row Level Security in Postgres as another check to ensure data is not leaked to unauthorized users.
- CSP: Our Content Security Policy is set to the strictest level and stops a wide range of security threats.
- Minimal Containers: We build containers from scratch whenever possible to limit supply-chain attacks.
- Non-Root Containers: We run containers as a non-root user to limit lateral movement during an attack.
- Audit Trail: See who did what and when.
- Postgres Roles: Our Postgres connections run with the minimum level of permissions.
- SIEM Integration: Integrate with your SIEM system for threat detection and investigation.
- Resistant to Timing Attacks (API keys): Coming soon.
- SSO: We didn't build our own authentication; we use industry-leading, secure open-source IAM systems.
- Secrets Management: Our Kubernetes operator creates secrets using secure algorithms at deployment time.
- Observability API: Compatible with Prometheus for measuring load and usage.
- Dashboards: Create dashboards with Grafana for an overview of your whole system.
- Monitor Chats: All questions and responses are recorded and available in the Postgres database.
- Fairly Share Resources: Without token limits, it's easy for your models to become overloaded.
- Reverse Proxy: All models sit behind our reverse proxy, which lets you set limits and ensure fair usage across your users.
- Role Based: Apply token usage limits based on a user's role from your IAM system.
- Assistants API: Any assistant you create can easily be turned into an OpenAI-compatible API.
- Key Management: Users can create API keys for assistants they have access to.
- Throttling Limits: All API keys follow the user's throttling limits, ensuring fair access to the models.
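As a sketch of what calling an assistant through an OpenAI-compatible endpoint looks like, the snippet below builds a chat-completion request. The base URL, API key and model name are placeholders for your own deployment, not real values; only the payload shape, which follows the OpenAI chat API, is assumed.

```python
import json

# Placeholders — substitute your Bionic deployment's URL and an API key
# generated in the Bionic UI for an assistant you have access to.
BASE_URL = "https://bionic.example.com/v1"
API_KEY = "your-assistant-api-key"

def chat_completion_request(prompt: str, model: str = "my-assistant") -> dict:
    """Build the URL, headers and JSON body of an OpenAI-style chat request."""
    return {
        "url": f"{BASE_URL}/chat/completions",
        "headers": {
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = chat_completion_request("Summarise last week's support tickets")
```

Because the endpoint is OpenAI-compatible, the same request can be sent with any HTTP client or existing OpenAI SDK pointed at your Bionic base URL.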
- Batch Guardrails: Apply rules to documents uploaded by our batch data pipeline.
- Streaming Guardrails: LLMs deliver results as streams; we can apply rules in real time as the stream goes by.
- Prompt Injection: We guard against prompt injection attacks, as well as many other threats.
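A streaming guardrail can be pictured as a filter sitting over the token stream. This is a minimal sketch, not Bionic's implementation, and the blocklist rule is invented for illustration:

```python
from typing import Iterable, Iterator

# Hypothetical rule: redact a banned term from streamed output.
BLOCKLIST = {"secret-project"}

def guard_stream(chunks: Iterable[str]) -> Iterator[str]:
    """Apply a redaction rule to each chunk as it streams past.

    A production guardrail would buffer enough context to catch terms
    split across chunk boundaries; this sketch checks whole chunks only.
    """
    for chunk in chunks:
        yield "[redacted]" if chunk.lower() in BLOCKLIST else chunk

out = list(guard_stream(["The", "secret-project", "launch", "date"]))
```

Because the rule runs per chunk, the user still sees tokens as soon as they arrive; only offending pieces are rewritten on the way through.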
- Open Source Models: Full support for open source models running locally or in your data center.
- Multiple Model Support: Install and manage as many models as you want.
- Easy Switch: Seamlessly switch between different chat models for diverse interactions.
- Many-Model Conversations: Engage with several models simultaneously, harnessing their unique strengths for optimal responses.
- Configurable UI: Grant or withhold access to features based on the roles you assign users in your IAM system.
- With Limits: Apply token usage limits based on a user's role.
- Fully Secured: Rules are applied on our server and, for defence in depth, enforced once more with Postgres RLS.
- 100s of Sources: With our Airbyte integration you can batch-upload data from sources such as SharePoint, NFS, FTP, Kafka and more.
- Batching: Run uploads once a day or every hour; set it up the way you want.
- Real Time: Capture data in real time to ensure your models are always using the latest data.
- Manual Upload: Users can manually upload data, so RAG pipelines can be set up in minutes.
- Datasets: Data is stored in datasets, and our security model ensures data can't leak between users or teams.
- OCR: We can process documents using OCR to unlock even more data.
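To illustrate the chunking step that these pipelines perform on uploaded documents, here is a minimal sliding-window chunker. The size and overlap values are invented for the example, not Bionic defaults; in Bionic they are configured through the UI.

```python
def chunk_text(text: str, size: int = 200, overlap: int = 40) -> list[str]:
    """Split a document into fixed-size character chunks with overlap.

    Overlapping windows are one common strategy for RAG ingestion: the
    overlap keeps sentences near a boundary visible in both chunks.
    """
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break
    return chunks

chunks = chunk_text("a" * 500, size=200, overlap=40)
```

Each chunk is then embedded and stored in the dataset, so retrieval can pull back just the passages relevant to a question.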
- Effortless Setup: Install seamlessly using Kubernetes (k3s, Docker Desktop or the cloud) for a hassle-free experience.
- Continuous Updates: We are committed to improving Bionic with regular updates and new features.
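The per-role token limits described in the resource-sharing features above can be pictured as a simple budget check before each request is forwarded to a model. The roles and numbers below are hypothetical; Bionic reads real roles from your IAM system.

```python
from dataclasses import dataclass

# Hypothetical per-role token budgets, invented for illustration.
ROLE_LIMITS = {"admin": 1_000_000, "analyst": 100_000, "guest": 10_000}

@dataclass
class TokenBudget:
    role: str
    used: int = 0

    def try_spend(self, tokens: int) -> bool:
        """Record usage if it fits within the role's limit, else refuse."""
        limit = ROLE_LIMITS.get(self.role, 0)
        if self.used + tokens > limit:
            return False
        self.used += tokens
        return True

budget = TokenBudget("guest")
ok = budget.try_spend(9_000)            # fits within the 10k guest budget
blocked = not budget.try_spend(2_000)   # would exceed it, so it is refused
```

Refused requests can be rejected outright or queued, keeping any single user or role from starving the models for everyone else.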
Follow our guide to running Bionic-GPT in production.
For companies that need stronger security, user management and professional support, we offer:
- Help with integrations
- Feature Prioritization
- Custom Integrations
- LTS (Long Term Support) Versions
- Professional Support
- Custom SLAs
- Secure access with Single Sign-On
- Continuous Batching
- Data Pipelines
BionicGPT is optimized to run on Kubernetes and provide Generative AI services for potentially thousands of users.