This is an open-source version of apps like Anthropic's Claude Artifacts, Vercel v0, or GPT Engineer.
Powered by the E2B SDK.
- Based on Next.js 14 (App Router, Server Actions), shadcn/ui, TailwindCSS, and the Vercel AI SDK.
- Uses the E2B SDK to securely execute code generated by AI (see the sketch after this list).
- Streaming in the UI.
- Can install and use any package from npm and pip.
- Supported stacks (add your own):
  - 🔸 Python interpreter
  - 🔸 Next.js
  - 🔸 Vue.js
  - 🔸 Streamlit
  - 🔸 Gradio
- Supported LLM Providers (add your own):
  - 🔸 OpenAI
  - 🔸 Anthropic
  - 🔸 Google AI
  - 🔸 Mistral
  - 🔸 Groq
  - 🔸 Fireworks
  - 🔸 Together AI
  - 🔸 Ollama
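Under the hood, generated code runs inside an E2B sandbox. As a rough sketch of what that looks like with the E2B SDK (assuming the `@e2b/code-interpreter` package; this is not the app's actual call site):

```ts
import { Sandbox } from '@e2b/code-interpreter'

// Spin up an isolated cloud sandbox, execute a snippet of
// AI-generated code inside it, then tear the sandbox down.
const sandbox = await Sandbox.create()
const execution = await sandbox.runCode('print(1 + 1)')
console.log(execution.logs.stdout) // [ '2\n' ]
await sandbox.kill()
```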
Make sure to give us a star!
To run the app locally, you'll need:

- git
- A recent version of Node.js and the npm package manager
- An E2B API key
- An LLM provider API key
Clone the repository in your terminal:

```
git clone https://github.com/e2b-dev/fragments.git
```
Enter the repository:

```
cd fragments
```
Run the following to install the required dependencies:

```
npm i
```
Create a `.env.local` file and set the following:
```sh
# Get your API key here - https://e2b.dev/
E2B_API_KEY="your-e2b-api-key"

# OpenAI API Key
OPENAI_API_KEY=

# Other providers
ANTHROPIC_API_KEY=
GROQ_API_KEY=
FIREWORKS_API_KEY=
TOGETHER_API_KEY=
GOOGLE_AI_API_KEY=
GOOGLE_VERTEX_CREDENTIALS=
MISTRAL_API_KEY=
XAI_API_KEY=

### Optional env vars

# Domain of the site
NEXT_PUBLIC_SITE_URL=

# Disabling API key and base URL input in the chat
NEXT_PUBLIC_NO_API_KEY_INPUT=
NEXT_PUBLIC_NO_BASE_URL_INPUT=

# Rate limit
RATE_LIMIT_MAX_REQUESTS=
RATE_LIMIT_WINDOW=

# Vercel/Upstash KV (short URLs, rate limiting)
KV_REST_API_URL=
KV_REST_API_TOKEN=

# Supabase (auth)
SUPABASE_URL=
SUPABASE_ANON_KEY=

# PostHog (analytics)
NEXT_PUBLIC_POSTHOG_KEY=
NEXT_PUBLIC_POSTHOG_HOST=
```
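The keys above `### Optional env vars` cover the core flow (E2B plus whichever LLM providers you use). If you want the app to fail fast on missing configuration, a minimal hypothetical check (not part of the repo) could look like this:

```ts
// Hypothetical startup check, not part of the repo: fail fast
// if the E2B key that powers code execution is missing.
if (!process.env.E2B_API_KEY) {
  throw new Error('Missing required environment variable: E2B_API_KEY')
}
```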
Start the development server:

```
npm run dev
```
Build the web app:

```
npm run build
```
To add a custom sandbox template:

- Make sure the E2B CLI is installed and you're logged in.
- Add a new folder under `sandbox-templates/`.
- Initialize a new template using the E2B CLI:

  ```
  e2b template init
  ```

  This will create a new file called `e2b.Dockerfile`.
- Adjust the `e2b.Dockerfile`. Here's an example Streamlit template:
  ```Dockerfile
  # You can use most Debian-based base images
  FROM python:3.12-slim

  RUN pip3 install --no-cache-dir streamlit pandas numpy matplotlib requests seaborn plotly

  # Copy the code to the container
  WORKDIR /home/user
  COPY . /home/user
  ```
- Specify a custom start command in `e2b.toml`:

  ```toml
  start_cmd = "cd /home/user && streamlit run app.py"
  ```
- Deploy the template with the E2B CLI:

  ```
  e2b template build --name <template-name>
  ```
  After the build has finished, you should get the following message:

  ```
  ✅ Building sandbox template <template-id> <template-name> finished.
  ```
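  To smoke-test the template outside the app, you can spawn a sandbox from it directly. A minimal sketch, assuming the `e2b` npm package and your own template name:

  ```ts
  import { Sandbox } from 'e2b'

  // Boot a sandbox from the freshly built template, then shut it down.
  const sandbox = await Sandbox.create('<template-name>')
  console.log('Sandbox started:', sandbox.sandboxId)
  await sandbox.kill()
  ```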
- Open `lib/templates.json` in your code editor and add your new template to the list. Here's an example for Streamlit:

  ```json
  "streamlit-developer": {
    "name": "Streamlit developer",
    "lib": [
      "streamlit",
      "pandas",
      "numpy",
      "matplotlib",
      "requests",
      "seaborn",
      "plotly"
    ],
    "file": "app.py",
    "instructions": "A streamlit app that reloads automatically.",
    "port": 8501 // can be null
  }
  ```
  Provide a template id (as the key), a name, a list of dependencies, an entrypoint file, and an optional port. You can also add extra instructions that will be given to the LLM.
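  For reference, each entry has roughly this shape, shown as an illustrative TypeScript type (the type name and comments are ours, inferred from the example above):

  ```ts
  // Illustrative only; the repo stores this data as JSON, not as a TS type.
  type TemplateConfig = {
    name: string          // display name shown in the UI
    lib: string[]         // packages preinstalled in the sandbox template
    file: string          // entrypoint file the generated code is written to
    instructions: string  // extra guidance passed to the LLM
    port: number | null   // port the app listens on, or null if none
  }
  ```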
- Optionally, add a new logo under `public/thirdparty/templates`.
To add a custom LLM model:

- Open `lib/models.json` in your code editor.
- Add a new entry to the models list:
{ "id": "mistral-large", "name": "Mistral Large", "provider": "Ollama", "providerId": "ollama" }
  Here, `id` is the model id, `name` is the model name (visible in the UI), `provider` is the provider name, and `providerId` is the provider tag (see adding providers below).
To add a custom LLM provider:

- Open `lib/models.ts` in your code editor.
- Add a new entry to the `providerConfigs` list. Example for Fireworks:
  ```ts
  fireworks: () =>
    createOpenAI({
      apiKey: apiKey || process.env.FIREWORKS_API_KEY,
      baseURL: baseURL || 'https://api.fireworks.ai/inference/v1',
    })(modelNameString),
  ```
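  Once registered, the model is consumed through the Vercel AI SDK. A minimal standalone sketch (assuming the `ai` and `@ai-sdk/openai` packages; the model id and prompt are placeholders of our own):

  ```ts
  import { streamText } from 'ai'
  import { createOpenAI } from '@ai-sdk/openai'

  // Build an OpenAI-compatible client pointed at Fireworks.
  const fireworks = createOpenAI({
    apiKey: process.env.FIREWORKS_API_KEY,
    baseURL: 'https://api.fireworks.ai/inference/v1',
  })

  // Stream a completion from the configured provider.
  const result = await streamText({
    model: fireworks('accounts/fireworks/models/llama-v3p1-70b-instruct'),
    prompt: 'Write a Streamlit app that plots a sine wave.',
  })

  for await (const chunk of result.textStream) {
    process.stdout.write(chunk)
  }
  ```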
- Optionally, adjust the default structured output mode in the `getDefaultMode` function:

  ```ts
  if (providerId === 'fireworks') {
    return 'json'
  }
  ```
- Optionally, add a new logo under `public/thirdparty/logos`.
Fragments is an open-source project, and we welcome contributions from the community. If you run into a bug or want to add an improvement, please feel free to open an issue or a pull request.