This project is a bit outdated and isn’t working right now. We’ll update it, but we’re tied up with another project at the moment. In the meantime, you’re welcome to use our hosted models for free.
Welcome to the ChatGPT API Free Reverse Proxy, offering free self-hosted API access to ChatGPT (`gpt-3.5-turbo`) with OpenAI's familiar request structure, so no code changes are needed.
- Join our Discord Community for support and questions.
- ⚡Note: Your Discord account must be at least 7 days old to be able to join our Discord community.
- Features
- Option 1: Installing/Self-Hosting Guide (without using any API key)
  - Method 1: Using Docker, or run it with a chat web UI using docker-compose
  - Method 2: Your PC/Server (manually)
  - Method 3: Termux on Android phones
- Option 2: Accessing Our Hosted API (Free)
- Usage Examples
- License
- Streaming Responses: The API supports streaming responses, so you can start reading the reply as soon as tokens are available (see the streaming sketch after this list).
- API Endpoint Compatibility: Full alignment with official OpenAI API endpoints, ensuring hassle-free integration with existing OpenAI libraries.
- Complimentary Access: No charges for API usage, making advanced AI accessible to everyone even without an API key.
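As a quick illustration of the streaming feature, here is a minimal sketch using the official `openai` Python client, assuming a proxy is already reachable at http://localhost:3040/v1 (see the self-hosting options below). The prompt and the placeholder key are illustrative only; the proxy accepts any key value, as in the usage examples further down.

```python
# Minimal streaming sketch, assuming a proxy is listening on http://localhost:3040/v1.
from openai import OpenAI

# The self-hosted proxy accepts any placeholder key; only the base URL matters.
client = OpenAI(api_key="anything", base_url="http://localhost:3040/v1")

stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Write a haiku about reverse proxies."}],
    stream=True,  # request incremental chunks instead of a single final response
)

# Each chunk carries a small delta of the reply; print the text as it arrives.
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```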
- Ensure Docker is installed by referring to the Docker Installation Docs.
- Run the following command:
docker run -dp 3040:3040 pawanosman/chatgpt:latest
- Done! You can now connect to your local server's API at http://localhost:3040/v1/chat/completions. Note that the base URL is http://localhost:3040/v1 (a quick smoke test follows these steps).
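To confirm the container is responding, you can send a single request to the documented endpoint. This is a minimal sketch using the `requests` library (any HTTP client would do); the prompt is arbitrary.

```python
# Smoke test for the local proxy started with the docker command above.
import requests

resp = requests.post(
    "http://localhost:3040/v1/chat/completions",
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{"role": "user", "content": "Say hello in one word."}],
    },
    timeout=60,
)
resp.raise_for_status()  # fail loudly if the proxy returned an error status
print(resp.json()["choices"][0]["message"]["content"])
```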
✅ You can run third-party chat web interfaces, such as BetterChatGPT and LobeChat, with this API using Docker Compose. Click here for the installation guide.
To install and run the ChatGPT API Reverse Proxy on your PC/server, follow these steps:
Note: This option is not available in all countries yet. If your country is not supported, you can use a U.S. VPN or our hosted API.
- Ensure Node.js (v19+) is installed: Download Node.js
- Clone this repository:
git clone https://github.com/PawanOsman/ChatGPT.git
- Open `start.bat` (Windows) or run `bash start.sh` (Linux) to install dependencies and launch the server.
- Done! You can now connect to your local server's API at http://localhost:3040/v1/chat/completions. Note that the base URL is http://localhost:3040/v1 (a model-listing sketch follows this list).
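Once the server is running, any OpenAI-compatible client can be pointed at it. As a hedged sketch, assuming the proxy also mirrors OpenAI's model-listing endpoint (only /v1/chat/completions is documented above, so treat that as an assumption), you could enumerate the models the local server exposes:

```python
# Hedged sketch: list models from the local proxy, assuming it mirrors
# OpenAI's GET /v1/models endpoint in addition to /v1/chat/completions.
from openai import OpenAI

client = OpenAI(api_key="anything", base_url="http://localhost:3040/v1")

for model in client.models.list():
    print(model.id)
```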
To install and run the ChatGPT API Reverse Proxy on Android using Termux, follow these steps:
- Install Termux from the Play Store.
- Update Termux packages: `apt update`
- Upgrade Termux packages: `apt upgrade`
- Install git, Node.js, and npm: `apt install -y git nodejs`
- Clone the repository: `git clone https://github.com/PawanOsman/ChatGPT.git`
- Navigate to the cloned directory: `cd ChatGPT`
- Start the server: `bash start.sh`
- Your local server will now be running and accessible at http://localhost:3040/v1/chat/completions. Note that the base URL is http://localhost:3040/v1.
You can now use this address to connect to your self-hosted ChatGPT API Reverse Proxy from Android apps or websites on the same device that support a custom (reverse proxy) API base URL.
Utilize our pre-hosted ChatGPT-like API for free by:
- Joining our Discord server.
- Obtaining an API key from the `#Bot` channel with the `/key` command.
- Incorporating the API key into your requests to https://api.pawan.krd/v1/chat/completions (see the sketch after this list).
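For example, here is a minimal sketch that points the `openai` Python client at the hosted service; the key is a placeholder for whatever `/key` returns, and the base URL is assumed from the endpoint above.

```python
# Hosted-API sketch: same client code as the usage examples below,
# with the hosted base URL (assumed from the endpoint above) and your /key value.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR-KEY-FROM-/key",         # placeholder: paste the key from the #Bot channel
    base_url="https://api.pawan.krd/v1",  # assumed base URL for the hosted endpoint
)

reply = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello from the hosted proxy!"}],
)
print(reply.choices[0].message.content)
```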
You can use the same integration code as with OpenAI's official libraries by simply adjusting the API key and base URL in your requests. For self-hosted setups, be sure to switch the base URL to your local server's address as shown above.
Python example:

```python
import openai

# Any placeholder key works for a self-hosted proxy; only the base URL matters.
openai.api_key = 'anything'
openai.base_url = "http://localhost:3040/v1/"

completion = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "user", "content": "How do I list all files in a directory using Python?"},
    ],
)
print(completion.choices[0].message.content)
```
Node.js example:

```javascript
import OpenAI from 'openai';

// Any placeholder key works for a self-hosted proxy; only the base URL matters.
const openai = new OpenAI({
    apiKey: "anything",
    baseURL: "http://localhost:3040/v1",
});

const chatCompletion = await openai.chat.completions.create({
    messages: [{ role: 'user', content: 'Say this is a test' }],
    model: 'gpt-3.5-turbo',
});
console.log(chatCompletion.choices[0].message.content);
```
This project is under the AGPL-3.0 License. Refer to the LICENSE file for detailed information.