Export all error. I seem to be hitting a rate limit #88
Comments
I get what I believe to be the same issue when attempting to export all my ChatGPT conversations (I have hundreds). Numerous 429 Too Many Requests errors occur, one per conversation, for most of the conversations. From looking at the errors in my browser console, it seems that ChatGPT Exporter makes only one attempt to retrieve each conversation, and that it does not introduce any delay when it receives HTTP 429 errors. If I am correct in thinking this, then retrying with exponential backoff could solve the problem and facilitate exporting arbitrarily many conversations successfully. OpenAI suggests this approach when using their other APIs (though those code examples are Python-only). |
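The exponential-backoff approach suggested above could be sketched as follows. This is only an illustration, not the exporter's actual code; `doFetch` is a hypothetical stand-in for whatever function performs the per-conversation request.

```javascript
// Sketch: retry a request with exponential backoff when it returns HTTP 429.
// `doFetch` is a placeholder for the exporter's per-conversation fetch.
async function fetchWithBackoff(doFetch, maxRetries = 5, baseDelayMs = 1000) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const res = await doFetch();
    if (res.status !== 429) return res;
    if (attempt === maxRetries) {
      throw new Error("rate limited: retries exhausted");
    }
    // Double the wait each time (1s, 2s, 4s, ...) and add jitter so
    // parallel requests do not all retry at the same instant.
    const delay = baseDelayMs * 2 ** attempt * (1 + Math.random());
    await new Promise((resolve) => setTimeout(resolve, delay));
  }
}
```

Combined with a cap on concurrency, this would let the export survive temporary 429 responses instead of failing each conversation on the first attempt.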
It tries to asynchronously retrieve all the conversations at the same time.
|
This works for me, but I had to increase the sleep value.
But then I encountered a new error.
code at line 15540:
|
I didn't see this until now: "I did this for json export, for the others you only need to push conversation." I'm exporting to HTML |
yeah just change it from:
to
For markdown and HTML, the sleep value required probably depends on how many conversations you request within a minute. |
Yeah, it's working now. Thanks so much. I think a progress bar would be nice too. We're okay seeing it in the console logs, but a UI for it would be better. |
I fully agree; this is just a stopgap solution I quickly made to get it working for myself |
Thanks for all the information provided. I will check OpenAI's documentation and improve the rate-limit handling. |
And yes, maybe a progress bar. The reason it has not been implemented is... it's just too fast when I do small-batch testing 😆
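Until a real progress bar lands, even simple progress reporting would help. A minimal sketch, with an illustrative helper name (not the exporter's actual API):

```javascript
// Sketch: format a one-line progress message for a batch export.
// `formatProgress` is a hypothetical helper, not part of ChatGPT Exporter.
function formatProgress(done, total) {
  const percent = Math.round((done / total) * 100);
  return `Exported ${done}/${total} conversations (${percent}%)`;
}
```

Logging this from the export loop (e.g. `console.log(formatProgress(done, total))`) would at least make long exports observable in the console.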
I am not able to export all conversations due to rate limiting. Looking at the network calls, it seems the exporter tries to download everything in parallel, so I modified the code a bit to introduce an artificial delay, and I was able to download all 326 of my conversations:
Well, it's not fast. |
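The workaround described above — fetching conversations one at a time with a fixed pause instead of all at once — could look roughly like this. `ids` and `fetchConversation` are placeholders, not the exporter's real identifiers, and 1500 ms is the sleep value mentioned below.

```javascript
// Sketch: download conversations sequentially with a fixed delay between
// requests, instead of firing all requests in parallel.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function fetchAllSequentially(ids, fetchConversation, delayMs = 1500) {
  const results = [];
  for (const id of ids) {
    results.push(await fetchConversation(id));
    await sleep(delayMs); // throttle to stay under the rate limit
  }
  return results;
}
```

This trades speed for reliability: with hundreds of conversations and a 1500 ms delay the export takes several minutes, which matches the "it's not fast" observation.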
1500ms works for you to download all of them? |
Thank you so much. This is a game changer! :) :) :) |
When I try to export all, many of the XHR requests fail. The response is