Add support for multiple back-ends? #5
I've been working on a new project using Crate. We were attracted to Crate primarily for the distributed features it offers with relatively little hassle. However, this driver is currently set up to communicate with a single instance, and if that instance is down there doesn't appear to be any way to fall back to the others in the cluster.

Would you be open to having the ability to handle multiple instances added? I'm willing to put in some of the work myself.

This could work either by having a number of URLs specified in the DSN or by querying the sys.nodes table once connected to a node, or preferably both, so that the user need not specify every node up front but the application is also not stranded should the particular node it is set up to communicate with be down.

P.S.: this is of course a prelude to other functionality, such as distributing queries among instances.
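A minimal sketch of the two discovery paths proposed above, assuming a DSN that simply lists several HTTP endpoints and CrateDB's HTTP /_sql endpoint for querying sys.nodes. Python, the DSN shape, and the helper names are illustrative only, not this driver's actual language or API:

```python
# Illustrative sketch only: parse a DSN that lists several CrateDB HTTP
# endpoints and, once one of them answers, query sys.nodes for the rest of
# the cluster. The DSN shape and the selected columns are assumptions.
import json
import urllib.request


def servers_from_dsn(dsn: str) -> list[str]:
    """Split a DSN like 'crate://host1:4200,host2:4200' into base URLs."""
    hosts = dsn.split("://", 1)[1].rstrip("/")
    return [f"http://{host}" for host in hosts.split(",")]


def discover_nodes(server: str) -> list[str]:
    """Ask one reachable node for every node's HTTP address via sys.nodes."""
    payload = json.dumps({"stmt": "SELECT hostname, port['http'] FROM sys.nodes"})
    request = urllib.request.Request(
        server + "/_sql",
        data=payload.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request, timeout=5) as response:
        rows = json.load(response)["rows"]
    return [f"http://{hostname}:{http_port}" for hostname, http_port in rows]


if __name__ == "__main__":
    seeds = servers_from_dsn("crate://10.0.0.1:4200,10.0.0.2:4200")
    print(discover_nodes(seeds[0]))
```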
Comments

Yes, this is a feature we must support. I agree on receiving the cluster via the DSN, although I see more utility in the "automatic cluster" feature (by querying sys.nodes). One of the suggested approaches to achieve load balancing in Crate is to set up a load balancer in front of the cluster, which you could deploy yourself. I'm not 100% sure what the best approach would be if we decide to manage the cluster state and handle availability and distribution ourselves.

I think you might need both the DSN and the sys.nodes approach.

I think, as a start, simple fail-over support is the easiest thing to do. Basically, allow supplying a number of server URLs and then use the next one in a cyclic list when the current one isn't available. We wouldn't necessarily have to do fancy things like load balancing as step one.
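A minimal sketch of that cyclic fail-over idea, again in Python with hypothetical names (ServerPool, direct HTTP calls to /_sql) rather than this driver's real API:

```python
# Sketch of fail-over across a cyclic list of server URLs: try the current
# server and rotate to the next one whenever a request fails, raising only
# after every configured server has been tried once.
import itertools
import json
import urllib.error
import urllib.request


class ServerPool:
    def __init__(self, servers: list[str]):
        self._size = len(servers)
        self._cycle = itertools.cycle(servers)
        self._current = next(self._cycle)

    def execute(self, stmt: str) -> dict:
        """Run one SQL statement, failing over through the cyclic server list."""
        last_error = None
        for _ in range(self._size):
            try:
                return self._post(self._current, stmt)
            except (urllib.error.URLError, OSError) as exc:
                last_error = exc
                self._current = next(self._cycle)  # move on to the next server
        raise ConnectionError("no configured server is reachable") from last_error

    @staticmethod
    def _post(server: str, stmt: str) -> dict:
        request = urllib.request.Request(
            server + "/_sql",
            data=json.dumps({"stmt": stmt}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(request, timeout=5) as response:
            return json.load(response)


pool = ServerPool(["http://10.0.0.1:4200", "http://10.0.0.2:4200"])
# result = pool.execute("SELECT name FROM sys.cluster")
```

A real implementation would want to distinguish SQL errors from connectivity failures before rotating, and load balancing or query distribution could later be layered on top of the same pool.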