Add support for Sentinel and RE Discovery Service #302
+1, why not make a wrapper on top of this node_redis client? If you start working on it, please let me know, because I'd like to star that repository.
Hi, I've started working on this, the result can be checked at https://github.com/tanguylebarzic/node_redis

```js
var RedisMetaClient = require("./../node_redis/sentinel").RedisMetaClient;

var sentinels = [
    {host: "127.0.0.1", port: 26382},
    {host: "127.0.0.1", port: 26383},
    {host: "127.0.0.1", port: 26384}
];

var redisMetaClient = new RedisMetaClient("mymaster", sentinels);
var i = 0;
var client = redisMetaClient.createMasterClient();

client.on('error', function(error){
    console.log(error);
});

setInterval(function(){
    client.set('test:' + i, i, function(error, result){
        if(!error){
            console.log('success, ' + i);
            ++i;
        }
        else {
            console.log('error, ' + i);
        }
    });
}, 500);
```

Basically, you first have to create a `RedisMetaClient`, configured with the name of the master and a list of sentinels. Then, calling `createMasterClient` gives you a client connected to the current master.

What works:

How it was done:

What remains to be done:

Known issues:
tanguylebarzic, thanks for starting to tackle sentinel support. I'm confused by your approach, however: you seem to be implementing the sentinel functionality (e.g. identifying the master and slaves, handling a quorum) in your node.js code. But in the sentinel docs that is all set up in the redis-sentinel daemon's config. Node shouldn't care about who the master or slaves are for a given sentinel; it should simply connect to the sentinel, which is handling all that already. (And that SentinelClient should behave like a regular RedisClient, so it's backwards-compatible with the connect-redis session store, etc.) Can you clarify your approach? Thanks!
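For context, the division of labor described above comes down to a single query: the client connects to any sentinel, asks it for the current master's address, and opens a normal connection there. A minimal sketch with the classic node_redis v0.x API (`send_command` is the client's generic command method; the master name `mymaster` is just an example):

```js
var redis = require("redis");

// Connect to a sentinel (not to redis itself); 26379 is the
// conventional sentinel port.
var sentinel = redis.createClient(26379, "127.0.0.1");

// Ask the sentinel where the current master for "mymaster" lives.
sentinel.send_command("SENTINEL", ["get-master-addr-by-name", "mymaster"], function(err, addr) {
    if (err) throw err;
    // addr is [host, port] as strings, e.g. ["127.0.0.1", "6379"]
    var master = redis.createClient(+addr[1], addr[0]);
    master.set("hello", "world", function(err, res) {
        console.log(err || res); // "OK"
    });
});
```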
I've started implementing an alternate approach with a `SentinelClient`. Always interested in feedback, suggestions, and/or collaborators.
newleafdigital: sorry for the late reply. Indeed, I noticed it was not clean to implement this the way I did (too much logic on the client). I've changed it to better match antirez' guidelines, although there are still some questions remaining IMO. Your approach looks interesting (and adds cluster awareness!), I'm going to look at it!
@newleafdigital: I have tried your fork together with socket.io. When I set up the RedisStore:

My app crashes on startup:

Please be aware that the error message says that the connection failed to port 6379, instead of 26379!!! Any idea what could be the problem? When I connect directly to the redis-server (master) everything is fine.
Try using a single hash for the parameters:
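The snippet that followed was lost here; as a purely hypothetical illustration (these option names are made up, not the fork's confirmed API), a single-hash configuration might look like:

```js
var redis = require("redis");

// Hypothetical: all connection parameters in one options hash instead
// of positional (port, host) arguments. Note the sentinel port.
var client = redis.createClient({
    host: "127.0.0.1",
    port: 26379,           // the sentinel, not the redis master
    masterName: "mymaster"
});
```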
When you connect to the sentinel via
In that case I used the normal redis functionality (not socket.io).
Hmmm. Maybe it's not a problem with your module... When I connect to the sentinel it connects successfully, but it does not know any redis commands:

What could be the reason for this? Just to be clear: I am connecting to the sentinel, neither to the master nor to one of the slaves. Correct?

ADDENDUM: Reading the docs in more detail, I saw that this behavior is normal. But why does your client do the same? Shouldn't it behave as a "normal" redisClient?
The sentinel itself shouldn't handle get/set/etc., but the sentinel client should handle it, by delegating it to the master. Thanks
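The delegation idea in a nutshell: the sentinel client exposes normal redis commands but runs them on an inner master connection that failover logic can swap out. A minimal sketch (illustrative only, not the actual SentinelClient code):

```js
// Illustration of delegation: the wrapper exposes redis commands, but
// executes them on an inner master connection.
function SentinelBackedClient(masterClient) {
    this.master = masterClient; // a regular node_redis client
}

["get", "set", "hget", "hset", "del"].forEach(function(cmd) {
    SentinelBackedClient.prototype[cmd] = function() {
        var m = this.master;
        // Forward the call; after a failover replaces this.master,
        // subsequent calls automatically hit the new master.
        return m[cmd].apply(m, arguments);
    };
});
```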
@newleafdigital: Is it possible that you expect the sentinel to be running on the same machine as redis? I have my sentinel running on a different machine (the one on which node runs) and I doubt that this scenario works for the current implementation. I still get the error

and that leads me to the conclusion that it tries to connect to a redis instance on port 6379.
This seems to be a specific problem of using your client with socket.io. Did you ever test the two together?
@newleafdigital: Now I seem to have found out what is going on. I did not dig deeply into your code, but I can describe the behaviour quite exactly: in my current configuration (node app on machine1, redis master on machine2, redis slaves on machines 3+4, redis sentinels on machines 2+3+4) your code only works if I connect to one of the slave sentinels on machines 3+4. It does not work if I connect to the sentinel on the master (machine2). When I connect to one of the two slave sentinels the system correctly logs:

When I connect to the sentinel on the master it says (and does not work afterwards):

and no

If the sentinels run on machines other than the redis machines (i.e. on the same machine as the node app, as in my very first configuration), it likewise only works if the sentinel is connected to one of the slaves (and not the master). And together with socket.io/RedisStore it does not work at all in my configuration, even if I connect to the sentinel on one of the slaves.
+1, this is great as a separate module, as it is much more likely to change. If you'd like to have your module mentioned in the wiki, come up with a tag for it, e.g. #redis-sentinel, and then
Thanks for all the feedback here, and sorry I haven't had a chance to reply in depth. I've allocated myself time for this next week (March 4). @DTrejo, you're right about making this its own module, I think I'll do that (next week as well). |
Hi guys, I've also been having a look at this (I need to be able to get the master from a list of sentinels and reconnect to a new master if the old one goes down). I've taken a slightly different approach to @newleafdigital by just using the existing `RedisClient`.

The advantages to this method are that it is just a wrapper over node_redis, so there is no need to get down and dirty messing with that code. Also it should mean that since the client returned is a (slightly extended) `RedisClient`, existing code should work with it unchanged.

My code is here: https://github.com/ortoo/node-redis-sentinel - it's fairly limited at the moment but feels pretty extensible. Any thoughts/assertions that I'm going off on the wrong track would be appreciated.

Current functionality:
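For a flavor of the wrapper approach, usage would look roughly like this (sketched from the description above; check the module's README for the exact signatures):

```js
var sentinel = require("redis-sentinel");

// A list of sentinel endpoints; any one of them can tell us who the
// current master is.
var endpoints = [
    {host: "127.0.0.1", port: 26379},
    {host: "127.0.0.1", port: 26380}
];

// Returns a (slightly extended) RedisClient pointed at the current
// master for "mymaster"; on failover it re-resolves and reconnects.
var client = sentinel.createClient(endpoints, "mymaster");

client.set("key", "value", function(err, res) {
    console.log(err || res);
});
```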
👍 to all of the people working on sentinel libraries! I haven't had a chance to take a look yet, but I'm eager to try it out.
@jamessharp, nice work on node-redis-sentinel. I'm trying to figure out if I should adopt yours or split my SentinelClient into its own module. One thing I can't quite figure out with yours: is it meant to create a single client that transparently handles failover/reconnect? Or does the app need to instantiate 3 clients (master, slave, sentinel) and switch between the active client when it fails over? From your description above it sounds like the former, but looking at the code and trying to use it, it seems more like the latter. Thanks!
Thanks. It's meant to create a single client that transparently handles failover/reconnect. The three clients that you can have (master/slave/sentinel) are meant to give you a persistent connection to each of the different types. So if you are happy to do reads from a slave, then you can get a slave connection (which will attempt to transparently reconnect to a new slave instance if the one you're accessing goes down). If you need a permanent connection to the master (whichever server that happens to be), then you can just use the master client; and if you want a direct connection to the sentinel instance (that again should transparently failover), you can grab the sentinel client. Hope that makes sense. Of course there's still quite a lot of work that needs doing, but it works well enough for me at the moment...
Thanks for the quick reply @jamessharp. What are the big missing pieces? I wrote a test here to see how it works: https://gist.github.com/newleafdigital/5469571. Before running the script, you start up a master+slave+sentinel; the script connects to the master and every second adds an incremental key to a hash. After 5 seconds, the master is killed, and the idea is to see if anything is lost. It seems that the master client never actually connects to the new master, so I'm not sure if I'm doing it wrong or if it's not working. During a failover, is it supposed to buffer the data to avoid loss, or is I/O done during the failover supposed to be lost? Thanks
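The test described above is easy to reproduce; a sketch of the client side (assuming the wrapper API from this thread; killing the master is done by hand in another terminal):

```js
var sentinel = require("redis-sentinel");

var endpoints = [{host: "127.0.0.1", port: 26379}];
var client = sentinel.createClient(endpoints, "mymaster");

client.on("error", function(err) {
    console.log("client error:", err.message);
});

// Every second, add an incremental field to a hash. After ~5 seconds,
// kill the master by hand and watch whether any writes are lost.
var i = 0;
setInterval(function() {
    var field = "key" + i;
    client.hset("failover-test", field, String(i), function(err) {
        console.log(err ? "lost " + field : "wrote " + field);
    });
    i++;
}, 1000);
```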
A couple of thoughts as to why it may not be working:
I'm not sure whether the data is buffered. I'd hope so, but the behaviour will be the same as whatever happens when the underlying client tries to reconnect. The main missing piece is detecting when the master is changed without it going down (i.e. a manual failover). But some more work could be done on making the reconnection as snappy as possible.
I added a test with our implementation to compare. The results are in the gist. All the I/O done during the failover is lost with node-redis-sentinel, and no data is lost with our (much bulkier) solution. We're buffering in ours; the basic redisClient drains its buffer on disconnect or errors (I don't remember exactly), so it can't easily buffer through a failover. I like the simplicity of your solution, but I wonder if it's sufficient for the goals we were hoping to achieve:
For now, I'll assume our implementation adds some value, and I'll separate it into its own module. If it turns out your much simpler alternative will suffice, we'll switch to that. Thanks for enriching the space!
Yeh, your goals are spot on. However, a lot of them would actually be useful in the core `node_redis` client. Rather than putting the logic for no loss of data during failover and pub/sub persistence in a separate module, I reckon we should put it in the core client and then have a separate module for the sentinel-specific logic (along the lines of what I've done in node-redis-sentinel). It's win-win: it makes for a better node_redis client and a simpler sentinel module.
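The buffering being proposed for the core client could look something like this sketch (illustrative only, built on the events the v0.x client already emits: 'error', 'end', 'ready'); commands issued while the connection is down are queued and replayed once it is ready again:

```js
// Illustration only: queue commands while offline, flush on reconnect.
function BufferedQueue(client) {
    var queue = [];
    var online = true;

    client.on("error", function() { online = false; });
    client.on("end",   function() { online = false; });
    client.on("ready", function() {
        online = true;
        // Replay everything issued during the outage, in order.
        queue.splice(0).forEach(function(op) {
            client[op.cmd].apply(client, op.args);
        });
    });

    this.send = function(cmd /*, args..., cb */) {
        var args = Array.prototype.slice.call(arguments, 1);
        if (online) {
            client[cmd].apply(client, args);
        } else {
            queue.push({cmd: cmd, args: args});
        }
    };
}
```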
I've started work on a refactor of the core `node_redis` client. My goal is that the refactor results in code where you can drop in a 3rd-party connection manager library much the way you can currently swap in a parser, and it will expose the appropriate hooks such that something like a sentinel library could use the core client's command queueing & bookkeeping, but replace the connection management portions. I'll try to get a branch pushed soon for feedback.
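What a pluggable connection manager might look like, purely as a hypothetical interface (none of these hooks exist in node_redis; `resolveMaster` is an assumed helper wrapping SENTINEL get-master-addr-by-name as sketched earlier):

```js
var net = require("net");

// Hypothetical interface: the core client keeps command queueing and
// bookkeeping, and delegates all socket concerns to a manager.
var sentinelConnectionManager = {
    // Called whenever the core client needs a (new) connection. A
    // sentinel implementation resolves the current master first
    // instead of redialing a fixed host/port.
    connect: function(done) {
        resolveMaster("mymaster", function(err, host, port) {
            if (err) return done(err);
            done(null, net.connect(port, host));
        });
    }
};
```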
That all sounds good but pretty heavy. Our fork of node_redis already handles the reconnection steps needed for sentinel to work. I'm not sure a pluggable connection manager is necessary.
Actually, correction: the fork there does not have all the necessary pieces; they exist in another copy which I need to merge into that fork. I will do that on Monday (tomorrow).
I'm not entirely sure what you mean by heavy: the goal is to encourage a lighter codebase by making it easier to plug features in without forcing them to be a part of the core library.
I separated our sentinel client implementation into a new module, redis-sentinel-client: https://github.com/DocuSignDev/node-redis-sentinel-client

It still depends on some minor changes to node_redis (so it uses our fork); I will pare down the fork to only those necessary changes, and submit PRs for them. (@brycebaril, my original thought was that a pluggable connection manager would be redundant/overkill. I'm probably wrong about that. I'll see what changes are actually necessary to node_redis to support a sentinel client in a little bit.)
I've submitted two pull requests to support redis-sentinel-client:

- #428 - Export utils and commands to share (with sentinel client, or others). This is non-breaking/low-impact and makes node_redis better overall regardless of this particular use.
- #429 - Flexible connections for sentinel support. This is potentially more breaking/controversial, and related to the thread above about pluggable connection managers.

Thanks to @tanguylebarzic's early work on sentinel support, and to everyone who works on node_redis!
Guess we will switch to ioredis; this effort appears to have died.
So... after more than 10 years, I'm starting to work on that now... 🎉 better late than never 😆 |
Great to hear that! Could we have an optimistic ETA for that, @leibale?
@nguyenpc I'm not too sure how much time it's gonna take, but the current roadmap is: V5 should be ready "soon" (a month or so), then I'll start working on Sentinel :) |
Is it now implemented? |
It's in the v5 branch waiting for some tests and last-minute changes, then we will release a "next"/beta version with it |
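For anyone wanting to experiment once the beta lands, the sentinel API taking shape in the v5 branch looks roughly like this (sketched from the v5 work in progress; names and details may change before release):

```js
import { createSentinel } from 'redis';

const sentinel = createSentinel({
  name: 'mymaster', // the master set, as configured in sentinel.conf
  sentinelRootNodes: [
    { host: '127.0.0.1', port: 26379 }
  ]
});

sentinel.on('error', (err) => console.error('sentinel error', err));

await sentinel.connect();
await sentinel.set('key', 'value');
console.log(await sentinel.get('key'));
await sentinel.close();
```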
Hope this comes soon to the main. :) |
Hi @leibale! I'd like to know if there are any plans to release V5 soon. I've noticed that the working branch is quite active, but I was wondering if there's a target date for releasing a stable or beta version that includes the Sentinel package? :)
Hi again, my team has been closely following the development of V5 and is very interested in using the new features you are implementing. Thank you in advance.
@gianDiazM first, sorry for the huge delay, I completely missed your message from 2 weeks ago.
If you guys want to help with one of those, do performance tests, or just play around with the client, that would be very helpful. |
Hi @leibale, no worries and thank you for getting back to me. We will try to address the points you mentioned. And regarding "Can you try it directly from git or do you need an 'npm version'?": a Redis npm version that includes the client/sentinel package to integrate into our development would be great. Thanks =)
@leibale we are in a very similar situation as @gianDiazM. What exactly does "finishing" sentinel mean? And we kinda need a new tag.
Hey, is there any workaround? How can I use my sentinel configuration? Is redis-sentinel a good way?
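As a stopgap until official support ships, ioredis (mentioned earlier in the thread) supports sentinel out of the box; a minimal example:

```js
const Redis = require('ioredis');

// ioredis resolves the current master for "mymaster" through the
// given sentinels and transparently reconnects on failover.
const redis = new Redis({
  sentinels: [
    { host: '127.0.0.1', port: 26379 },
    { host: '127.0.0.1', port: 26380 }
  ],
  name: 'mymaster'
});

redis.set('key', 'value').then(() => redis.get('key')).then(console.log);
```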
Hi,

Are there plans to make node_redis sentinel-aware? I'm thinking about being able to use a list of sentinels at startup, then discovering and connecting to the masters, and subscribing to sentinel messages to take master changes into account. If not, I would be happy to start working on it!

Tanguy