Quick ZooKeeper Guide


Installing ZooKeeper

This guide is a quick overview of how to install ZooKeeper locally so you can give redis_failover a try. To set up a three-node ZooKeeper cluster on a single machine, perform the following steps:

  • Download the latest stable ZooKeeper release from the Apache ZooKeeper downloads page
  • Untar the distribution into three separate directories (see the sketch after this list), e.g.:

/Users/ryan/zk/zk1

/Users/ryan/zk/zk2

/Users/ryan/zk/zk3
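
A minimal sketch of the download-and-untar step, assuming the tarball is named zookeeper-3.4.3.tar.gz and sits in /Users/ryan/zk (substitute the actual stable release you downloaded):

# Unpack the same tarball into three sibling directories: zk1, zk2 and zk3.
cd /Users/ryan/zk
for n in 1 2 3; do
  mkdir -p "zk$n"
  tar -xzf zookeeper-3.4.3.tar.gz -C "zk$n" --strip-components=1
done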

ZooKeeper Cluster Configuration

  • Create a zoo.cfg file under the conf directory of each of the above directories. The three files are identical except for the dataDir and clientPort options, which are specific to each ZooKeeper server:
# Location: /Users/ryan/zk/zk1/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/Users/ryan/zk/zk1/data
clientPort=2181
server.1=localhost:2888:3888
server.2=localhost:2889:3889
server.3=localhost:2890:3890

# Location: /Users/ryan/zk/zk2/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/Users/ryan/zk/zk2/data
clientPort=2182
server.1=localhost:2888:3888
server.2=localhost:2889:3889
server.3=localhost:2890:3890

# Location: /Users/ryan/zk/zk3/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/Users/ryan/zk/zk3/data
clientPort=2183
server.1=localhost:2888:3888
server.2=localhost:2889:3889
server.3=localhost:2890:3890
  • Next, create a data directory inside each /Users/ryan/zk/zk* directory.
  • In each data directory, create a file named myid containing the number of that ZK server: /Users/ryan/zk/zk1/data/myid contains the single character 1, /Users/ryan/zk/zk2/data/myid contains 2, and /Users/ryan/zk/zk3/data/myid contains 3 (see the sketch after this list).
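
The data directory and myid steps above can be scripted; a minimal sketch using the paths from this guide:

# Create each node's data directory and write its myid file (1, 2 or 3).
for n in 1 2 3; do
  mkdir -p "/Users/ryan/zk/zk$n/data"
  echo "$n" > "/Users/ryan/zk/zk$n/data/myid"
done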

Launching the ZooKeeper Cluster

Once the above steps have been performed, go into each zk* directory and run the following:

bin/zkServer.sh start-foreground

You should see the ZooKeeper server starting in the foreground. Do this for zk1, zk2, and zk3.
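
To sanity-check that the ensemble has formed, you can ask each running server for its status from another terminal (the status subcommand is part of the stock zkServer.sh script):

# One node should report "Mode: leader" and the other two "Mode: follower".
cd /Users/ryan/zk/zk1 && bin/zkServer.sh status
cd /Users/ryan/zk/zk2 && bin/zkServer.sh status
cd /Users/ryan/zk/zk3 && bin/zkServer.sh status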

Launch the Node Manager

Now that the ZooKeeper cluster is up and running, you can start your redis_failover Node Manager. Let's assume you have a Redis master running on port 6379 and a Redis slave running on port 6380:

redis_node_manager -n localhost:6379,localhost:6380 -z localhost:2181,localhost:2182,localhost:2183

If all is well, you should see output similar to the following:

2012-04-17 03:27:34 UTC RedisFailover 98636 INFO: Communicating with zookeeper servers localhost:2181,localhost:2182,localhost:2183
2012-04-17 03:27:34 UTC RedisFailover 98636 INFO: Managing master (localhost:6379) and slaves (localhost:6380)
2012-04-17 03:27:34 UTC RedisFailover 98636 INFO: Redis Node Manager successfully started.
2012-04-17 03:27:34 UTC RedisFailover 98636 INFO: Created zookeeper node /redis_failover_nodes
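
You can also verify that the znode mentioned in the log output exists by using the ZooKeeper CLI that ships with the distribution. A minimal check, run from any of the zk* directories:

# List the root of the ZooKeeper namespace; /redis_failover_nodes should appear.
cd /Users/ryan/zk/zk1
bin/zkCli.sh -server localhost:2181 ls /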

Connect the redis_failover client

To test your redis_failover client, open up an irb/pry console and run the following:

client = RedisFailover::Client.new(:zkservers => 'localhost:2181,localhost:2182,localhost:2183')

If the client is created successfully, you should see output similar to the following:

2012-04-17 03:04:39 UTC RedisFailover 98305 INFO: Communicating with zookeeper servers localhost:2181,localhost:2182,localhost:2183
2012-04-17 03:04:39 UTC RedisFailover 98305 INFO: Purging current redis clients
2012-04-17 03:04:39 UTC RedisFailover 98305 INFO: Building new clients for nodes {:master=>"localhost:6379", :slaves=>["localhost:6380"], :unavailable=>[]}
 => #<RedisFailover::Client - master: localhost:6379, slaves: localhost:6380>

At this point you can perform Redis operations as usual:

client.set('foo', 100)
client.get('foo')
 => "100"
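
To double-check that the write landed on the Redis master directly, you can query it with redis-cli (assuming it is installed and on your PATH):

# Read the key straight from the Redis master on port 6379.
redis-cli -p 6379 get foo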