
Sync data between KeyDB v6 - Redis v7 #38

Closed
air3ijai opened this issue Dec 29, 2022 · 26 comments
Assignees
Labels
bug Something isn't working

Comments

@air3ijai
Contributor

Describe the bug
We just tried to sync data from Redis v7 to KeyDB v6, but it looks like this is not supported.

To Reproduce

  1. Run KeyDB v6

  2. Run Redis v7 and add some data

  3. Run the command

    rst -s redis://redis-v7:6379 -m redis://keydb-v6:6379
    
  4. The application starts processing the data and finishes the whole dataset

    [483.3MB| 36.4MB/s]
    
  5. At the same time we see the following errors in the log

    2022-12-29 17:15:03,462 INFO c.m.r.r.r.DefaultRdbVisitor [main] RDB redis-ver: 7.0.5
    2022-12-29 17:15:03,463 INFO c.m.r.r.r.DefaultRdbVisitor [main] RDB redis-bits: 64
    2022-12-29 17:15:03,463 INFO c.m.r.r.r.DefaultRdbVisitor [main] RDB ctime: 1672334103
    2022-12-29 17:15:03,463 INFO c.m.r.r.r.DefaultRdbVisitor [main] RDB used-mem: 24348794648
    2022-12-29 17:15:03,463 INFO c.m.r.r.r.DefaultRdbVisitor [main] RDB repl-stream-db: 0
    2022-12-29 17:15:03,463 INFO c.m.r.r.r.DefaultRdbVisitor [main] RDB repl-id: 5a705558dc170011c96162558e3e21af9d3fecbc
    2022-12-29 17:15:03,463 INFO c.m.r.r.r.DefaultRdbVisitor [main] RDB repl-offset: 4382
    2022-12-29 17:15:03,464 INFO c.m.r.r.r.DefaultRdbVisitor [main] RDB aof-base: 0
    2022-12-29 17:15:04,882 ERROR c.m.r.r.c.n.i.XEndpoint [sync-worker-4] ERR DUMP payload version or checksum are wrong
    2022-12-29 17:15:04,882 ERROR c.m.r.r.c.n.i.XEndpoint [sync-worker-1] ERR DUMP payload version or checksum are wrong
    2022-12-29 17:15:04,882 ERROR c.m.r.r.c.n.i.XEndpoint [sync-worker-3] ERR DUMP payload version or checksum are wrong
    
  6. Keyspace is empty on KeyDB v6.

    keydb-cli info keyspace
    # Keyspace
    

Expected behavior
We should be able to sync data from Redis v7 to KeyDB v6.

Version(run rct --version or rct -V and paste the information):

rct --version

redis rdb cli: v0.9.1 (1f96c81dd4935e2c7fc39f6bb227beede0c8ede4: 2022-05-01T08:10:04+0000)
home: /opt/redis-rdb-cli/bin/..
java version: 18.0.2-ea, vendor: Private Build
java home: /usr/lib/jvm/java-18-openjdk-amd64
default locale: en, platform encoding: UTF-8
os name: Linux, version: 5.11.20-300.fc34.x86_64, arch: amd64

Additional context
The same thing will probably happen with Redis v6, because we were not able to restore an RDB dump from v7 on v6, with the same message as on KeyDB v6.

Can't handle RDB format version 10

Please see [NEW] Support Redis 7 (RDB format version 10) dumps

@leonchen83
Owner

This tool supports downgrade migration, but you need to change the config file first.
Open /path/to/redis-rdb-cli/conf/redis-rdb-cli.conf and change dump_rdb_version from -1 to 9.
After that, run rst -s redis://redis-v7:6379 -m redis://keydb-v6:6379
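The conf change above can be scripted; a minimal sketch, using a temp stand-in file so it is self-contained (in a real install the conf would be at /opt/redis-rdb-cli/conf/redis-rdb-cli.conf):

```shell
# Sketch of the conf edit: flip dump_rdb_version from -1 to 9 so dumps
# are written in RDB format version 9 (readable by Redis 6 / KeyDB 6).
CONF=$(mktemp)
echo 'dump_rdb_version=-1' > "$CONF"   # stand-in for the real conf file
sed -i 's/^dump_rdb_version=-1/dump_rdb_version=9/' "$CONF"
grep '^dump_rdb_version=' "$CONF"      # prints: dump_rdb_version=9
```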

@air3ijai
Contributor Author

air3ijai commented Jan 3, 2023

Trying to continue the migration, but initially just with Redis v7 / Redis v6. I just updated redis-rdb-cli.conf

grep dump_rdb_version=9 /opt/redis-rdb-cli/conf/redis-rdb-cli.conf
# Redis-6.2 : `dump_rdb_version=9`
# Redis-6.0.x : `dump_rdb_version=9`
# Redis-5.0.x : `dump_rdb_version=9`
dump_rdb_version=9

And got the following error with both rmt and rst

/opt/redis-rdb-cli/bin/rmt -s 'redis://redis-7:6379?authPassword=password' -m redis://localhost:6379
[    5 B|    0 B/s]
java.lang.ArrayIndexOutOfBoundsException: Index 26 out of bounds for length 26
	at com.moilioncircle.redis.replicator.util.ByteArray.set(ByteArray.java:74)
	at com.moilioncircle.redis.replicator.util.Lzf.encode(Lzf.java:356)
	at com.moilioncircle.redis.replicator.rdb.BaseRdbEncoder.rdbSaveEncodedStringObject(BaseRdbEncoder.java:188)
	at com.moilioncircle.redis.replicator.rdb.BaseRdbEncoder.rdbGenericSaveStringObject(BaseRdbEncoder.java:209)
	at com.moilioncircle.redis.replicator.rdb.dump.DumpRdbValueVisitor.applyHashListPack(DumpRdbValueVisitor.java:398)
	at com.moilioncircle.redis.rdb.cli.ext.rmt.AbstractRmtRdbVisitor.doApplyHashListPack(AbstractRmtRdbVisitor.java:169)
	at com.moilioncircle.redis.rdb.cli.ext.visitor.BaseRdbVisitor.applyHashListPack(BaseRdbVisitor.java:319)
	at com.moilioncircle.redis.replicator.rdb.RdbParser.parse(RdbParser.java:252)
	at com.moilioncircle.redis.replicator.RedisSocketReplicator$1.handle(RedisSocketReplicator.java:178)
	at com.moilioncircle.redis.replicator.cmd.ReplyParser.parse(ReplyParser.java:103)
	at com.moilioncircle.redis.replicator.RedisSocketReplicator.reply(RedisSocketReplicator.java:349)
	at com.moilioncircle.redis.replicator.RedisSocketReplicator.parseDump(RedisSocketReplicator.java:166)
	at com.moilioncircle.redis.replicator.RedisSocketReplicator.trySync(RedisSocketReplicator.java:142)
	at com.moilioncircle.redis.replicator.RedisSocketReplicator$RedisSocketReplicatorRetrier.open(RedisSocketReplicator.java:440)
	at com.moilioncircle.redis.replicator.AbstractReplicatorRetrier.retry(AbstractReplicatorRetrier.java:49)
	at com.moilioncircle.redis.replicator.RedisSocketReplicator.open(RedisSocketReplicator.java:123)
	at com.moilioncircle.redis.rdb.cli.ext.XRedisReplicator.open(XRedisReplicator.java:238)
	at com.moilioncircle.redis.rdb.cli.cmd.XRmt.call(XRmt.java:139)
	at com.moilioncircle.redis.rdb.cli.cmd.XRmt.call(XRmt.java:61)
	at picocli.CommandLine.executeUserObject(CommandLine.java:1953)
	at picocli.CommandLine.access$1300(CommandLine.java:145)
	at picocli.CommandLine$RunLast.executeUserObjectOfLastSubcommandWithSameParent(CommandLine.java:2358)
	at picocli.CommandLine$RunLast.handle(CommandLine.java:2352)
	at picocli.CommandLine$RunLast.handle(CommandLine.java:2314)
	at picocli.CommandLine$AbstractParseResultHandler.execute(CommandLine.java:2179)
	at picocli.CommandLine$RunLast.execute(CommandLine.java:2316)
	at picocli.CommandLine.execute(CommandLine.java:2078)
	at com.moilioncircle.redis.rdb.cli.Rmt.main(Rmt.java:28)

log/redis-rdb-cli.log

2023-01-03 17:20:20,242 INFO c.m.r.r.RedisSocketReplicator [main] Connected to redis-server[1.2.39.4:6379]
2023-01-03 17:20:20,245 INFO c.m.r.r.RedisSocketReplicator [main] AUTH #a24ae4325aba485f846b43c37fdc19ae03f6d4a5a24ae4325aba485f846b43c3
2023-01-03 17:20:20,247 INFO c.m.r.r.RedisSocketReplicator [main] OK
2023-01-03 17:20:20,247 INFO c.m.r.r.RedisSocketReplicator [main] PING
2023-01-03 17:20:20,247 INFO c.m.r.r.RedisSocketReplicator [main] PONG
2023-01-03 17:20:20,248 INFO c.m.r.r.RedisSocketReplicator [main] REPLCONF listening-port 44910
2023-01-03 17:20:20,248 INFO c.m.r.r.RedisSocketReplicator [main] OK
2023-01-03 17:20:20,248 INFO c.m.r.r.RedisSocketReplicator [main] REPLCONF ip-address 10.0.0.100
2023-01-03 17:20:20,249 INFO c.m.r.r.RedisSocketReplicator [main] OK
2023-01-03 17:20:20,249 INFO c.m.r.r.RedisSocketReplicator [main] REPLCONF capa eof
2023-01-03 17:20:20,249 INFO c.m.r.r.RedisSocketReplicator [main] OK
2023-01-03 17:20:20,249 INFO c.m.r.r.RedisSocketReplicator [main] REPLCONF capa psync2
2023-01-03 17:20:20,250 INFO c.m.r.r.RedisSocketReplicator [main] OK
2023-01-03 17:20:20,250 INFO c.m.r.r.RedisSocketReplicator [main] REPLCONF rdb-only 1
2023-01-03 17:20:20,250 INFO c.m.r.r.RedisSocketReplicator [main] OK
2023-01-03 17:20:20,250 INFO c.m.r.r.RedisSocketReplicator [main] PSYNC ? -1
2023-01-03 17:20:25,035 INFO c.m.r.r.RedisSocketReplicator [main] FULLRESYNC a24ae4325aba485f846b43c37fdc19ae03f6d4a5 42
2023-01-03 17:20:25,470 INFO c.m.r.r.RedisSocketReplicator [main] Disk-less replication.
2023-01-03 17:20:25,485 INFO c.m.r.r.r.DefaultRdbVisitor [main] RDB redis-ver: 7.0.7
2023-01-03 17:20:25,486 INFO c.m.r.r.r.DefaultRdbVisitor [main] RDB redis-bits: 64
2023-01-03 17:20:25,486 INFO c.m.r.r.r.DefaultRdbVisitor [main] RDB ctime: 1672766425
2023-01-03 17:20:25,486 INFO c.m.r.r.r.DefaultRdbVisitor [main] RDB used-mem: 24348366600
2023-01-03 17:20:25,486 INFO c.m.r.r.r.DefaultRdbVisitor [main] RDB repl-stream-db: 0
2023-01-03 17:20:25,487 INFO c.m.r.r.r.DefaultRdbVisitor [main] RDB repl-id: a24ae4325aba485f846b43c37fdc19ae03f6d4a5
2023-01-03 17:20:25,487 INFO c.m.r.r.r.DefaultRdbVisitor [main] RDB repl-offset: 42
2023-01-03 17:20:25,487 INFO c.m.r.r.r.DefaultRdbVisitor [main] RDB aof-base: 0
2023-01-03 17:20:25,684 INFO c.m.r.r.RedisSocketReplicator [main] socket closed. redis-server[1.2.3.4:6379]

Looks like something was migrated

# Keyspace
db0:keys=1475,expires=0,avg_ttl=0

@air3ijai
Contributor Author

air3ijai commented Jan 3, 2023

Another try, following your advice for the case where we have big keys

Convert RDB to AOF

/opt/redis-rdb-cli/bin/rct -f resp -s /data/dump.rdb -o /data/dump.aof -r
[ 20.4GB| 85.5MB/s]

Import the data

cat /data/dump.aof | redis-cli -h localhost -p 6379 --pipe

ERR Protocol error: invalid bulk length

Some data was imported

# Keyspace
db0:keys=681038,expires=0,avg_ttl=0

@air3ijai
Contributor Author

air3ijai commented Jan 3, 2023

I also tried to use rct example - Find top 50 largest keys

/opt/redis-rdb-cli/bin/rct -f mem -s /data/dump.rdb -o /data/dump.mem -l 50
tail /data/dump.mem
0,module,"tree:13:2","43.3MB",module2,1,"43.3MB",""
0,module,"tree:10:2","40.2MB",module2,1,"40.2MB",""
0,module,"tree:7:2","35.4MB",module2,1,"35.4MB",""
0,module,"tree:6:3","25.6MB",module2,1,"25.6MB",""
0,module,"tree:12:3","25.6MB",module2,1,"25.6MB",""
0,module,"tree:14:5","25.6MB",module2,1,"25.6MB",""
0,module,"tree:5:2","25.6MB",module2,1,"25.6MB",""
0,module,"tree:17:7","25.6MB",module2,1,"25.6MB",""
0,module,"tree:15:4","25.6MB",module2,1,"25.6MB",""
0,module,"tree:16:4","25.6MB",module2,1,"25.6MB",""

So, there is nothing above the 512MB limit?

@leonchen83
Owner

leonchen83 commented Jan 4, 2023

hi

/opt/redis-rdb-cli/bin/rct -f mem -s /data/dump.rdb -o /data/dump.mem -l 50

This file is sorted in descending order; try head /data/dump.mem

@leonchen83 leonchen83 self-assigned this Jan 4, 2023
@leonchen83 leonchen83 added the bug Something isn't working label Jan 4, 2023
@leonchen83
Owner

leonchen83 commented Jan 4, 2023

cat /data/dump.aof | redis-cli -h localhost -p 6379 --pipe

ERR Protocol error: invalid bulk length

try /path/to/redis-7.0.0/src/redis-check-aof /data/dump.aof

You will see an error like the following

Start checking Old-Style AOF
0x              53: Expected \r\n, got: 7365
AOF analyzed: filename=./resp.aof, size=233, ok_up_to=52, ok_up_to_line=1999, diff=181
AOF ./resp.aof is not valid. Use the --fix option to try fixing it.

The important info is ok_up_to_line. Note that line number, for example 1999, fetch the surrounding context of that line (sed -n '1000,3000p' /data/dump.aof), and paste it here. If it contains sensitive information, you can mail it to [email protected]
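The extraction step can be sketched like this, with a generated stand-in file in place of the real /data/dump.aof and the hypothetical line number 1999 from the example above:

```shell
# Sketch: pull the context window around ok_up_to_line out of the AOF.
seq 1 5000 > demo.aof                  # stand-in for /data/dump.aof
sed -n '1000,3000p' demo.aof > context.txt
head -1 context.txt                    # prints: 1000
tail -1 context.txt                    # prints: 3000
```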

@leonchen83
Owner

About the error:

java.lang.ArrayIndexOutOfBoundsException: Index 26 out of bounds for length 26
	at com.moilioncircle.redis.replicator.util.ByteArray.set(ByteArray.java:74)
	at com.moilioncircle.redis.replicator.util.Lzf.encode(Lzf.java:356)
	at com.moilioncircle.redis.replicator.rdb.BaseRdbEncoder.rdbSaveEncodedStringObject(BaseRdbEncoder.java:188)
	at com.moilioncircle.redis.replicator.rdb.BaseRdbEncoder.rdbGenericSaveStringObject(BaseRdbEncoder.java:209)
	at com.moilioncircle.redis.replicator.rdb.dump.DumpRdbValueVisitor.applyHashListPack(DumpRdbValueVisitor.java:398)
	at com.moilioncircle.redis.rdb.cli.ext.rmt.AbstractRmtRdbVisitor.doApplyHashListPack(AbstractRmtRdbVisitor.java:169)

It's an Lzf compression bug; it will be fixed in version 0.9.2.

@air3ijai
Contributor Author

air3ijai commented Jan 4, 2023

this file ordered by desc, try head /data/dump.mem

Aha, now it is clearer

head dump.mem
database,type,key,size_in_bytes,encoding,num_elements,len_largest_element,expiry
0,module,"tree:20:1","792.1MB",module2,1,"792.1MB",""
0,module,"tree:22:1","792.1MB",module2,1,"792.1MB",""
0,module,"tree:21:1","792.1MB",module2,1,"792.1MB",""
0,module,"tree:15:1","484.9MB",module2,1,"484.9MB",""
0,module,"tree:19:1","484.9MB",module2,1,"484.9MB",""
0,module,"tree:18:1","484.9MB",module2,1,"484.9MB",""
0,module,"tree:16:1","484.9MB",module2,1,"484.9MB",""
0,module,"tree:17:1","484.9MB",module2,1,"484.9MB",""
0,module,"tree:14:1","480.1MB",module2,1,"480.1MB",""

This is also an answer to my question #39. During the v7-to-v7 migration it was sometimes successful, but we missed 3 keys and the RDB size was ~3 GB smaller than on the source.

@air3ijai
Contributor Author

air3ijai commented Jan 4, 2023

try /path/to/redis-7.0.0/src/redis-check-aof /data/dump.aof

An AOF check - but on Redis v6, because this is where we import it

redis-check-aof /data/dump.aof
AOF analyzed: size=20629618644, ok_up_to=20629618644, diff=0
AOF is valid

So, looks good - let's try to import again

cat /data/dump.aof | redis-cli -h localhost -p 6379 --pipe
ERR Protocol error: invalid bulk length

@leonchen83
Owner

Did the converted AOF file contain these 3 big keys?
If yes, that explains the invalid bulk length error.
In redis.conf the default bulk length is proto-max-bulk-len 512mb. You should change this value to 1024mb in the target's redis.conf file.

@air3ijai
Contributor Author

air3ijai commented Jan 4, 2023

Config

redis-cli config set save ""
redis-cli config set proto-max-bulk-len 1024mb

Import

cat /data/dump.aof | redis-cli -h localhost -p 6379 --pipe

All data transferred. Waiting for the last reply...
Last reply received from server.
errors: 0, replies: 2995160

Comparing

# Redis v7
# Keyspace
db0:keys=1994658,expires=0,avg_ttl=0

# Redis v6
# Keyspace
db0:keys=1994658,expires=0,avg_ttl=0

Can a comparison by key count guarantee that all the data was migrated exactly and correctly? Is there another way to check that we have the same data on source and destination?

Will try to reproduce this with the KeyDB.

@leonchen83
Owner

You could sample some keys of different types and compare whether they are consistent.
This tool can't do that itself; the migration should be exact and correct if there is no bug.
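One way to do that sampling, sketched with stand-in files instead of real server output (in practice each line would come from dumping a sampled key via redis-cli on source and destination; note that raw serialized values can legitimately differ across server versions, so a cross-version check may need to compare decoded values instead):

```shell
# Sketch: list keys whose serialized values differ between source and
# destination. src.txt / dst.txt stand in for "key <serialized-value>"
# lines collected from each server.
printf 'key:1 v1\nkey:2 v2\nkey:3 v3\n' > src.txt
printf 'key:1 v1\nkey:2 CHANGED\nkey:3 v3\n' > dst.txt
sort src.txt > src.sorted
sort dst.txt > dst.sorted
# comm -3 keeps only lines unique to one side; awk strips to the key.
comm -3 src.sorted dst.sorted | awk '{print $1}' | sort -u   # prints: key:2
```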

@air3ijai
Contributor Author

air3ijai commented Jan 4, 2023

It also works fine on KeyDB v6 and there is a full guide how to perform that - Migrate the data from Redis v7 to KeyDB v6 using redis-rdb-cli

And the following questions remain

  1. Is there a way to increase the 512MB key limit on the redis-rdb-cli side?
    It would remove the need to use AOF

  2. How to check the migrated data consistency?
    I will create a separate Feature request for that.

@leonchen83
Owner

I saw the migration guide; well done.

Create an RDB snapshot on Redis v7

redis-cli -h localhost -p 6379 -a password
bgsave
Copy RDB snapshot to the node with KeyDB v6

scp ubuntu@keydb6:/data/dump.rdb /data

Actually, redis-rdb-cli can back up a remote RDB file locally via the rdt command, and it can do more than that, like filtering by db, type, and key

rdt -b redis://host:port?authPassword=xxxx -o /data/dump.rdb

@leonchen83
Owner

  1. Is there a way to increase the 512MB key limit on the redis-rdb-cli side?
    It would remove the need to use AOF

try export JAVA_TOOL_OPTIONS="-Xms2g -Xmx2g"

@air3ijai
Contributor Author

air3ijai commented Jan 5, 2023

try export JAVA_TOOL_OPTIONS="-Xms2g -Xmx2g"

Will it work for the cases where we do have large keys, over 512MB? This relates to your previous comment about large keys.

@air3ijai
Contributor Author

air3ijai commented Jan 5, 2023

Did a test from Redis 7 to Redis 7 with the large keys shown above

export JAVA_TOOL_OPTIONS="-Xms6g -Xmx6g"

rmt -s /data/dump.rdb -m redis://localhost:6379?authPassword=password

Picked up JAVA_TOOL_OPTIONS: -Xms6g -Xmx6g
[ 20.5GB|102.8MB/s]
# Source
db0:keys=1994658,expires=0,avg_ttl=0

# Destination
db0:keys=1994655,expires=0,avg_ttl=0

So, the big keys didn't migrate, as already discussed. Is there a way not to skip them?

@leonchen83
Owner

Hi, this also needs proto-max-bulk-len changed to 1024mb on the target Redis.

@air3ijai
Contributor Author

air3ijai commented Jan 5, 2023

Set proto-max-bulk-len 1024mb and all keys were migrated.

So, we just need to increase the app's memory using JAVA_TOOL_OPTIONS and then set the required limit on the Redis side.

@leonchen83
Owner

Hi
please help test v0.9.2

@air3ijai
Contributor Author

air3ijai commented Jan 6, 2023

I did more than 10 tests on Ubuntu 22.04 and Java 1.8, using rmt and rct
Tests

rmt -s redis://redis-6:6379 -m redis://redis-7:6379
rct -f mem -s redis://redis-7:6379 -o /data/dump.mem -l 50

Environment

redis-rdb-cli/bin/rmt --version
redis rdb cli: v0.9.2 (01d75ad66e6a021464b77b6a3b9a0724c3493253: 2023-01-06T04:21:51+0000)
home: /opt/redis-rdb-cli/bin/..
java version: 1.8.0_352, vendor: Private Build
java home: /usr/lib/jvm/java-8-openjdk-amd64/jre
default locale: en, platform encoding: UTF-8
os name: Linux, version: 5.15.0-25-generic, arch: amd64
  1. Data migrated successfully
  2. Dump with big keys created successfully

Everything works fine, thank you for the quick fix; now migration is simpler :)

@air3ijai
Contributor Author

air3ijai commented Jan 6, 2023

A note about rmt with the -d option

rmt -s redis://redis-7:6379 -m redis://localhost:6379 -d 1
[ 20.2GB| 72.1MB/s]

real	4m41.562s
redis-cli info keyspace
# Keyspace

So, all is good: no data was copied, because db1 is empty. But why did we transfer 20GB over the network?

@leonchen83
Owner

Hi
That's because this tool relies on the Redis replication protocol to migrate data to the target.
But the Redis replication protocol does not support db filtering, so the only thing we can do is load the whole RDB and filter it on the client side.
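A toy illustration of why the 20GB still crosses the network: the whole stream must arrive before filtering can happen on the receiving side. The "db key value" lines below stand in for parsed RDB entries; this is not redis-rdb-cli's actual data format:

```shell
# Stand-in for the full replication stream: every entry arrives,
# including entries from databases we don't want.
printf '0 a 1\n1 b 2\n0 c 3\n' > stream.txt
# Client-side equivalent of "-d 1": keep only db 1 entries.
awk '$1 == 1 {print $2, $3}' stream.txt    # prints: b 2
```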

@air3ijai
Contributor Author

air3ijai commented Jan 6, 2023

A note about rst

rst -s redis://redis-7:6379 -m redis://localhost:6379
[614.4MB|  5.1MB/s]
  1. Data copying slows down over time

  2. Copying never finishes; info keyspace hangs at some value and does not change anymore

  3. The redis-rdb-cli log is full of these messages

    2023-01-06 07:22:27,082 INFO c.m.r.r.RedisSocketReplicator [main] Connected to redis-server[1.2.3.4:6379]
    2023-01-06 07:22:27,082 INFO c.m.r.r.RedisSocketReplicator [main] AUTH #a24ae4325aba485f846b43c37fdc19ae03f6d4a5a24ae4325aba485f846b43c3
    2023-01-06 07:22:27,083 INFO c.m.r.r.RedisSocketReplicator [main] OK
    2023-01-06 07:22:27,083 INFO c.m.r.r.RedisSocketReplicator [main] PING
    2023-01-06 07:22:27,083 INFO c.m.r.r.RedisSocketReplicator [main] PONG
    2023-01-06 07:22:27,083 INFO c.m.r.r.RedisSocketReplicator [main] REPLCONF listening-port 44416
    2023-01-06 07:22:27,083 INFO c.m.r.r.RedisSocketReplicator [main] OK
    2023-01-06 07:22:27,083 INFO c.m.r.r.RedisSocketReplicator [main] REPLCONF ip-address 192.168.1.108
    2023-01-06 07:22:27,084 INFO c.m.r.r.RedisSocketReplicator [main] OK
    2023-01-06 07:22:27,084 INFO c.m.r.r.RedisSocketReplicator [main] REPLCONF capa eof
    2023-01-06 07:22:27,084 INFO c.m.r.r.RedisSocketReplicator [main] OK
    2023-01-06 07:22:27,084 INFO c.m.r.r.RedisSocketReplicator [main] REPLCONF capa psync2
    2023-01-06 07:22:27,084 INFO c.m.r.r.RedisSocketReplicator [main] OK
    2023-01-06 07:22:27,084 INFO c.m.r.r.RedisSocketReplicator [main] PSYNC 423961153277141236432c4a69ae5c309a7c2b54 2376195463
    2023-01-06 07:22:27,084 INFO c.m.r.r.RedisSocketReplicator [main] CONTINUE 423961153277141236432c4a69ae5c309a7c2b54
    2023-01-06 07:22:27,084 INFO c.m.r.r.RedisSocketReplicator [main] heartbeat started.
    2023-01-06 07:22:27,184 INFO c.m.r.r.RedisSocketReplicator [main] heartbeat canceled.
    2023-01-06 07:22:27,184 INFO c.m.r.r.RedisSocketReplicator [main] socket closed. redis-server[1.2.3.4:6379]
    2023-01-06 07:22:27,184 INFO c.m.r.r.RedisSocketReplicator [main] reconnecting to redis-server[1.2.3.4:6379]. retry times:1
    
  4. The remote Redis log is full of these messages

    55439:M 06 Jan 2023 07:26:54.751 * Replica 192.168.1.108:44898 asks for synchronization
    55439:M 06 Jan 2023 07:26:54.751 * Partial resynchronization request from 192.168.1.108:44898 accepted. Sending 0 bytes of backlog starting from offset 2376195477.
    55439:M 06 Jan 2023 07:26:54.852 # Connection with replica 192.168.1.108:44898 lost.
    55439:M 06 Jan 2023 07:26:54.852 # Client id=1591 addr=2.3.4.5:52861 laddr=192.168.1.108:6379 fd=9 name= age=0 idle=0 flags=S db=0 sub=0 psub=0 ssub=0 multi=-1 qbuf=0 qbuf-free=20474 argv-mem=0 multi-mem=0 rbs=1024 rbp=5 obl=0 oll=1 omem=805306392 tot-mem=805328664 events=r cmd=psync user=default redir=-1 resp=2 closed for overcoming of output buffer limits
    
  5. Remote redis client-output-buffer-limit

    redis-cli config get client-output-buffer-limit
    
    1) "client-output-buffer-limit"
    2) "normal 0 0 0 slave 268435456 67108864 60 pubsub 33554432 8388608 60"

@leonchen83
Owner

Hi
Refer to redis master slave sync infinite loop.
The root cause is that client-output-buffer-limit is too small and this tool consumes RDB events too slowly.
So try client-output-buffer-limit slave 0 0 0, but this change carries risk: it may crash the Redis master with an OOM.
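Why the connection keeps dropping can be seen from the limits themselves. A simplified model of the "slave <hard> <soft> <seconds>" check (not Redis's actual code), fed the omem=805306392 value from the log above against the 268435456-byte hard limit from the config:

```shell
# check OMEM SECS_OVER_SOFT HARD SOFT SOFT_SECONDS
# Disconnect ("exceeded") when pending output passes the hard limit,
# or stays above the soft limit for SOFT_SECONDS; 0 disables a limit.
check() {
  local omem=$1 secs=$2 hard=$3 soft=$4 ss=$5
  if [ "$hard" -gt 0 ] && [ "$omem" -gt "$hard" ]; then echo exceeded; return; fi
  if [ "$soft" -gt 0 ] && [ "$omem" -gt "$soft" ] && [ "$secs" -ge "$ss" ]; then echo exceeded; return; fi
  echo ok
}
check 805306392 0 268435456 67108864 60   # prints: exceeded (the log's case)
check 805306392 0 0 0 60                  # prints: ok ("slave 0 0 0")
```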

@air3ijai
Contributor Author

air3ijai commented Jan 6, 2023

  1. Applied new limit

    redis-cli config set client-output-buffer-limit "slave 0 0 0"
    
  2. Copied the data using rst

  3. Returned limits back

    redis-cli config set client-output-buffer-limit "slave 268435456 67108864 60"

All works fine, thanks!

Development

No branches or pull requests

2 participants