Describe the bug

Redis version 6.2.6 on both master and replica. The replica went down with

Internal error in RDB reading offset 0, function at rdb.c:1627 -> Duplicate set members detected

and eventually the master went down with

Internal error in RDB reading offset 39843972, function at rdb.c:1992 -> Hash ziplist integrity check failed.

To reproduce

A new replica was added while the master held very little data (around 25MB); the replica then started going down with the error below:

Internal error in RDB reading offset 0, function at rdb.c:1627 -> Duplicate set members detected

We have restart logic in place, so the replica would be re-added, but within milliseconds of attempting to become a replica it would fail with the above error. Eventually the master also went down; the bug/error report message from the master is attached.

Note: when the master went down, the data was around 240MB.

Expected behavior

The new replica would sync up seamlessly, and both master and replica would remain operational.

Additional information

See the attached file redis-bug-report.pdf.

Comment From: sundb

It seems that the RDB has been corrupted. There are still some unknown issues in ziplist, but we have replaced ziplist with listpack since 7.x, so you may want to try upgrading to a newer version.

Comment From: abhi132788

Thanks for that input; we plan to update Redis to 7.x. Are there any alternatives to save/keep a copy of the data and use it to bring up a different Redis instance, or any other way to recover the data in such a case?

We already attempted deleting the RDB file; our save config, which runs periodically, recreated the RDB, and the issue still continued.
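
For context, the periodic save mentioned above is normally driven by Redis's snapshot rules, which can be inspected and adjusted at runtime (the thresholds below are illustrative, not the reporter's actual config):

redis-cli CONFIG GET save                  # show the current snapshot rules
redis-cli CONFIG SET save "900 1 300 10"   # snapshot after 1 change in 900s or 10 changes in 300s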

We also tried adding a replica to sync the data from the master, but that did not work either.

Comment From: sundb

@abhi132788 Is there any sensitive data in your RDB? If not, could you send me (debing.sun@redis.com) a copy? Or you could delete the sensitive data in version 6 first.

Comment From: abhi132788

Sorry, we do have sensitive data in the RDB file and can't share it. Are there any alternative ways to retrieve the data?

Comment From: sundb

@abhi132788 Did you try checking this RDB file with redis-check-rdb? Since you can still start with this corrupted RDB, if you can find out which key is corrupted, you can try deleting that key and then saving the RDB again.
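
A minimal sketch of that flow, assuming the RDB lives at /var/lib/redis/dump.rdb and the corrupted key is named corrupted:key (both hypothetical):

./src/redis-check-rdb /var/lib/redis/dump.rdb   # note the key named in the "RDB ERROR DETECTED" section
redis-cli DEL "corrupted:key"                   # remove the corrupted key from the running instance
redis-cli BGSAVE                                # write a fresh snapshot without it

BGSAVE forks and snapshots in the background; plain SAVE also works but blocks the server while it writes.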

Comment From: abhi132788

Thanks for the information. Can you please point me to the documentation for redis-check-rdb?

Comment From: sundb

@abhi132788 Usage: ./src/redis-check-rdb <rdb-file-name>

Comment From: abhi132788

Thanks for that. I was trying to find the tool's location and the available commands to run against it. I ran it against the copy in my Downloads folder; thanks.

We were able to see that it was specifically complaining about a SET. Scenario: a new SET gets created every hour, its key is the date and time at which the set is created, and we store thousands of members in those sets.

This is the output:

[offset 0] Checking RDB file /Downloads/redis-dump-3-1-9-2.rdb
[offset 26] AUX FIELD redis-ver = '6.2.6'
[offset 40] AUX FIELD redis-bits = '64'
[offset 52] AUX FIELD ctime = '1747687382'
[offset 67] AUX FIELD used-mem = '311840904'
[offset 85] AUX FIELD repl-stream-db = '0'
[offset 135] AUX FIELD repl-id = 'f37b5515f9049581c7fb7ab693c48ff1cbc15621'
[offset 162] AUX FIELD repl-offset = '8272679972584'
[offset 178] AUX FIELD aof-preamble = '0'
[offset 180] Selecting DB ID 0
--- RDB ERROR DETECTED ---
[offset 8591216] Internal error in RDB reading offset 0, function at rdb.c:1961 -> Duplicate set members detected
[additional info] While doing: read-object-value
[additional info] Reading key '9dummyvalue|dummyvalue|xx|2025-05-09 14'
[additional info] Reading type 2 (set-hashtable)
[info] 13239 keys read
[info] 13217 expires
[info] 7566 already expired
34405:C 19 May 2025 15:51:01.221 # Terminating server after rdb file reading failure.

We removed that set from Redis, but that alone did not help; we had to go over all such sets and remove them, after which the rest of the data was fine.
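
For anyone hitting the same thing, a hedged sketch of that cleanup, assuming the hourly sets share a date-stamped key pattern like the one in the check output above (the pattern is illustrative):

redis-cli --scan --pattern '*2025-05-09*' | while IFS= read -r key; do
  redis-cli DEL "$key"    # delete each matching set; quoting keeps keys with spaces intact
done
redis-cli BGSAVE          # persist the cleaned dataset

--scan iterates with SCAN, so it does not block the server the way KEYS would, and the read loop is used instead of xargs because the key shown in the output above contains a space.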

I believe the string members inside the SET are somehow being duplicated. I am not sure how uniqueness in a SET is guaranteed, but there could be some issue around that.
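
For reference, a healthy set already enforces uniqueness at write time: SADD returns 1 when it stores a new member and 0 when the member already exists (the key and member names below are made up):

redis-cli SADD events:2025-05-09-14 member-a    # (integer) 1 -- member added
redis-cli SADD events:2025-05-09-14 member-a    # (integer) 0 -- duplicate ignored

So duplicate members showing up inside the on-disk encoding points at corruption in the RDB/serialization layer, consistent with sundb's assessment above, rather than at SADD itself.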