I’m duplicating my server hardware and moving the second set off site. I want to keep the data live since the whole system will be load balanced with my on site system. I’ve contemplated tools like syncthing to make a 1 to 1 copy of the data to NAS B but i know there has to be a better way. What have you used successfully?
Rsync and rclone are the best options as mentioned in other comments. If you want to get real-time with it, and the previous cron-based solutions aren’t what you want, look at the myriad of FOSS distributed filesystems out there. Plenty of live filesystems you can run on any Linux-based storage system.
I think the better question would be: what are you trying to achieve? Live replica set of all data in two places at the same time, or a solid backup of your data you can swap to if needed? I’d recommend the rsync/rclone route, and VPN from the primary data set whenever you need, with the safety of having your standby ready to swap out to whenever needed if the primary fails.
syncthing falls down when you hit millions of files.
Platform agnostic? Rsync from the perspective of duplicating the client-presented data.
Or rclone, another great tool. I tend to use this for storage providers instead of between self hosted systems (or fully self-managed data center systems).
If the NAS uses ZFS, then zfs send/recv is the best option, because you can send only the changed blocks. Want to redo the names on every single movie? No problem! I'd recommend sanoid or syncoid (I don't remember which). ZFS snapshots are not intuitive, so make sure to do LOTS of testing first with a small data set you don't care about.
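To make the zfs send/recv idea concrete, here's a minimal sketch of an incremental send between two boxes. All the names here (`tank/media`, `backup/media`, `nas-b`, the snapshot names) are hypothetical placeholders; since ZFS isn't available everywhere, the script defaults to printing the commands it would run instead of executing them.

```shell
#!/bin/sh
# Sketch of incremental ZFS replication from NAS A to NAS B.
# Pool/dataset/host names are hypothetical -- adjust to your setup.
# RUN=echo (the default here) prints the commands instead of running them.
RUN="${RUN:-echo}"

replicate() {
  src="tank/media"          # dataset on NAS A
  dst_host="nas-b"          # ssh host for NAS B
  prev="snap-prev"          # last snapshot NAS B already has
  now="snap-$(date +%F)"    # new snapshot to create and send

  # 1. Take a fresh snapshot on the source dataset.
  $RUN zfs snapshot "${src}@${now}"

  # 2. Send only the blocks changed since $prev (-i = incremental)
  #    and receive into the mirror dataset on NAS B.
  $RUN "zfs send -i ${src}@${prev} ${src}@${now} | ssh ${dst_host} zfs recv -F backup/media"
}

replicate
```

Once you're confident after testing, swap `RUN` for an empty value so the commands execute for real.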
In terms of truly duplicating the entire NAS, we can’t help without knowing which NAS you’re using. Or since this is selfhosted, which software you used to build a NAS.
+1 for rclone
Rsync or rclone are better ways than syncthing. Rsync can copy over ssh out of the box. Rclone can do the same but with a lot more backends: FTP, SSH, S3… it doesn't matter. IMO rclone is the better choice than rsync in this case. Take a look at rclone.org
Just have NAS A send a rocket with the data to NAS B.
Rsync over FTP. I use it for a weekly Nextcloud backup to a Hetzner storage box
I suggest using sftp/ssh with rsync instead. Much more secure than FTP.
Seconded
What about rclone? I’ve found it to be amazing for cloning or copying.
My favorite is using the native zfs sync capabilities. Though that requires zfs and snapshots configured properly.
> I want to keep the data live since the whole system will be load balanced with my on site system.
Is this intended to handle the scenario where you accidentally delete a bunch of important files and don’t realize until the delete has synced, and deleted them on the remote site too? Consider using versioned backups too, to handle that case.
- rsync + basic scripting for periodic sync, or
- distributed/replicated filesystems for real-time sync (I would start with Ceph)
Finding the right solution will depend entirely on what kind of load you’re balancing.
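For the first option, a minimal cron-driven wrapper might look like this. The host `nas-b`, the paths, and the script name are hypothetical; the `flock` guard (from util-linux) stops overlapping runs if a sync takes longer than the cron interval. `RUN=echo` (the default here) dry-runs it by printing the rsync command.

```shell
#!/bin/sh
# Periodic one-way sync, meant for a cron entry such as:
#   */15 * * * * /usr/local/bin/sync-nas.sh
# Remote host and paths are placeholders; RUN=echo dry-runs the sync.
set -eu
RUN="${RUN:-echo}"

SRC="/srv/data/"
DEST="nas-b:/srv/data/"
LOCK="${TMPDIR:-/tmp}/sync-nas.lock"

# Skip this run if the previous one is still going.
exec 9>"$LOCK"
flock -n 9 || exit 0

$RUN rsync -a --delete "$SRC" "$DEST"
```

Clear `RUN` once you've verified the dry-run output, and consider adding `--dry-run` to rsync itself the first time against real data.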
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
| Fewer Letters | More Letters |
|---|---|
| NAS | Network-Attached Storage |
| SSH | Secure Shell for remote terminal access |
| VPN | Virtual Private Network |
| ZFS | Solaris/Linux filesystem focusing on data integrity |
4 acronyms in this thread; the most compressed thread commented on today has 5 acronyms.
[Thread #747 for this sub, first seen 14th May 2024, 02:55] [FAQ] [Full list] [Contact] [Source code]
Sounds like you want a clustered filesystem like GPFS, Ceph, or Gluster.
Better options have already been mentioned. With that said, another option might be torrenting.
You would need to create a new torrent whenever new files are added or edited. Not very practical for continuous use.
If you want to mirror the entire system, OS and all, then Clonezilla is the best option.
That won’t keep it constantly in sync though