Search the Community

Showing results for tags 'redundant'.

Found 2 results

  1. This post goes over nested folder sharing with Sync. Nested sharing makes sense, but it limits the redundancy you can get from a network of Sync devices. Here's a brief overview of how I use Sync (image attached): I use a Raspberry Pi to house absolutely all my files, from photos to music to documents, and I keep everything in one folder called 'Documents'. This folder is mirrored between my laptop, desktop, and Raspberry Pi, and it works great. The problem is when my Raspberry Pi goes down. My laptop and desktop still mirror each other, but they don't get mobile phone photos, and phone #2 doesn't get music, despite those files being available on multiple sources (desktop and laptop). I know it would be difficult to change Sync to overcome this problem, but it would without a doubt be worth it. You'd get much better redundancy and wouldn't have any downtime on other machines just because one machine went down.
  2. I've been testing out a particular solution and it seems pretty solid, but any clarification or insight is appreciated. I have a source server that will be dropping small XML files into a BTSync folder. There are 5 destination machines with full-access keys, and each syncs to a shared folder (UNC path) on a 6th machine. Another application consumes the XMLs from the UNC path, deleting them from the folder.

     The objectives: 1) the XMLs make it to the shared folder from the source; 2) the XMLs are consumed only once by the XML pickup (i.e., they can sync multiple times, as long as it's not after the pickup); 3) the XMLs are archived from the source folder. The 1st and 3rd are easy. I've not gotten the 2nd to fail, but this is where I could use clarification, since it seems several events could coincide here, and this is the most critical objective. Less relevant: all devices have BTSync installed as a Windows service, and the XMLs aren't modified. Also, if you're wondering about the 5 machines, don't ...the bottom line there is that a higher degree of redundancy is needed. The environment isn't that volatile, but the data is. The less latency the better, too, of course. So far, my tests have been positive, though I've only reproduced the setup in a smaller controlled VM environment of 1 source and 2 destinations (1 of which also hosts the shared drive). Also, folder_rescan_interval was set to 10s (from 600s).

     That said, my questions: 1) Assume a 50KB file. Are there race conditions when n machines download that file? If the file already exists in the destination, do the other machines still attempt to download it? If so, are the newer copies discarded (the file is unmodified), or do they overwrite the older file? Is the file locked during either process? 2) Assume a BTSync service is stopped before a file reaches the destination folder via another BTSync instance (i.e., it's not aware of the file in the folder). If I delete the file from the destination folder, but then start up the stopped BTSync instance before the deletion has propagated to the source, will the newly started instance see the to-be-deleted file at the source and attempt to re-sync it? My tests indicate that it does not download the file, but how does it know not to? 3) Similar to the previous: assume a BTSync service is stopped after a file reaches the destination folder, either via another BTSync instance or itself (i.e., it is aware of the file in the folder). Same test results, but same questions (and perhaps the same answers).

     Again, all my tests look good. Any oversights? What events might violate objective #2? Will it scale? Thanks for any input. Let me know if I need to clarify anything.
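Objective #2 above (consume-once) ultimately depends on how the pickup application claims files, not just on BTSync's behavior. A minimal sketch of one way a pickup could claim files atomically by renaming them out of the synced folder before processing; the paths and function name are hypothetical, not the poster's actual application, and this only guards against two pickups grabbing the same file, not against the sync engine re-delivering a deleted file:

```python
import os

def claim_and_consume(sync_dir, work_dir):
    """Claim each XML by renaming it out of the synced folder, then process it.

    os.rename is atomic when both paths are on the same filesystem, so if
    two pickup processes race, only one rename succeeds; the loser gets an
    OSError and skips the file. The synced folder never holds a file that
    is mid-consumption.
    """
    consumed = []
    for name in sorted(os.listdir(sync_dir)):
        if not name.endswith(".xml"):
            continue
        src = os.path.join(sync_dir, name)
        dst = os.path.join(work_dir, name)
        try:
            os.rename(src, dst)  # atomic claim; fails if already claimed
        except OSError:
            continue
        with open(dst, "r", encoding="utf-8") as f:
            payload = f.read()
        consumed.append((name, payload))
        os.remove(dst)  # done processing; drop the claimed copy
    return consumed
```

Because the rename removes the file from the BTSync folder, that removal syncs back like any other delete; a file that re-appears later (e.g., re-delivered by a lagging peer) would look like a new file to this pickup, which is exactly the scenario question #2 probes.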