SmallwoodDR82


  1. So 0ABE is server #1. I only have one other peer, and I confirmed with the command you sent above that it is server #1. I googled 1481533303 and it looks like it's 12/12/2016 @ 9:01am (UTC), but what does the -1 mean? And how would I find which file is causing this? I checked all my files written on 12/12/2016 and no times seem out of whack. Thanks!
  2. I was just looking through the debug log, and this is where all logging seems to stop:

     [20161215 14:38:16.823] MC[DC26] [0ABE]: processing files message with 1000 files
     [20161215 14:38:16.880] SyncFileEntry: got dictionary with bad time 1481533303 -1
     [20161215 14:38:16.880] MC[DC26] [0ABE]: failed to construct file entry received from remote, aborting
     [20161215 14:38:16.880] assert failed /home/jenkins/slave-root/workspace/Build-Sync-Manually/SyncFolderMergeController.cpp:907
     [20161215 14:38:16.880] SF[DC26] [0ABE]: State sync finished
     [20161215 14:38:16.881] D! 10SyncTcpReq[0x00002b890cdfc350][TCP-TUNNELL] [0000]: cancel 0CDFC350 - outgoing merge, refcount - 2
     [20161215 14:38:16.882] SF[DC26]: Merge finished - tree_hash: AA9BC56742B8CE6D6CD4668A431ECA8AEE27E1C3, tree_ts: 1481741216, files_count: 27762, request: 0x00002b890cdfc350
     [20161215 14:38:16.885] D! 10SyncTcpReq[0x00002b890cdfc350][TCP-TCP] [0000]: destroing 0CDFC350 cbcnt:9 - outgoing merge

     Also, I have no idea what the /home/jenkins/slave-root/workspace/Build-Sync-Manually/ path is. It's not anything native to my setup. Any ideas?
  3. Helen, great catch! I forgot to mention that I am using the 'Overwrite' option. The sync/index finished on server #2, and once again it stopped at 7.33 TB. I've looked through the logs and there are a few lines of errors, but I don't know what to make of them. Looks like I will need an upload link to send them to support. Thanks!
  4. First off, thank you in advance for the assistance. I love the product, and if I can get this to work as I hope, I will be purchasing Pro for sure. (I'm on the 30-day trial right now.)

     Quick facts:
     - I have 2 NAS (Xeon CPU) unRAID servers, and both are running Sync 2.4.4 as a Docker container.
     - The idea here is that server #1 is my "master": all read/write happens on this server, and server #2 is strictly read-only from server #1.
     - I have 6 folders created on server #1, shared as read-only with server #2. These folders range from 11 GB to 13 TB.
     - Because both servers were already rough mirrors of each other (maintained manually), most files are on both servers with the same folder structure. I'm introducing Sync now so it will be automated.

     After adding the 6 folders to server #1, all the folders indexed correctly (took about 30 hours). After that I added each folder one by one to server #2. Upon accepting a folder, it asks "are you sure?" because the destination folder already has files; I click yes and the syncing starts. Because most items are identical, the first pass didn't do much transferring; most of it was just rechecks and the like. Rinse and repeat for the next 4 folders.

     Now here is where my issue starts. My final folder is 13 TB and about 48,000 files. Indexing this folder on server #1 went just fine. However, when I share the folder as read-only with server #2, the indexing/recheck runs for about 10 to 12 hours and then just stops, usually around 7.5 TB, and it never finishes. I looked in the logs and didn't really see anything. I've removed and re-connected the folder twice now, with the same result. It's running again right now for the 3rd time and so far so good (about 2 TB in), so I don't have any logs to share just yet; I'll make sure debug logging is on during this run. I'm just curious if anyone has run into this, and is there any setting or feature I'm missing?

     Hope to post more info tonight. Thanks, Daniel!
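For the timestamp question in post 1: the number before the -1 in the "bad time 1481533303 -1" log line is a standard Unix epoch timestamp (seconds since 1970-01-01 UTC), so it can be decoded exactly rather than googled. A minimal Python check (what the trailing -1 field means is internal to Sync and not addressed here):

```python
from datetime import datetime, timezone

# Decode the epoch seconds quoted in the "bad time" log line.
ts = 1481533303
dt = datetime.fromtimestamp(ts, tz=timezone.utc)
print(dt.isoformat())  # 2016-12-12T09:01:43+00:00
```

This matches the 12/12/2016 9:01am UTC figure mentioned in the post (9:01:43, to the second).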
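On the "how would I find which file is causing this?" question: one low-tech approach is to walk the shared folder and list every file whose modification time falls on that exact epoch second. This is only a sketch under assumptions: `/path/to/share` is a placeholder for the shared folder's root, and Sync may be rejecting the entry based on metadata other than the on-disk mtime, in which case this scan would come up empty.

```python
import os

BAD_TS = 1481533303        # epoch second from the "bad time" log line
ROOT = "/path/to/share"    # placeholder: root of the shared folder

def suspicious_files(root, bad_ts):
    """Yield paths whose mtime matches the bad timestamp exactly."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mtime = int(os.stat(path).st_mtime)
            except OSError:
                continue  # skip files that vanish or are unreadable mid-scan
            if mtime == bad_ts:
                yield path

if __name__ == "__main__":
    for p in suspicious_files(ROOT, BAD_TS):
        print(p)
```

Run it on the read/write side (server #1 in this setup), since that is where the rejected file entry originates.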