Shaav

Members
  • Posts

    21

Profile Information

  • Gender
    Not Telling

Shaav's Achievements

Member (2/3)

  1. I have been struggling for a couple of months now trying to get 4 QNAP NAS drives syncing about 2TB of data, all of which had been pre-loaded. Indexing has been a nightmare and never seemed to move forward, for weeks and weeks. Finally, I found that if I stopped btsync on all but one NAS at a time, it would successfully sync all 2TB in about 12hrs. Note, btsync had to be *stopped* completely (i.e. I would ssh in and run ./btsync.sh stop), not just paused in the UI (see the sketch after this list). Pausing seems to stop the transfer of files (?) but not the back-and-forth communication about them, and that communication somehow interferes with the indexing enough to grind it completely to a halt.
  2. @adam1v Not as far as I am aware... but I'm not necessarily in the loop either. Just didn't want to leave you completely hanging since there have been no other replies...
  3. +1 Would also be nice if there were two levels --- a default global for all shares and then a share-level IgnoreList that would sync across all devices using the share.
  4. @GreatMarko Thanks for the link; will do that.
  5. @RomanZ It is entirely possible that I mistakenly put it there in the past; I just don't remember. Also, because the documentation isn't entirely clear on that point (something that could maybe be addressed?), I possibly put it in both locations "just to be safe". However, as the number of devices/shares increases, that quickly becomes less feasible. It would be nice if there were a global IgnoreList and a local IgnoreList, BTW... but I realize that's a very linuxy kind of thing to do. It would also be nice if the IgnoreList became a propagated property of the share, so that if you edited one, that change would propagate rather than having to try and edit it on each installation... But thank you for the reply; as always your expertise is much appreciated.
  6. @Moe Frankly, I'm having enough of a miserable time experimenting to try to understand the behaviour of this software --- I thought *maybe* this was a simple enough question to get a straight, conclusive answer to, instead of having to reverse engineer it from observation.
  7. Just want to quickly clarify something... The documentation says the IgnoreList goes in the hidden .sync folder; my interpretation is that it means the /share_folder/.sync folder rather than the /run_path/.sync folder. But I have IgnoreLists in both locations (see the check script after this list) and I'm not sure if that's by design, because of upgrades where the location changed, or if it was something that I just mistakenly did at some point in the past. What I kind of *hope* is that you can put a global IgnoreList in /run_path/.sync that applies to *all* folders and then a share-specific IgnoreList in /share_folder/.sync, and that btsync combines them for the share... What exactly is the behaviour with IgnoreLists?
  8. I realize this is an issue that people bring up here a lot; I haven't really seen much by way of solutions --- it seems like a lot of people abandon btsync before it gets resolved. I'm hoping that I have more detail to offer about what happens than others...

Situation: 4 QNAP NAS drives syncing about 2.3TB of data; ~700K files. All 4 drives are running the latest 2.x version. All the drives currently have ~99% identical content; the only differences are ones that have occurred from their use since the problems began. Two of the drives have been functioning perfectly for about 8 months; the problem came when we tried to add 2 new drives. We left them to index for days and days without there seeming to be any change. Because of an accidental deletion, one of the original NASes ended up dumping all 2.3TB into "Archive"; those files were moved back in place, but of course, then that one needed to reindex everything too. All of them have their folder rescan interval set to 24hrs, which until now I thought would be more than enough to handle the indexing of what I am currently shifting...

Not getting anywhere, I eventually stopped the original folder from syncing at all, set up a new sync folder, and started shifting small amounts of data from the old to the new. The idea was that if there's something corrupt about the share, then that will get fixed; if it's a problem with files or communication, that would be easier to diagnose with a smaller data set; and if it's a problem with resources, then it should be easier to manage, actually see tangible progress, and get a handle on how long things really take. I started with 12GB --- that went OK. Next I did about 30GB --- that took, *I think*, about 4hrs to index and sync. Last night I did approximately 76GB and now, more than 24hrs later, it looks like progress has halted. All four report different sizes, only one of which I think matches the size of the share (trying to take into account excluded files and rounding differences etc). Two report all 3 peers online, one reports 2 peers online and one reports 0.

I have written some scripts to pull file lists from the drives and filter them with the excludes, so that any differences are reflective of real differences (roughly along the lines of the sketch after this list), and I can confirm that right now no files are being updated. I'm happy to provide log files --- I'm not making much sense of them --- but I'd rather not post them publicly. This is a client's system and they use a lot of identifiable personal information in filenames/paths. Is there another way I can upload them a little more privately? There is definitely a lot of looping going on. Two have a lot of "rejecting until file info is updated". Another has a lot of "Failed to create empty suffix for file". The one that looks like it actually indexed everything appears the most normal, but still has a bunch of stuff like "Trash: requested file was not found".

Clearly, a big part of the problem is the combination of data volume and the processing capabilities of the NAS drives themselves. So one thing I'm wondering is whether it is possible to bootstrap that process by, say, mounting the share on my hexacore behemoth as an NFS drive and having it do the initial indexing. When I do an initial mirror of data from one drive to another, could I copy the hashes too?

One thing that is profoundly frustrating about the UI is that it is extremely difficult to determine what is actually happening. Especially when the devices are under heavy load, it doesn't accurately reflect the actual state of the sync. Even trying to load the options, I sometimes have to wait tens of seconds for them to appear, particularly the pre-defined hosts. The "indexing" indicator flashes on and off seemingly at random and you have no idea whether it is done or not. I'll try to pause/resume the syncing and sometimes it doesn't work. Often a 'du' of the folder is wildly out of whack with what the UI reports as the size, even when it doesn't seem to be indexing (but possibly is). Sometimes the size changes and seems to reflect indexing going on; sometimes it just sits at the same amount for minutes/hours even when it seems to be indexing (but could be stuck).

It would be really useful if, especially for these massive initial indexes, there was a mode where you could just tell the device to index the folder, with a progress indicator and a clear indication when it is done. If there's a lot to index --- maybe pick a threshold --- I wish it would put everything else on hold until that process is finished. Or at least make it a configurable option, especially when you add a folder that already has content. Any help would be greatly appreciated.
  9. @RomanZ I was eventually able to verify that they could talk to each other, FYI. There's just other stuff going on. Topics for other threads.
  10. @RomanZ Will look into trying that. Unfortunately, these are all QNAP NASs that don't have netcat on them. Will need to figure out how to get it on there first... (a possible workaround is sketched after this list).
  11. Specifically here, I think it would be a lot more functional if, rather than renaming the files the way you do now, the most recent version always kept the same file name and the older versions were renamed such that the highest number is always the oldest. I know that means renaming more files on a regular basis, but it would make it much easier to, say, write a script to deal with problems like mine.
  12. @RomanZ Yeah... sadly, in the situation I'm in, all the files from one peer were deleted, which propagated... I rsynced the files back (including all the .n archives because... it's hard to separate them out) and neglected to use -a, so all the modification times got updated. So no help there. Thanks for the info though.
  13. @RomanZ That is what I meant when I said "EVEN when I provide explicit VPN IPs and ports for the peers". It does not work.
  14. When files are changed and archived, .sync/Archive/path/ has multiple versions of the file named file.n.ext, and it appears that the highest "n" is the most recent file. Consider this scenario, where the default retention period of 30 days is in effect and the same file (test.txt) is edited on a weekly basis (a small script working from this naming is sketched after this list):

.sync/Archive/path:
2015-06-12 test.4.txt
2015-06-05 test.3.txt
2015-05-29 test.2.txt
2015-05-22 test.1.txt
2015-05-15 test.txt

By the time 2015-06-19 rolls around, the 2015-05-15 test.txt will be deleted. What will the name of the new archived file be? test.txt? Or test.5.txt?
  15. Thanks @RomanZ --- this is kind of an unusual case and I'm using a VPN because I don't have direct access to the networks in question (except one, and not the one I'm particularly interested in). So I don't have any idea what the routers or their configurations are like for 3 of the 4 devices that are syncing. I've been experimenting quite a bit and the peers do not seem to find each other over the VPN. I.e. if I turn off relays/tracking/DHT and just leave searching the LAN, they do not find each other. But more surprising, EVEN when I provide explicit VPN IPs and ports for the peers, they won't find each other and won't sync until I turn on using the tracker. Any thoughts on that? They definitely are accessible to each other through the VPN; I can ssh into one and then ssh into the others without any difficulties (for instance).
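
A minimal sketch of the "one NAS at a time" workaround from post 1. The hostnames, the admin user and the btsync.sh path are placeholders for whatever the real setup uses, and passwordless ssh is assumed:

    #!/bin/sh
    # Stop btsync on every NAS except the one that should keep indexing/syncing.
    # HOSTS and BTSYNC_SH are assumptions -- adjust to the actual QNAP layout.
    ACTIVE="nas1"                                  # the NAS left running
    HOSTS="nas1 nas2 nas3 nas4"                    # all four boxes
    BTSYNC_SH="/share/MD0_DATA/btsync/btsync.sh"   # hypothetical install path

    for h in $HOSTS; do
        [ "$h" = "$ACTIVE" ] && continue
        echo "Stopping btsync on $h"
        ssh admin@"$h" "$BTSYNC_SH stop"           # fully stop, not just pause in the UI
    done

Once the active NAS has finished, the same loop with "start" in place of "stop" (and the next box as ACTIVE) rotates through the remaining drives.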
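
For the IgnoreList question in post 7, the quickest sanity check is simply to see which of the two candidate locations actually holds an IgnoreList; /share_folder and /run_path below stand in for the real share and run directories:

    #!/bin/sh
    # Show which possible IgnoreList locations exist and what they contain.
    # Replace these with the real share path and btsync run path.
    SHARE_SYNC="/share_folder/.sync"
    RUN_SYNC="/run_path/.sync"

    for d in "$SHARE_SYNC" "$RUN_SYNC"; do
        if [ -f "$d/IgnoreList" ]; then
            echo "== IgnoreList found in $d =="
            cat "$d/IgnoreList"
        else
            echo "== no IgnoreList in $d =="
        fi
    done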
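
The file-list comparison described in post 8 was done with scripts roughly like this sketch; the hostnames, share path and excludes file are placeholders, not the actual scripts:

    #!/bin/sh
    # Pull a sorted file list from each NAS, drop anything matching the exclude
    # patterns, and diff the results against the first box. Hostnames, SHARE and
    # excludes.txt (one grep pattern per line) are assumptions.
    SHARE="/share/MD0_DATA/ClientShare"
    HOSTS="nas1 nas2 nas3 nas4"
    EXCLUDES="excludes.txt"

    for h in $HOSTS; do
        ssh admin@"$h" "cd '$SHARE' && find . -type f" \
            | grep -v -f "$EXCLUDES" \
            | sort > "filelist.$h.txt"
    done

    # Identical lists produce no output; anything printed is a real difference.
    for h in $HOSTS; do
        [ "$h" = "nas1" ] && continue
        echo "=== nas1 vs $h ==="
        diff "filelist.nas1.txt" "filelist.$h.txt"
    done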
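
On the reachability question in posts 10 and 15: since the QNAP boxes have no netcat, a crude substitute is bash's /dev/tcp redirection. This assumes bash (and ideally a timeout utility) is present on the NAS, and that PEER_IP/PORT are replaced with the peer's VPN address and the listening port actually configured in btsync:

    #!/bin/bash
    # Check whether a peer's btsync listening port answers over the VPN,
    # without netcat. PEER_IP and PORT are examples only.
    PEER_IP="10.8.0.12"     # hypothetical VPN address of the other NAS
    PORT=33333              # hypothetical btsync listening port

    # Drop "timeout 5" if the NAS doesn't ship a timeout binary.
    if timeout 5 bash -c "cat < /dev/null > /dev/tcp/$PEER_IP/$PORT"; then
        echo "TCP connection to $PEER_IP:$PORT succeeded"
    else
        echo "TCP connection to $PEER_IP:$PORT failed (or timed out)"
    fi

This only proves TCP reachability on that port; it says nothing about why the peers still need the tracker to find each other.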
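
Regarding the Archive naming in posts 11 and 14, here is a rough sketch of the sort of script post 11 has in mind, written against the current scheme where the highest n in file.n.ext appears to be the most recent archived copy (and the plain name the oldest):

    #!/bin/sh
    # Print the newest archived copy of a file under .sync/Archive, assuming
    # the highest "n" in name.n.ext is the most recent version.
    # Usage: ./newest_archive.sh .sync/Archive/path/test.txt
    ORIG="$1"
    DIR=$(dirname "$ORIG")
    BASE=$(basename "$ORIG")
    NAME="${BASE%.*}"                 # e.g. test
    EXT="${BASE##*.}"                 # e.g. txt

    NEWEST="$ORIG"                    # the plain name is the oldest, but a safe fallback
    MAX=0
    for f in "$DIR/$NAME".*."$EXT"; do
        [ -e "$f" ] || continue
        n="${f##*/}"                  # test.3.txt
        n="${n#$NAME.}"               # 3.txt
        n="${n%.$EXT}"                # 3
        case "$n" in *[!0-9]*|'') continue ;; esac   # skip non-numeric middles
        if [ "$n" -gt "$MAX" ]; then
            MAX="$n"
            NEWEST="$f"
        fi
    done
    echo "Newest archived copy: $NEWEST"

Note the open question from post 14: if the oldest copy (the one with the plain name) is aged out, it is not clear whether the next archived version reuses the plain name or continues the numbering.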