hitchhiker

Members
  • Posts: 10

  1. I'm sorry, I was depending on the 'check version' button in the app - it tells me 'Client version up to date' - my apologies, I'll try with that. Cheers, Frank.
  2. Hi guys, sorry to revive an old thread. Just to let you know that I tested with the latest release (1.1.82) and the issue is still present... -------- sorry LeTic, it just seemed appropriate to copy it directly -------- Similar (perhaps the same?) problem; a very scary bug indeed. It's happening between Win8 and Mavericks. I'm losing files every day to overwrites by older copies. Clocks are in sync, time zones are in sync... this is something else.
  3. Is there any news on this? I'm seeing it happen on 1.1.82 between Win 7 and Mac.
  4. Hi, I've noticed that a file being synced can't be deleted. BTSync seems to take out a write lock of some sort on the file. This means it isn't useful for synchronizing app config directories and the like. IMHO it shouldn't take out a lock at all: it should simply stop if the file is deleted, or restart if it's changed (with a debounce, of course; a small sketch of that idea follows this list). Cheers, Frank.
  5. Some of the development apps I'm using can't handle symbolic links either, so it's not always possible to work like that. But yes, for the time being I'm working around it.
  6. Hi, great product, very few problems lately. One of the reasons I moved from Dropbox (I'm now running a cloud-style BTSync setup with my team) was that Dropbox didn't detect file changes within symbolically linked or junctioned folders. It seems BTSync now has a problem with that as well (Windows 7/8) - is it just me? (A sketch of following junctions when scanning appears after this list.) Cheers, Frank.
  7. This feature needs to be implemented as fully as the Pack-Rat feature on Dropbox - it's something (I'm guessing) a lot of people use to guarantee the availability and safety of important repos. I was excited to see it available at all; it's unfortunate that the implementation is so sparse. For now I use 'timeline' software (on Windows) on a second node to make constant backups of the repo.
  8. Bumpty bump - the latest version of BTSync is spectacular; I'm even going so far as to drop my team Dropbox account. I'm using a dedicated server as an additional node (to ensure sync during offline/quiet-time/single-user periods). I'm using NSSM to run it as a service (example commands follow this list), but, as noted earlier, there's no interface. I'm using a backup utility on a third node to enable 'revisions', which is a great feature you might want to consider in the near future - basically what Dropbox calls 'Pack-Rat': https://www.dropbox.com/help/113/en Any word on a timeline for the service feature? Cheers, Frank.
  9. Thanks for the reply - glad to hear you are working on it. Update: I removed most of the content, down to around 50k files, and it worked just fine (after I manually purged the cache). FYI: I don't think it can realistically support 1M files; a lot went wrong (memory-wise) around the 300k mark, and each time it froze things got worse as the cache fell out of sync in various ways. I have 8 GB of RAM; memory usage went well past 1 GB of virtual memory, up to 2 GB at times. Either way: page faults galore, disk thrashing, a generally failing environment, etc. Stopping the transfers by switching off the other client helped up to a point, but the sync never succeeded (the most it transferred was 2 GB) after 2 days and nights of trying. As @kos13 mentioned, you really need to adopt one of the many open-source document stores (lighter, faster, more reliable, and better with large sets) or an RDBMS for this metadata (a minimal sketch follows this list). Many are lightweight, tried and tested, and would speed things up immensely; no point reinventing that wheel. Best of luck, it's a great concept - thanks for your time. Cheers, Frank.
  10. I've been trying (for 2 days) to sync a large and unwieldy folder set between two Win8/Win7 computers on the same network. There are about 400k files in only 60 GB of space: old directories full of Subversion remnants, massive data-export dumps in single files, things like that. It has been impossible with BTSync. Fair enough, it was a tough (unusual) task - but the metacache? NOT a great idea. Basically, the entire problem was exacerbated by the need to keep 100,000s of files in your metacache structure. Rather than sharding them as you have done, it might be better to merge them into file blocks (a sketch of the idea follows this list). That would free up much of the MFT/filesystem-intensive work needed to sync anything large. I'm not sure, but it seems like you maintain a separate cache item for each item that needs to sync; that doubles the effort required. Just my two cents - I think the overall idea is great and I can't wait to get to use it. Cheers, Frank. Edit: Sorry, I have no debug logs for you; most runs ended in a forced close. I also noticed: - some files were missing but still trying to sync (cache out of sync?) - massive swings in memory use, >2 GB at certain times and then back to 500 MB - an out-of-memory exception at one point that I didn't catch.
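
A minimal sketch of the debounce idea from item 4, in Python, assuming a hypothetical watcher that calls notify() on every change event for a file; the class, the two-second quiet window, and the action are illustrative only, not anything BTSync actually implements:

    import threading

    class Debounce:
        """Coalesce bursts of change events: run `action` only once
        `quiet_seconds` have passed with no further notifications."""
        def __init__(self, quiet_seconds, action):
            self.quiet_seconds = quiet_seconds
            self.action = action
            self._timer = None
            self._lock = threading.Lock()

        def notify(self):
            # Called by the watcher on every change event; restarts the countdown.
            with self._lock:
                if self._timer is not None:
                    self._timer.cancel()
                self._timer = threading.Timer(self.quiet_seconds, self.action)
                self._timer.daemon = True
                self._timer.start()

    # Hypothetical usage: re-hash and resync only after 2 seconds of quiet,
    # instead of holding a lock on the file while it is still being written.
    resync = Debounce(2.0, lambda: print("file settled - re-hash and resync"))
    # watcher.on_change = lambda path: resync.notify()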
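On the symlink/junction posts (items 5-6): a sketch of how a scanner could include linked folders by resolving them to their real targets first, assuming Python 3.8+ where os.path.realpath resolves NTFS junctions on Windows. BTSync's actual scanner is not public, so this is only an illustration of the behaviour being asked for:

    import os

    def scan_tree(root, seen=None):
        """Yield file paths under `root`, following directory symlinks and
        NTFS junctions by resolving them to their real targets first."""
        seen = set() if seen is None else seen       # guards against link cycles
        real = os.path.realpath(root)                # resolves symlinks/junctions
        if real in seen:
            return
        seen.add(real)
        for entry in os.scandir(real):
            if entry.is_dir(follow_symlinks=True):
                yield from scan_tree(entry.path, seen)
            elif entry.is_file(follow_symlinks=True):
                yield entry.path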
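For running a headless node as a Windows service with NSSM (item 8), the usual pattern looks roughly like the commands below; the service name and install path are assumptions, and the service runs without the tray interface, as noted in the post:

    nssm install BTSyncNode "C:\Apps\BTSync\BTSync.exe"
    nssm set BTSyncNode AppDirectory "C:\Apps\BTSync"
    nssm start BTSyncNode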
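On the storage suggestion in item 9: a minimal sketch of what an embedded database for file metadata could look like, using Python's built-in sqlite3. The schema and the single-transaction batching are illustrative assumptions, not how BTSync stores its cache:

    import os, sqlite3

    def build_index(db_path, root):
        """Keep per-file sync metadata in one SQLite file instead of
        one cache entry per synced item."""
        db = sqlite3.connect(db_path)
        db.execute("""CREATE TABLE IF NOT EXISTS files (
                          path  TEXT PRIMARY KEY,
                          size  INTEGER,
                          mtime REAL)""")
        with db:  # one transaction for the whole batch: far fewer disk syncs
            for dirpath, _dirs, names in os.walk(root):
                for name in names:
                    p = os.path.join(dirpath, name)
                    st = os.stat(p)
                    db.execute("INSERT OR REPLACE INTO files VALUES (?, ?, ?)",
                               (p, st.st_size, st.st_mtime))
        db.close()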
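And on merging metacache items into file blocks (item 10): one way to picture it is appending many small metadata records into a single block file with an offset index, so the filesystem sees one file instead of hundreds of thousands of MFT entries. The record format here is a made-up example, not BTSync's cache layout:

    import json, struct

    def pack_records(block_path, records):
        """Append small metadata records to one block file; return an index
        of {key: (offset, length)} for later random access."""
        index = {}
        with open(block_path, "ab") as block:
            for key, record in records.items():
                payload = json.dumps(record).encode("utf-8")
                block.write(struct.pack("<I", len(payload)))   # length prefix
                index[key] = (block.tell(), len(payload))
                block.write(payload)
        return index

    def read_record(block_path, index, key):
        """Fetch one record back out of the block file."""
        offset, length = index[key]
        with open(block_path, "rb") as block:
            block.seek(offset)
            return json.loads(block.read(length).decode("utf-8"))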