larsen161


Posts posted by larsen161

  1. What's the best way to do this? I just synced a very large folder (4TB), then removed it. I then created 4 new secrets on the 4 primary sub-folders on the source server. After indexing completed on these 4 locations, I started to add the secret to the sub-folder on the 1st destination server, only to find it started downloading all 1TB of files again even though they already exist there.

  2. It seems that indexing currently happens on only a single share at a time. Parallelizing this would drastically improve speed when adding multiple new shares, as it currently takes hours or days to sync multiple folders containing hundreds of thousands of files and TBs of data. At the moment, btsync is only using ~15% CPU on an m1.xlarge AWS instance.

  3. I have the same issue at the moment; I just updated and wiped the Sync folder clean.

    I have tried the default setup and with a sync; I have tried with and without nodaemon, with and without a config file, and a config file with nodaemon...

    I have also tried changing the default port for the GUI and the default receive port. This is Ubuntu 12.04 LTS with plenty of disk space, RAM, and hardware headroom. Any suggestions?

     

    Have you tried clearing the browser cache? Any other services running on that box? e.g. media server...
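
    For reference, a minimal sketch of the kind of config file the attempts above would use. The keys follow the sample that `btsync --dump-sample-config` prints (btsync accepts comments in its config); the port numbers and paths here are placeholders for illustration, not recommendations:

    ```json
    {
      "device_name": "My Sync Device",
      // 0 = pick a random listening (receive) port; set explicitly to override
      "listening_port": 0,
      "storage_path": "/home/user/.sync",
      "use_upnp": true,
      "webui": {
        // non-default GUI port, as tried above
        "listen": "0.0.0.0:9999"
      }
    }
    ```

    Launching it in the foreground, e.g. `btsync --config sync.conf --nodaemon`, keeps errors on the console, which can help diagnose a GUI that won't come up.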