About hhz

  1. FYI: This undesired behaviour still exists in 1.3.106 and I still consider it an issue that needs to be fixed. Situation today: an office machine with a 100 GB sync folder and a home NAS with that same sync folder; these two are synced. I have just set up a laptop at my office and added the sync folder there. Right now, the laptop is downloading files from the remote NAS, saturating the pipe's upload capacity and clogging the internet connection, while a full copy is available just one meter away on the local network. This is bad for the internet. btsync should not hog traffic bandwidth w
  2. I'm very aware that this is beta software. Because of this, I sent a bug report to the email address that the developers asked us beta users to send bug reports to. I also wrote about the issue here in this forum. There was no response from the developers to either. Forgive me for not trying to recreate a bug when the developers don't announce that they have looked into it. I checked the release announcements and saw no mention of it, either. Also, I wasn't the one with the Android issue or the almost-lost homework; that's dephiros.
  3. I did not use the Android version, only Ubuntu / Debian as described in the first post. I have never received a response to my bug report (via mail) or a specific reply to this thread here. Since that bug report, I have stopped using btsync here for fear of further data loss.
  4. Is this bug fixed by now? I never got a reply to my bug report.
  5. I have reported a serious bug with btsync (well, at least I'd think that it's serious) to the email address mentioned in this thread's first post. I have not received a response, not to the email and not to the forum post about it. Is there any way I can find out if this bug is confirmed and being worked on?
  6. Hi! Running btsync 1.1.70 on Linux (Ubuntu), three machines (home, office, laptop); time settings are OK, using ntp on all three machines. I'm using btsync to keep track of my FLAC collection while re-tagging files. There's a folder Audio/Archiv/flac/ that contains a digital archive of my CD collection, ripped to FLAC files, and another folder Audio/Sonos/flac2mp3/ that contains MP3s encoded from these FLAC files, so that I can put them on my MP3 player. On August 27, I stopped btsync on the office machine, threw away the old flac2mp3 folder, created a new empty one and re-encoded the whole folder fro
  7. Hi! I'm using btsync for my music collection. A long time ago, I ripped all my CDs to FLAC, but only now am I adding cover art and correcting the tags. For this, I'm using several computers and btsync is very nice to make sure that all is in sync. As you can imagine, adding a cover image or correcting a typo in a song title only changes very little of a FLAC file, while the actual music content stays the same. However, the btsync checksum algorithm cannot detect this and afaict transfers the whole file yet again. Is there no smarter way to handle this? Would it be possible to add some extra kn
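A block-level view makes the retransfer concrete. Here is a minimal Python sketch (toy block size and made-up "FLAC" bytes; this is not btsync's actual algorithm): comparing fixed, aligned block hashes finds no overlap once a longer tag shifts the payload, while an rsync-style search at every offset rediscovers the unchanged audio data:

```python
import hashlib

BLOCK = 16  # toy block size; real tools use kilobyte-sized blocks

def block_hashes(data: bytes):
    """Hash fixed-size blocks at fixed offsets (what a naive syncer compares)."""
    return [hashlib.sha1(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

# Made-up "FLAC": a small metadata header followed by an unchanged payload.
audio = bytes(range(256)) * 4
old = b"TAG:title=untitled;" + audio
new = b"TAG:title=My Song, Corrected;" + audio  # longer tag shifts the payload

old_blocks = block_hashes(old)
new_blocks = block_hashes(new)

# Aligned comparison: every block after the tag is shifted, so nothing matches.
aligned = sum(a == b for a, b in zip(old_blocks, new_blocks))

# rsync-style idea: look for known blocks at *every* offset of the new file.
old_set = set(old_blocks)
found = sum(hashlib.sha1(new[i:i + BLOCK]).hexdigest() in old_set
            for i in range(len(new) - BLOCK + 1))

print(aligned, found)  # aligned is 0; the offset search recovers the audio blocks
```

Real delta-transfer tools such as rsync make the every-offset search affordable with a cheap rolling checksum; whether btsync could do something similar is exactly the question above.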
  8. I don't think that this is a significant drop in performance. I'd guess that in my case, a sync that took two hours with the external source would take two hours and five minutes instead. Being nice to The Internet out there and keeping my pipe clear by not creating unnecessary traffic is more important to me than a negligible improvement in speed.
  9. Hi, here's a feature request: btsync should never retrieve data from a remote sync server if that data is also available from a sync server on the local network. Reasoning: assume there are three btsync machines, one at the office, one at home, one laptop. The laptop serves as sneakernet acceleration: the user makes a major change on the office machine, syncs to the laptop through the office LAN, brings the laptop home, and turns on the home machine. In this case, btsync on the home machine will retrieve data from the office machine even if the laptop is already present on the home network as a 100% up-to-date
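The requested selection rule itself is tiny. A hedged sketch of the behaviour (illustrative only; peer names and addresses are invented, and this is not how btsync actually chooses peers):

```python
import ipaddress

def pick_source(peers):
    """Prefer a peer with a private (RFC 1918) address, i.e. one on the
    local network; fall back to any peer if no local one is available."""
    local = [p for p in peers if ipaddress.ip_address(p["ip"]).is_private]
    return (local or peers)[0]

peers = [
    {"name": "office-remote", "ip": "11.22.33.44"},   # reachable over the WAN
    {"name": "laptop-lan",    "ip": "192.168.1.42"},  # same LAN, 100% up to date
]
print(pick_source(peers)["name"])  # laptop-lan
```

A real implementation would also have to verify that the local peer actually holds the wanted pieces, but the preference order is the point here.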
  10. It is not. Read those ToU again. It is a violation to reverse engineer the binary code of btsync, but not a violation to reverse engineer the protocol used by the program. There is a difference. It should also be noted that the reverse-engineering clause of the ToU is most likely legally invalid in mreithub's country and thus very difficult to enforce against him. (Welcome to the Internet.) Allow me to ask: why are you (greatmarko) so upset about this? Obviously, you're not with BitTorrent or a developer of btsync. Why the zealotry that the btsync developers didn't even ask for?
  11. ChrisH, please note that I did not experience data loss. I only had files restored that should have been removed. koz, I did not upgrade in the middle of a sync. When I turned off the fourth machine, its sync had been completed. When I turned it on, the three others were synced, as well, but in a newer state. Still, the result was unexpected.
  12. I presume that an actual change history that can be merged from several distributed machines (a la git-annex or Sparkleshare) would be more robust against the kind of sync issues I keep having with btsync right now.
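For illustration, the bookkeeping such a merge-able history often rests on is a version vector: a per-machine edit counter that can tell "strictly newer" apart from "concurrently changed", which a bare timestamp cannot. A toy sketch (machine names invented; git-annex and Sparkleshare use richer structures):

```python
def compare(vv_a, vv_b):
    """Compare two version vectors: dicts mapping machine -> edit count.
    Returns 'equal', 'a_newer', 'b_newer', or 'conflict'."""
    keys = vv_a.keys() | vv_b.keys()
    a_ahead = any(vv_a.get(k, 0) > vv_b.get(k, 0) for k in keys)
    b_ahead = any(vv_b.get(k, 0) > vv_a.get(k, 0) for k in keys)
    if a_ahead and b_ahead:
        return "conflict"   # concurrent edits: must be merged, not overwritten
    if a_ahead:
        return "a_newer"
    if b_ahead:
        return "b_newer"
    return "equal"

# office edited twice, home edited once independently -> a true conflict,
# which a timestamp comparison would silently resolve to whoever's clock wins.
print(compare({"office": 2, "home": 0}, {"office": 1, "home": 1}))  # conflict
```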
  13. But that defeats the original purpose of btsync, which is to run on many machines that independently sync with each other. If an admin has to make sure that all users of a shared directory upgrade at the same time, and only when all machines are in sync, it isn't decentralized anymore. Sorry, I still think that using timestamps and file existence to decide what to sync is the wrong approach. I have been using btsync for only a very short time and have already stumbled over sync issues because of this several times.
  14. Ok, here's what happened. Four machines with btsync version $old. All synced. Now upgrade three of them to btsync version $new. This has a new index file format and requires the machine to re-index the shared files. Let $new re-index on the three upgraded machines, but let the fourth machine stay offline. Then remove a few files, create a few other files and let the three $new machines sync each other. Finally, boot the fourth machine, upgrade its $old version to $new. It will now re-index its content and re-transmit the old files, which had been removed on the other three machines. Tada! The
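The failure in that sequence can be modelled in a few lines. This is a toy model, not btsync's actual data structures: merging plain file sets, which is effectively what a fresh re-index produces, cannot distinguish "deleted on the other machines" from "new on the stale machine", while an explicit deletion record (a tombstone) can:

```python
def merge_without_tombstones(local, stale_peer):
    # Re-indexing discarded all history, so the merge is a bare set union:
    # files removed elsewhere look like new files and come back.
    return local | stale_peer

def merge_with_tombstones(local, stale_peer, deleted):
    # A deletion record survives the re-index and wins over mere presence.
    return (local | stale_peer) - deleted

three_machines = {"a.flac", "b.flac"}               # after removing old.flac
fourth_machine = {"a.flac", "b.flac", "old.flac"}   # offline during the removal

print(merge_without_tombstones(three_machines, fourth_machine))  # old.flac resurrected
print(merge_with_tombstones(three_machines, fourth_machine, {"old.flac"}))
```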
  15. I, for one, welcome an open source implementation and don't consider it "unwise". Congrats!