yottabit

Members
  • Posts: 143
  • Days Won: 2
Everything posted by yottabit

  1. S/he could delete the share (from BTSync only), and then you could give the read-only secret again (or maybe a one-time read-only secret). In theory, after indexing completes, only the missing/changed files will transfer. Many others have requested the ability to "force" a resync of missing files in a read-only share.
  2. BT has said they're working on a FreeBSD client and that it should be released soon. I've also asked for a FreeBSD 8.x-compatible version specifically for FreeNAS. Unfortunately FreeBSD doesn't implement a call similar to inotify, so BT has already said the client will have to poll the file shares every 10 minutes, like on Mac.

TrueCrypt is a great option for this, as long as you create a sparse container. And don't defrag your TrueCrypt volume, haha. BT has said they will work on a more resilient and complete delta transfer in the future as well.

Another alternative is EncFS. You create a directory to hold your encrypted source, and it's nothing but a bunch of files that mirror the unencrypted (mounted) EncFS filesystem, except the data and even the filenames are encrypted. This works very well with BTSync because only updated files need to be sync'd, as opposed to changed blocks within a (very large) TrueCrypt container. It's probably best to experiment with both and see which meets your needs best. In the past I've used EncFS on top of SSHfs successfully. I also used a PGP volume, and later a TrueCrypt volume, for many years, until taking employment with my current company, where they have a very lax security policy and don't run spyware on my lappy. Now I just encrypt my entire lappy disk with TrueCrypt.

EDIT: I say "as long as you create a sparse container" because hopefully, in the future, that will enable BTSync to sync a new container efficiently. Currently BTSync isn't sparse-aware. It also doesn't employ any kind of compression in the transfer protocol yet, so a 10 GB file consisting only of zeroes (sparse or fully allocated, it doesn't matter) still transfers all 10 GB.
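If you want to try the EncFS route, the setup is roughly this (paths are just examples; EncFS wants absolute paths):

    # the ciphertext lives in the folder you share with BTSync;
    # you do your actual work in the mounted plaintext view
    encfs /home/you/Sync/encrypted /home/you/Private
    # ... edit files under /home/you/Private; BTSync only ever sees /home/you/Sync/encrypted ...
    fusermount -u /home/you/Private    # unmount the plaintext view when done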
  3. That's actually a great request: MIPS Linux support, such as what MikroTik's RouterBoard devices run with RouterOS. They make fantastic products that are enterprise-class yet affordable. Many of them have a USB port available for external flash and hard drives to enable file sharing, so going the next step with native BTSync would be awesome. EDIT: This is the device I often install and recommend for home & small office use: http://routerboard.com/RB751G-2HnD And here is RouterOS: http://www.mikrotik.com/software.html
  4. Microsoft/Windows just likes to make things more difficult for very little good (if any!) reason. :-)
  5. Later in the post I realized he was using D: and E:, which could be external drives without a live Windows installation, although not necessarily. The registry hives are indeed files. They are kept in the %windir%\System32\Config directory: Software, System, SAM, Security, Default, and UserDiff. There is also %userprofile%\NTuser.dat. For more info: http://en.wikipedia.org/wiki/Windows_Registry#Windows_NT-based_operating_systems
  6. Don't do this. Seriously, it's a bad idea. You'd be syncing tons of very small, temporary, and often access-locked files that will never finish syncing (Recycle Bin, Registry, many others). You're just asking for trouble. If you want disk-level cloning, BTSync is not what you want; look at Acronis TrueImage or Macrium Reflect. If you just want to sync all of your user data, which Windows now does a decent job of organizing under your profile, take a look at this thread: http://forum.bittorr...e-the-smart-way You can edit the script to add other folders you may want that are atypical or outside your profile. EDIT: It just occurred to me that if these are not your C: drive, you may only have data on them, not Windows and applications. In that case, just add the Recycle Bin folder to your .SyncIgnore and see if that helps. Remember to restart BTSync after modifying .SyncIgnore.
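For a data-only drive, entries along these lines in .SyncIgnore should cover it (the Recycle Bin folder name differs by Windows version: $RECYCLE.BIN on Vista/7, RECYCLER on XP):

    $RECYCLE.BIN
    RECYCLER
    System Volume Information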
  7. Just to clear something up... folders cannot be "locked" the way files can. The only way this could be misconstrued as exclusive access locking is if you had explorer, cmd, bash, etc., sitting in a directory you were trying to delete. For example: cd folder, then rmdir ../folder. That would typically result in an error (Windows, for example, refuses to remove a directory that is some process's current directory), but this is not the same as the exclusive access locking used for files, where a given file is opened for exclusive access by one application so that all other applications are prevented from accessing it.
  8. I agree. In one of my use cases the recipient of the read-only secret is purposefully deleting files and doesn't want to ever see them again. However, having an option in the read-only peer to "force redownload of missing files" would be nice for some cases.
  9. I'm curious why you mentioned Recycle Bin. You didn't Share your entire C:\ drive did you? :-) In my experience, BTSync will give you the 0.0 kB/s transfer status for files that are locked by the operating system or an application. Once the file locks are resolved, they will transfer.
  10. I've been playing around with .SyncIgnore and have noticed the following things:

If you specify a wildcard, e.g. file*, everything matching it (including in all subdirectories) is ignored.

If you add a file/dir to .SyncIgnore after it has already sync'd, it will be removed from the other peers (actually just placed in their .SyncTrash directories).

I also tried a number of different permutations of a subdirectory in an attempt to get BTSync to ignore it, but it doesn't seem to work on directories, only on files (and even that seems hit-and-miss when adding file masks for files that have already sync'd). You may have to close the .SyncIgnore file when you're done editing it (not just save it). Then I exited BTSync on the host of the files (read-only secret, remember) and restarted it, and all of a sudden it deleted on the peer all of the subdirectories I had specified in .SyncIgnore (well, moved them to .SyncTrash, of course). It also deleted a file mask I had specified after the file had already sync'd (although earlier in the test this worked without restarting BTSync). So I guess the lesson for now is just to restart BTSync when you update .SyncIgnore. I would think restarting BTSync only on the host with the changed .SyncIgnore is enough; at least it was in my case, and restarting the read-only peer didn't have any effect.

To ignore a subdir I simply specified it like: subdir. Or a file within it: subdir/file. Nothing special. I used "/" in the UNIX-compatible syntax even though it was on Windows. I tried relative and absolute paths (including the drive letter), and nothing worked until I restarted BTSync; when I restarted BTSync and saw the changes take effect, I was using only relative paths. I also upgraded the host to build 125 and didn't see any difference in this behavior.

I was using Win7 as both the source and the read-only peer for these tests, and changes tended to replicate within about 5 seconds on my LAN. Hope this helps a bit!
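For reference, the relevant .SyncIgnore entries I was testing with were simply (relative paths from the share root, forward slashes even on Windows):

    file*
    subdir
    subdir/file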
  11. Wondering when we'll see build 125 "officially" released and auto-updated... I don't want to upgrade until I know it isn't just a test build meant to check whether one problem was fixed, without much scrutiny otherwise. I'd also feel more comfortable reading a changelog before upgrading, and I don't want to update one computer ahead of the others if it might break their peering due to, e.g., a protocol change.
  12. I use btsync on Linux to a mounted ZFS filesystem (hosted on FreeBSD) over CIFS. This is abstracting ZFS from btsync quite a lot. (NFS performance on FreeNAS sucks.) BT has mentioned they will have a FreeBSD version published this week, so I'm eager to try that out natively on the ZFS.
  13. I just say BTSync! That's a 25% savings in syllables and only 50% more syllables than Dropbox. We're so lazy in English, LOL. How about BitSync?
  14. Transfers are definitely slow until indexing completes. I've seen this behavior often, even when indexing from SSD (though it was a bit less affected).
  15. Conversely, I really like the name because it gives me a chance to educate some people out of their ignorance.
  16. You need to make sure you're mounting the disk with the permissions of the user under which you're running btsync. See 'man mount.cifs' for the options. I don't know what sort of behavior to expect if you mount from the GUI. An alternative, and probably easier, is to reformat the drive with ext4.
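For example, something along these lines (server, share, and user names are just placeholders):

    # uid/gid make the mounted files appear owned by the user running btsync
    sudo mount -t cifs //nas/share /mnt/share -o uid=youruser,gid=youruser,username=smbuser,password=smbpass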
  17. Right now using it to back up a bunch of machines to a Linux VM that has the actual BTSync storage mounted to a FreeNAS (FreeBSD) server. Clients are all read-only secrets and include a half dozen Windows 7 (2/3 x64 and 1/3 x86) and a couple Windows XP. So far only have one 3-way pairing setup; all others are 1-to-1. Oh, and using it to share some photos with the parents-in-law.
  18. There are only a few parts of the TCP protocol that are offloaded to the network hardware (NIC): usually the checksum, plus one other thing, less often, that I can't remember right now. Most of TCP is still processed by the network stack (OS/CPU). I really am surprised by the claim in the FAQ. By all accounts UDP is a leaner, meaner, faster protocol, both in terms of a smaller header and faster CPU processing. I haven't checked the protocol specifics in a very long time, so I'm not certain whether or not UDP even has a checksum. If it does, perhaps its checksum function isn't typically offloaded to the NIC whereas TCP's is. Even assuming that's the case, it's still hard for me to see that TCP is faster. And on a LAN, where you have very high transmission speeds and a very low probability of errors/collisions, UDP really seems like it would be the shining winner. If I get some time I'll do some speed tests. I have quite a few SSDs around and a very large RAID-Z2 array, and the whole network is gigabit, so I should be able to get a rough comparison.
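If I do get to it, a quick raw TCP-vs-UDP comparison with iperf would look something like this (hostnames are placeholders):

    # on the receiver
    iperf -s            # TCP server
    iperf -s -u         # UDP server (run separately)
    # on the sender
    iperf -c nas                # TCP throughput test
    iperf -c nas -u -b 1000M    # UDP test at roughly gigabit offered load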
  19. Another wishlist item would be for btsync to support sparse files. All modern filesystems implement sparse files, although it usually has to be flagged/enabled by the application doing the writing, and isn't by default (for reasons of fragmentation). However, lots of us are now running large arrays and/or SSDs, and fragmentation becomes less of an issue. Say I want to sync a VMware VMDK (virtual disk) that's created as a sparse 100 GB file but only has 3 GB currently allocated. I would want btsync to transfer only the 3 GB, and keep the file sparse on all other sync partners. TrueCrypt is another application that can use large, sparse files (albeit with slightly less security). I think sparse files are also a way to maintain a large file such as a VMDK or TrueCrypt volume so that btsync only syncs the changed 4 MB blocks, because the file size never appears to change even though the real data on the disk is much smaller. You can think of the filesystem as being "oversubscribed." Perhaps btsync already supports sparse files; I haven't checked. But I doubt it, since few people but me seem to think about sparseness.

EDIT: btsync apparently does not support sparseness presently, as @thunder inadvertently pointed out in an earlier post. He suggested adding gzip/compression to the transfers so an empty 10 MB file wouldn't transfer all 10 MB. I agree that some very fast, lightweight compression (lzjb would be great!) would help, but in this particular case, making btsync aware of sparseness solves the problem.
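For anyone who hasn't played with sparse files, here's the quick illustration on Linux (filenames are just examples):

    truncate -s 100G disk.img      # 100 GB apparent size, ~0 bytes actually allocated
    ls -lh disk.img                # shows the 100G apparent size
    du -h disk.img                 # shows the tiny real allocation
    dd if=/dev/urandom of=disk.img bs=1M count=3072 conv=notrunc   # write 3 GB of real data at the front
    du -h disk.img                 # now ~3 GB allocated; the rest of the file is still holes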
  20. The current version hashes 4 MB blocks of the file. If the file length doesn't change, only the changed 4 MB blocks are updated. If the file length changes, the entire file is resync'd. Search these forums. There are plenty more details about that available.
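Conceptually (just an illustration of the idea, not how btsync actually implements it), the per-block hashing amounts to something like:

    # split a file into 4 MB pieces and hash each one; only pieces whose
    # hash changed since the last index would need to be re-sent
    split -b 4M bigfile.bin /tmp/chunk_
    sha1sum /tmp/chunk_*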
  21. This is likely a problem for TrueCrypt, but not for encfs, since the encryption is maintained file-by-file rather than over an entire volume. But yes, it's a bother. I have my whole lappy encrypted with TrueCrypt, but I back up to my own secure NAS. Hey, options are good, though! I've played with encfs quite a bit.
  22. encfs allows you to easily transfer the already-encrypted files. When you look at the unmounted encfs store, it's just a bunch of oddly named files (both the filenames and the content are encrypted). So you set your encfs store to be the shared folder in btsync, you work on your files in the mounted (decrypted) encfs mountpoint, and every time you update a file, its encrypted counterpart is sync'd. With TrueCrypt, I think it would still work just fine, using the container locally on your computer. You can give the TrueCrypt container a fixed apparent size while keeping it sparse on disk (sparse files for the win! That's the "dynamic" option in TrueCrypt for Windows), so only the 4 MB chunks that change within the container are resync'd by btsync. Just don't defragment inside your TrueCrypt mountpoint, haha!
  23. Oh, c'mon. "Impossible"? The concept is actually pretty simple; the implementation is the part that would require some effort. Perhaps you misunderstood what I meant by a mapping. btsync already has two special items that are not sync'd between machines: .SyncTrash and .SyncIgnore. Let's just add another one, say .SyncMap. Here's how I envision it working. Say Machine1 is UNIX and Machine2 is Windows:

M1 wants to sync file:1 to M2. M2 responds that it will map the file as file_1 on its end. Both M1 and M2 write the mapping into their .SyncMap file. I envision the file would look something like this:

on Machine1: file:1 | M2 | file_1 | sha1
on Machine2: file_1 | M1 | file:1 | sha1

(I'll get to the sha1 later.) The problem is that .SyncMap as a text file would need to be very specially crafted in order to honor all valid UNIX characters and still maintain the mapping fields, or the field delimiters would have to be a binary, non-printable character that's invalid on all systems. Either way, this file can easily be corrupted accidentally by a user. So let's say M1's .SyncMap becomes corrupted:

M1 deletes the corrupted file and now doesn't know it has a pairing with all of those differently-named files on M2.
M2 wants to transfer all of the renamed files back to M1.
M1 now ends up with two closely named copies of every file.
And what happens when M1 wants to transfer the invalidly named files back to M2?

These problems can be handled by two methods. First, the sha1 digest allows both machines to know whether a file contains the exact same data, regardless of its filename. So when M2 wants to start transferring back to M1, or when M1 wants to start transferring to M2, the two machines can work out that the files are already present on both systems and simply reuse the mapping that already exists on M2. Or second, instead of each system maintaining its own unique .SyncMap, the mapping file itself can be synchronized across all systems and look something like this:

M1 | file:1 | M2 | file_1 | sha1

Now if one .SyncMap is corrupted, it can be resync'd from another machine with ease.

Great, we've come up with an elegant way to maintain a .SyncMap file across all systems that survives corruption. But let's also address the corruption problem at its root, and allow for simultaneous, asynchronous changes to the .SyncMap, so that if, for example, a third and fourth machine are separated from M1 and M2, both pairs (M1 & M2, M3 & M4) can still make changes to the .SyncMap without involving the others. Instead of maintaining .SyncMap as a text file, make it a sqlite file. This accomplishes quite a few things:

It keeps users from corrupting the file as often with a simple text editor.
It allows all legal filename characters to be handled easily, without trying to make them fit nicely into a flat text file.
It allows serialization of the records, so that asynchronous changes made between machines in the swarm can later be merged into a coherent database.

Using sqlite is easier than you may think. It's FOSS and libraries are available for everything, so it would be very easy to add into btsync. Working out the change in the btsync protocol is the tough part that would require significant effort.

Now let's think a little about the "hardest" scenario I mentioned in my previous post, using the same scenario again: M1 wants to sync file:1 to M2, but M2 already has a different file_1, so it maps the file as file_1[1].
But file_1[1] already exists too, so map the file as file_1[2]... you see where I'm going here. That's a race condition; however unlikely it is to happen, it still could. So btsync must have a counter that stops after so many iterations. Five is probably plenty, but let's make it 100 just to be safe, because why not? As long as there's a limit where it finally stops trying to sync that particular file. btsync should then throw a warning somewhere the user is likely to see when using the application, for instance with a "!" icon in either the "Devices" or "Shared Folders" tab. Clicking the icon would take you to a filtered list in the "History" tab showing the problematic files.

The whole situation is solvable, and it's really not that hard, but it would more than likely require another protocol change. In the meantime, I would propose an update to btsync that simply skips files that are illegally named for the destination machine, using the "!" icon I mentioned above. Hopefully the btsync protocol already lets btsync know the platform types of the other machines in its swarm, so it could intelligently and actively not attempt to sync UNIX files whose names would be illegal on the destination machine.
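And just to show how little there is to the sqlite idea, the .SyncMap schema could be as simple as this (table and column names are purely my invention, obviously not anything btsync actually has):

    # one command to create the mapping DB (sqlite3 is the stock command-line client)
    sqlite3 .SyncMap 'CREATE TABLE IF NOT EXISTS map (
        src_peer TEXT NOT NULL,  -- machine that owns the original filename
        src_name TEXT NOT NULL,  -- original name (e.g. the UNIX file:1)
        dst_peer TEXT NOT NULL,  -- machine that had to rename it
        dst_name TEXT NOT NULL,  -- legal name on that machine (e.g. file_1)
        sha1     TEXT NOT NULL,  -- content digest so peers can re-match files after corruption
        serial   INTEGER         -- record serial for merging asynchronous changes later
    );'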
  24. We all know there are many characters that are illegal in Windows filenames. The point is, however rarely this happens, there can be conflicts between legal characters across the operating systems BTSync is built for and used on, and these cases need to be handled. Here are the possibilities I can think of:

Easiest: if there's a conflict, simply throw a warning and don't sync (or delete!) those files.
Harder: maintain a compatibility mapping table on each system that maps illegal characters from one system to legal characters on another.
Hardest: dealing with actual filename conflicts when a compatibility mapping (name change) made by the "Harder" method collides with an existing name. This could easily lead to a race condition.