Everything posted by splat

  1. You're absolutely right that using tr to drop characters not in the base64 alphabet results in a secure random key (eventually). Perhaps I jumped to the wrong conclusion, but I read "alphanumeric" as [a-zA-Z0-9] and not [a-zA-Z0-9+/] (the latter being the base64 alphabet). In either case, I was referring to the distribution of the encoded bits, not the distribution of the bits in the encoding. The bit-per-bit premium we pay in base64 introduces redundancy (reduces entropy) to facilitate transmission, but the whole purpose of encoding is that the encoded bits retain maximum entropy. If the generated key is considered base64 encoded but can never contain a + or / character, I have a slightly better than 50% chance of guessing the next bit in the encoded stream. Not astronomically unlucky -- the stars exist, whereas the chance a collision will happen is so small it's reasonable to say it's impossible. Note that almost all of the web's infrastructure is guarded with public-key cryptography, which, in your analogy, is also an address (in an unimaginably vast space) that lacks a front door. That said, a whitelist like #1 is reasonable and currently achievable (disable the relay server, tracker, and LAN search and use only predefined hosts and I think you're there). As for #2, I believe penalizing incorrect secret attempts has been suggested and accepted before, but I'd posit that it's very hard to maintain an effective blacklist (IPs are not so much identity as an accessory) and the security of a second secret is marginal (especially vs. just including those bits in your original secret).
  2. Your modified pseudo-random stream does not pass the next-bit test; knowing (or guessing) your enforced distribution gives me a much better than 50% chance to guess the next bit in the stream. Again, the correct answer is getting Sync to generate a secure 32-byte secret for you, but if you must do it yourself I'd recommend piping the /dev/random output through a utility like "base64" to get a clipboard-capable string: </dev/random head -c 32 | base64
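For anyone who'd rather not touch /dev/random at all, here's a minimal sketch of the same idea in Python's standard library (the variable names are my own, not anything Sync uses):

```python
import base64
import secrets

# Pull 32 bytes (256 bits) from the OS CSPRNG -- the same entropy source
# /dev/urandom exposes -- then base64-encode for a clipboard-friendly string.
raw = secrets.token_bytes(32)
encoded = base64.b64encode(raw).decode("ascii")

print(encoded)   # 44 base64 characters encoding 256 bits of entropy
```

Unlike /dev/random, the secrets module never blocks waiting on an entropy pool; it always draws from the kernel's CSPRNG.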
  3. I think what you've got there is a great argument against the current sorry state of web authentication more than anything else. It's not so much that it isn't meant to be secure, it's that it'll give you insecure randomness when it runs out of the good stuff. /dev/random will block until its entropy pool is sufficient, which can happen arbitrarily and for arbitrarily long periods. So it's secure, except when it's not, and it won't tell you when it flips between the two modes. The tr bit is slightly more worrying to me -- unless it was an injective mapping (e.g. switching the encoding alphabet, or replacing all 8s with 7s and all 7s with 8s), you've done some damage to your possibly pristine / possibly already broken randomness.
  4. Fear not, your request has been made before -- there's a detailed publication in the works on the protocol and the various security bits and bobs. As for the read-only / time-limited keys, what we know is that the generating Sync node stores them and matches incoming requests against its per-folder set of secrets. If it's a time limited secret, the generating node forgets it after 24 hours. If it's a read-only secret, presumably the master node knows to drop writes on the floor. As for how the transmission is protected, data is AES encrypted (in some mode, with some key presumably derived from the secret somehow), but the authentication handshake is still a mystery. For what it's worth, I can't seem to sniff my key out of my local traffic (caveat emptor: I don't know which base32 alphabet is being used, the one I picked is probably wrong) so it seems to be obfuscated in some way. Whether that obfuscation is cryptographically secure or merely ROT13, I can't comment.
  5. You're confusing the encoding with the secret length. It's a random 21 bytes that have been "base32" encoded (there's a lot of base32 alphabets, and I don't know which one was used, hence the quotes). 21 bytes is 168 bits, which is the key strength of Triple DES. Personally, I'd like the default strength to increase to 32 bytes of entropy (256 bits), but that's not to say the current length of 168 bits is anywhere near insufficient. If we increase the minimum secret size by enough to accommodate a digest of the folder's name, we'd be strictly better off using those same bits for purely random entropy. If you have a 5-character folder name, that's approximately 29.7 bits of entropy presuming a purely random choice (in reality, there will be far less). Contrast that with the 40 bits of entropy using those same 5 bytes to store a (secure pseudo-) random number. Which would you prefer? Remember, this is a log scale, so having 1/3 more bits makes you far more than 1/3 more secure. The danger with public key cryptography is that it's possible, though infeasible, to reconstruct the private key given the public key. The definition of infeasible is a moving target, however, hence the need to have more bits and up the ante with some regularity. Symmetric keys can be much shorter, and in fact triple-DES uses exactly a 21-byte key (the BT Sync secret length). I disagree entirely with that statement. Security is far better when automated -- could you execute AES256 properly by hand? Moreover, could you come up with a secure cipher by yourself? Even if you could, how many side-channel attacks would it be vulnerable to? Coming up with secure entropy is in the same general ballpark of difficulty, and I have every expectation of myself and others to muck it up (see also: passwords that are children's or pets' names, or common words, or even 'password'). Taking away control from the users does dramatically increase security, but decreases the feeling of it.
That's not a small hurdle (largely, I'm asking for the "secret generation" page to train users about probability), but giving users more and more ways to shoot themselves in the foot is solving the wrong problem. That would be a concern, as sha2 is not designed for secure storage (it's designed to be extremely fast, for message authentication). The attackers who compromised the tracker's state would have a monumental task, but the ridiculous rates at which they can compute sha2 hashes on a GPU makes the game less in our favor. If the tracker used something like bcrypt or a fast digest in a PBKDF2-like mode, that would be far less of a concern.
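The arithmetic behind those entropy numbers is easy to check; a quick sketch, assuming (generously) that folder names are drawn uniformly from the 62 alphanumerics:

```python
import math

# Best case: a 5-character name chosen uniformly at random from [a-zA-Z0-9].
name_entropy = 5 * math.log2(62)   # ~29.77 bits; real names carry far less

# The same 5 bytes filled by a secure random generator.
random_entropy = 5 * 8             # 40 bits: a full bit per stored bit

print(name_entropy, random_entropy)
```

Because the scale is logarithmic, those extra ~10 bits make the random choice roughly a thousand times (2^10) harder to guess, not a third harder.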
  6. Wow, thanks for the speedy response! I'd strongly encourage you to use bcrypt(secret) rather than SHA256(secret), as the former is designed exactly for this scenario and the latter was designed for a different purpose (message authentication). Also, side-channel attacks being what they are, is it OK for me to read "expose all security details" as "release the code"?
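To make the fast-vs-slow distinction concrete, here's a sketch using PBKDF2 from Python's standard library as a stand-in for bcrypt (the secret, salt, and iteration count below are illustrative, not anything Sync actually uses):

```python
import hashlib

secret = b"example-sync-secret"   # hypothetical secret
salt = b"per-secret-salt"         # hypothetical; must be unique per entry

# One SHA-256 invocation: designed to be as fast as possible.
fast = hashlib.sha256(secret).hexdigest()

# PBKDF2 iterates the hash many times, so each guess costs the attacker
# real time -- the property a tracker storing secret digests wants.
slow = hashlib.pbkdf2_hmac("sha256", secret, salt, iterations=600_000)

print(fast, slow.hex())
```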
  7. As a general rule, users cannot be trusted to compute securely random strings of any length. Plus, as PeterVerhees points out, having a secret longer than 32 bytes doesn't help much -- and 32 bytes of entropy is plenty. Allowing users to pick their own keys (and lengths) merely adds to the feeling of security while reducing the reality of it. That's what Bruce Schneier calls "security theater." I'd much prefer fixing Sync secrets to 32 bytes, preventing users from generating their own, and publishing the details (or the code) of the internal generation process for review.
  8. Paul, that doesn't prevent (accidental) collisions, it just means that two users have to additionally collide on name and date/time. The entropy of that information is actually relatively low per bit (<1, since you and I will end up with the same 5-byte computer name far more frequently than we'll generate the same 5-byte random string). We'd stave off more collisions by taking the same number of bits used to prefix with that low-entropy information and replacing it with high-quality random bits, buying us a full bit of entropy per bit of storage (assuming a secure generator with a high-quality source). It certainly doesn't help prevent attacks, either, since computer name and date/time are also extremely predictable. Again, if we're allocating that much more keyspace, I'd rather use it for purely random bits.
  9. rdebath: Excellent analysis. As I understand it, the birthday paradox is the fact that collisions will happen in a uniformly random distribution more quickly than the (naïve interpretation of) the number of permutations would suggest. This becomes a problem when the width of the distribution is used in lieu of the actual collision probability when proving security. That said, with the current 21-byte width of secret keys, an accidental collision is all but impossible. Assuming new keys are created at a rate of 100,000 a second (meaning every living human is adding a new sync folder daily), there'll be roughly 450 universe ages before we have to worry. So the question becomes: can an attacker beat 100,000 Hz, keeping in mind that the birthday problem only applies to finding any collision (so the attacker gains access to some network) and not finding a specific collision (gaining access to a specific network), which still requires brute forcing the entire keyspace. I think the distributed nature of the network helps quite a bit, here. No global view of every existing secret exists (save, perhaps, the set of trackers, but I'll come back to that in a moment), so in order to check my guesses against the previously generated secrets I have to physically transmit them all over the earth. That introduces a lovely bandwidth-delay product limit on how quickly I can guess and check; some back of the envelope calculation suggests that I'll need a few orders of magnitude more capacity than exists in the internet (but don't take my word for it -- I hope that someone else will do the math and end up with a similar result). So, my only hope is to localize the guess & check as much as possible. Depending on what kind of state the trackers keep as they mediate connections, this may be relatively easy -- if I can break into a tracker and steal its state, then I have at least a partial view into the network-space and an easy local guess & check algorithm. 
My understanding is that the trackers keep sha1 hashes of the secrets, which is unfortunate. If I can compromise their data structures, I can use my GPU to check billions of guesses per second and the 2^84 hashes I'll need to compute on average to find a collision doesn't seem quite so large. It's still significant, presuming the keys are generated by a secure pseudorandom number generator, but Moore's law stops being our friend. Happily, this is a solved problem; the tracker should just use bcrypt, which effectively introduces the per-guess latency locally that the internet gives us for free. Or, using 32 bytes of entropy instead of 21 means, even in expected random guesses until collision terms, there's a roughly 2^-128 probability of a collision. This, I think, is the most important point of all. Kos et al, are you planning on publishing the specifics of the secret mechanism (or, better yet, the code)? I'd also be interested in the details of how the inter-node communication encryption is handled, but that's a problem for another day.
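The "450 universe ages" figure above is easy to reproduce; a back-of-envelope sketch, using the same assumed generation rate as the post and ~13.8 billion years for the age of the universe:

```python
KEY_BITS = 21 * 8                 # 168-bit secrets
RATE = 100_000                    # assumed new secrets per second, worldwide
UNIVERSE_AGE_S = 4.35e17          # ~13.8 billion years, in seconds

# Birthday bound: expect the first collision after roughly sqrt(2^168)
# = 2^84 randomly generated keys.
keys_until_collision = 2 ** (KEY_BITS // 2)
seconds = keys_until_collision / RATE

print(seconds / UNIVERSE_AGE_S)   # roughly 450 universe ages
```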
  10. Not necessarily; the key should indeed be fully random, but the encoding doesn't need to be. For example, the base32 alphabet described in rfc4648 omits 0, 1, and 8 due to their similarity with O, L, and B respectively. It is a painful breaking change to switch the secret encoding alphabet, but hey, pre-alpha software right?
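A quick sketch of that alphabet in practice, using Python's RFC 4648 base32 implementation (the 21-byte length mirrors the Sync secret, but the key itself is just random):

```python
import base64
import secrets

# The RFC 4648 base32 alphabet is A-Z plus the digits 2-7; 0, 1, and 8
# never appear, precisely because they're confusable with letters.
key = secrets.token_bytes(21)
encoded = base64.b32encode(key).decode("ascii").rstrip("=")

print(encoded)   # 34 characters, none of them 0, 1, or 8
```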
  11. It's an either-or: if you use the config.json to specify syncing, that disables the web interface. See also:

/*
!!! if you set shared folders in config file WebUI will be DISABLED !!!
shared directories specified in config file override the folders previously added from WebUI.
*/
  12. I'll second this idea. Just because I trust a machine to have some level of availability should not imply that I trust whoever owns that machine with my data.
  13. You'll need to specify which folders you're trying to share. See this block:

/*
, "shared_folders" :
[
  {
    // use --generate-secret in command line to create new secret
    "secret" : "MY_SECRET_1", // * required field
    "dir" : "/home/user/bittorrent/sync_test", // * required field
    // use relay server when direct connection fails
    "use_relay_server" : true,
    "use_tracker" : true,
    "use_dht" : false,
    "search_lan" : true,
    // enable sync trash to store files deleted on remote devices
    "use_sync_trash" : true,
    // specify hosts to attempt connection without additional search
    "known_hosts" :
    [ "", "myhost.com:6881" ]
  }
]
*/

You'll need to remove the block comment (/* and */ on the first and last lines) and configure the "secret", "dir", and "known_hosts" fields appropriately.