Xanza

Members
  • Posts: 102
  • Joined
  • Last visited
  • Days Won: 1

Posts posted by Xanza

  1. This feature is not implemented yet. Honestly speaking, I don't know how to implement it. Keep in mind that Sync is cross-platform, so if you have a file that was created by jon and synchronize it with Windows, how should we preserve the user name?

    You would have to detect the operating system of the client, and if the file is being transferred from Windows to (l/u)nix, you could attempt to save it as root, but that would probably require that the BTSync daemon be spawned by root.
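    To rough out that constraint, here's a hypothetical helper (not BTSync code) showing a POSIX-only ownership assignment that can only succeed when the process itself runs as root:

    ```python
    import os

    def try_preserve_owner(path, uid=0, gid=0):
        """Attempt to assign ownership of a synced file on a POSIX system.

        uid/gid default to root (0). chown to an arbitrary user only
        succeeds when the daemon itself was spawned by root.
        Returns True on success, False when ownership can't be applied.
        """
        if not hasattr(os, "chown"):      # Windows: no POSIX ownership model
            return False
        if os.geteuid() != 0:             # not root: chown would raise EPERM
            return False
        os.chown(path, uid, gid)
        return True
    ```

    On Windows the function simply reports that ownership can't be carried over, which is exactly the cross-platform gap the quoted reply describes.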

  2. Because you're routing the traffic through a VPN, it's possible that the firewall on the VPS is blocking the ports necessary for communication. You shouldn't immediately blame the service before you know what the issue is... Remove the VPS and test it on your open, un-proxied network first; then draw conclusions about the issue if it still doesn't work.

    Also, you should NEVER assume that you know what an issue is just because of your profession. That's ignorant, and it makes you look ignorant. Besides, you repair PCs; you're not a network administrator.

  3. I think the memory on your RPi is running low. I read somewhere that BTSync uses a lot of memory.

    Have you checked that?

    Yes, BTSync consistently uses anywhere between 10-70% of the Pi's total CPU; however, it never really reaches 100%, so I don't believe that's the actual issue. (I've monitored it with htop during transfers.)

  4. I'm gonna post this in hopes that anyone who wants to put Sync on a Raspberry Pi might see it:

    Thanks! I'll give it a shot and let you know -- everything seems to be working and I'm not receiving any errors, but a simple symlink won't hurt.

    EDIT:

    No, the symlink does not work, as the resource is already located in the /lib/ directory.

    EDIT:

    Tried running btsync as root -- same issues. Any transfer that takes longer than 10-12 seconds drops down to near completely stopped, or completely stopped.

    SEMI-FIXED:

    Part of this issue was caused by the default Raspberry Pi settings. By default, the root partition is only 1.8GB, which I filled up VERY quickly. After attempting to transfer a larger (700MB) file, the transfer peaks at 2.8 Mbps (pretty much my saturation cap), then drops all the way down to 20 Kbps, then to 12 Kbps, and finally to 0 Kbps.

    Read More: http://www.ardupi.com/2013/01/raspberry-pi-raspbian-no-space-left-on.html

    All in all, the transfer speeds for smaller files are fine now, but again, anything that takes more than 10-15 seconds to transfer basically drops to a speed where it almost stops.

    Any suggestions?

  5. Two questions:

    1. Why continually send data that hasn't changed? That's inefficient - you should only send the directory tree data when something has changed, and only the part that has changed.

    2. Why does it still multicast just as much data even when no other peers have been detected?

    Sorry guys, but anything that uses that much bandwidth constantly AT IDLE on my network isn't fit for purpose.

    To expand upon that idea, why not create another file, say .SyncSums, that stores a single MD5 (or SHA-1, whatever) hash computed over all files in the directory tree that are set to sync. Then simply compare that one hash between clients: if it differs, something has changed; if it matches, nothing has.

    That would probably work a bit better than polling every XX seconds with a real-time scan of whatever is in the directories, which will probably get VERY slow as you accumulate a ridiculous number of files (100,000+).

    Any thoughts?
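    A minimal sketch of the single-digest idea, assuming SHA-1 and a deterministic walk order (illustrative only -- not how BTSync actually detects changes):

    ```python
    import hashlib
    import os

    def tree_digest(root):
        """Compute one rolling SHA-1 digest over every file in a directory
        tree. Peers compare only this single hash: if it matches, nothing
        changed; if it differs, walk the tree to find what did."""
        h = hashlib.sha1()
        for dirpath, dirnames, filenames in os.walk(root):
            dirnames.sort()                      # deterministic traversal order
            for name in sorted(filenames):
                path = os.path.join(dirpath, name)
                # hash the relative path too, so renames and moves count
                h.update(os.path.relpath(path, root).encode())
                with open(path, "rb") as fh:
                    for chunk in iter(lambda: fh.read(65536), b""):
                        h.update(chunk)
        return h.hexdigest()
    ```

    The trade-off is that computing the digest still reads every file, so in practice you'd cache per-file hashes and only rehash files whose size or mtime changed.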

  6. Also, check out this link. I had to symlink a lib to get it to run on my Pi. I was getting the same "No such file or directory".

    http://forum.bittorr...pbmc#entry28830

    Thanks! I'll give it a shot and let you know -- everything seems to be working and I'm not receiving any errors, but a simple symlink won't hurt.

    EDIT:

    pi@raspberrypi /lib/arm-linux-gnueabihf $ sudo ln -s /lib/arm-linux-gnueabihf/ld-linux.so.3 /lib/ld-linux.so.3

    ln: failed to create symbolic link `/lib/ld-linux.so.3': File exists

    I guess not. :(

    EDIT:

    Tried running btsync as root -- same issues. Any transfer that takes longer than 10-12 seconds drops down to near completely stopped, or completely stopped.

    SEMI-FIXED:

    This issue was caused by the default Raspberry Pi settings. By default, the root partition is only 1.8GB, which I filled up VERY quickly. After attempting to transfer a larger (700MB) file, the transfer peaks at 2.8 Mbps (pretty much my speed cap), then drops all the way down to 20 Kbps, then to 12 Kbps, and finally to 0 Kbps.

    Before:

    $ df -h

    Filesystem Size Used Avail Use% Mounted on

    rootfs 1.8G 1.7G 0 100% /

    /dev/root 1.8G 1.7G 0 100% /

    devtmpfs 93M 0 93M 0% /dev

    tmpfs 19M 220K 19M 2% /run

    tmpfs 5.0M 0 5.0M 0% /run/lock

    tmpfs 37M 0 37M 0% /run/shm

    /dev/mmcblk0p1 56M 17M 40M 30% /boot

    tmpfs 37M 0 37M 0% /tmp

    After:

    $ df -h

    Filesystem Size Used Avail Use% Mounted on

    rootfs 7.2G 1.7G 5.1G 25% /

    /dev/root 7.2G 1.7G 5.1G 25% /

    devtmpfs 93M 0 93M 0% /dev

    tmpfs 19M 224K 19M 2% /run

    tmpfs 5.0M 0 5.0M 0% /run/lock

    tmpfs 37M 0 37M 0% /run/shm

    /dev/mmcblk0p1 56M 17M 40M 30% /boot

    tmpfs 37M 0 37M 0% /tmp

    Read More: http://www.ardupi.com/2013/01/raspberry-pi-raspbian-no-space-left-on.html

    All in all, the transfer speeds for smaller files are fine now, but again, anything that takes more than 10-15 seconds to transfer basically drops to a speed where it almost stops.

    Any suggestions?

  7. Can I ask for instructions on how to get btsync working on a Raspberry Pi? I'm a newbie with Linux, but I managed to untar the package and everything. When I run ./btsync in the right directory, the only message displayed is: "-bash: /usr/sbin/btsync: No such file or directory" ... the file is there ... and I chmod'ed it to 0755 ... but nothing happens. Running on Raspbmc ... it would be wonderful if you could post some instructions ... thanks!

    Start over.

    sudo wget http://btsync.s3-website-us-east-1.amazonaws.com/btsync_arm.tar.gz ; sudo tar xvf *.gz ; rm -rf *.gz ; ./btsync --dump-sample-config > sync.conf

    Execute that whole command via SSH; it'll re-download the ARM version of btsync, unpack it, remove the tar.gz file, and write the sample config to sync.conf.

    After that, you should be able to set up the config the way that you want and then:

    sudo chmod u+x ./btsync; ./btsync --config sync.conf

    And it should work just fine; at least it did for me. You might have downloaded the wrong build of btsync (you need the ARM version).

  8. I've installed and configured BTSync on my brand-new Raspberry Pi and I have to say it's working great. However, I just tried to transfer a 115MB file into my backup storage (on my Pi; 8GB SD card). The transfer starts at about 2.2-2.8 Mbps for about 10 seconds, then DROPS down to 0 Kbps, then jumps between 30 Kbps and 0 Kbps until the transfer is complete (about an hour). As you can probably tell, this is pretty crazy and not efficient at all.

    Upon inspecting the BitTorrent Sync application, under the "Transfers" tab, I can see my file "flashing" within the window: when the file appears, it starts to transfer, then is almost immediately removed and the transfer drops to 0 Kbps.

    Anyone suffering the same symptoms?

  9. In my experience, BTSync will give you the 0.0 kB/s transfer status for files that are locked by the operating system or an application. Once the file locks are resolved, they will transfer.

    I've experienced quite the opposite -- I've been able to transfer files that are locked by the operating system.

  10. I'm going to assume that you're using WAN as a colloquialism for wireless network instead of its correct meaning, wide area network -- assuming that, ensure that you have NAT turned on and correctly configured with Universal Plug and Play (UPnP).

    I have Sync installed on a Raspberry Pi on my local network over Cat5e, and I've been able to transfer 8GB of data (limited by the size of the SD card) in less than 2 minutes.

  11. I've been playing around with .SyncIgnore and have noticed the following things:

    • If you specify a wildcard, e.g., file*, everything matching that--including in all subdirectories--is ignored.
    • If you add a file/dir to .SyncIgnore after it has already sync'd, it will be removed from other peers (actually just placed in their .SyncTrash directories).
    • I have also tried a number of different permutations of a subdirectory in an attempt to get BTSync to ignore it, but it doesn't seem to work on directories--only on files (and even this seems hit-and-miss when adding file masks for files that have already sync'd)...
    • You may have to close the .SyncIgnore file when you're done editing it (not just save).

    Then, I exited BTSync on the host of the files (read-only secret, remember) and restarted BTSync ... all of a sudden it deleted on the peer all of the subdirectories I had specified in .SyncIgnore (well, moved them to .SyncTrash, of course). It also deleted a file mask I had specified after the file had already sync'd (although earlier in the test this worked without restarting BTSync).

    So I guess the lesson for now is to restart BTSync when you update .SyncIgnore. Restarting only the host with the changed .SyncIgnore appears to be enough (at least it was in my case; restarting the read-only peer didn't have any effect).

    To ignore a subdir I simply specified it like:

    subdir

    Or a file within:

    subdir/file

    Nothing special. I used "/" in the UNIX-compatible syntax even though it was on Windows. I tried relative and absolute paths (including drive letter) and it didn't work until restarting BTSync. And when I restarted BTSync and saw the changes take effect, I was only using relative paths.

    I also upgraded the host to build 125 and didn't see any difference in this behavior.

    I was using Win7 as both the source and the read-only peer for these tests.

    I observed changes tended to replicate within about 5 seconds on my LAN.

    Hope this helps a bit!

    Thanks, useful info right there.
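    Putting the quoted observations together, a .SyncIgnore on the sharing host might look like the sketch below (entries are illustrative; per the post above, paths are relative to the shared folder, use forward slashes even on Windows, and restart BTSync after editing):

    ```
    *.tmp
    file*
    subdir
    subdir/file
    ```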

  12. The ability to set capacity caps for a given folder via the API would, I'm sure, be a highly sought-after feature.

    Use Case: (hypothetical)

    I have a dedicated server which I use for backup storage -- a friend would like to use my online storage server as well. I agree, but I'd only like him/her to use a maximum of 1GB of storage space. Using the API, I should be able to set an (admin-defined) maximum storage value on any folder attached to his specific secret key.

    This is a highly useful feature when thought about at length.
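    Since the API in question doesn't exist, here's a hedged sketch of what enforcement might look like on the storage host; `folder_size` and `within_quota` are hypothetical names, not part of any real BTSync API:

    ```python
    import os

    def folder_size(path):
        """Total bytes used by all files under `path`."""
        total = 0
        for dirpath, _dirnames, filenames in os.walk(path):
            for name in filenames:
                total += os.path.getsize(os.path.join(dirpath, name))
        return total

    def within_quota(path, cap_bytes):
        """True while the folder stays under its admin-set cap.

        A daemon enforcing per-secret quotas could call this before
        accepting more data for the folder bound to that secret.
        """
        return folder_size(path) <= cap_bytes
    ```

    For the 1GB use case above, the host would check `within_quota(folder, 1 << 30)` before accepting each incoming piece.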

  13. The problem with secrets is that you can "crack" them without trying. It's perfectly possible for two users to end up with the same secret and thus see each other's files. Of course the chances are slim, but they do exist. Rather than creating stronger secrets where the collision chances are reduced, I would like to see a system with two secrets, both required to access files. The first secret is generated by BitTorrent and checked against a database to make sure it's unique. The other is user-generated and doesn't need to be unique. Same as a username and password, except the username is just a random string.

    Not only is this information wrong, it's kinda silly to say that you can 'crack' these secret codes -- and storing said keys in a single location would hardly be the more secure option. If you had said 'guess' these secret codes, then yes, I'd agree with you. But considering we don't yet know how the secrets are generated, that's an improper assumption.

    Not to mention, if the system uses something as ubiquitous as a Unix timestamp, for example, combined with other box-specific data, it would be nigh impossible to decipher the secret to begin with.

    We are going to change the way the Secret works in the next build. We will:

    - remove the restriction on length;

    - use base64, so all bits are used;

    - introduce one-time passwords to safely pass the secret over insecure media.

    I genuinely support the idea of a private and a public (or secret) key, or if you fancy the wording better, "two secret keys." Base the key generation on two separate algorithms and on miscellaneous or garbage entropy, and it would make even a single key very difficult to replicate. It would also ensure that someone can't simply generate a gargantuan list of candidate secrets and test all of them for valid keys, thereby stealing everyone's files.
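    As an illustration of the base64 direction mentioned in the quoted reply, here's a sketch of high-entropy secret generation; the function names and sizes are assumptions, not BTSync's actual scheme:

    ```python
    import base64
    import secrets

    def generate_secret(n_bytes=20):
        """Generate one high-entropy secret as URL-safe base64.

        20 random bytes = 160 bits of entropy, so two users colliding by
        accident is astronomically unlikely -- which is why strong random
        generation beats checking keys against a central uniqueness database.
        """
        raw = secrets.token_bytes(n_bytes)
        return base64.urlsafe_b64encode(raw).decode().rstrip("=")

    def generate_key_pair():
        """Sketch of the two-secret idea: a machine-generated share key
        plus a separately held access key; both would be required to
        read the files."""
        return generate_secret(), generate_secret()
    ```

    Note that `secrets` draws from the OS CSPRNG, so no box-specific "garbage" input is needed; mixing in predictable data like timestamps would reduce, not add, security.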