If You Have Sync Issues



I run BTSync on three Linux machines, and two of them (both 32-bit) are experiencing segfaults.

I did not have any issues on 1.1.15, but 1.1.22 tends to crash on me after a short period of time:

The two machines where frequent segfaults are observed:

Linux 3.8.0-19-generic #30-Ubuntu SMP Wed May 1 16:36:13 UTC 2013 i686 athlon i686 GNU/Linux

Linux 3.8.0-19-generic #30-Ubuntu SMP Wed May 1 16:36:13 UTC 2013 i686 i686 i686 GNU/Linux

The log doesn't show anything suspicious...


After some very extensive tinkering, I believe this might be an MTU problem. I managed to find out that the problematic router (B) connects to the internet via bridging with an MTU of 1300. Why they have set this weird value, I have no idea.

This idea is supported by the fact that syncing very small files, or very few files, works: if I just add one or two small files to a synced folder, syncing works. Adding a big file on the non-problematic network is propagated to the problematic network, but the transfer is very, very slow (around 0.1 KB/s). Adding many files (say, around 5-6) is not propagated to the problematic network, and every subsequent change in the filesystem of either computer goes unnoticed by the other one. This is really weird stuff.
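As a back-of-the-envelope sketch (not anything BTSync actually does): with standard IPv4 and UDP header sizes, a lower MTU means every large datagram splinters into more fragments, and losing any single fragment loses the whole datagram. The helper below is hypothetical, just to illustrate the arithmetic; the 1300-byte value is the MTU reported above.

```python
import math

def ipv4_fragments(udp_payload: int, mtu: int) -> int:
    """How many IPv4 fragments one UDP datagram needs for a given link MTU."""
    datagram = udp_payload + 8          # UDP header is 8 bytes
    per_frag = ((mtu - 20) // 8) * 8    # IP header is 20 bytes; fragment
                                        # offsets count in 8-byte units
    return math.ceil(datagram / per_frag)

print(ipv4_fragments(16000, 1500))  # 11 fragments on a normal Ethernet path
print(ipv4_fragments(16000, 1300))  # 13 fragments behind the MTU-1300 link
```

More fragments per datagram means more chances for one to be dropped or mishandled, which would fit the "big files crawl, small files work" symptom.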

So I'm going to try to get hold of a technician at my ISP and explain the problem to them to get them to change their MTU, but you can imagine the odds of success.

Meanwhile, I'm wondering whether this isn't *also* a BTSync problem, i.e. how BTSync handles incoming packets when they are fragmented due to weird MTUs. No other application has this issue.


Fragmentation for TCP is handled at the TCP level in the operating system, not at the application level.

However, BTSync uses UDP, and I have no idea how UDP fragmentation works. UDP is typically used for small packets (think VoIP or sometimes video streams), so fragmentation is not usually an issue. When it is used for file transfers, however, I would expect large packets.

There are some utilities that will allow you to perform UDP-based pings where you can manipulate the packet size. You might want to try something like that to see if your operating system will automatically fragment UDP packets (I don't even know if this is possible or how it works; I would have to read up on UDP specifics but I don't have time at present).
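For anyone who wants to run the experiment described above, here is a minimal sketch (my own, not an official utility) of a UDP "ping" with an adjustable payload size. The Don't-Fragment socket option is Linux-specific, hence the guard; the host and port are placeholders.

```python
import socket

def udp_probe(host: str, port: int, size: int, df: bool = True) -> bool:
    """Send one UDP datagram with `size` payload bytes.

    Returns True if the kernel accepted the send, False if it refused
    (e.g. EMSGSIZE when the datagram cannot go out unfragmented).
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        if df and hasattr(socket, "IP_MTU_DISCOVER"):
            # Linux only: forbid local fragmentation, so oversized
            # datagrams fail immediately instead of being split up.
            s.setsockopt(socket.IPPROTO_IP, socket.IP_MTU_DISCOVER,
                         socket.IP_PMTUDISC_DO)
        s.sendto(b"\x00" * size, (host, port))
        return True
    except OSError:
        return False
    finally:
        s.close()
```

Probing a remote host with increasing sizes (say 1272, then 1472) should show where sends start failing if the first hop enforces a small MTU.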


Making BTSync use TCP for LAN did not help; I believe this is because it still uses UDP for status messages (as seen in Wireshark).

So it would appear BTSync cannot handle UDP fragmentation well. I am in no way an expert, or even knowledgeable about networking specifics, so I have no idea what could be done here. It's not even clear to me at what level the fragmentation occurs or where one might have some influence over it. But this seems like a problem: there will probably be more people with little influence over their MTUs and none over the networking settings and hardware. Is there anything that can be done at the BTSync level here?


Sounds like implementing path MTU discovery may be needed then. I agree it may be needed, usually on satellite links and PPPoA DSL (PPPoE is probably more flexible).

Should be easy enough to implement but would require a protocol bump, unless it's handled independently by the clients before the transfer protocol takes over.


Not being familiar with the BitTorrent protocol per se, is there a compelling reason why UDP is used for non-data transmissions? I'd be very interested in whether this is a UDP-specific problem. Nonetheless, it would appear you are correct.

For the record, after some *more* tinkering, I found out my router connects via "routed bridge encapsulation" - the VDSL line here is part of a pilot project in a small village, so I assume the VDSL line is simply the link and everything else is handled by common ethernet standards, since the router IP is obtained via DHCP.


It happened with files I was editing in Netbeans.

I was often saving a second time before the first save had even synced, and maybe that confused the client somehow.

Or maybe Netbeans' code scanner (for auto-completion, etc.) locked the file and that caused some problems.

This only happened 3 times in a few hours so I don't know what really caused this.

What I do know is that I worked on projects before and earlier versions didn't have this problem, or I was very lucky.

Maybe there should be a limit on how often a file can be synced. Say, once every 2 minutes.

I'm guessing this would eliminate these kinds of conflicts and also reduce over-syncing.
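As far as I know BTSync has no such setting, but the rate limit suggested above would amount to a per-file debounce. A hypothetical sketch (the class name and the injectable clock are my own inventions, the latter just to make the logic testable):

```python
import time

class SyncDebouncer:
    """Allow a file to sync at most once per quiet window."""

    def __init__(self, window_seconds: float = 120.0, clock=time.monotonic):
        self.window = window_seconds
        self.clock = clock
        self._last = {}   # path -> time of the last allowed sync

    def should_sync(self, path: str) -> bool:
        now = self.clock()
        last = self._last.get(path)
        if last is not None and now - last < self.window:
            return False  # still inside the quiet window: skip this change
        self._last[path] = now
        return True
```

The trade-off is latency: rapid saves within the window would only be picked up by a later pass, which is exactly the "once in 2 minutes" behaviour proposed.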



I found log entries for all 3 occasions. Sorry, I didn't check before.

These are logs from a Linux peer, where I edited the files via a Samba share.

The .3 IP belongs to a Win8 client, where these files weren't even opened.

[20130530 23:42:07.401] Incoming connection from
[20130530 23:46:52.214] did not pick any blocks. blocking peer temporarily
[20130530 23:46:52.266] ImporterBase.cls.php: Piece 0 complete

[20130601 03:17:23.134] ReadFile error: CatLab.cls.php:0:22680:22680:3
[20130601 03:17:24.286] did not pick any blocks. blocking peer temporarily
[20130601 03:17:24.432] CatLab.cls.php: Piece 0 complete

[20130601 04:04:17.988] Incoming connection from
[20130601 04:07:28.557] did not pick any blocks. blocking peer temporarily
[20130601 04:07:28.610] CatLab.cls.php: Piece 0 complete

There are also hundreds of lines like these, but they are from way earlier, going back days:

[20130530 18:33:11.011] Failed read from file error = 9
[20130530 18:33:11.011] File changed during hashing


Can you please explain your configuration in more detail?

You use Netbeans on Linux, the files are shared using Samba and then synced to several Win 8 machines? And newer versions on Linux were overwritten with versions from the Win 8 machines?


I have a home server running Linux. All shares are added locally on Linux and also on two Win8 PCs. The laptop is usually off.

If I'm working on a web project, I open the files with Netbeans from Windows via a Samba share so I can instantly check it in the browser. (The web server is on the Linux peer.) So the Win8 client just receives files in the background, like a backup.

But somehow, 3 times, a file was synced back, overwriting a newer one on the Linux peer.

Just in case it's not clear: All shares are added to a local path and I only work on one peer's files at a time.

Everything is also available via Samba but not everything is synced with BTSync.


Here's the next one...

Logs on the Linux peer where the file was edited:

[20130602 02:17:30.273] ReadFile error: Gastro.cls.php:0:3134:3134:3
[20130602 02:17:31.238] did not pick any blocks. blocking peer temporarily
[20130602 02:17:31.419] Gastro.cls.php: Piece 0 complete

Logs on Win8 where the file wasn't even opened (the local version of it):

[2013-07-02 02:13:37] ReadFile error: Gastro.cls.php:0:3134:3134:3
[2013-07-02 02:26:31] ReadFile error: Gastro.cls.php:0:3146:3146:3

The Win8 client's clock was ahead by 2-3 seconds. The automatic NTP sync doesn't always work in Windows...

So I'm thinking that a time difference of a few seconds could have caused this if there's a read error. By the way, why are there random read errors?
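On the clock question: a few seconds of skew really can flip a naive "newest timestamp wins" comparison. A toy illustration (not BTSync's actual algorithm; the skew parameter is hypothetical):

```python
def newest_wins(local_mtime: float, remote_mtime: float,
                remote_clock_skew: float = 0.0) -> str:
    """Return 'remote' if the remote copy looks newer once skew is removed."""
    return "remote" if remote_mtime - remote_clock_skew > local_mtime else "local"

# Real edit order: the local file was saved at t=100; the remote copy is
# untouched since t=98, but the remote clock runs 3 s fast and stamped
# the old copy at 101.
print(newest_wins(100.0, 101.0))                        # 'remote' -- wrong winner
print(newest_wins(100.0, 101.0, remote_clock_skew=3.0)) # 'local'  -- correct
```

So a 2-3 second skew plus rapid saves is enough for the stale copy to "win" under a timestamp-only comparison.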

I force triggered an NTP sync. We'll see... But something is definitely broken.

Update: Now two files at once. Same read errors then an overwrite.

Am I seriously the only one experiencing this? :huh:


The same thing is happening with 1.1.26 and droid 1.1.7. I edited a file, this time on a Win8 client, and it was propagated to the other peers just fine. Then the droid client on my MyAudio sent out the previous version as if it were the newest...

I also have Sync running on an HTC Desire, and it didn't have this issue.

The same read errors are present in all log files. This is madness.


I'm having a similar problem to 'Lightning'.

Linux laptop with ext4 on LUKS, LVM.

NAS with samba.

Linux desktop with btrfs.

The NAS is always on, and my Linux boxes are only on one at a time.

Old files were synced back from my laptop to my NAS yesterday. I fixed it with a backup.

Today, syncing from the NAS to the desktop gives errors and the sync doesn't complete.

sync.log on desktop:

[20130604 15:40:34.430] ReadFile error: workingsets.xml:0:3941:3941:3
[20130604 15:40:34.449] ReadFile error: AndroidManifest.xml:0:973:973:3

[20130604 15:45:44.835] did not pick any blocks. blocking peer temporarily
[20130604 15:45:44.835] did not pick any blocks. blocking peer temporarily

btsync 1.1.26 on all machines. The issues started with this version.


At the moment I would recommend not using BTSync with live data that changes often; it's too easy for another BTSync peer to sync incorrectly and lose data (luckily, BTSync had moved the newer version into the trash). So I went back to Dropbox for that; I can't have my KeePass database getting overwritten like that.

The Android phone app is the one doing it, as I can see from the history that the phone app added the file and removed the newer one (which was luckily placed into the trash).

Static data files are fine with BTSync, as they don't change.

The main issue at the moment is that they rely on the date to check whether a file has been modified, which can be a big problem. So I would steer clear of BTSync for live data until they get it sorted out, or you might lose it. (It really needs to ask before it overwrites or deletes files, or let you set one BTSync client as the boss client, like Dropbox.)
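For comparison, the usual alternative to date-based checks is a logical version counter that never looks at the wall clock, so skewed clocks cannot promote a stale copy. A simplified sketch (my own illustration, not BTSync's internals; a real system would also need tie and conflict handling, e.g. keeping both copies):

```python
class Replica:
    """One peer's view of a single file, versioned by a logical counter."""

    def __init__(self):
        self.version = 0   # bumped on every local edit; never uses the clock
        self.content = ""

    def edit(self, text: str) -> None:
        self.version += 1
        self.content = text

    def receive(self, other_version: int, other_content: str) -> None:
        # Accept only strictly newer versions; a stale push from another
        # peer (lower or equal counter) is simply ignored.
        if other_version > self.version:
            self.version = other_version
            self.content = other_content
```

With this scheme, the phone coming back online with an old counter could never overwrite the desktop's newer edit, regardless of what either clock says.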


At the moment I would recommend not using BTSync with live data that changes often; it's too easy for another BTSync peer to sync incorrectly and lose data (luckily, BTSync had moved the newer version into the trash). So I went back to Dropbox for that; I can't have my KeePass database getting overwritten like that.

Can you please describe your environment in more detail? Starting with 1.1.27, Sync works _much_ better with dynamic data, and we are not aware of any issues with dynamic data in this build. Older builds had issues that are now fixed.


If I save/edit a file (in this case the KeePass database) on a desktop PC and then look at the device copy, it seems to override the desktop copy. It even says so in the history on the desktop: '<Phone> added file', overriding the newer version. Whereas desktop-to-desktop, when I edit a file it says '<computer name> updated file'.

(It did this twice: two 'added from phone' entries showed in the desktop history when it uploaded an old version from my phone while the desktop had the newer version.)

But sometimes it says 'updated from <phone>'; it's only when it says 'file added' on the desktop that it wipes out the newer version from the desktop.

It has happened at least twice. I noticed the first time I saved because KeePass complained about the file being different; the second time I forgot to make a backup before messing with my phone (as I guessed it was the phone that was overwriting with the older file).

It may have been because the clocks on the desktop and laptop were about 20 seconds behind.

I'm trying to make it do it again now, but it's not wanting to. (Well, it did, but not with the same result: the test file ended up in the SyncTrash folder; I'm guessing that one came from the mobile device.)


I just watched it do it now: it updated the file from my phone with one that is 15 minutes old. (The correct updated file is from 10:00; the old file on the phone, with a 9:45 timestamp, has overwritten the desktop copies, and the newer files are now in the trash on the 2 desktops.)



I'm having a problem with 1.1.27.

I have a rig of 4 PCs. The main one (A) is always on and runs Windows 7 Ultimate 64-bit; B also has Windows 7 Ultimate 64-bit; C has Windows 8 Pro 64-bit; and D is a laptop with Windows 7 Home Premium 64-bit.

I was using 1.1.15 without problems but decided to update to 1.1.25 one morning. I left all the computers indexing 3 different jobs for some time, but I had to go, so I turned everything off. When I came back in the evening I started everything again, but autoupdate bumped it to 1.1.27. So every computer auto-updated before everything was indexed with the previous version.

Now I have some discrepancies between different computers. I have 3 main jobs:

1.- Full sync, 17.3GB in 9765 files

2.- Read only to B and C and full sync to D, 34.9GB in 15027 files

3.- Read only to B and C and full sync to D, 215.5GB in 50985 files

After indexing finishes, some shares show that files are left to transfer on some computers, but no files are transferred; the history shows that some files are skipped because of a bad timestamp.

Job 1 is synced between A and B and between C and D, but the latter pair has fewer files synced than the former, and A and B show that they have files to send to C and D. Again, no files are being transferred.

I tried removing the folder from some computers and adding it again but, after indexing, the result was exactly the same (with the same number of files).

Any advice?


Edit: Well, I just removed the conflicting folders and files and left just the original ones. I added the folders again, they indexed, and everything synced correctly. I guess that double update isn't recommended.


There are still conflict issues, especially with android.

If files are changed while the Android client is offline, then when it comes back online it will overwrite the new versions.

This happens pretty much 100% of the time.

Maybe it reports outdated changes to the other peers before checking for remote changes?

The desktop version seems good-ish. Sometimes a few deleted git pack files get synced back, but no conflicts so far.


I have a problem with 1.1.27 as described here: http://forum.bittorrent.com/topic/21322-btsync-coredumps-on-debian-wheezy/ Quick recap:

It's a fit-PC running Debian; the problem is that btsync exits with "Illegal instruction" after about a minute. Here's some info from the box:

atte@vodskov:~$ cat /proc/cpuinfo
processor : 0
vendor_id : AuthenticAMD
cpu family : 5
model : 10
model name : Geode(TM) Integrated Processor by AMD PCS
stepping : 2
microcode : 0x8b
cpu MHz : 499.887
cache size : 128 KB
fdiv_bug : no
hlt_bug : no
f00f_bug : no
coma_bug : no
fpu : yes
fpu_exception : yes
cpuid level : 1
wp : yes
flags : fpu de pse tsc msr cx8 sep pge cmov clflush mmx mmxext 3dnowext 3dnow
bogomips : 999.77
clflush size : 32
cache_alignment : 32
address sizes : 32 bits physical, 32 bits virtual
power management:

atte@vodskov:~$ uname -a
Linux vodskov 3.2.0-4-486 #1 Debian 3.2.41-2+deb7u2 i586 GNU/Linux

atte@vodskov:~$ /lib/i386-linux-gnu/libc.so.6
GNU C Library (Debian EGLIBC 2.13-38) stable release version 2.13, by Roland McGrath et al.
Copyright (C) 2011 Free Software Foundation, Inc.
This is free software; see the source for copying conditions.
There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A
Compiled by GNU CC version 4.4.7.
Compiled on a Linux 3.2.35 system on 2012-12-30.
Available extensions:
crypt add-on version 2.1 by Michael Glad and others
GNU Libidn by Simon Josefsson
Native POSIX Threads Library by Ulrich Drepper et al
For bug reporting instructions, please see:

I've PM'ed kos13 a core dump.


I noticed the following problem using:

pc sync app 1.1.27

android sync app 1.1.26

I shared two folders on the PC; on Android I added both as read-only (full sync).

On the PC I deleted 3 folders; these changes were not reflected on Android when it synchronized. The 3 folders remained on the Android device.


I'm seeing strange behavior and don't know if this is an unusual use case:

I use 1.1.27 on a MacMini (OS X 10.8.4, 64-bit), a MacBook Pro (OS X 10.8.4, 64-bit), and my Drobo 5N (ARM). I have set up multiple folders, but at the moment only one folder has a sync problem. The MacMini and Drobo have r/w access to a sync folder and are synced completely (as stated in the GUI). The same folder should be synced (read-only) to the MacBook Pro. The problem: the sync stalls if I shut down the MacMini. If the MacMini is online, then both (Mini + Drobo) upload data to the MacBook.

The log file on the MacBook states a hash mismatch and that the Drobo (my home IP if I take my MacBook with me) is blocked because of this. Is this a known issue? Is there a problem with r/o sync? I tried removing/re-adding the folder from the Sync App on the MacBook but it always fails on the same files if my Mini is offline.

Any Ideas? Should i send logs? All 3 devices (*sigh*)?

Edit: It seems it does not matter whether there is a direct LAN connection or not. I left the MacBook syncing (so I thought) all night and nothing got transferred. (I did not notice at first because the web GUI displayed at least some "traffic", produced by the recurring transmit/hash-fail/blocking cycle.)

Update: Same thing with 1.1.30. I assume the Drobo might be the cause for most of my issues. I will try to run btsync on another OS X machine instead (always on) in the next few days.

