Problems With BTSync Running On a vServer


hawibtsync


In my own LAN, with 2x i386 (Linux), 4x Android (APK) and 2x Windows 8.1 machines, BTSync works fairly OK.

 

Yesterday I started a new experiment: I ordered a vServer with 2 GB guaranteed / 4 GB dynamic RAM, a 200 GB hard drive and 2 vCores.

 

Without any additional configuration I copied the current x64 version of BTSync to that machine (Debian 6) and started syncing. I used encrypted shares (via the API) to test the results.

 

After some time the vServer froze. Within 36 hours the vServer crashed three times, once taking the host system down with it. After some talk with the technical staff I was told that btsync had driven DCachesize up to the vServer's limit, and the resulting failcnts led to the vServer crashes. The last employee told me "[...] perhaps a vServer is no good idea for BTSync [...]".
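On OpenVZ/Virtuozzo containers you can usually check for yourself which limit is being hit; a minimal sketch, assuming the container exposes the standard beancounters file (run as root):

# Show held/maxheld/barrier/limit/failcnt for the counters the staff mentioned;
# the dcachesize row and its failcnt column are the interesting ones here
grep -E 'failcnt|dcachesize|kmemsize|privvmpages' /proc/user_beancounters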

 

As I said, I'm using unencrypted shares on my two i386 machines (Slackware, 4 GB each) without any problems.

 

I'm holding 16 shares; the biggest share is 40 GB, and the largest file count in a single share is 18,000.

 

Any ideas? Any hints are highly appreciated. Does the x64 build use much more resources than the i386 one?

 

Thanks in advance.

 

 

*EDIT*: What I forgot to mention: not all files/shares had been copied yet when the problems started. There were 6 peers connected - if that matters.


Yep, exactly.

 

When the problems started - and there was plenty of free HDD space left at that time - df/ls/top/etc. stopped working. After leaving the SSH session, a new SSH session was refused. Plesk returned a 500 Internal Server Error. The BTSync Web GUI came up with an empty list. BTSync itself kept the existing connections to the remote peers but did not transfer anything. The vServer needed a reboot at that point.

 

Yesterday evening one of my two Windows 8.1 machines (8 GB RAM, quad-core, 700 GB free) started to get blue screens while BTSync was re-indexing locally. The message - disappearing very fast - showed something with "*HANDLE*" in it...

 

I guess my environment has started to hit BTSync's limits. I've read about millions of files here - I can't believe that any longer. As I wrote in my first post:

 

16 shares, 9 peers, 130 GB total, 72,000 files total, 42 GB biggest share, 18,000 files in the largest share.

 

 

BTW, I'm wondering how Backupsy handles this - their machines have 512MB ...

 

 

*EDIT*: Just in case - here are the releases I'm using:

 

4.150.213 BTSync-1.2.14.apk
1.648.488 BTSync-1.2.82.exe
2.044.500 btsync_i386-1.2.82.tar.gz
2.218.854 btsync_x64-1.2.82.tar.gz

Here is my log screenshot: 1MVyuKJ.jpg

 

On a 500 GB Backupsy VPS, syncing a 70 GB share, the logs ate about 380 GB until the disk ran out of free space. After deleting the logs, the disk was back to full within a few days.
My temporary fix is a cron job that deletes the log files every 5 minutes.
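A minimal sketch of such a cleanup job - the log location is an assumption (BTSync normally writes a sync.log in its storage folder), so adjust the path. Truncating instead of deleting also avoids the freed space staying claimed by the still-open file handle:

# crontab entry: empty the BTSync log every 5 minutes
# /opt/btsync/.sync/sync.log is an assumed path - point it at your actual sync.log
*/5 * * * * truncate -s 0 /opt/btsync/.sync/sync.log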

Surely there's a bug here.


Regarding Backupsy and RAM:

# free -mt
             total       used       free     shared    buffers     cached
Mem:           495        451         44          0         25        343
-/+ buffers/cache:         81        414
Swap:           87         18         69
Total:         583        469        114

 

 

The almost-full memory is due to the kernel caching in RAM instead of going to the HDD. That's perfectly fine, because htop, which shows the actual RAM usage without caching, reports about 90 MB, including the usage from the Seafile server.
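As a rough cross-check against the free -mt output above (a sketch; the field positions assume the older procps output format shown there):

# "used" minus buffers and cache gives the real application footprint:
# 451 - 25 - 343 = 83 MB, in line with the -/+ buffers/cache row (81 MB, rounding aside)
free -m | awk '/^Mem:/ {printf "%d MB used by applications\n", $3 - $6 - $7}'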


Hey there.

 

Can you get additional information about your hosting infrastructure?

I cannot provide you with any help regarding how to adjust the configuration. But I can say that I use btsync on both ESX and XEN machines and they do pretty well.

 

OK, my shares are smaller and I have only two of them.

The first one is 80 MB with 223 files and 39 folders.

The second one is 19GB with 3881 files and 228 folders.

 

One of my virtual machines runs on VMware ESX. It has 1 core of an i5-3, 256 MB RAM and 512 MB swap. Btsync uses 25 MB of memory and on average 0.5% CPU. It has never crashed since I created it half a year ago, but I had to reboot it yesterday because I adjusted the host hardware, which might influence the memory currently used by the btsync process. The uptime of the btsync process is 18 hours now. This one is located in my basement.

 

Another one of my virtual machines runs on XEN. It has 1 core of a Xeon E5-2, 512 MB RAM and 1 GB swap. Btsync uses 200 MB of memory and an average of 0.5% CPU as well. This one has never crashed either. Due to some updates I rebooted this VM 12 days ago, so the uptime of the btsync process is 280 hours now. This one is rented.
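In case you want to compare numbers: these values can be read directly on the VM itself; a quick sketch, assuming the process is simply named btsync:

# Resident memory (RSS, in KB), CPU share and elapsed runtime of the btsync process
ps -C btsync -o pid,rss,%cpu,etime,comm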

 

What I want to say: neither of them has ever crashed, and both were created about half a year ago. But VMware and XEN should manage resources dramatically differently from KVM and Virtuozzo. In fact, that was one of the reasons why I picked XEN for my hosted VM.

 

Regards,

Stephan.


It's Virtuozzo running on this provider's vServer systems. Yesterday I did one last test before definitely canceling my order:

 

I re-installed BTSync - this time using the i386 release. I completely filled the biggest share (18,000 files) on the vServer and added just this share's secret. I shut down BTSync on all but one machine - a Windows 8.1 PC. After a few minutes the vServer crashed again.

 

I mean, the data was already there. All BTSync had to do was re-index and compare the indexed values with one single peer. My idea had been that x64 needs more resources, or that more peers lead to more connections/traffic. No: just one single pre-filled share and just one remote peer, and boom. It's a 2 GB/4 GB, 2 vCore, 200 GB vServer and that's not enough. Wow.

 

I canceled that experiment and deleted my vServer account. It's useless for me.


Hey hawi.

 

As I said, it might be an issue related to Virtuozzo. Those virtualizers don't provide truly encapsulated environments; they share several resources and only add access rules on top of the raw resources. It's not exactly the same thing, but think of e.g. inodes: there is only a limited number of them, and if one VM eats them all up, they're gone for all the other VMs, too.
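A quick way to see whether that kind of exhaustion (rather than plain disk space) is biting, assuming a standard Linux userland inside the container:

# Inode usage per filesystem; an IUse% of 100% means no new files can be created
# even if df -h still shows plenty of free space
df -i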

 

I would give XEN a shot. There are cheap VMs on the market, too.

 

Regards,

Stephan.
