Multi-Threaded Transfers Between 2 Peers



I believe Btsync only uses a single thread from one peer to another (if I'm wrong, please show me - I'd absolutely adore a fix for this). If you had 50 peers in a sync swarm, you'd be able to saturate your connection once the swarm ramped up. When you only have 2 peers in a swarm (which I suspect is a fairly common case for many users), you are stuck at the throughput of a single thread.


Please note that there can be significant ISP peering bottlenecks between btsync peers, so the maximum single-threaded throughput from one peer to another might only be 8 Mbit/s even when both peers actually have 160 Mbit/s connections. Multi-threading can work around this: using the same example, if the 2 peers could make 20 simultaneous transfers instead of just 1, you'd see those 160 Mbit/s connections saturated.
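To make the arithmetic above concrete, here's a tiny back-of-the-envelope sketch (the numbers are just the illustrative ones from my example, and it assumes each stream tops out at the per-stream bottleneck):

```python
import math

def streams_to_saturate(link_mbit: float, per_stream_mbit: float) -> int:
    """Smallest number of parallel streams whose combined throughput
    fills the link, assuming each stream is capped at the bottleneck."""
    return math.ceil(link_mbit / per_stream_mbit)

print(streams_to_saturate(160, 8))  # -> 20 streams fill a 160 Mbit/s link
```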


Btsync is simple and easy, but in terms of performance between two peers (a simple and likely common use case), it just can't compete with FTP-based syncing because of this issue. 






Not sure if you mean "threads" or if you actually mean "simultaneous connections/transfers"?


Assuming it's the latter, the number of connections used to transfer each file depends upon the size of the file being transferred. For larger files, the data is "chunked" and, from what I recall, transmitted via 4 simultaneous connections rather than the single connection used for smaller files. Therefore, you should notice better performance when syncing larger files over smaller files. This "4 simultaneous connections" setting is currently not user-definable.
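In other words, something conceptually like this sketch: split a large file into fixed-size pieces and hand them to a small pool of workers, one "connection" each. (Sync's actual wire protocol isn't documented here - the 4 MiB piece size and `send_piece` stand-in are my assumptions; only the pool size of 4 mirrors the behaviour described above.)

```python
from concurrent.futures import ThreadPoolExecutor

PIECE_SIZE = 4 * 1024 * 1024  # 4 MiB pieces (assumed, not Sync's real value)

def split_into_pieces(data: bytes, piece_size: int = PIECE_SIZE):
    """Yield (offset, piece) pairs covering the whole payload."""
    for offset in range(0, len(data), piece_size):
        yield offset, data[offset:offset + piece_size]

def send_piece(offset: int, piece: bytes) -> int:
    """Stand-in for pushing one piece over one connection."""
    return len(piece)  # pretend it was transferred; report bytes sent

def transfer(data: bytes, connections: int = 4) -> int:
    """Transfer all pieces using up to `connections` parallel workers."""
    with ThreadPoolExecutor(max_workers=connections) as pool:
        sent = pool.map(lambda p: send_piece(*p), split_into_pieces(data))
        return sum(sent)

print(transfer(b"x" * (10 * 1024 * 1024)))  # -> 10485760, all bytes accounted for
```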


If you're referring to "threads", Sync is already multi-threaded - i.e. the UI runs in a separate thread to the actual Sync engine.


This is very cool to hear.


I had assumed btsync already broke up files into smaller units in almost all cases (e.g. maybe splitting files into 4MB pieces), with something like a hash check to ensure successful transfer of each piece. It sounds like that was a bad assumption, though, if only large files are chunked. Can you elaborate here?
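The per-piece hash check I was imagining would look roughly like this - hash each fixed-size piece so the receiver can verify, and re-request, pieces independently. (A sketch only: the 4 MiB piece size and SHA-1 are my assumptions, not Sync's documented behaviour.)

```python
import hashlib

def piece_hashes(data: bytes, piece_size: int = 4 * 1024 * 1024):
    """SHA-1 digest of every piece of the payload, in order."""
    return [
        hashlib.sha1(data[i:i + piece_size]).hexdigest()
        for i in range(0, len(data), piece_size)
    ]

hashes = piece_hashes(b"\x00" * (9 * 1024 * 1024))
print(len(hashes))  # -> 3 pieces (4 + 4 + 1 MiB)
```

A corrupted piece changes only its own digest, so only that piece needs to be re-sent.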


What is the threshold file size that gets these 4 concurrent connections active? I want to test it. I've moved reasonably large files, and honestly, I've not achieved the kind of throughput increase I would expect. Maybe I'm not moving large enough files, though.


So, I want to change my feature request to be more specific given the context: 

  • Don't force the limit to something as low as 4 connections. That is crazy low. 
  • I'd like to be able to choose the number of simultaneous connections.
  • Most importantly, I want concurrency not just within a single file, but across multiple files. Serially transferring a stack of 1000 small files is simply a waste of time when you could have 50 simultaneous connections to smash through that stack.
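
The last request above is the classic worker-pool pattern - drain a queue of many small files with a configurable number of simultaneous workers instead of one at a time. A minimal sketch (`copy_file` is a hypothetical stand-in for whatever the transfer primitive is):

```python
from concurrent.futures import ThreadPoolExecutor

def copy_file(name: str) -> str:
    """Stand-in for transferring one small file."""
    return name

def sync_all(files, connections: int = 50):
    """Transfer every file using up to `connections` parallel workers."""
    with ThreadPoolExecutor(max_workers=connections) as pool:
        return list(pool.map(copy_file, files))

done = sync_all([f"file{i:04d}.txt" for i in range(1000)])
print(len(done))  # -> 1000
```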

I'm fine with a really lightweight default installation that assumes low-end hardware and throughput, but I don't see why advanced settings couldn't let users opt into a huge performance increase (which, honestly, most modern hardware could easily handle).

Edited by 4eak

