Showing results for tags 'upload balancing'.
Found 2 results
Currently, the distribution of the available upload bandwidth among the other visible clients seems to depend more on the individual latency to each of them than on the upload bandwidth those clients themselves offer. This can lead to the odd situation that the client with the smallest upload bandwidth to the internet receives the most data from all other clients, simply because it shows them the best response time. For the main goal, namely distributing the data to all clients as fast as possible, such behaviour is sub-optimal. It would be better, and fairer, to give preference to those clients which themselves offer the most upload bandwidth to the others; this would yield a nearly optimal distribution speed.

Example 1: If the local client knows remote clients A, B, and C, which have the individual upload bandwidths Ua, Ub, and Uc to the internet, the local upload bandwidth could be distributed as:

share of local bandwidth for remote client X (in percent) = 100 * Ux / (Ua + Ub + Uc)

where Ux is the upload bandwidth of client X to the internet. This prefers the faster (stronger) remote clients, which are the most helpful for distributing the data quickly.

Example 2: Another algorithm would be to first pump all data to the client with the largest upload bandwidth until it is complete, then start filling the one with the second-largest bandwidth, and so on. While the first seeder is filling the later clients, the best uploaders in the network can already run in parallel at full upload speed. This also comes close to an optimal distribution in terms of required time.

Perhaps there are better or simpler ideas for getting close to an optimal distribution algorithm.
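The proportional scheme from Example 1 can be sketched in a few lines of Python. This is only an illustration of the proposed formula, not anything btsync actually implements; the client names and bandwidth figures are made up.

```python
# Example 1 as a sketch: each remote client X gets a share of the
# local upload bandwidth proportional to X's own upload bandwidth
# to the internet: 100 * Ux / (Ua + Ub + Uc).

def proportional_shares(remote_upload_bw):
    """Return each client's share of the local upload, in percent."""
    total = sum(remote_upload_bw.values())
    return {name: 100 * bw / total for name, bw in remote_upload_bw.items()}

# Remote clients A, B, C with hypothetical upload bandwidths in Mbit/s.
shares = proportional_shares({"A": 50, "B": 30, "C": 20})
print(shares)  # {'A': 50.0, 'B': 30.0, 'C': 20.0}
```

The stronger uploaders A and B receive proportionally more of the local bandwidth, so the data reaches the clients that can redistribute it fastest.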
Hi, can one of you explain to me how a btsync client decides what upload bandwidth it will use towards the other members sharing the same secret?

Background: When my team (five clients) shares data over the internet, we can see at the seeding client that the system with the lowest upload bandwidth gets the most bandwidth from the seeder ("iftop" is a nice tool to check this on a Linux client). While transferring large data files, the clients with the smallest upload capability finish first and the systems with the largest possible upload bandwidth finish last, and in strict order at that. This seems to be the wrong order: it would obviously be optimal to fill the client with the highest upload capability first, so that this resource can be fully used while the data is being distributed, instead of waiting for the clients with the smallest capability to upload the data to the others. But perhaps this observation just reflects my understanding of the algorithm used, and the behaviour can be influenced somehow; that is why I am asking the question here.

Greetings from Germany
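Why the filling order matters can be shown with a deliberately simple toy model (an assumption for illustration, not how btsync actually transfers data): the seeder fills one client at a time, and each client that holds the complete file adds its own upload bandwidth to the pool serving the rest.

```python
# Toy model: total time for every client to receive the file when
# clients are filled sequentially and completed clients re-seed.
# All numbers are hypothetical (file size and bandwidths in
# arbitrary consistent units, e.g. MB and MB/s).

def finish_time(file_size, seeder_bw, client_bws, order):
    """Rough time until the last client is complete, given a fill order."""
    t = 0.0
    pool = seeder_bw                # aggregate upload capacity available
    for idx in order:
        t += file_size / pool       # fill the next client from the pool
        pool += client_bws[idx]     # it can now help upload to the rest
    return t

bws = [20, 5, 1]                    # three clients, strongest first
fast_first = finish_time(100, 10, bws, order=[0, 1, 2])
slow_first = finish_time(100, 10, bws, order=[2, 1, 0])
print(fast_first, slow_first)       # strongest-first finishes sooner
```

Under this model, filling the strongest uploader first brings the most extra capacity online earliest, so the total distribution time drops, which matches the behaviour the post argues for.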