larsen161 Posted December 24, 2013 (edited)

It seems that indexing currently happens on only one share at a time. Indexing multiple shares in parallel would drastically improve speed when adding several new shares; right now it takes hours or days to sync multiple folders containing hundreds of thousands of files and terabytes of data. Currently, btsync is using only ~15% CPU on an m1.xlarge AWS instance.

Edited December 24, 2013 by larsen161
graphicsmagick Posted December 25, 2013

The bottleneck is probably the disks, not the CPU.
larsen161 Posted December 29, 2013 (thread author)

> The bottleneck is probably the disks, not the CPU.

Perhaps, but that still isn't a reason for btsync not to index multiple shares at the same time. I'd like to see this feature added.
lolcat Posted December 29, 2013

> Perhaps, but that still isn't a reason for btsync not to index multiple shares at the same time. I'd like to see this feature added.

Unless the shares are on different drives, you would still be limited by the I/O of the drives. On a VPS I'd expect I/O to be the bottleneck. For someone using seven drives and no RAID setup it would make a great difference, though in that case I would just recommend using RAID. Besides, there are far more pressing issues, like selective downloads and encrypted read-only secrets, which are vital for usability.
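[Editor's note: the per-drive argument above can be sketched in code. This is an illustrative sketch only, not btsync's actual implementation; `drive_of` and `index_share` are hypothetical callables standing in for "which physical disk holds this share" and "walk and hash one share".]

```python
import concurrent.futures
from collections import defaultdict

def index_shares(shares, drive_of, index_share):
    """Index shares concurrently across drives, but keep shares that
    live on the same physical drive sequential, so each disk still
    sees mostly sequential reads instead of competing seeks."""
    by_drive = defaultdict(list)
    for share in shares:
        by_drive[drive_of(share)].append(share)

    def index_drive(queue):
        # Shares on one drive are indexed one after another.
        return [index_share(s) for s in queue]

    # One worker per drive: parallelism across disks, not within one.
    with concurrent.futures.ThreadPoolExecutor(max(len(by_drive), 1)) as pool:
        indexed = []
        for done in pool.map(index_drive, list(by_drive.values())):
            indexed.extend(done)
    return indexed
```

With two shares on one drive and a third on another, the first two are hashed back-to-back while the third proceeds in parallel, which is exactly the case where lolcat expects a gain.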
cyberto Posted November 10, 2015

As far as I can tell from the top command on UNIX, my btsync process is using 70-100% of one CPU core. So multithreading might lead to better performance.
RomanZ Posted November 11, 2015

@cyberto Multithreading won't help here. The bottleneck for indexing is usually disk operations. In your case it might be different, though: 100% CPU may indicate that hashing the files is too heavy for your CPU. That can happen on low-powered CPUs, for example on NAS devices.
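[Editor's note: whether indexing is disk-bound or CPU-bound, as debated above, can be estimated by measuring pure hashing throughput and comparing it with the disk's sequential read speed. A minimal sketch, assuming SHA-1 (the hash traditionally used by BitTorrent-family tools; btsync's internals are not documented here):]

```python
import hashlib
import os
import time

def hash_throughput_mb_s(size_mb=64):
    """Measure how many MB/s this CPU can push through SHA-1 in memory.
    If this is lower than the disk's sequential read speed, indexing is
    CPU-bound and parallel hashing could help; if it is higher, the disk
    is the bottleneck and extra threads would only add seek contention."""
    chunk = os.urandom(1024 * 1024)  # 1 MiB of incompressible data
    digest = hashlib.sha1()
    start = time.perf_counter()
    for _ in range(size_mb):
        digest.update(chunk)
    elapsed = time.perf_counter() - start
    return size_mb / elapsed
```

On a NAS-class CPU this can come out well under typical HDD read speeds (~100-150 MB/s), which would match RomanZ's explanation for cyberto's pegged core.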
nickluck Posted November 11, 2015

I oppose this suggestion, since sequential reading is much faster than random reads. Parallel hashing on the same disk would badly hurt performance. Not to forget that disk I/O on low-end machines also has an impact on CPU usage.