Is 20-45% Linux NAS CPU usage too high?



On my Linux NAS, I tested with a 90MB directory of about 60 files, linked to two Windows and two Android clients, and it looked fine. So I added three more directories for sync, of approximately the following sizes:

1) 15GB, 10,000 files

2) 760GB, 70,000 files

3) 300GB, 88,000 files

When indexing finished, I linked two Android devices to those three large dirs, read-only and without automatic sync. No other clients are linked to the large dirs, just to the one small one.

After several hours, when everything had synced and become stable, my NAS settled with the btsync process taking up about 70% memory and 45% CPU. So I restarted BTSync, and it later settled at about 35% memory and 20% CPU, which is still way too high. I then unlinked the 15GB and 760GB dirs, and the BTSync process's CPU usage settled at 10%, which I would still consider unacceptably high. Finally, I unlinked the last big 300GB dir, leaving just the 90MB/60-file dir, and BTSync now uses 18% memory and 0.1% CPU of my NAS's resources. That looks OK.

Is such high CPU usage (20-45%) to be expected when syncing this much data, or is it just that the Linux BTSync is still in beta and likely to undergo substantial optimization? Is there some reconfiguration I should try on my NAS?
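One generic reconfiguration worth a try in the meantime (a sketch with standard Linux tools, nothing btsync-specific) is lowering the process's scheduling priority so the spikes at least don't starve everything else; it changes who wins contention rather than how much CPU time btsync burns in total:

  # assumes the process is named "btsync"; on some installs it is "btsync-daemon"
  PID=$(pidof -s btsync)
  renice -n 19 -p "$PID"     # lowest CPU priority
  ionice -c 3 -p "$PID"      # idle I/O class: disk access only when nothing else wants it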


I unlinked all that stuff yesterday. Today, I tried a completely different directory of 10GB, 37,000 files. No other clients are linked. After it all settled down, the btsync process stayed at 3.5-4.2% CPU. No other process is ever above 0.2%. When the webui says that btsync is indexing, I see its CPU usage is at 40%, and then it drops back to 4% when indexing is finished.
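A rough way to log this over time, in case anyone wants to compare numbers like for like (a sketch; it assumes the process name is btsync, and top's batch mode reports CPU over each interval, so indexing spikes stand out in the log):

  # one sample per minute for 12 hours, appended to a log file
  PID=$(pidof -s btsync)
  top -b -d 60 -n 720 -p "$PID" | grep btsync >> btsync-cpu.log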


I'm only syncing two large folders, maybe 100GB in total, consisting of hundreds of thousands of files, and things are consistently slow across the 8 computers. I am limiting the transfer speeds to 10KB/s up and down and have turned off "show notifications". I can't watch TED talks or even play Pandora music without pauses. The computers are all on the local network, which has a gigabit switch. The internet connection is FiOS 25/25 but shouldn't be getting used. The software is at version 1.1.42 except on one Linux machine.

What does one need to do to keep btsync from hogging resources? I'm thinking btsync should be able to run on all my computers without being noticed, and that there should be a separate FAQ on how to keep it from hogging resources unless you want it to.
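For what it's worth, a few knobs do exist in the Linux build's config file; the field names below are from memory of the 1.1.x sample config, so verify them against your own dump. Since everything is on one gigabit switch, keeping traffic off the relay/tracker plus the built-in rate limits is probably the relevant part of this sketch:

  # dump the stock config, edit it, then run against the edited copy
  ./btsync --dump-sample-config > sync.conf
  # fields to look at in sync.conf (values are examples, not recommendations):
  #   "download_limit" : 10,        global limit in kB/s, 0 = unlimited
  #   "upload_limit"   : 10,
  # and per entry under "shared_folders":
  #   "use_relay_server" : false,   keep LAN peers off the relay
  #   "use_tracker"      : false,
  #   "search_lan"       : true
  ./btsync --config sync.conf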


1.1.48 continues the same: the 10GB, 37,000-file directory takes up 4-7% CPU, and no other process is above 0.2%, with one exception. My BitTorrent client, Transmission 2.77, is at 2-5% CPU sharing 8 torrents (roughly 1,000 files) that will eventually total 52GB; at the time of my previous post, Transmission wasn't running as many torrents. Every 10 minutes when syncing, the btsync process runs at 40% CPU.

Any opinions on whether this btsync CPU performance on my Synology is within the realm of reasonable? I guess 4% CPU is OK given that my BitTorrent client performs about the same. But if progressively adding my full complement of data to sync is going to push btsync's CPU to a constant 40%, I'd be curious to know whether that's to be expected and tolerable. I also tried turning Synology's indexing service on and off for the synced directory, but that made no difference to the btsync process's CPU usage.
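A hedged guess about the 10-minute rhythm: btsync periodically rescans every synced folder, and the advanced setting folder_rescan_interval defaults to 600 seconds, which matches the spikes. I haven't confirmed whether the 1.1.x Linux config file accepts it at the top level, so treat the following as an assumption to test rather than a documented option:

  # assumption: adding this top-level entry to the file passed with --config
  # raises the rescan period from 10 minutes to 1 hour
  #   "folder_rescan_interval" : 3600,
  ./btsync --config sync.conf   # restart and watch whether the 40% spikes become hourly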


  • 2 weeks later...

Yes, the problem still exists with 1.1.48.

For example, top tells me this laptop is using 33 to 56% of the CPU on btsync (using a 9-second average).
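For reference, one way to take that kind of 9-second-average reading with top in batch mode (a sketch; it assumes the process name is btsync, and the first frame is discarded because it doesn't cover a full interval):

  top -b -d 9 -n 2 -p "$(pidof -s btsync)" | grep btsync | tail -n 1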

I have two directories I'm keeping synced.

One directory of 9.6GB in 1,196 files is shared across 3 Linux Mint 15 XFCE, 2 Windows 7, and 1 Ubuntu 10.04 machines. This directory has had no changes in it for hours and is fully synced. It consists of files that I don't access much.

The second directory is 59.9GB in 134,117 files. It is only shared between the 3 Linux Mint 15 machines and one Ubuntu 10.04 machine. Everything is synced here, but I do change about 30 files a day in this directory.

All computers are running the latest 1.1.48 software.


  • 2 weeks later...

1.1.69 is the same or slightly worse on my Synology NAS: the 10GB, 37,000-file directory takes up 5-8% CPU, while no other process is above 0.2%. My 90MB, 60-file directory was at 0.2%, but adding more data still requires unacceptably high CPU usage. That's a big disappointment given the improvements in the Android app. The Windows version continues to stay low at <1%.


When idle, top shows me 20% CPU usage and 60% memory usage:

6945 root 20 0 184m 126m 308 S 19.2 58.1 4:22.47 btsync-daemon

How can I reduce this?

I installed the newest version from the Ubuntu PPA. This results in really high battery consumption on my laptop.
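Two blunt workarounds for the laptop case, offered only as a sketch: cap the daemon with cpulimit (packaged in Ubuntu), or stop the service while on battery. I'm assuming the PPA's init script is called btsync; adjust if yours differs.

  # cap btsync-daemon at roughly 10% of one core (cpulimit keeps running until killed)
  sudo cpulimit -p "$(pidof -s btsync-daemon)" -l 10
  # or simply stop/start the daemon around battery use
  sudo service btsync stop
  sudo service btsync start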


erdna@voyage:~/Documents$ find . -type f | wc -l

143281

erdna@voyage:~/Documents$ du -hs .

16G .

Just one directory with 16GB of data.

It is the only client with this folder, which means there is nothing for btsync-daemon to do. No changes anywhere, and still more than 20% CPU usage.

What is btsync doing in the background with my data?
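One way to answer that directly is to watch the daemon for a minute with standard tools (a sketch; it assumes the process is btsync-daemon, as in the top output above). If it's in a periodic rescan, you'll see it stat its way through the whole tree:

  # which files does it have open right now?
  sudo lsof -p "$(pidof -s btsync-daemon)"
  # count its file-related system calls; press Ctrl-C after a minute for the -c summary
  sudo strace -f -c -e trace=file -p "$(pidof -s btsync-daemon)"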


erdna@voyage:~/Documents$ find . -type f | wc -l

143281

erdna@voyage:~/Documents$ du -hs .

16G .

Just one directory with 16GB of data.

143,281 files and 16GB isn't "just." Compare with my stats above, and you'll see that your experience jibes with mine. I don't think there's anything to do but wait for the app to develop. Currently, I only use my Linux NAS to sync a 60MB/60-file tree, which requires 0.2% CPU.


I could understand high CPU usage in the case of an ongoing synchronization. But if the indexing is finished, what is there to do in the background?

Does this depend on the number of files or on their overall size? (This would be nice to know because I'd like to try my video folder.)


Does this depend on the number of files or on their overall size? (This would be nice to know because I'd like to try my video folder.)

I don't know, because I only experimented with increasing the number of files and the total amount of data together. Why don't you run some file count vs. file size tests and report the results?
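If you do try it, here is a rough sketch of two test folders with the same total size but very different file counts (paths and sizes are arbitrary); share each one on its own and compare the settled CPU numbers:

  # ~1GB in 10,000 small files vs. ~1GB in a single file
  mkdir -p ~/sync_many ~/sync_one
  for i in $(seq 1 10000); do head -c 100K /dev/urandom > ~/sync_many/file_$i; done
  head -c 1000M /dev/urandom > ~/sync_one/one_big_file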


With 1.1.70, I'm now getting a consistent 4-6% CPU usage with sync folders of 60MB/60 files, 10GB/5,000 files, and 4.5GB/14,000 files, with periodic spikes to 40% every 10 minutes. That's looking pretty good, and I'm comfortable using it now. When I first added the 10GB/5,000-file folder, I quickly eyeballed the CPU load as lower, maybe 3%, so I infer that adding more data increases CPU significantly even if the relation is less than linear. So I'm hesitant to go for the big test with my folders of 760GB/70,000 files and 300GB/88,000 files, particularly because I currently don't have another client set up that can sync that much along with my Synology NAS. But I'd really like to have that kind of sync capacity if it won't fry my CPUs.


I added my 760GB/70,000-file folder to sync with 1.1.70 (currently with no sync peers), and that has imposed an additional cost of about 7% CPU usage, pushing the total to a consistent 11-13%. This is a great improvement over 40%, but is it a reasonable cost for such a process to impose on the CPU when operating with this much data? I don't know. Of course it would be great to see CPU usage optimized further, but I also don't know how unhealthy it is to run my CPU at a constant 19% as opposed to its baseline server usage of 4%. Obviously it's more wear and tear, but is it reasonable from a cost/benefit point of view, or am I inviting an unacceptably high rate of motherboard replacement? I tried asking on the Synology forums, but have never received a response.

I will only be able to sync the very large stuff with one other client due to space limitations. So one plan would be to shut sync off most of the time and periodically restart it on the two devices manually. But that kind of sucks, because the idea of these two NASes is to keep them at different locations, with the remote one capable of being brought online as the main one in the event of a hardware failure. In that emergency, if the second server isn't 100% up to date with the large data directories, it's not a disaster, but it would still be a pain.
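A sketch of how the manual on/off plan could be automated on the Synology side; the package name 'btsync' and the synopkg path are guesses on my part, so substitute whatever your Package Center actually calls it:

  # /etc/crontab entries on the NAS: sync only between 02:00 and 06:00
  # min hour day month weekday user command
  0 2 * * * root /usr/syno/bin/synopkg start btsync
  0 6 * * * root /usr/syno/bin/synopkg stop btsync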

