operationally

Members
  • Content Count: 8
  • Joined
  • Last visited

About operationally
  • Rank: New User


  1. Someone asked about this long ago; the developers never explained it and never acted on it. So draw your own conclusions.
  2. @jpap I think the RAM usage in your case is reasonable for 1.6 million files, compared to my original use case/problem. In fact, for this many files on Linux, just storing the watch events will take 820+ MB of RAM. As this tool is closed source, I cannot comment on what other information the software is storing and saving.
  3. I think people have mentioned this before; I searched the forum and only got archive hits. It would be very useful for pro users to be able to limit the speed of certain folders but not others. While we're at it, a priority setting would also be great (allocate bandwidth to higher-priority folders when the uplink is saturated).
  4. Thanks, reconnecting does help reduce RAM usage. If inotify is used on a 64-bit machine, millions of files take about 1 GB of RAM, so my usage must have been close to the worst case. Just so everyone else knows: I updated the Linux client to 2.5.6 (1043), and so far RAM usage does look drastically lower. After a reconnect, as suggested by @Helen, RAM went down to 5 GB; now it is at 901 MB (still growing, though). I will follow up on this. Thanks for all the community support and replies.
  5. Disconnecting one of my largest folders reduced RAM to 4.9 GB, but reconnecting seems to refuse to let me sync the same folder, which is frustrating. If I am not the owner of the folder, do I need to re-sync everything? And if it is changes/deletions that are held in RAM, I think that is a questionable design: (a) these could easily be put into a local database like SQLite and queried on request; (b) there is no way to restore a copy from the UI (although there is an archive dir), so there is no reason to keep them in RAM, unless this is a planned feature. Of course I don't know enough design details to make a sound judgement, so please enlighten me. Also, is tweaking "check for file changes" considered indexing? (I am assuming no file change means no indexing is required.) And I don't understand why checking whether a file has changed requires a lot of CPU or memory; isn't this done via something like inotify on Linux? Also, do you use full sync or selective sync? I am very surprised by your memory/CPU usage, because my Windows usage is limited and yet it is a huge resource hog.
  6. I searched this forum and I do see this question has come up before, but I still think it is worth mentioning. I am not here to question why the current implementation takes that much memory. My Sync folder contains about 56,632 files, and the folders total 575 GB. Running on Ubuntu 16.04, rslsync uses about 9.1 GB of RAM (CPU fluctuates between 5-8% while running). For comparison, I also use Dropbox Pro, with 26,602 files totaling 79 GB; Dropbox uses about 329 MB of RAM (0% CPU when idle). In this case the server is powerful enough that I don't care about the RAM and CPU, but on a less powerful machine it is also very resource intensive. I have a laptop running Windows: on startup, Sync eats 5 GB of RAM, even though only one folder (less than 200 MB) is in full sync mode and most others are in selective sync with very few files synced. When Sync finishes initializing, it stabilizes at 2 GB of RAM (at this point I have effectively 2.25 GB of data synced; folder properties show 113k files), and CPU usage fluctuates around 20-50% (on a much weaker processor, an i7-4510U). This is a LOT of resources for such a small amount of data synced in selective sync mode. Again, I am not asking why this is so, nor am I saying it is a bug. But I think this huge resource footprint will greatly limit Sync's user base, and it simply cannot compete with Dropbox as a standard family sync tool (all your photos, 4K videos, old OS backups, etc.).
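
The watch-memory figures quoted in posts 2 and 4 above can be sanity-checked with a back-of-envelope calculation. The per-watch byte costs used below (~540 and ~1080 bytes) are assumptions loosely based on the sizing the Linux kernel has historically used when computing the `max_user_watches` default; they are not measured values for Sync itself:

```python
# Rough estimate of kernel memory pinned by inotify watches.
# Per-watch byte costs are ASSUMPTIONS (roughly the figures the kernel
# has used to size the max_user_watches default), not measured values.

def inotify_watch_memory_mb(num_watches: int, bytes_per_watch: int) -> float:
    """Estimated kernel memory in MiB for a given number of inotify watches."""
    return num_watches * bytes_per_watch / (1024 * 1024)

files = 1_600_000  # one watch per file/directory, as in post 2 above

print(f"{inotify_watch_memory_mb(files, 540):.0f} MB")   # 824 MB
print(f"{inotify_watch_memory_mb(files, 1080):.0f} MB")  # 1648 MB
```

At roughly 540 bytes per watch, 1.6 million files come out to about 824 MB, which lines up with the 820+ MB figure quoted in post 2; at the larger 64-bit-style sizing the estimate roughly doubles, matching the "millions of files takes about 1 GB" claim in post 4.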
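
Post 5 suggests spilling change/deletion events to a local database instead of holding them in RAM. A minimal sketch of that idea using SQLite is below; the table and column names are invented for illustration and have nothing to do with Sync's actual (closed-source) internals:

```python
# Sketch of post 5's suggestion: store change/deletion events in SQLite
# and query them on request, instead of keeping them in the process heap.
# Schema and names are hypothetical, purely for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")  # a real tool would use a file on disk

conn.execute("""
    CREATE TABLE IF NOT EXISTS change_events (
        path  TEXT    NOT NULL,
        kind  TEXT    NOT NULL,  -- 'modified' or 'deleted'
        mtime INTEGER NOT NULL
    )
""")

def record_event(path: str, kind: str, mtime: int) -> None:
    # Each event lands on disk (or in the DB page cache), not in a
    # per-event in-memory structure that grows without bound.
    conn.execute("INSERT INTO change_events VALUES (?, ?, ?)", (path, kind, mtime))
    conn.commit()

def pending_deletions() -> list[str]:
    # Query on request rather than maintaining an in-memory set.
    rows = conn.execute("SELECT path FROM change_events WHERE kind = 'deleted'")
    return [r[0] for r in rows]

record_event("/data/photos/a.jpg", "modified", 1700000000)
record_event("/data/photos/b.jpg", "deleted", 1700000001)
print(pending_deletions())  # ['/data/photos/b.jpg']
```

The trade-off is latency: an indexed SQLite lookup is slower than an in-memory hash probe, but for events that are only consulted occasionally (e.g. on reconnect) that is usually acceptable.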