tobeychris

About tobeychris

  • Rank
    New User
  1. Is there an actual file limit? I have one folder set up with 238.9 GB in 2,403,509 files. It takes a VERY long time to index, and its process on a Haswell Linux machine is almost always at 100%, but it did still index 239 GB in 2.4 million files. I don't recommend it, and I really hope the developers can improve on this. Here's what the .sync folder looks like:

     total 2.2G
     -rw-------. 1 root root  69K Jul 29 10:00 xxxxx.dmp
     -rw-r--r--. 1 root root 2.1G Aug 1 10:56 xxxxx.db
     -rw-r--r--. 1 root root 224K Aug 1 10:57 xxxxx.db-shm
     -rw-r--r--. 1 root root  16M Aug 1 10:57 xxxxx.db-wal
     -rw-r--r--. 1 root root 2.8K Aug 1 11:00 settings.dat
     -rw-r--r--. 1 root root 2.8K Aug 1 10:30 settings.dat.old
     -rw-r--r--. 1 root root  773 Aug 1 10:57 sync.dat
     -rw-r--r--. 1 root root  773 Aug 1 10:47 sync.dat.old
     -rw-r--r--. 1 root root  62M Aug 1 11:05 sync.log
     -rw-r--r--. 1 root root    5 Jul 31 10:17 sync.pid
     -rw-r--r--. 1 root root  90K Jul 31 10:17 webui.zip
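     In case it's useful, here is the rough check I now run from the shell before handing a big folder to btsync, just to see what it will be up against. It's only a sketch using standard find/du; the folder path is an example from my setup, and the storage folder may live somewhere else on yours:

         # how much btsync will have to index
         find /usr/tools -type f | wc -l    # number of files
         du -sh /usr/tools                  # total data size

         # overhead of btsync's own database and logs after indexing
         du -sh ~/.sync                     # or wherever your storage folder lives

     The file count is what seems to hurt: the 2.1 GB .db above presumably holds an entry per file, so millions of small files cost far more than the same data in a few big ones.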
  2. Hi, I started using BTSync as soon as it came out publicly at home, and I love it. I use it to sync my music, documents, etc. between multiple computers at home and remotely, so that my main system can act as a backup server for all of the others. Some of the folders are ~300 GB in 150,000 files, all Windows machines.

     Now on to my problem: after loving the product for personal use, I wanted to use it at work for a new project. We just bought 5 new Haswell servers and we wanted their configurations to all be the same.
     - Each has an SSD with the OS mirrored across all of them, and everything with that is fine.
     - They also each have a 3 TB HDD (WD Red) to store our compile tools.
     - All are on the same gigabit switch.
     - The iptables firewall is disabled.

     I wanted to use BTSync to keep a folder (/usr/tools/) the same across all servers. Ideally, if new tools are added they would instantly sync to all the other servers, which was supposed to cut down on my setup time. I have btsync installed on all systems and can view the web page to manage them. When I go to add the folder (/usr/tools/), there is significant lag before the website registers that it is complete. I let this system finish indexing all the files (239 GB in 2,400,000 files), which took over 8 hours (!!!!). I could not believe that it took 8 hours using 100% of one of the CPUs.

     When I added the second system to the swarm, it was immediately found and added, but the transfer rate was pathetic: it did about 2 GB in an hour. I added the other three systems to see if that would help, but it only made things slower. I ended up copying the folders over manually and letting them all index before connecting them back to the LAN. Four of the five servers now report that they are in sync, but one is fully indexed yet still thinks it needs all 239 GB from the other servers (and doesn't transfer anything). With only the four that are in sync online, btsync is still using 100% of one of the cores (presumably trying to constantly check for changes?).

     So my questions are:
     - Is it expected that indexing 2.4 million files will take a very, very long time?
     - Is there any way to let it use more than one core for indexing?
     - What could be preventing one of the servers from syncing if they are all configured exactly the same?
     - Would smaller folders be handled better?
     - Why is the CPU usage so high?
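     For anyone hitting the same wall: I'm now provisioning the servers from a config file instead of clicking through the webui, so every box gets exactly the same secret and folder. This is only a sketch; the secret and paths below are placeholders, and the exact field names should be checked against the sample config that your own build prints:

         # generate one secret and reuse it on every server
         ./btsync --generate-secret

         # dump a template config, edit it, then copy the same file to all five servers
         ./btsync --dump-sample-config > /etc/btsync/tools.conf

         # relevant part of /etc/btsync/tools.conf (placeholder secret, check field
         # names against the sample config of your build):
         #   "storage_path" : "/var/lib/btsync/",
         #   "shared_folders" :
         #   [
         #     {
         #       "secret" : "REPLACE_WITH_SHARED_SECRET",
         #       "dir" : "/usr/tools",
         #       "use_relay_server" : false,
         #       "use_tracker" : false,
         #       "search_lan" : true
         #     }
         #   ]

         # run btsync against that config
         ./btsync --config /etc/btsync/tools.conf

     That at least takes the webui lag out of the picture and guarantees the five servers really are configured identically; it doesn't make the indexing itself any faster.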