Search the Community

Showing results for tags 'large files'.

Found 3 results

  1. Hello, I have a large Excel file of about 35 MB. Saving this file can take time, and quite often Sync will start to sync before it has finished saving; then I get an error message from Excel that it could not save the file and that I need to delete the file and rename the temp file which Excel has created. I read on the forum here (but now can not find it anymore) that it is possible to delay syncing of certain files. Q1: What do I have to do to delay syncing of Excel files? Q2: Can I delay the syncing of only this particular file? Q3: Can I delay the syncing of only large files (in this case Excel)? Cheers and thanks
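As far as I know, Sync has no built-in per-file sync delay. The workaround usually suggested on this forum is the ignore list: a plain-text file of patterns (one per line, `*` and `?` wildcards) that Sync skips entirely. A sketch, assuming the 1.x file name `.SyncIgnore` placed in the root of the shared folder (2.x and later use `.sync/IgnoreList` instead), and assuming Excel's temporary files match these two patterns:

```
~$*
*.tmp
```

`~$*` matches Excel's lock files (e.g. `~$Budget.xlsx`) and `*.tmp` matches temporary save files. Note this excludes the temporaries rather than delaying them: the real `.xlsx` still syncs once Excel finishes saving and replaces it. The same file must be edited on every peer, and patterns may need adjusting to what Excel actually creates on your machine.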
  2. I have a server with a 100 Mbit/100 Mbit connection and a home downstream connection of 150 Mbit. When using BTSync, I can't seem to get above 3 MB/s no matter what the situation. If I use FileZilla and send the file via SFTP, I see a similar speed when sending a single file. However, if I have multiple files to send and I use FileZilla/SFTP with simultaneous connections enabled, I very easily max out my server and get a constant ~90 Mbit or so download speed to my home. See attached screenshot: the first portion is BTSync, and then, when that finished, I loaded up a bunch of files in FileZilla. I love BTSync's flexibility and ease of use, but the speed difference is hard to stomach sometimes. Is there anything that I can do to significantly increase my BTSync speed in the short term? Is there anything like the "simultaneous connections" that SFTP enables planned for BTSync? I thought it already used multiple connections or sent multiple files at once, but from further reading it sounds like that's only if you're sending very small files. Mine are anywhere from 100 MB to 25 GB, and I only see one ".sync" file being written at a time. I am currently using BTSync version 1.1.26 and could upgrade to the latest, but everything is rock-solid for me other than the speed, so unless that'll get me what I'm looking for, I'd prefer to hold off on upgrading. My server is an i3 w/ 8 GB RAM; the home machine is a high-end i7 with plenty of memory. I don't think either of those is contributing to my issue, but I figured I'd mention it. I understand that BTSync continues to evolve and this issue may already be identified as something to resolve. I'm just looking for confirmation that it is in fact something that can be fixed/resolved given the way BTSync works, or confirmation that BTSync will never be as fast (or even close to as fast) as simultaneous SFTP connections. Thanks, BAZ
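A quick back-of-the-envelope check of the numbers in this post (pure arithmetic, no claim about BTSync internals): 3 MB/s leaves most of a 100 Mbit uplink idle, which is consistent with a single TCP stream being the bottleneck and parallel streams helping so much over SFTP.

```python
# Sanity-check the speeds described above (1 byte = 8 bits).
observed_mb_per_s = 3    # MB/s reported by BTSync
uplink_mbit = 100        # server uplink capacity in Mbit/s

observed_mbit = observed_mb_per_s * 8          # 24 Mbit/s
utilization = observed_mbit / uplink_mbit      # fraction of the link in use

print(f"BTSync: ~{observed_mbit} Mbit/s, {utilization:.0%} of the uplink")
# -> BTSync: ~24 Mbit/s, 24% of the uplink
# A single TCP stream often can't fill a high-latency path; several
# parallel streams (as with FileZilla's simultaneous connections) can.
```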
  3. Hi, I started using BTSync at home as soon as it came out publicly, and I love it. I use it to sync my music, documents, etc. between multiple computers at home and remotely, so that my main system can act as a backup server for all of the others. Some of the folders are ~300 GB in 150,000 files, all Windows machines. Now on to my problem: after loving the product for personal use, I wanted to use it at work for a new project. We just bought 5 new Haswell servers and we wanted their configurations to all be the same. - Each has an SSD with the OS mirrored across all of them, and everything with that is fine. - They also each have a 3 TB HDD (WD RED) to store our compile tools. - All are on the same gigabit switch. - The iptables firewall is disabled. I wanted to use BTSync to keep a folder (/usr/tools/) the same across all servers. Ideally, if new tools are added they would instantly sync to all the other servers, and this was supposed to cut down on my setup time. I have btsync installed on all systems and can view the web page to manage them. When I go to add the folder (/usr/tools/) there is significant lag before the website registers that it is complete. I let this system finish indexing all the files (239 GB in 2,400,000 files), which took over 8 hours (!!!!). I could not believe that it took 8 hours using 100% of one of the CPUs. When I added the second system to the swarm, it was immediately found and added, but the transfer rate was pathetic: it did about 2 GB in an hour. I added the other three systems to see if that would help, but it only made things slower. I ended up copying the folders over manually and letting them all index before connecting them back to the LAN. Four of the five servers now report that they are in sync, but one is fully indexed yet still thinks it needs all 239 GB from the other servers (but doesn't transfer anything).
With only the four that are in sync online, btsync is still using 100% of one of the cores (presumably constantly checking for changes?). So my questions are: - Is it expected that indexing 2.4 million files will take a very, very long time? - Is there any way to let it use more than one core for indexing? - What could be preventing one of the servers from syncing if they are all configured exactly the same? - Would smaller folders be handled better? - Why is the CPU usage so high?
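To put the indexing numbers in perspective, here is simple arithmetic on the figures quoted above, nothing specific to btsync internals:

```python
# Implied single-core indexing throughput from the post's figures:
# 2,400,000 files indexed in a little over 8 hours.
files = 2_400_000
hours = 8

files_per_sec = files / (hours * 3600)     # ~83 files/s
ms_per_file = 1000 / files_per_sec         # ~12 ms per file

print(f"~{files_per_sec:.0f} files/s, ~{ms_per_file:.0f} ms per file")
# -> ~83 files/s, ~12 ms per file
```

At roughly 12 ms per file the time is plausibly dominated by per-file overhead (hashing plus metadata lookups) rather than raw disk bandwidth, which would explain why one core stays pegged while the 3 TB HDD is far from saturated.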