New User Has A Few Questions


gijs007


I've just started using BitTorrent Sync and have a few questions.

I'm using it to share two folders between multiple servers.
 

One folder contains my website and I want this to be synchronized across all servers for redundancy.

The other folder contains a huge amount of data (files for game servers); these should stay in sync in case one of the games is updated.

 

1. I have a lot of files and they are being indexed as we speak. Does this process happen every time BitTorrent Sync is started/restarted, or does it keep some sort of cache?
I'm asking because it has been busy for about 10 minutes and still has another hour or so to go (despite using an SSD).

 

2. Are changes to files detected immediately and uploaded to the other peers?

 

3. I see it can detect LAN connections, but can it also detect other servers in a datacenter and use them as LAN peers (to save on WAN bandwidth)?

 

4. Does it upload entire files, or only the parts that have changed?

 

5. Some servers have a 1000 Mbit/s connection, but they don't have this speed to all datacenters due to transit contracts.
This is mostly the case for USA-EU transfers.
I assume this works pretty much like BitTorrent, so I shouldn't have to worry about preferred connection setups to maximize transfer speeds?


1. I have a lot of files and they are being indexed as we speak. Does this process happen every time BitTorrent Sync is started/restarted, or does it keep some sort of cache?

I'm asking because it has been busy for about 10 minutes and still has another hour or so to go (despite using an SSD).

Full indexing happens when you initially add a folder to Sync. Once added, only subsequent changes will be indexed (i.e. the folder won't be completely re-indexed every time).
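To illustrate the idea (this is just a conceptual sketch, not Sync's actual code or database format), such a cache only needs to remember each file's size and mtime alongside its hash, so a restart can skip re-hashing anything that hasn't changed:

```python
import hashlib
import json
import os

CACHE_FILE = "index_cache.json"  # hypothetical cache location, just for this sketch

def load_cache():
    try:
        with open(CACHE_FILE) as f:
            return json.load(f)
    except (FileNotFoundError, ValueError):
        return {}

def index_folder(root):
    """Re-hash only files whose size or mtime changed since the previous run."""
    cache = load_cache()
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            entry = cache.get(path)
            if entry and entry["size"] == st.st_size and entry["mtime"] == st.st_mtime:
                continue  # unchanged since the last index, skip the expensive hashing
            digest = hashlib.sha1()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    digest.update(chunk)
            cache[path] = {"size": st.st_size, "mtime": st.st_mtime, "sha1": digest.hexdigest()}
    with open(CACHE_FILE, "w") as f:
        json.dump(cache, f)
    return cache

if __name__ == "__main__":
    index_folder("/srv/www")  # placeholder path
```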

 

2. Are changes to files detected immediately and uploaded to the other peers?

It depends on the OS; not all of them are able to detect changes in "real time". That's where the advanced "folder_rescan_interval" setting comes in: it defines a regular recurring interval at which your folders are rescanned for any changes that couldn't be detected in "real time".
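Conceptually, that fallback is just a scheduled scan that compares the folder's current state against the previous one. A rough Python sketch of the idea (not Sync's actual code; the 600-second default for folder_rescan_interval is from memory):

```python
import os
import time

RESCAN_INTERVAL = 600  # seconds; folder_rescan_interval defaults to 600 as far as I recall

def snapshot(root):
    """Map each file path to (size, mtime) so two scans can be compared."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            state[path] = (st.st_size, st.st_mtime)
    return state

def rescan_loop(root):
    previous = snapshot(root)
    while True:
        time.sleep(RESCAN_INTERVAL)
        current = snapshot(root)
        changed = [p for p, meta in current.items() if previous.get(p) != meta]
        deleted = [p for p in previous if p not in current]
        if changed or deleted:
            print("picked up on scheduled rescan:", changed + deleted)
        previous = current

if __name__ == "__main__":
    rescan_loop("/srv/www")  # placeholder path
```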

 

3. I see it can detect LAN connections, but can it also detect other servers in a datacenter and use them as LAN peers (to save on WAN bandwidth)?

Sync will prefer LAN connections wherever possible. Where a direct/LAN connection cannot be established, a relayed connection will be attempted. If you wish to completely disable "relayed" connections, forcing Sync to use LAN connections only, you will need to disable the Relay, Tracker, and DHT settings.
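For reference, on the Linux builds these options can be set per folder in the JSON config file (btsync --dump-sample-config prints a template). Here's a rough sketch that writes such a config from Python; the field names are from memory of the sample config and may vary between versions, and all paths/addresses are placeholders:

```python
import json

# Field names are from memory of the sample config (btsync --dump-sample-config)
# and may differ between Sync versions; check the template on your own install.
config = {
    "device_name": "web-node-1",            # placeholder
    "listening_port": 0,                    # 0 = pick a random port
    "storage_path": "/home/user/.sync",     # placeholder
    "shared_folders": [
        {
            "secret": "REPLACE_WITH_YOUR_FOLDER_SECRET",
            "dir": "/srv/www",              # placeholder
            "use_relay_server": False,      # no relayed connections
            "use_tracker": False,           # no tracker lookups
            "use_dht": False,               # no DHT
            "search_lan": True,             # multicast discovery on the local network
            # peers that LAN discovery can't find can be listed explicitly
            # (placeholder addresses):
            "known_hosts": ["10.0.0.2:44444", "10.0.0.3:44444"]
        }
    ]
}

with open("sync.conf", "w") as f:
    json.dump(config, f, indent=2)
```

Listing your other servers under known_hosts means they can still find each other directly even with the tracker and DHT switched off.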

 

4. Does it upload entire files, or only the parts that have changed?

This is answered in the Unofficial FAQ.

 

5. Some servers have a 1000 Mbit/s connection, but they don't have this speed to all datacenters due to transit contracts.

This is mostly the case for USA-EU transfers.

I assume this works pretty much like BitTorrent, so I shouldn't have to worry about preferred connection setups to maximize transfer speeds?

Correct. Sync will attempt to retrieve a file in the most efficient way possible, which will depend upon the speed of the other peers, i.e. faster peers will upload more data than slower peers.


 

It depends on the OS; not all of them are able to detect changes in "real time". That's where the advanced "folder_rescan_interval" setting comes in: it defines a regular recurring interval at which your folders are rescanned for any changes that couldn't be detected in "real time".

 

On which operating systems does it work in real time? Which ones don't?

 

 

max_file_size_diff_for_patching (MB): determines a size difference between versions of one file for patching. When it is reached or exceeded, the file will be updated by downloading missing chunks of information (patches). Updates for a file with a smaller size difference will be downloaded as separate files.

The default setting is 1000, so it only does the 4 MB blocks on files larger than 1000 MB...

Is there any reason why I wouldn't want to set this to 50 MB?

 

It looks like it's only syncing one file at a time. Is it possible to change this to speed it up?

 

What is the best way to set this up on a lot of systems? Is it possible to make an installer for Windows that contains certain preconfigured settings?


On which operating systems does it work in real time? Which ones don't?

"Real time" monitoring works on all Windows OS's - I think, although I may be wrong, that Sync running on some Mac's (and possibly also on the Raspberry Pi?) have trouble detecting changes to files in "real time"?

 

The default setting is 1000, so it only does the 4 MB blocks on files larger than 1000 MB...

No, this means that existing files will only be "patched" if their size is LESS than 1000 MB (files over this size won't be "patched" and will instead be re-transferred in their entirety if part of the file changes).

 

Is there any reason why I wouldn't want to set this to 50 MB?

See above!

 

It looks like it's only syncing one file at a time. Is it possible to change this to speed it up?

The number of files transferred concurrently depends upon the size of the files. From what I recall, "large" files are transferred one at a time, whereas up to 4 "small" files can be transferred concurrently. There is currently no way to change the number of files being transferred concurrently (although you can limit the up/down bandwidth used).

 

What is the best way to set this up on a lot of systems? Is it possible to make an installer for Windows that contains certain preconfigured settings?

There isn't currently a "mass deployment" tool as such.

One possible workaround is to create a "portable" install on a USB memory stick, with all the necessary settings/folders defined, etc., and then copy/deploy this to each computer.
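As a rough sketch of that workaround (the paths and file names below are placeholders, not anything official), a small script can copy the preconfigured portable directory from the stick onto each server and start it:

```python
import os
import shutil
import subprocess

TEMPLATE = r"E:\btsync-template"  # preconfigured portable install on the USB stick (placeholder)
TARGET = r"C:\btsync"             # local path on each server (placeholder)

if not os.path.exists(TARGET):
    # copies the executable together with its settings/database folder
    shutil.copytree(TEMPLATE, TARGET)

# start the copied instance; the executable name depends on your portable layout
subprocess.Popen([os.path.join(TARGET, "BTSync.exe")], cwd=TARGET)
```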


Thanks, it appears the official guide is wrong then about max_file_size_diff_for_patching.
I don't know who to contact so they can correct it.

 

 

 

 

The number of files transferred concurrently depends upon the size of the files. From what I recall, "large" files are transferred one at a time, whereas up to 4 "small" files can be transferred concurrently. There is currently no way to change the number of files being transferred concurrently (although you can limit the up/down bandwidth used).

I've limited the up and down bandwidth to 70 Mbit/s, but it takes ages to transfer 7 GB spread over 100,000 files because of this.

It's not even using 1 Mbit/s...

 

It would be great if we could have it transfer 100 small files (less than 10 MB) per second by default.

Another thing that would be great is the option to use compression.

 

Another thing is that it's quite CPU intensive (10-15%) while syncing lots of files between 2 servers, despite the low transfer speed of less than 1 Mbit/s.

