[Now Implemented!] Wasteful Full Re-Transfer For Simple File Moves


gadgetster


Moving files between subfolders within a sync folder seems to trigger a full re-transfer of those files between devices.

 

It is especially problematic, or rather concerning, when devices are synchronized over slow, metered WAN connections.

 

It would be greatly appreciated if the synchronization implementation could be optimized for this situation.

 

Thanks,


That title is not really written like a feature request :)

 

I agree. The receiving BTSync should look at the hash of the "new" file, say "wait a minute, I've got that somewhere", and copy it locally. I'm guessing it can't move it right away because BTSync sees a move as a deletion of the old file plus a creation of the new file - please correct me if I'm wrong.
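The receiving-side idea above could be sketched like this: before fetching a "new" file over the network, look its content hash up in an index of local files and copy locally on a hit. This is purely illustrative Python; all function names are hypothetical and nothing here reflects BTSync's actual internals.

```python
import hashlib
import shutil
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file's contents in chunks to bound memory use."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_hash_index(folder: Path) -> dict[str, Path]:
    """Map content hash -> one local file that already has that content."""
    return {sha256_of(p): p for p in folder.rglob("*") if p.is_file()}

def materialize(dest: Path, wanted_hash: str, index: dict[str, Path]) -> bool:
    """If the wanted content already exists locally, copy it instead of
    downloading. Returns True when the network transfer could be skipped."""
    src = index.get(wanted_hash)
    if src is None:
        return False          # fall back to a normal network transfer
    shutil.copy2(src, dest)   # a local copy is far cheaper than a WAN fetch
    return True
```

A local disk copy (or even a rename, when the old path is about to be deleted) would replace the WAN transfer entirely on a hash match.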


Reduce redundant retransmissions

=========================

 

Hello,

I fully agree that this feature is highly wanted. But it should also apply across different synced folders. If I have two folders synced with another machine, then moving a file from one folder to the other should simply reuse the file from folder 1 to create the new file in folder 2 on the remote machine.

 

As a general rule, files and patches should be reused as much as possible. I would go as far as to say that if a file created on machine A already exists on machine B, but not in a shared folder, the transmission should be avoided. This might create too big a hash table to be feasible, but small files could be skipped when hashing.

 

I just signed up only to ask for this.
Kind regards

smg2006

 

 


I would go as far as to say that if a file created on machine A already exists on machine B, but not in a shared folder, the transmission should be avoided.

 

Do you really want BTSync to hash every single file on your computer on every file change, just on the off chance that you might one day move one into a synced folder? I don't. The CPU and I/O wasted on that would be far greater than the time it takes to transmit the small percentage of files that actually get synced someday.


Without any configuration, BTSync could just use the SyncArchive folder.

Currently BTSync uses only its own data folder, creates hashes of every file fragment (blockwise, with a fixed block size, AFAIK), and stores those hashes in its database file.

It should take little to no adjustment to keep the blockwise hashes of files when they are moved to the SyncArchive folder; BTSync doesn't even need to re-hash a moved file, since the BTSync process itself moves an already-hashed file into that folder anyway.
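The blockwise hashing described above could look roughly like this. The block size and function name are assumptions for illustration only, not BTSync's actual internals; the key point is that the per-block hashes depend only on the file's content, so a moved but unmodified file keeps exactly the same hash list.

```python
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # fixed block size; 4 MiB is an assumed value

def block_hashes(path):
    """Hash a file block by block, the way a delta-sync tool might
    store per-block fingerprints in its database."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes
```

Because the hash list is identical before and after a move, the stored database entries could simply be re-pointed at the file's new path instead of being discarded and recomputed.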

 

So I fully agree with this feature request in general and expect it to work nicely even without any additional monitored folders.

 

Regards,

Stephan.


Yes, please.

 

This feature would help me a lot. I deal with this almost daily.

Every change to the directory structure means a complete re-transfer of all affected files.

I cannot agree with ChrisH. All the hashes are already there; there is no need to compute new ones.

I think all that is needed is to change the "detection" mechanism so it can distinguish between a file/directory being deleted and one being moved within the synced structure.
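That detection step could work by pairing up deletions and additions that share a content hash after a rescan. A minimal sketch, assuming the sync tool's index can report path-to-hash maps (the names here are hypothetical):

```python
def detect_moves(deleted, added):
    """Pair deletions and additions that share a content hash.

    `deleted` and `added` map path -> content hash, as a sync tool's
    index might report them after a rescan.
    Returns (moves, real_deletes, real_adds).
    """
    by_hash = {}
    for path, h in deleted.items():
        by_hash.setdefault(h, []).append(path)

    moves, real_adds = [], {}
    for path, h in added.items():
        if by_hash.get(h):
            old = by_hash[h].pop()      # matched: treat as a move, not a copy
            moves.append((old, path))
        else:
            real_adds[path] = h         # genuinely new content

    # anything left unmatched really was deleted
    real_deletes = {p: h for h, paths in by_hash.items() for p in paths}
    return moves, real_deletes, real_adds
```

Each detected move would then be replayed as a cheap local rename on the remote side instead of a delete-plus-full-transfer.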

 

For example, today I made a change to a directory structure 100 GB in size. I moved every directory in the root folder one level deeper, which resulted in a complete re-transmit of 100 GB of data; the whole structure was also copied to the .SyncArchive folder, so the data is doubled for the "sync_trash_ttl" period. All of this for a single directory-structure change.

On my DSL line it ends up as a manual offline sync through my portable hard drive. :-(

If I had to wait for BTSync to re-transfer all of my data, I would wait a day or even longer.
Totally unnecessary, because all the data was available on both sides; only the structure changed.
 
So, from my point of view, this is not just a wish but a must.

I cannot agree with ChrisH. All the hashes are already there; there is no need to compute new ones.

 

You misunderstood me. I am not against a feature like that if it concerns moving files around inside an already-shared folder. In fact, I would very much like to have that.

 

smg2006 suggested calculating and keeping current hashes of ALL files on his computer, regardless of whether they are in a synced folder or not. That's all I was arguing against.

  • 4 weeks later...

I'd vote for something like this to be improved. I work with large video/photo files, and I'm using BTSync to sync a backup from one PC to another. There are two BTSync folders involved. When I'm done working on a project, I move the files between folders to free up space. BTSync treats it as a delete-from-folder-A and an add-to-folder-B. It's not smart enough to know that all it has to do is "move" the files around on the second PC as well.

Obviously I understand it's not easy to spot a move from the OS, but I'd hope BTSync could find a smart way around this.

  • 2 weeks later...

Archived

This topic is now archived and is closed to further replies.