The answer to this problem is called 'hard links'. Hard links let you create snapshots of a filesystem at any given time, and they are commonly used for incremental backups.

Hard links are present by default: there is always at least one hard link to any file. Think of the filename itself as that hard link. When the user requests a hard link deletion, the system checks whether other hard links still point to the file, and deletes the file's data only if none remain; otherwise it just removes the link.

So you would sync files into a folder on your server and periodically copy all of them as hard links into another one, e.g.

# server's home folder (archive)
/home/alexmeyer
# btsync folder
/home/btsync/alexmeyer

Periodically just do

$ cp -rlp /home/btsync/alexmeyer /home/alexmeyer

The '-l' switch tells the command not to copy the actual files, but to create hard links instead. The space taken by this copy operation is negligible, and the advantage is that your home folder always contains the truth: all the files you ever had.

This is actually a small improvement over @fukawi's workflow, which has the disadvantage that files are sometimes in the sync folder and sometimes in the archive. That makes it tricky to build indexing databases for images/music on the server with programs like Picasa or Banshee.

A more advanced workflow would use the rsync command, which also has an option for creating hard links. I don't know for sure, but it should then detect renames (moves) of hard-linked files in the sync folder. There is also an automation daemon for this, called 'lsyncd', which fires up rsync whenever something changes in a specified folder.

I am planning to use exactly this scenario, where the lsyncd daemon watches the sync folder for changes and backs it up to my home folder with rsync by creating hard links. I will report how this turns out.
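An lsyncd setup along these lines could look roughly like the following. This is an untested sketch of lsyncd's `default.rsync` layer; the paths are illustrative, and further rsync flags (such as `--link-dest`) would be passed through lsyncd's `_extra` option:

```lua
-- /etc/lsyncd/lsyncd.conf.lua (paths are illustrative)
sync {
    default.rsync,
    source = "/home/btsync/alexmeyer",
    target = "/home/alexmeyer",
    rsync  = {
        archive = true   -- equivalent to rsync -a
    }
}
```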
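The link-count behaviour described above is easy to demonstrate in a scratch directory (note: `stat -c` is the GNU coreutils syntax; BSD `stat` differs):

```shell
# create a file, then a second hard link to the same data
echo "hello" > original.txt
ln original.txt second.txt

# both names share one inode; the link count is now 2
stat -c '%h' original.txt    # prints 2

# removing one name only decrements the link count;
# the data survives as long as at least one link remains
rm original.txt
cat second.txt               # still prints "hello"
```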
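For the rsync variant, the relevant option is most likely `--link-dest`: files in the source that are unchanged relative to a previous snapshot become hard links to that snapshot instead of full copies. A minimal sketch, assuming dated snapshot directories (all paths are illustrative, not from the original setup):

```shell
# take a dated snapshot of the sync folder; files unchanged since
# the previous snapshot are hard-linked rather than copied
rsync -a --delete \
  --link-dest=/home/alexmeyer/snap-previous/ \
  /home/btsync/alexmeyer/ \
  /home/alexmeyer/snap-$(date +%F)/
```

Each snapshot then looks like a full copy of the folder, but only changed files consume new disk space.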