Everything posted by dre

  1. I checked the rsync man page; the command would be

     $ rsync -av --link-dest=../btsync/alexmeyer /home/btsync/alexmeyer/ /home/alexmeyer/

     The --link-dest option takes a path relative to the destination dir and tells rsync to hard-link the files from the source dir against it. Since source and link-dest are the same in this case, all the files will be hard-linked instead of copied.

     But there is a drawback: rsync has no way to detect file moves. So if you rename a file in the source dir, you always end up with yet another hard link in the archive. Even the --fuzzy option is no help.

     I actually reconsidered my syncing scenario and decided to go with git instead of BTSync. You would be surprised how many features git actually has that you never notice, even after using it for years. What you cannot achieve with git is a completely automated workflow, but it gives you ultimate configuration power for your particular workflow. The workflow for this scenario would be:

     1. Add files to the local git repo.
     2. Commit and push to the server.
     3. Delete files (when not needed locally).
     4. Add them to your local ignore (or just never commit the deletes): http://stackoverflow.com/questions/1753070/git-ignore-files-only-locally

     There is also the option to remove files from the git index completely, but keep them locally, with

     $ git rm --cached

     And you will actually need to clear the git history from time to time in order to get the files wiped out completely from your local/remote machine, because git keeps their contents in its history. To wipe out git history up to a specific commit:

     $ git rev-parse --verify [commit_hash] >> .git/info/grafts
     $ git filter-branch --
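     The --link-dest behaviour described above can be checked in a throwaway directory. A minimal sketch, using temp dirs instead of the /home/btsync layout from the post (an absolute --link-dest path works the same as a relative one):

     ```shell
     #!/bin/sh
     # Sketch: show that rsync --link-dest hard-links unchanged files
     # into the archive instead of copying them.
     set -e
     work=$(mktemp -d)
     mkdir -p "$work/btsync/alexmeyer" "$work/archive"
     echo "hello" > "$work/btsync/alexmeyer/file.txt"

     # Source and link-dest are the same tree, so every unchanged file
     # in the archive becomes a hard link, not a copy.
     rsync -a --link-dest="$work/btsync/alexmeyer" \
           "$work/btsync/alexmeyer/" "$work/archive/"

     # Both names now share one inode; the link count is 2.
     stat -c '%h' "$work/archive/file.txt"
     ```

     If you then delete or rename the synced file and rsync again, the old archive entry stays behind as its own hard link, which is exactly the rename drawback mentioned above.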
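     Steps 3 and 4 of the git workflow above can be sketched in a throwaway repo. This uses .git/info/exclude (the local-only ignore file from the linked Stack Overflow answer) and `git rm --cached`; the repo path and file name are made up for the demo:

     ```shell
     #!/bin/sh
     # Sketch: stop tracking a file but keep it on disk, ignored locally.
     set -e
     repo=$(mktemp -d)
     cd "$repo"
     git init -q
     echo "data" > big.bin
     git add big.bin
     git -c user.email=dre@example.com -c user.name=dre commit -qm "add big.bin"

     # Remove from the index only; the working copy stays on disk.
     git rm --cached -q big.bin
     git -c user.email=dre@example.com -c user.name=dre commit -qm "untrack big.bin"

     # Local-only ignore: .git/info/exclude is never pushed to the server.
     echo "big.bin" >> .git/info/exclude

     ls big.bin                 # file is still there
     git status --porcelain     # prints nothing: untracked but ignored
     ```

     Note this only removes the file from future commits; the old blob stays in history until you rewrite it (e.g. with the grafts/filter-branch trick above).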
  2. The answer to this problem is called 'hard links'. Hard links allow you to create snapshots of a filesystem at any given time; they are usually used for incremental backups.

     Hard links are present by default: there is always at least one hard link to any file. Imagine the filename being that hard link. If the user requests a hard link delete, the system checks whether there are still other hard links pointing to the file and deletes the file only if there are none; otherwise it just deletes the hard link.

     So you would sync files into a folder on your server and periodically copy all the hard links into another one, e.g.

     # server's home folder (archive)
     /home/alexmeyer
     # btsync folder
     /home/btsync/alexmeyer

     Periodically just do

     $ cp -rlp /home/btsync/alexmeyer /home/alexmeyer

     The 'l' switch tells the command not to copy the actual file, but to create a hard link instead. The amount of space taken by this copy operation is negligible, and the advantage is that your home folder always contains the truth - all the files you ever had.

     It is actually a little improvement on @fukawi's workflow, which has the disadvantage that files are sometimes in the sync folder and sometimes in the archive. This makes it tricky to build indexing databases for images/music with programs like Picasa or Banshee on the server.

     A more advanced workflow would use the rsync command for this, which also has an option for creating hard links. I don't know exactly, but it should then detect hard link renames (moves) in the sync folder. There is also an automation daemon for that, called 'lsyncd', which will fire up rsync on changes in any specified folder. I am planning to use exactly the same scenario, where the lsyncd daemon watches the sync folder for changes and backs it up to my home folder with rsync by creating hard links. I will report how this turned out.
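     The cp -rlp snapshot above can be tried end to end in a temp directory. A small sketch (the dir names are throwaway stand-ins for the btsync and archive folders):

     ```shell
     #!/bin/sh
     # Sketch: archive a sync folder with hard links, then delete the
     # synced copy and show the archived name still has the data.
     set -e
     base=$(mktemp -d)
     mkdir -p "$base/btsync/alexmeyer" "$base/archive"
     echo "photo" > "$base/btsync/alexmeyer/img.jpg"

     # -r recurse, -l hard-link instead of copying data, -p keep attributes.
     cp -rlp "$base/btsync/alexmeyer" "$base/archive"

     # Removing the synced copy only drops one hard link;
     # the archived name keeps the file alive.
     rm "$base/btsync/alexmeyer/img.jpg"
     cat "$base/archive/alexmeyer/img.jpg"   # prints: photo
     ```

     Because only directory entries are created, the snapshot costs almost no disk space, which is the point made above.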