Search the Community

Showing results for tags 'dfs'.



Found 2 results

  1. We run a suite of grid computing applications (sadly, all Windows only) in the EC2 cloud to produce computer-rendered graphics for film and television. Most recently we used our EC2 system to produce several shots for an upcoming episode of the new Cosmos series. In doing so we ran into severe IO problems as 80 nodes attempted to access a 1.3GB particle cache file from the EC2 instance acting as a server; because the sequence was several thousand frames long, every machine needed to read this file dozens of times. We solved the issue by burning the cache into the C: drive of a one-off Amazon Machine Image that was then spun up into 80 nodes to complete the work. That work-around is fine but time-consuming to set up, so we decided to implement a distributed file system between all the nodes using BTSync. BTSync works wonderfully between our studio and the cloud; we are in love with the software already. Once in the cloud, however, we run into the following problems:

     1: We have found in EC2 that any application that searches the cloud subnet for other running instances of itself will not find any other nodes; Amazon does not allow certain kinds of packets across its network. BTSync works fine if each node is explicitly set up with the IPs and ports of the other nodes, but no matter how we configure the firewall rules on the EC2 side, searches of the local subnet fail, the nodes can't find each other, and nothing syncs.

     2: Because every render node is spun up from the same master image, they all come up with the exact same BTSync device name, which seems to cause problems. Because there is no command-line interface in the Windows version of BTSync, there is no automated way to set this. Manually editing the config files (which don't appear to be ASCII) always fails because it disrupts a checksum, and the config file is renamed by the app to .bad.
     While there is probably no way around problem 1, for problem 2 the addition of a command-line interface would be immensely useful: we could run a startup script to set each node's BTSync device name to its unique EC2 machine name and, in the same script, create sync shares with the appropriate secret and the IPs and ports of the other BTSync nodes on the local subnet. As an alternative to a command-line interface, a build of Windows BTSync that uses plain-text config files would be hugely welcome, as we could set those up with scripts. As I said, we love the software, and these problems notwithstanding we see great potential for it in large-scale cloud applications.
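     To illustrate what such a startup script could generate: the Linux/console build of BTSync accepts a plain-text JSON config (with keys like "device_name", "listening_port", "shared_folders" and "known_hosts"); the sketch below assembles a per-node config in that shape, with LAN peers pinned explicitly since subnet discovery fails inside EC2. This is a sketch of the requested feature, assuming the Windows build honored the same JSON format — all paths, secrets and addresses are placeholders.

```python
import json

def build_sync_conf(device_name, secret, sync_dir, known_hosts):
    """Assemble a per-node BTSync config with LAN peers listed explicitly,
    since broadcast search of the subnet does not work inside EC2.
    The key names mirror the Linux/console build's sync.conf; whether a
    Windows build would accept them is exactly the feature being requested."""
    return json.dumps({
        "device_name": device_name,          # e.g. the node's EC2 instance ID
        "listening_port": 0,                 # 0 = pick a random port
        "shared_folders": [{
            "secret": secret,                # share secret, same on all nodes
            "dir": sync_dir,                 # local path of the synced cache
            "use_relay_server": False,       # all peers are on one subnet
            "use_tracker": False,
            "search_lan": False,             # broadcast discovery fails in EC2
            "known_hosts": known_hosts,      # explicit "ip:port" peer list
        }],
    }, indent=2)

# On boot, each node would fetch its instance ID from the EC2 metadata
# service (http://169.254.169.254/latest/meta-data/instance-id) and write
# the resulting text to the config file before launching the client:
conf = build_sync_conf("i-0abc1234", "A" * 33, r"C:\cache",
                       ["10.0.0.11:3000", "10.0.0.12:3000"])
```

     With plain-text configs like this, every node in the 80-node pool could be given a unique device name and a full peer list by a single boot-time script, with no manual setup per instance.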
  2. Dear devs and community! First, let me thank everybody for supporting the development of such a great client! I want to deploy BT Sync as an addition for syncing a HUGE folder (6TB+, 300k+ files) between domain servers running DFS Replication and VSS and non-domain workstations, and would like to hear if there are any considerations.

     My current scenario is the following:
     1. There are 3 physical sites with 3 Windows Server 2003 R2 32-bit servers running in a single domain (all three are domain controllers).
     2. This huge folder is replicated using DFS-R (DFS Replication) between all 3 servers; it lives on NTFS volumes and is backed up on those volumes daily with the Volume Shadow Copy mechanism (last 61 snapshots).
     3. One server is considered the "main" one, as it is big, physical and online 24/7. The second one moves from apartment to apartment with me and actually hosts the folder for the workstation I work on. The third one is a virtual machine, for working truly on the go when I need the folder with me.
     4. All workstations connect to the servers via SMB (network file sharing) to access the folder.

     My desired scenario using BT Sync:
     1. The main server hosts both DFS-R and BT Sync replication over the same folder.
     2. The second server remains on DFS-R and shares the folder to workstations via SMB.
     3. The third server is eliminated (it was only a VM for mobile workstations).
     4. Truly mobile workstations (Win XP, Win 7 32/64-bit) have a full copy of the folder synced via BT Sync.

     Thus no duplication between BT Sync and DFS should exist, as only the main server will run both; every other node will use one or the other.

     I've searched the forum and came up empty. My basic questions are the following:
     1. Can BT Sync live together with DFS-R? Both syncs should be fully bi-directional.
     2. Can BT Sync handle such a huge amount of data? (DFS-R never rescans the whole folder; it uses the USN journal, AD and some other mechanisms to sync only changes.)
     3. Can BT Sync live together with VSS?
     That is:
     3.1 Does it work well while a Shadow Copy of the volume is being created?
     3.2 Can it break VSS capability on the volume, as, say, some Acronis products do?
     4. When I deploy BT Sync, what is the best way to point it at the "current" big folder on the master server and to point the client at the "new" location where the folder's copy should go, to make sure that:
     4.1 the big folder's contents do not get erased by the "empty" folder from the workstation, and
     4.2 the workstation gets a full copy of the big folder, after which only changes are synced?

     Thanks a lot for your help! Best, Vlad
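     One way to gain confidence around questions 4.1 and 4.2, independent of which sync tool is used: after the initial sync settles, compare the master folder against the workstation copy and confirm nothing is missing or different. The sketch below is a generic verification script, not a BT Sync feature; the chunked hashing is there so a 6TB+, 300k-file tree can be walked without exhausting RAM.

```python
import os
import hashlib

def file_digest(path, chunk=1 << 20):
    """SHA-256 of a file, read in 1 MiB chunks so huge files fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            block = f.read(chunk)
            if not block:
                break
            h.update(block)
    return h.hexdigest()

def compare_trees(master, replica):
    """Return (missing, mismatched): relative paths of files that are absent
    from the replica, or present but with different contents."""
    missing, mismatched = [], []
    for root, _dirs, files in os.walk(master):
        for name in files:
            src = os.path.join(root, name)
            rel = os.path.relpath(src, master)
            dst = os.path.join(replica, rel)
            if not os.path.exists(dst):
                missing.append(rel)
            elif file_digest(src) != file_digest(dst):
                mismatched.append(rel)
    return missing, mismatched
```

     Run on the master (against the workstation copy mounted or mirrored locally); two empty lists mean the workstation holds a complete, byte-identical copy, and the big folder was not clobbered by an empty one.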