Many people look at DFS Replication as a means to replicate data between servers and across different branches.
Some things to take note of before deciding to implement DFS:
1. You need enough free space for the staging quota on the drive that contains the replicated data. The staging folder is the area on disk where files are compressed before they replicate to the other server. By default the staging quota is 4GB, but generally you will want to make it bigger.
2. If the data being replicated includes big files (larger than 4GB), you need to increase the staging quota, because any file larger than the staging quota cannot replicate. Generally you want to increase the staging quota anyway, as a larger quota can reduce replication times.
3. When you set up DFS for the first time it can be very CPU intensive, because the initial replication is a big one.
4. DFS can be difficult to manage and troubleshoot when problems arise.
5. Certain firewalls tend to cause problems when replicating over a WAN. In one case a SonicWall was dropping the TCP connection after a certain period of time, so the initial replication could never complete.
6. You can't create new replication groups for folders that are already part of an existing replication group.
7. Backups should not run at the same time as DFS replication, otherwise they will back up the temporary staging folders. Should your DFS replication be scheduled to run all the time, make sure you exclude the staging folders from the backup selection.
8. Users can overwrite each other's work if you are using DFS replication over a WAN to different file servers, since DFSR resolves conflicts by last-writer-wins.
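On the staging quota point: Microsoft's sizing guidance is that the quota should be at least the combined size of the 32 largest files in a read-write replicated folder. As a rough sketch, assuming the DFSR PowerShell module (Windows Server 2012 R2 or later) and example group, folder and path names, you could size and raise the quota like this:

```powershell
# Sum the sizes of the 32 largest files in the replicated folder --
# the staging quota should be at least this big for a read-write member.
$largest = Get-ChildItem 'D:\Data' -Recurse -File |
    Sort-Object Length -Descending |
    Select-Object -First 32 |
    Measure-Object -Property Length -Sum
$neededMB = [math]::Ceiling($largest.Sum / 1MB)
"Staging quota should be at least $neededMB MB"

# Raise the staging quota; 'BranchRG' and 'Data' are example names.
Set-DfsReplicatedFolder -GroupName 'BranchRG' -FolderName 'Data' `
    -StagingPathQuotaInMB $neededMB

# For backup exclusions (point 7), the staging folder lives under the
# replicated folder itself, e.g. D:\Data\DfsrPrivate\Staging
```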
Make sure you run health reports in DFS to confirm your data has replicated correctly before you decommission a DFS server.
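As a sketch of that final check, again assuming the DFSR PowerShell module and example server names, you can generate a health report and confirm the backlog between the members has drained:

```powershell
# Generate an HTML health report for the replication group
# (group, member and path names are examples).
Write-DfsrHealthReport -GroupName 'BranchRG' `
    -ReferenceComputerName 'FS01' -MemberComputerName 'FS02' `
    -Path 'C:\DfsrReports'

# List any files still waiting to replicate from FS01 to FS02;
# an empty backlog in both directions means it is safe to decommission.
Get-DfsrBacklog -GroupName 'BranchRG' -FolderName 'Data' `
    -SourceComputerName 'FS01' -DestinationComputerName 'FS02'
```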