Tag Info

There is another way to get at the information, and that's through WMI. An enterprising soul has put together a PowerShell script that gathers this information:
http://gallery.technet.microsoft.com/scriptcenter/dac62790-219d-4325-a57b-e79c2aa6b58e
No indication of whether or not it's faster than dfsrdiag, but I suspect it just might be.
The WMI root is ...

No, this is absolutely not the source of your issue.
I have never seen any documentation that would support this claim. Furthermore, I manage an enterprise with 18 sites, dozens of DFS replication groups, and half a dozen DFS namespaces containing hundreds of shares with terabytes of data, and I have never had an issue with our replication groups replicating ...

Disclaimer: I'm plotting my first DFS implementation at work, so my understanding comes from books (specifically Windows Server 2012: Inside Out in this case) rather than practice. But, I'm definitely interested in the answer.
Assuming I understand your question, I think you're asking two discrete questions:
How does DFS replicate?
What determines the ...

You have a kernel memory leak. The nonpaged pool is 2.5GB. You can use poolmon to see which driver is causing the high usage.
Install the Windows WDK, run poolmon, press P to sort by pool type so that nonpaged is on top, then press B to sort by bytes to see which tag uses the most memory.
Now look at which pool tag uses the most memory, as shown here:
Now open a cmd ...
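To make the "sort by bytes, nonpaged on top" step concrete, here is a toy Python sketch that does the same filtering and sorting on poolmon-style output. The sample lines and column layout are simplified for illustration and are not real poolmon output:

```python
# Toy sketch: find the pool tag consuming the most nonpaged memory from
# poolmon-style output. Columns assumed: Tag Type Allocs Frees Diff Bytes PerAlloc.
# The sample data below is invented for illustration.
sample = """\
Thre Nonp  120034  119000  1034  4433920  4288
Leak Nonp  900210  100  900110  2147483648  2386
File Nonp  50123  50000  123  98304  799
NtfF Paged  30000  29000  1000  524288  524
"""

def top_nonpaged_tag(text):
    rows = []
    for line in text.splitlines():
        parts = line.split()
        if len(parts) >= 6 and parts[1] == "Nonp":   # keep nonpaged entries only
            rows.append((parts[0], int(parts[5])))   # (tag, bytes)
    rows.sort(key=lambda r: r[1], reverse=True)      # biggest consumer first
    return rows[0] if rows else None

tag, nbytes = top_nonpaged_tag(sample)
print(tag, nbytes)   # the leaking tag tops the list
```

With real output you would then look the winning tag up in pooltag.txt to map it back to a driver.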

No, DFS-R will not replicate the VSS shadow copies. Each server maintains its own shadow copies. However, if you have bi-directional replication and you restore from a local shadow copy on Server B, that change will replicate to Server A as long as the file doesn't already exist there identically.
If the folder hash is different, meaning the file ...

It's amazing how the answer is sometimes staring you right in the face:
The DFS Replication service has been repeatedly prevented from replicating a file due to consistent sharing violations encountered on the file. The service failed to stage a file for replication due to a sharing violation.
Additional Information: File Path: ...

This happens after you install hotfix 2663685 http://support.microsoft.com/kb/2663685
It changes the behaviour after a dirty DFSR shutdown: there is no longer an automatic restart. Instead, replication stays down, allowing you to do whatever backups you may need to do; then you run a WMI command, as per the article, to restart it.
Word of warning - applying this ...

They aren't related at all.
There's a nice video at Channel9 talking about how DFS-R works.
BITS isn't really a "replication" protocol; it just gives you a way to "trickle" downloads via HTTP using "spare" network bandwidth.
DFS-R performs delta transfers of data (moving only the changed data) and doesn't use HTTP for its transport protocol.

Before I say anything else,
DFSR is not a backup! Don't use it this way, or you'll get burned again eventually.
So for clarification, you had Server1 with a set of files, and Server2 without that set of files. You added a folder target on Server2, and then created a replication group between the two servers?
In theory if it was done as above then ...

You need to modify the DNS settings on the servers hosting the DFS namespace to prevent them from registering their public IPs. In the TCP/IP properties, on the DNS tab, on each of these servers untick the "Register this connection's address in DNS" box for the public NIC. You can manually delete the undesired entries from DNS and run an ipconfig ...

You can tweak the replication schedule to allow DFS-R to replicate at full-speed during off hours (or even on hours if appropriate).
You can also try to increase the staging size on the back logged server. It should increase performance in this situation.
You don't mention whether or not it's capped, but I assume it is since you have replication across a ...

DFS-R uses something called Remote Differential Compression.
Instead of comparing and transferring an entire file, the algorithm compares the signatures of sequential chunks of data between the source and the target replica. This way, only the differing chunks of data need to be transferred across the wire in order to "reconstruct" the file at the target ...
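The chunk-signature idea can be sketched in a few lines of Python. This is a deliberately simplified, rsync-flavored model with fixed-size chunks; real RDC uses content-defined chunk boundaries and recursive signature transfer, so treat this only as an illustration of "ship the differing chunks":

```python
import hashlib

CHUNK = 4  # tiny chunk size for illustration; real RDC chunks are far larger

def signatures(data):
    """Hash each fixed-size chunk (RDC actually cuts chunks at content-defined boundaries)."""
    return [hashlib.sha256(data[i:i + CHUNK]).digest()
            for i in range(0, len(data), CHUNK)]

def delta(source, target_sigs):
    """Return (index, chunk) pairs the target is missing; matching chunks stay local."""
    out = []
    for i, sig in enumerate(signatures(source)):
        if i >= len(target_sigs) or target_sigs[i] != sig:
            out.append((i, source[i * CHUNK:(i + 1) * CHUNK]))
    return out

def reconstruct(target, changes, new_len):
    """Rebuild the new file from the old copy plus the shipped chunks."""
    buf = bytearray(target[:new_len].ljust(new_len, b"\0"))
    for i, chunk in changes:
        buf[i * CHUNK:i * CHUNK + len(chunk)] = chunk
    return bytes(buf)

old = b"AAAABBBBCCCCDDDD"
new = b"AAAAXXXXCCCCDDDD"                 # only the second chunk changed
changes = delta(new, signatures(old))
print(len(changes))                        # 1 chunk crosses the "wire"
print(reconstruct(old, changes, len(new)) == new)   # True
```

Only one 4-byte chunk is transferred even though the file is 16 bytes, which is the whole point of RDC on a slow WAN link.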

It depends entirely on how this is set up and what's cached. Anything from "nothing will happen" to "it will break completely" is possible.
Basically, the Domain Controller here is needed for authentication to access the shares, and name resolution to translate your server names to IP addresses. If you have a client that has the DNS entry for its DFS server ...

Using the same drive letter is not an issue that I have encountered. I have configured multiple servers for DFS and nearly every single time, I have a data volume on D:, followed by a directory structure that's relevant.
I do use the best practice analyzer, following any configuration, as well as Robocopy to preseed any data. Once replication starts, it ...

I've just moved away from a DFS-R environment because of the very reason you described above. Locked files are impossible to deal with and cause all kinds of conflicts, especially if both servers are being used as a proper failover (so users are hitting both servers at once).
To me, DFS-R is decent for replicating over WAN/VPN connections to remote ...

According to this article, a CSV is actually a CsvFs layer that hides and controls access to the underlying NTFS. It provides synchronizing services that help multiple CSV-aware actors write to the filesystem without conflict.
Meanwhile, DFS-R is tied to NTFS because it works with low-level structures directly to catch and respond to creation and change ...

A couple of clarifying questions:
How much data are you migrating (# of files and total size)?
Do you plan on keeping the replication active following your migration, or is it a one-time transfer?
Assuming you really do want to use DFS migration, I would use a single replication group. This reduces the complexity, chance for problems, and the amount of size ...

Did you make sure none of the issues mentioned in this article apply to your situation? Did you create a diagnostic report? What does it say? Also check the backlog:
dfsrdiag backlog /rgname:GROUP /rfname:FOLDER /smem:SOURCE /rmem:DESTINATION /v
Aside from that I'd recommend to fix permissions on the home directories. Best practice is to give full access ...

No synchronization tool can synchronize files that are open without running the risk of making inconsistent copies. Unless the tool has hooks into the application holding the file open, so it can request that the application "quiesce" the file, there will always be a risk that a copy made of an open file ends up inconsistent and unusable.
It sounds, to me, like you're ...

You can use either to replicate data; however, these technologies are unrelated. As Evan said, BITS is a way to manage bandwidth utilization and allow data to be transferred between hosts without impacting other network transfers. For example, if I copy a file between server S and server D using Explorer, Explorer will use up as much bandwidth as possible. If I start ...
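The "only use spare bandwidth" behavior can be illustrated with a toy model. This is not how BITS is actually implemented (its internals aren't public); it is just a sketch of a background job that, in each time slice, consumes only the capacity the foreground traffic left over. All numbers are invented:

```python
# Toy model of a "trickle" transfer: in each tick, the background job
# (BITS-like) uses only whatever link capacity foreground traffic left over.
def trickle(total_bytes, link_capacity, foreground_usage):
    """foreground_usage: bytes the foreground uses per tick (pattern repeats).
    Returns how many ticks the background job needs to move total_bytes."""
    sent, ticks = 0, 0
    while sent < total_bytes:
        fg = foreground_usage[ticks % len(foreground_usage)]
        spare = max(0, link_capacity - fg)   # background yields to foreground
        sent += spare
        ticks += 1
    return ticks

# An Explorer-style greedy copy would grab the whole link and finish in
# 1000/100 = 10 ticks; the trickle job takes longer because it backs off
# whenever the link is busy, but other traffic is never starved.
print(trickle(1000, 100, [0]))           # idle link: as fast as a greedy copy
print(trickle(1000, 100, [80, 80, 0, 0]))  # busy link: slower, but polite
```

The trade-off in the model is the same as with BITS: the transfer finishes later, but interactive traffic never notices it.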

We currently use Server 2008 DFSR to transfer 900 GB of files, with about 3 GB changing daily. Our topology is a single hub, with 3 spokes. Each spoke is on a 4Mb/1Mb ADSL connection, separated by roughly 300-500KM. Our hub site has a 10Mb/10Mb connection.
Other than the lack of file locking, after some initial configuration problems DFSR has been running ...

You should use
dfsrdiag filehash /filepath:<yourfile>
on both servers for the same file to check whether DFS-R would recognize the file as the "same", as described in KB947726. I suspect this would not be the case.

Just to add to the response from syneticon-dj: depending on how you did the robocopy, it is likely that the permissions on the source and destination differed. If that were the case, the file hashes would differ and cause 4412 events. It should still use RDC to minimize what it pulls down from the source.
...
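The key point, that identical file contents can still produce different DFSR file hashes, follows from the hash covering metadata such as the security descriptor, not just the data stream. A toy Python illustration (the "ACL" strings are stand-ins, not real security descriptors, and this is not DFSR's actual hash algorithm):

```python
import hashlib

# Toy illustration: a DFSR-style file hash covers more than the data stream.
# Because the security descriptor (ACL) feeds into the hash, two files with
# identical contents but different permissions do not match.
def dfsr_style_hash(data, acl):
    h = hashlib.sha256()
    h.update(data)
    h.update(acl.encode())    # metadata is part of the hash input
    return h.hexdigest()

same_data = b"quarterly-report"
a = dfsr_style_hash(same_data, "DOMAIN\\Users:R")   # read-only ACL
b = dfsr_style_hash(same_data, "DOMAIN\\Users:F")   # full-control ACL
print(a == b)   # False: differing ACLs alone break the match
```

This is why a robocopy that didn't copy security information (e.g. omitting the ACL portion of the copy) can leave every file hash-mismatched even though the data is byte-identical.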

A standalone DFS namespace cannot participate in DFS-R (if it is not a member of an AD domain).
Clarification:
DFS can utilize two methods to replicate data:
DFS-R - newer and used for Win2k8, Win2k8 R2, & Win2k3 R2
FRS - used for older versions of Windows.
If the server you are setting up the namespace on, and the target servers are members ...

Yes it does, according to the Windows Small Business Server 2008 Technical FAQ, but without the management tools. Under the Backup and Server Storage heading look at the answer to the question "Does Windows Small Business Server 2008 support Distributed File System (DFS) for data replication?"
DFS is built into Windows Server 2008
and is available in ...