Had some rather nifty issues with a DirectAccess array the other week – so I thought I would return here and blog it!

In short, everything was working fine apart from one small thing – “Manage Out” via the IPHTTPS tunnel wasn’t functioning.

Specifically, clients were bringing up the IPHTTPS tunnel before Teredo was ready. Whilst IPHTTPS is connected it will be preferred over Teredo (or 6to4), and it only disconnects after a random amount of time.

Clients could route traffic down this tunnel – so connecting to intranet services was fine. The tunnel was up on both parts (Intranet/Infrastructure) and everything worked fine apart from “Manage Out”. Routes were all fine; Windows Firewall (client-side) all fine.

Cue some hair-tearing etc etc.

Eventually raised a call with MS – and in short it’s VMware causing the issue.

To quote MS (slightly edited to make sense outside of the Email trail);

We have had similar cases before where VMWare template provisioning was used for the UAG hosts, and can confirm that the problem was down to the template creating duplicate adapters that would affect tunnel bindings when configuring UAG DA. And the solution was to rebuild using standard media which completely addressed the issue.

A quick script that will assign all archive users (i.e. people with an Exchange archive) to a retention policy (one that you have created to archive email, obviously), then runs Start-ManagedFolderAssistant to apply it.
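A minimal sketch of such a script – the policy name “Archive Policy” is a placeholder, so substitute whatever you called yours:

```powershell
# Grab every mailbox that has a local archive, stamp the retention policy,
# then kick the Managed Folder Assistant so it applies straight away.
# "Archive Policy" is a placeholder name - substitute your own policy.
$archiveUsers = Get-Mailbox -ResultSize Unlimited |
    Where-Object { $_.ArchiveDatabase -ne $null }

$archiveUsers | Set-Mailbox -RetentionPolicy "Archive Policy"

$archiveUsers | ForEach-Object {
    Start-ManagedFolderAssistant -Identity $_.Identity
}
```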

When applying a new Retention Policy (or tag) in Exchange 2010 SP1 you may wonder why it doesn’t apply immediately.

In short, Exchange is configured by default to cycle all mailboxes in a 24 hour period. This can be seen (and therefore changed) by looking at the “ManagedFolderWorkCycle” attribute against each Mailbox Server;

Get-MailboxServer | Select-Object Name,ManagedFolderWorkCycle

(obviously, Set-MailboxServer to set it – though I believe it only takes days as its input value)
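For example (the server name is a placeholder, and as above I believe the value is interpreted in days):

```powershell
# Placeholder server name - set the work cycle on MBX01 to 1 day
Set-MailboxServer -Identity "MBX01" -ManagedFolderWorkCycle 1
```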

If this isn’t good enough – then you can force this on an individual mailbox (or all if you wanted) using the command;
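Which, for a single mailbox, is simply (the mailbox identity is a placeholder):

```powershell
# Force the Managed Folder Assistant to process one mailbox immediately
Start-ManagedFolderAssistant -Identity "jbloggs"
```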

(for reference, the CAS array will be fronted by a hardware load balancer provided by the customer – probably a NetScaler or something similar)

Each of the servers was installed – each created the default “Mailbox 00915125” style databases.

Now, typically I rename these and move the EDB/log locations – it always seems simpler than moving the system-type mailboxes that are located on the first 2010 mailbox database in an organisation. I did so again in this instance, renaming them and moving them to the mount points created, similar to below;
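Something along these lines – the database name and mount-point paths here are illustrative, not the actual values from this deployment:

```powershell
# Illustrative names/paths only - rename the default database, then move
# its EDB file and log folder onto the dedicated mount points
Set-MailboxDatabase "Mailbox Database 0123456789" -Name "DB1"
Move-DatabasePath -Identity "DB1" `
    -EdbFilePath "E:\DB1\DB1.edb" `
    -LogFolderPath "F:\DB1_Logs"
```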

All seemed to be going well – so I created the 3-node DAG (placing the inactive FSW on the vCenter server for later use) and now it’s time to create myself a secondary copy of one of the six databases.

I ran the usual “Add Mailbox Database Copy” EMC wizard, selecting the second server as a target for a copy of “DB1”. However, while it copied over the initial EDB file, it wouldn’t copy over any logs (and therefore sat at ~30 logs to copy/replay).
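The EMS equivalent of that wizard, using the server names from this deployment:

```powershell
# Add a passive copy of DB1 on the second DAG node
Add-MailboxDatabaseCopy -Identity "DB1" -MailboxServer "SERVER02"
```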

Event log had one error message;

The internal database copy (for seeding or analysis purposes) has been stopped because it was halted by the client or because the connection with the client failed.

Helpful. A Google search turned up a relevant link – and yes indeed I have SnapManager for Exchange installed (this is a NetApp-based deployment after all) – however I am past the version noted (in fact am running the latest 6.0.2.x revision) – and cannot see anything on the NetApp NOW portal regarding this error.

Regardless I removed SME – no joy.

I then tried seeding a second database (after removing the DB1 copy) – and found that, out of my six databases, three wouldn’t seed;

DB1 – formerly the default MDB on SERVER01

DB3 – formerly the default MDB on SERVER02

DB5 – formerly the default MDB on SERVER03

Strange to see, but a very clear pattern. So I deleted DB1 and recreated it. Initially I recreated it as DB1, but it then seemed to inherit the same MailboxDatabaseCopy object in AD and still didn’t work – so I recreated it again as MDB1, and voila, it seeded. Even after renaming it back to “DB1” it caught back up (and I tested it for failure heavily later on – note to self btw – do NOT delete the boot disks from the vSphere server on an Exchange 2010 deployment. They do come back – but it’s not pretty and makes you sweat!).
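For reference, the delete/recreate dance looks roughly like this in EMS – the EDB/log paths are again illustrative:

```powershell
# Remove the stuck copy, then the database itself
Remove-MailboxDatabaseCopy -Identity "DB1\SERVER02" -Confirm:$false
Remove-MailboxDatabase -Identity "DB1" -Confirm:$false

# Recreate under a different name so the old AD objects aren't inherited
New-MailboxDatabase -Name "MDB1" -Server "SERVER01" `
    -EdbFilePath "E:\DB1\DB1.edb" -LogFolderPath "F:\DB1_Logs"
Mount-Database -Identity "MDB1"

# Seed the copy, then rename back once it's healthy
Add-MailboxDatabaseCopy -Identity "MDB1" -MailboxServer "SERVER02"
Set-MailboxDatabase -Identity "MDB1" -Name "DB1"
```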

So in short, if you are having problems seeding a database copy to a second node, and it is one of the defaults created with Exchange 2010 SP1 – delete it and try again.

(For reference, these servers were installed with 2010 SP1 and then updated with Update Rollup 3-v3)