Fixing a Tombstoned Domain Controller

After struggling for quite a while to find the right commands to fix a tombstoned domain controller, we thought it a good idea to post the steps we had to take.

I know a lot of people would say that the best way is to dcpromo the DC out of the domain, do a metadata cleanup and then dcpromo it back in. Sometimes this method is not possible, for instance when your DC is also an Exchange server. In that case you would first have to migrate Exchange to another server before fixing the broken DC.

First and foremost, make sure you have a system state backup of a healthy DC in case something goes wrong.

The first step is to allow the other domain controllers in your domain to replicate with the tombstoned DC. To do this, follow the steps below:

  1. Click Start, click Run, type regedit, and then click OK.
  2. Navigate to HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\NTDS\Parameters
  3. In the details pane, create or edit the registry entry as follows:

    If the registry entry exists in the details pane, modify the entry as follows:

    1. In the details pane, right-click Allow Replication With Divergent and Corrupt Partner, and then click Modify.
    2. In the Value data box, type 1, and then click OK.

    If the registry entry does not exist, create the entry as follows:

    1. Right-click Parameters, click New, and then click DWORD Value.
    2. Type the name Allow Replication With Divergent and Corrupt Partner, and then press ENTER.
    3. Double-click the entry. In the Value data box, type 1, and then click OK.
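The whole registry change above can also be made in one step from an elevated command prompt (remember to set the value back to 0 once replication is healthy again):

```powershell
# Enables replication with a divergent/corrupt partner; the value name must match exactly
reg add "HKLM\SYSTEM\CurrentControlSet\Services\NTDS\Parameters" /v "Allow Replication With Divergent and Corrupt Partner" /t REG_DWORD /d 1 /f
```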
The next step is to clear the lingering objects residing on the tombstoned DC.
Lingering objects come about like this: when an object is deleted on a working DC, it is tombstoned for the tombstone lifetime (180 days), after which it is removed completely. If the tombstoned DC had still been replicating normally, it would have detected that the object was tombstoned on another DC and placed its own copy in a tombstoned state as well.
When you bring the tombstoned DC back into replication it will still have that object in its database, but the rest of the domain will not know about it, as the object has been removed completely. This can create inconsistencies in your domain.
To remove these lingering objects, follow the steps below:
1. First, view the lingering objects to make sure you are not deleting anything important. Run this command:
Repadmin /removelingeringobjects ServerWithLingeringObjects CleanServerGUID NamespaceContainingLingeringObject /advisory_mode
For example:
Repadmin /removelingeringobjects DC95 a4bcd546-5e94-2330-b4d0-f218b16dc0f6 DC=Test,DC=Com
The server that throws the error is actually the clean server (CleanServerGUID). The GUID of this server can be found in DNS: expand Forward Lookup Zones and open the _msdcs.DOMAIN.NAME zone. This zone contains CNAME records that map each DC's GUID to its hostname. Copy the GUID of the server that threw the error.
After running this command the lingering objects are listed in the event logs, so you can review them there.
2. To remove the lingering objects, run the exact same command without the /advisory_mode switch.
3. Reboot the domain controllers and check whether replication starts.
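As an alternative to looking the GUID up in DNS, repadmin can print it directly – the "DSA object GUID" line in the output is the value you need. DC01 here is just a placeholder server name:

```powershell
# Prints replication status for the DC, including its DSA object GUID
repadmin /showrepl DC01
```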

Moving mailboxes between MS exchange servers in different branches / sites

In this scenario we had five branch sites, each with its own Exchange 2007 server. The goal was to consolidate all of them onto one Exchange 2010 server at the head office.

Below is the process to follow, as well as some things to look out for.

  • Make sure all the Exchange servers are on the latest service pack (online mailbox moves are not supported on Exchange 2007 SP1 and earlier – only from SP2 onward).
  • Setup legacy DNS configuration for Exchange 2007 and 2010 co-existence. This will allow remote users to access both the 2007 and 2010 exchange depending on where their mailbox currently resides – For OWA and outlook anywhere.
    • It uses autodiscover to connect you to the exchange server where your mailbox resides.
    • You need to create a public and an internal DNS A record (legacy.yourdomain.com). The internal A record needs to point to your Exchange 2007 server's internal address, and the public A record needs to be directed to the Exchange 2007 server for external connections – this can be done by publishing the legacy address if you use ISA, or via a new public IP NAT that directs to the Exchange 2007.
    • Note that Outlook 2003 does not use autodiscover, so remote users' settings will not be automatically changed over to the new Exchange server when their mailbox has been moved – you would need to change them manually.
    • You would need to create a new certificate for Exchange that includes the legacy DNS address. In total you would include the following names in your cert:
      • Legacy.yourdomain.com
      • webmail.yourdomain.com (Depending on the address you use)
      • autodiscover.yourdomain.com
      • internal FQDN of your exchange 2010 server

The old cert for exchange 2007 would still need to be in place while you are doing the migration.
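Requesting the new certificate with all of the names above can be done from the Exchange 2010 Management Shell. A minimal sketch – all the domain names are placeholders to substitute with your own:

```powershell
# Generate a certificate request covering the webmail, legacy, autodiscover and internal names
$req = New-ExchangeCertificate -GenerateRequest -PrivateKeyExportable $true `
    -SubjectName "cn=webmail.yourdomain.com" `
    -DomainName webmail.yourdomain.com,legacy.yourdomain.com,autodiscover.yourdomain.com,exch2010.yourdomain.local
Set-Content -Path C:\certreq.txt -Value $req   # submit this request file to your CA
```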

  • Exchange will automatically be able to route email between the old exchange 2007 and new 2010, so the SMTP NAT can point to either the 2007 or 2010 exchange.
  • In this scenario the client was using mimecast and mail was being delivered according to which exchange server the user mailbox was residing on. So after we moved a mailbox from a branch to head office we had to run the mimecast AD sync to update mimecast on where to send the mail for the user (If your exchange can route the mail from the branch to the HO you won’t need to do this).
    • Mimecast does an automatic AD sync every couple of hours.
    • Mimecast can only accept about 10 changes during one AD sync (so preferably run the AD sync before completing more than 10 mailbox moves).
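The online mailbox moves themselves are kicked off from the Exchange 2010 Management Shell. A minimal sketch – the mailbox address and database name are placeholders:

```powershell
# Move one mailbox from the 2007 branch server to the head office 2010 database
New-MoveRequest -Identity "jsmith@yourdomain.com" -TargetDatabase "HO-DB01"

# Check on the progress of all pending moves
Get-MoveRequest | Get-MoveRequestStatistics
```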

Access denied opening the Exchange 2010 Management Console

You receive the above error when trying to open the Exchange 2010 Management Console.

I resolved this by doing the following.

Run the following command from cmd:

Net time /set

Run the following command from powershell:

Winrm quickconfig

Then restart the server.

 

If your Exchange server is running inside a virtual machine, make sure that time synchronization is not enabled in the guest integration services.

Move VM created in Hyper-V Server 2012 to Hyper-V Server 2008R2

I recently had to move a VM I created in Hyper-V 2012 to a 2008R2 Hyper-V server.

This is the process I followed as well as the problem I came across.

Created a new VM on the 2008R2 server.

Copied the 2 VHD files across.

Attached them and started the VM.

When the server started up there were two things I noticed were not working: Disk Manager and the NIC. The NIC icon would just be stuck spinning. I resolved it by shutting the VM down and adding a second NIC, without removing the first one. When the server next booted it installed the second NIC and somehow re-applied the network configuration. I still had to wait about 15 minutes for everything to start working. Once it did, I connected the server to the physical network and did a final reboot.

Everything was working fine after that.

Hyper-V Manager missing in Server 2012

So you’ve installed the Hyper-V role on Server 2012 and are ready to start setting up your virtual environment, but for some reason you cannot access Hyper-V Manager.

The reason is you need to add the Hyper-V management tools as a feature.

Go to Add Features in Server Manager and tick Hyper-V Management Tools (under Remote Server Administration Tools > Role Administration Tools).
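If you prefer PowerShell, the same feature can be added with a single command on Server 2012:

```powershell
# Installs Hyper-V Manager and the Hyper-V PowerShell module
Install-WindowsFeature RSAT-Hyper-V-Tools -IncludeAllSubFeature
```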

Design considerations for VMware vSphere

  • You have three types of swap file:
    • Host swap file (4GB).
    • VM swap file (the size of the memory allocated to the VM – a space consideration when planning your design).
    • Guest OS swap file.
  • VM’s heavily rely on CPU cache – Get CPU’s with high L1/2/3 cache.
  • You could purchase solid state disks for the partitions where your host will be installed and store the VM’s swap files on there – This will improve performance.
  • If you don’t need to allocate an additional vCPU to a VM then DON’T – it uses extra memory overhead to run the VM (rather assign an extra core if needed, unless your app is configured to run with multiple processors).
  • Optimize VM’s according to NUMA – Don’t allocate more RAM to the VM than what is on the physical RAM section per CPU.
  • Plan 60% of resources for usage and 40% for maintenance and future growth.
  • Make sure all hosts in your HA or fault tolerance cluster have enough resources to accommodate the VM’s that might be moved to them if another host fails.
  • Rather scale out than up for HA.
  • Don’t put your management and IP storage on the same network.
  • For HA implement redundancy heartbeat networks and redundant isolation addresses.
  • If you are planning to enable fault tolerance on VM’s then only assign 1 vCPU to the VM. FT is not supported on more (This is on vSphere 5.1 and earlier).
  • Dedicate networks for Fault Tolerance.
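As a quick sanity check for the fault tolerance point above, PowerCLI (assuming it is installed and already connected with Connect-VIServer) can list the VMs that have more than one vCPU and therefore cannot be protected by FT on vSphere 5.1 and earlier:

```powershell
# VMs with more than 1 vCPU - not eligible for FT on vSphere 5.1 and earlier
Get-VM | Where-Object { $_.NumCpu -gt 1 } | Select-Object Name, NumCpu
```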

VMware vSphere best practices

Below are some high level best practices for your vSphere environment:

  • Remove your ISO images from your VM’s as soon as you are done with them – Don’t leave them connected.
  • Remove all devices that you are not using on your VM’s (This prevents unnecessary Kernel processing).
  • Isolate your iSCSI traffic; don’t use CHAP.
  • Unmount datastores before you delete them.
  • Set alarms on your critical datastores – Alerts you before it runs out of space (It will freeze the VM’s on the datastore should the datastore run out of space).
  • Create VM templates.
  • Enable CPU and memory hot add on each VM (not enabled by default). Remember that hot add is limited by the guest OS – eg: Server 2003 does not support CPU hot add.
  • Dedicate cores to VM’s – Don’t over commit.
  • Change your block sizes according to application recommendations – eg: MS SQL uses 64k block sizes (So set this on SAN and guest OS level).
  • Configure VM startup and shutdown – eg: DC start up first, DB servers second, Web/front end servers third and test/non-critical VM’s last. Remember to configure this on each host, if it vMotions the VM’s to another host the startup/shutdown settings will change according to what has been configured on that host.
  • Configure IP hash as the load balancing option – this needs link aggregation support on the switches (the switches need to be stacked).
  • Run HA and DRS together.
  • Join vCenter and host to domain and let the ESXi administrators use their domain accounts to administer.
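The first two points above (disconnecting ISOs and unused devices) can be swept across the whole environment with PowerCLI; a sketch, assuming an existing Connect-VIServer session:

```powershell
# Detach any mounted ISO from every VM's virtual CD drive
Get-VM | Get-CDDrive | Where-Object { $_.IsoPath } | Set-CDDrive -NoMedia -Confirm:$false
```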

 

VMware vSphere redundancy

Always make sure you have proper redundancy in place. Below are your most important considerations.

  • Remove SPOF (Single Point Of Failure) – Make sure you have at least two of everything.
  • Use shared storage with proper RAID level redundancy.
  • Use HA and fault tolerance with multiple hosts in the cluster.
  • Use SRM (Site Recovery Manager) to replicate entire production site to DR site or a different branch.
  • Make sure your vCenter server is highly available.

Configuring VMware vSphere for best performance

Below are a couple of things to consider when configuring your VM’s for best performance.

  • Disconnect any devices that your VM’s don’t use – This removes any unnecessary kernel processing.
  • Always use Thick Provision Eager Zeroed disks.
  • Use multiple NIC’s and team them together.
  • Use Round Robin on NIC’s with IP Hash for your load balancing.
  • Use a hardware iSCSI HBA if it supports Jumbo Frames – thanks to the TOE (TCP Offload Engine) function on these cards, an HBA can provide up to 150% better performance than software iSCSI without Jumbo Frames. If your HBA card does not support Jumbo Frames, then use Software iSCSI with Jumbo Frames instead.
  • Set Jumbo Frames to 9000. You would need to set this on all interfaces (VM NIC’s on host, switches and SAN).
  • Put VM’s on different LUN’s (1 LUN = 1 I/O queue).
  • Set block sizes according to application requirements. Eg: for a MS SQL server you would set the block size on the SAN (RAID where SQL VM will reside) to 64k and also format the VM guest OS partitions with 64k block sizes.
  • Optimize VM’s according to NUMA.
  • Enable large memory pages on critical production servers (Inside guest OS). But don’t do this on all servers otherwise they can’t share memory pages.
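The Jumbo Frames settings above can be applied to a standard vSwitch and its VMkernel port with PowerCLI; a sketch with placeholder host, switch and port names, assuming an existing Connect-VIServer session:

```powershell
# Set MTU 9000 on the vSwitch carrying iSCSI traffic...
Get-VirtualSwitch -VMHost esx01 -Name vSwitch1 | Set-VirtualSwitch -Mtu 9000 -Confirm:$false
# ...and on the VMkernel adapter bound to it
Get-VMHostNetworkAdapter -VMHost esx01 -Name vmk1 | Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false
```

Remember that the physical switches and the SAN interfaces must be set to the same MTU as well.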