
Nerd Achievement Unlocked: We’re On Hey Scripting Guy!


Hey y’all, Mark here with a quick post. A while back Tom Moser and I wrote a PowerShell script to help back up and remove ADM files.

http://blogs.technet.com/b/askpfeplat/archive/2011/12/12/how-to-implement-the-central-store-for-group-policy-admin-templates-completely-hint-remove-those-adm-files.aspx

and

http://blogs.technet.com/b/askpfeplat/archive/2012/03/14/central-store-and-adm-removal-q-amp-a-with-an-updated-script.aspx

 

We were asked by Hey Scripting Guy to do a more detailed write-up on how we went about writing this script, so we went into some crazy detail, found here.

 

As long-time readers of Hey Scripting Guy, we couldn’t be more excited.

 

-Mark Morowczynski and Tom Moser


How the ‘netmask ordering’ feature in DNS affects the resultant queries


Hey y’all, Mark here again. When visiting a customer, PFEs tend to get a bunch of questions that have been “saved up” over time. One of my frequent customers always has a massive list for me when I get there (if they are reading this, they know who they are). Many of these questions aren’t causing a production issue they would call support about to determine root cause; they’re just little annoyances they’d like to get figured out. If you have questions like that, feel free to use the Contact Us button – it might turn into a blog post. This is a perfect example.

 

 


All clients are in the Chicago site. If the user did a DNS lookup from Client01 (172.10.1.100) on the domain name Contoso.com, the following DCs were returned, in order:

DC01-172.10.1.1

DC02-172.10.1.2

DC04-172.30.20.1

DC03-172.20.10.1

DC05-192.168.5.1

 

If they ran the same query a second time from the same machine, the list would tend to look like this:

DC02-172.10.1.2

DC01-172.10.1.1

DC04-172.30.20.1

DC03-172.20.10.1

DC05-192.168.5.1

If they ran it a third time, the results would be identical to the first run, and the fourth time would be identical to the second. The DCs in the clients’ AD site were always returned first and second in the list. This was the desired and expected behavior. However, if Client02 (172.10.8.1) or Client03 (172.10.9.1) did a DNS lookup, the list would frequently look like those above, but sometimes it would look something like this:

DC03-172.20.10.1

DC04-172.30.20.1

DC05-192.168.5.1

DC01-172.10.1.1

DC02-172.10.1.2

The customer first suspected a DHCP vs. static IP issue, since on the surface that appeared to be the only real difference between the clients. With a static address, a client would round-robin through only the records in its own site, but with DHCP it would round-robin through all the records. The devil, they say, is in the details.

There is a setting on the DNS server, on the Advanced tab, called Netmask Ordering. The purpose of netmask ordering is to prioritize local resources for clients. For example, if an A record exists in two places, one on the same subnet as the client and one on a different subnet, the server returns the record on the client’s subnet first. It assumes that since the subnets match, the resource must be closer to the client, so it returns that address first.


 

So what is the problem here? Netmask ordering was enabled on both DC01 and DC02, so why were the DHCP clients behaving differently? It turns out netmask ordering by DEFAULT matches on a Class C mask (255.255.255.0) – that is, the first three octets. Compare the Class C octets for the environment below.

DCs

DC01- 172.10.1.1

DC02- 172.10.1.2

DC03-172.20.10.1

DC04-172.30.20.1

DC05-192.168.5.1

Clients

Client01-172.10.1.100

Client02- 172.10.8.1

Client03-172.10.9.1

 

Since Client01’s Class C octets matched DC01’s and DC02’s, those were always returned first, in a round-robin fashion. However, Client02 and Client03 did not match the Class C octets of any DC, so all results were returned in a round-robin fashion, since none were considered close to those clients.

For this environment we actually wanted to match on the Class B portion, 172.10.x.x. To do this, we ran the following DNS command on each DNS server:

Dnscmd /Config /LocalNetPriorityNetMask 0x0000FFFF
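For completeness, and this is my addition based on KB 842197 rather than something from the original commands, the default Class C behavior can be restored later with:

Dnscmd /Config /LocalNetPriorityNetMask 0x000000FF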


 

Now all the DHCP clients in the Chicago site behaved like the statically addressed client, since the Class B portion (255.255.0.0) of their addresses matched the DCs’. DC01 and DC02 were returned in a round-robin fashion, as expected.

For more info, read http://support.microsoft.com/kb/842197

Mark “your mockingjay in this crazy tech world” Morowczynski

MAILBAG: Should I use the same network adapters for all interfaces on my cluster?


Not sure where February went, but it sure is flying by in a hurry.  During the month another interesting question came my way.   Don't forget that you can contact us using the Contact tile just to the right of this article when viewed from our TechNet blog.

Question

When building a cluster, is it okay to use the same model network adapters for all interfaces on the same node?

Thoughts regarding the question

Technically, the answer to this question is that it is perfectly fine to use the same model of network adapter for all interfaces in the cluster.  Modern clusters (Windows Server 2008 and newer) need to pass validation in order to be supported...and to avoid issues.  Assuming that validation passes with the network adapters chosen, you should be good to go.   However, my conservative nature tends to take this a step further...not just what is supported, but what might be better and supported.   The validation process tells you that things are working as expected.  But what about later, if the single driver that services all network adapters in the system malfunctions and prevents any communication?  A failover cluster may experience a failover that might otherwise have been prevented if communication were possible through at least one network interface.

With a single driver for all network interfaces, it is possible that all communication will be impacted at once.  I've seen that same scenario play out more times than I can count over the 16 years I've supported clusters at Microsoft.  Those issues typically go away like waving a magic wand when the offending network driver receives the proper update.  Usually the adapters using the same malfunctioning driver don't end up completely non-functional...but when there's a problem it may trigger an otherwise unnecessary failover.   Why?  Because there are timing tolerances for node-to-node communications as well as global updates, and an out-of-tolerance delay on all interfaces looks like a failure of all networks when the single network driver flakes out.   As a result, the cluster has to try to recover from the situation to keep resources highly available.  Thus, the single network driver approach can be vulnerable.

When I build a cluster, or when someone asks me about building one, I suggest using slightly different models of network adapters within the same server for the public and private networks.  They can even be from the same manufacturer as long as they use a different driver.  This way, you're using two different adapter drivers.  If one of them fails and renders the corresponding adapters useless, you still have the potential for other adapters in the system to function with the other driver(s).   One could argue that other single-driver situations, for storage or other devices, could be considered failure points as well.  When I've seen that happen, typically I/O operations get retried and access to storage may not completely fail.  Such incidents may be transient, recoverable, and noted in the event log.  Failover cluster nodes need to be able to communicate, which is why I have the opinion I do about network adapters and their corresponding drivers.  It is important to remove as many single points of failure as possible.   When it comes to communication, network adapters are typically inexpensive.  Again…what I’m saying here is not a design requirement.  It’s just an opinion based on experience.

Circling back around to the original question: it is perfectly fine to use adapters in the same server that are all the same model and use the same driver.  However, it might be good to consider slightly different network adapters to avoid a single network adapter driver becoming a single point of failure.  It is also wise to keep hardware configurations as consistent as possible amongst all failover cluster nodes.  Consistency of hardware across nodes is always a plus in my opinion.

A really great post about cluster validation can be found below:

http://blogs.msdn.com/b/clustering/archive/2011/06/28/10180803.aspx

Until next time!

-Martin

How to Reduce the Size of the Winsxs directory and Free Up Disk Space on Windows Server 2012 Using Features on Demand


Remember these on Windows Server 2003?


 

I cannot count the number of times I’ve been asked how to clean this up.

Apparently I’m not the only one that hates clutter. 

And what about that annoying prompt to insert your product CD-ROM, always at the most inopportune time? I’ve literally been on mission-critical server-down situations where no one could locate a CD.

File corruption, repairs, and installing features all required the CD.

Introduction of the Component Store

So with Windows Server 2008 and later, we moved from that to the Windows Side-by-Side (WinSXS) directory and there was much rejoicing as it introduced many new features that made the administrator’s job much easier:

  • We no longer prompt for the CD when installing a role or feature (unless you’ve completely removed it, which only applies to Windows Server 2012 or later)
  • We automatically repair corrupt files using a good copy from the component store
  • Repairs such as System File Checker (SFC) no longer prompt for media
  • All the previous versions of the operating system files are still readily available and newer versions (just in case you install a role or feature in the future) are too

  

But a few nagging questions began to pop up…they typically come hand in hand with disk space cleanup.

What is this winsxs directory and why is it so large?  See below.

Can I delete winsxs? No.


Can I move winsxs? No.

Can I cleanup winsxs? It depends…

We wrote a blog to get the message out: http://blogs.technet.com/b/askcore/archive/2008/09/17/what-is-the-winsxs-directory-in-windows-2008-and-windows-vista-and-why-is-it-so-large.aspx

The gist: you could uninstall a role/feature, but it was still there…along with all the updates it might ever need.

There’s also a good one on how to reclaim space after installing a service pack:

http://blogs.technet.com/b/joscon/archive/2011/02/15/how-to-reclaim-space-after-applying-service-pack-1.aspx and here as well: http://support.microsoft.com/kb/2795190/EN-US or, if you’re looking to clean up disk space on Windows Server 2012 in general, check this out: http://social.technet.microsoft.com/wiki/contents/articles/15221.enabling-disk-cleanup-utility-in-windows-server-2012.aspx

Windows Server 2012 Features on Demand

Then came Windows Server 2012 Features on Demand with the ability to remove any unwanted role/feature and all is well.  So far.

And it’s easy too with 3 simple steps:

1)     Open an administrative PowerShell command prompt

2)     Use the Get-WindowsFeature command to find the name of the role/feature

3)     To uninstall, run the following command:

Uninstall-WindowsFeature -Name <name of role/feature> -Remove

Tip: Add the -WhatIf switch to the end to see exactly what will be removed without actually removing it.
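For example, here’s what the whole removal flow might look like for the WINS feature (the feature name here is just an illustration; substitute your own from step 2):

# Find the exact feature name first
Get-WindowsFeature *wins*

# Dry run: show what would be removed
Uninstall-WindowsFeature -Name WINS -Remove -WhatIf

# Remove the feature and delete its payload from the component store
Uninstall-WindowsFeature -Name WINS -Remove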

So why do this? I’ll leave you with three reasons:

1)     Decrease the footprint of the base server install

2)     Reduce the number of potential patches (and reboots)

3)     Increase security compliance by removing features not integral to the role of the server

But don’t worry. We’ll let you add them back if you change your mind. :-)

So what happens if you try to reinstall a role or feature that has been completely removed?

Well, we alert you that the feature or role is missing and we will automatically attempt to retrieve it from Windows Update (or WSUS), a source location specified by Group Policy, or you can manually specify an alternate source path.

What does this look like in PowerShell?

For starters, when you run the Get-WindowsFeature command, it will show the role or feature as Removed:
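If you’d like to list everything in that state at once, a quick filter does the trick (a minimal sketch):

Get-WindowsFeature | Where-Object { $_.InstallState -eq 'Removed' }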




If you still attempt to add the role or feature via PowerShell, here’s what it looks like:

At this point, it’s obvious we do not have the Group Policy configured with the alternate source location. We can either grant the server external internet access and allow it to get the feature from Windows Update, if possible, or we can specify a source location with our Install-WindowsFeature command using the -Source switch. Here’s an example:

Install-WindowsFeature web-server -Source {source location and source file}

Here’s what this looks like in the GUI:

If you click on the Specify an alternate source path option, you are presented with the following:

You can specify a share with the install.wim file, but you also need to specify the index of the image within the WIM file. More on that here in a minute.

If you would like to configure this ahead of time through Group Policy, the policy we configure is Computer Configuration\Administrative Templates\System\Specify settings for optional component installation and component repair

Here’s what the policy looks like when we go to enable and configure it:

Notice the options to Never attempt to download payload from Windows Update or Contact Windows Update directly to download repair content instead of Windows Server Update Services (WSUS). These options can come in handy depending on the configuration of your environment.

I personally think it’s cool that we can now point to an install.wim file instead of a share with a flat copy of the install media.

To do so, we specify the WIM parameter along with a share that contains the install.wim file. We also have to specify which image within the install.wim file we wish to use as our source.

For those of you that don’t know, a .wim file is simply a container holding one or more images. These can be custom images that you’ve created and captured yourself, modified images with changes such as injected security updates, or just the base images contained in the default install.wim pulled from the Sources directory on the Windows Server 2012 media.

Therefore, the index number we provide could be different depending on the install.wim we are pointing to as our source.

To find out which image you would like to use, run dism /Get-WimInfo /WimFile:{location of install.wim} to list the images contained within the install.wim file. Here’s what this looks like for the default install.wim:
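For instance, if the install.wim were sitting on the share used later in this post, the command might look like this (the path is just for illustration):

dism /Get-WimInfo /WimFile:\\KMS-2012\2012_Source\install.wim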

Since I’m deploying mainly Windows Server 2012 Standard in my environment, I need to specify an index of 2. The majority of you are probably deploying Windows Server 2012 Standard as well, unless you’re setting up Hyper-V servers, in which case it’s likely Datacenter, or potentially Server Core, in which case it’s one of the Server Core editions. However, there are no edition-specific role differences between Standard and Datacenter edition. So really, I could specify an index of 2 or 4 and be just fine.

The end result is that my alternate source file path is as follows:

wim:\\KMS-2012\2012_Source\install.wim:2

Now when reinstalling my role or feature, it uses this source and works like a champ!
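Putting it together, reinstalling the Web Server role from that source would look something like this, using the share and index from above:

Install-WindowsFeature Web-Server -Source wim:\\KMS-2012\2012_Source\install.wim:2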

Things to keep in mind when using the install.wim as your source:

  • As you apply updates to your servers, you want to keep your source install.wim that you are using for re-adding roles and features updated too. So why is this? If you remove a role or feature after it has been updated, it is possible that the update you applied previously that updated that role or feature spanned multiple roles and features and therefore was not removed. When you attempt to re-add the role or feature back using the install.wim source, if it’s not as updated as the server expects, it will fail to re-add it. If it is newer than expected, that’s fine. It just can’t be older than expected.
  • When we remove a role or feature, we do delete the files associated with that role when we remove the role, but we don’t uninstall the updates that have already been applied. When the role is enabled again we need files from any updates that have been applied to it.
  • It’s ok for the install.wim to be more up to date than the server that you are enabling the role for. The important part is that the install.wim has the files from all the updates that have been applied to the server affecting the role being enabled. Also note that if the install.wim is more up to date than the server, enabling the role will never install new updates directly from the install.wim. To bring the server up to date, new updates need to be applied in the normal way, such as WSUS or Windows Update.
  • If you are attempting to install .NET Framework 3.5 on Windows Server 2012, specify the Sources\SxS directory on the DVD (or a copy of it) instead of the install.wim; the files in the SxS directory copied from the DVD are just for .NET Framework 3.5. You can host them on a share and supply them through the GUI or through Group Policy just like any other feature (see the example after this list).
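Here’s a minimal sketch of the .NET Framework 3.5 case, assuming the media is mounted as D: (a share containing a copy of the SxS folder works the same way):

Install-WindowsFeature NET-Framework-Core -Source D:\Sources\SxS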

If all that is too much to keep up with, just use Windows Update. As long as the service is not disabled, blocked at the firewall, or blocked by Group Policy, we’ll automatically search Windows Update if we don’t find what we need in the specified install.wim or on the server. It works like a champ. :-)

Enjoy!

~ Charity “Will slim base installs be the new wave of the future?” Shelbourne

Your Personal Isolated Lab - Featuring Windows 8 + Hyper-V


By Mike Leary

With Windows 8 + Hyper-V you have the opportunity to create a great lab environment on a single workstation. When I build a lab I like it to be isolated from any production network, but also flexible enough to get Internet access from the lab and to move files and such into the lab. I like the idea of isolation because I fear things like rogue DHCP servers or duplicate domain controllers on a production network. As a Premier Field Engineer I need the ability to quickly reproduce issues and verify behaviors in my own lab. This approach gives me complete control for testing (no tweaked configurations, 3rd-party software, etc.).

The solution I am outlining in this article is not intended for a production network, but for personal testing on a single Hyper-V host. In most production environments there is no need for isolation and this networking layout would add an unneeded complexity.

If you are looking for a production ready solution that provides better flexibility and enterprise ready features I would suggest looking at solutions such as TMG, UAG, or RRAS. I will be adding a blog post shortly on how to configure TMG. Stay tuned.

So what is the solution? Well, it is not perfect, but I use a Linux-based firewall/router running as a VM. This solution has a very small footprint for both disk and memory (50 MB and 32 MB, respectively). It boots very quickly and provides lots of functionality. There are a few options in this realm, including pfSense, m0n0wall, and DD-WRT. In my testing I found that DD-WRT was the most stable platform and pfSense provided the best performance. Since I care more about ease of use and ease of installation, I selected DD-WRT for my lab.

DD-WRT is an open source router firmware that has been ported to x86. It runs under Hyper-V with very little setup. It does require the use of Legacy network adapters, and the throughput is somewhat limited, but it provides a great firewall, web interface, and port forwarding. For a full list of features visit http://dd-wrt.com.

Here is a simple diagram of the network setup I use:


What follows is a simple guide to get DD-WRT up and running in your lab (It assumes you have some experience with Hyper-V):

In the virtual switch settings in Hyper-V, you will need to create an Internal network that is private only. Then create one network for each external interface. For example, I have WiFi and Ethernet, so I have 3 total network switches, as seen in the following diagram:


Make sure that the External interfaces have the “Allow management operating system to share this network adapter” check box selected.

Next, create a new VHD on your host machine. Open Disk Management and create a new VHD that is 50 MB, fixed size. Make a note of the disk number (Disk 2 in this example):


Download physdiskwrite from http://m0n0.ch/wall/physdiskwrite.php

Download the latest DD-WRT image for x86: http://www.dd-wrt.com/routerdb/de/download/X86/X86///dd-wrt_public_vga.image/3744

Or search the router database at www.dd-wrt.com for x86

Open a command prompt with administrative privileges and run the following command:

physdiskwrite -u dd-wrt_public_vga.image

It will prompt you to put the disk number in. This will erase the entire drive, so double check your disk number. Once this process is complete, dismount the VHD from the host machine.

Create a new VM (I name mine 1-Router so it is at the top of the list in Hyper-V Manager). Set the memory to 32 MB and leave it NOT connected to a network.

Open the settings of the new VM and remove the Network adapter and add 2 new Legacy Network adapters.

Connect one of the Legacy network adapters to the internal network. Leave the other disconnected.
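If you prefer PowerShell over the wizard for the last few steps, a rough equivalent follows; treat it as a sketch, since the VHD path and switch name here are assumptions from this walkthrough:

# Create the router VM with 32 MB of RAM, attached to the DD-WRT VHD
New-VM -Name "1-Router" -MemoryStartupBytes 32MB -VHDPath "C:\Lab\dd-wrt.vhd"

# Replace the default synthetic NIC with two legacy NICs
Remove-VMNetworkAdapter -VMName "1-Router"
Add-VMNetworkAdapter -VMName "1-Router" -IsLegacy $true -SwitchName "Internal"
Add-VMNetworkAdapter -VMName "1-Router" -IsLegacy $true    # leave this one disconnected for now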

** If you connect the External network to the wrong adapter you will introduce DHCP to your External network. This will make you and everyone else on your network crazy, so don’t do it…

Boot the VM

Test the configuration from another VM:

From any VM on the internal network, set it to use DHCP. The router should give you a 192.168.1.x address. If you don’t get one, try switching the Internal network to the other legacy adapter in the VM configuration.

Once you get an address, open http://192.168.1.1 – if you get a web page, you did it correctly!

Connect the second VM Legacy network adapter to the external network (pick the one that is currently active).

Enable the WAN port:

Enable the WAN on the settings page. From there you should have a fully functional router/firewall in your environment.

I use port forwarding to allow RDP (port 3389) to one of my VMs. I use this to transfer files and work in the lab. This solution provides great isolation, internet access, and management access from your host machine.

Related blog post:

http://blogs.technet.com/b/hollis/archive/2012/01/23/excerpt-from-the-road-warriors-survival-guide-part-2-of-2-quot-networking-the-lab.aspx

http://www.windowsitpro.com/article/what-would-microsoft-support-do/road-warriors-laptop-build-guide-142916

Windows Server 2012 Hyper-V Best Practices (In Easy Checklist Form)


Windows Server 2012 provided major improvements to the Hyper-V role, including increased consolidation of server workloads, Hyper-V Replica, Cluster Aware Updating (CAU), network virtualization and the Hyper-V extensible switch, just to name a few! Hyper-V 3.0, as some call it, helps organizations improve server utilization while reducing costs.

The following is a checklist I initially developed for Windows Server 2008 R2 SP1 (which can be found here: http://blogs.technet.com/b/askpfeplat/archive/2012/11/19/hyper-v-2008-r2-sp1-best-practices-in-easy-checklist-form.aspx) and overhauled with the latest release. Those of you who have used my previous checklist will notice quite a few items remaining; that’s because many of the best practices still apply to Hyper-V in Server 2012!

I find having a checklist can be a great tool to use not only when reviewing an existing Hyper-V implementation, but one which can be leveraged as part of pre-planning stages, to ensure best practices are implemented from the start.

It’s important to note this is not an exhaustive compilation, rather a grouping of features/options commonly used in businesses I’ve had the pleasure of assisting.

A special thanks to Ted Teknos, Ryan Zoeller and Rob Hefner for their input/suggestions/corrections as I put this together!

So, without further ado, here’s the newly updated Hyper-V 2012 Best Practice Checklist!


Disclaimer: As with all Best Practices, not every recommendation can – or should – be applied. Best Practices are general guidelines, not hard, fast rules that must be followed. As such, you should carefully review each item to determine if it makes sense in your environment. If implementing one (or more) of these Best Practices seems sensible, great; if it doesn't, simply ignore it. In other words, it's up to you to decide if you should apply these in your setting.


 

GENERAL (HOST):

⎕ Use Server Core, if possible, to reduce OS overhead, reduce potential attack surface, and to minimize reboots (due to fewer software updates).

⎕ Ensure hosts are up-to-date with recommended Microsoft updates, to ensure critical patches and updates – addressing security concerns or fixes to the core OS – are applied.

⎕ Ensure all applicable Hyper-V hotfixes and Cluster hotfixes (if applicable) have been applied. Review the following sites and compare it to your environment, since not all hotfixes will be applicable:

· Update List for Windows Server 2012 Hyper-V: http://social.technet.microsoft.com/wiki/contents/articles/15576.hyper-v-update-list-for-windows-server-2012.aspx

· List of Failover Cluster Hotfixes: http://social.technet.microsoft.com/wiki/contents/articles/15577.list-of-failover-cluster-hotfixes-for-windows-server-2012.aspx

⎕ Ensure hosts have the latest BIOS version, as well as the latest firmware for other hardware devices (such as Synthetic Fibre Channel adapters, NICs, etc.), to address any known issues and supportability concerns

⎕ Host should be domain joined, unless security standards dictate otherwise. Doing so makes it possible to centralize the management of policies for identity, security, and auditing. Additionally, hosts must be domain joined before you can create a Hyper-V High-Availability Cluster.

· For more information: http://technet.microsoft.com/en-us/library/ee941123(v=WS.10).aspx

⎕ RDP Printer Mapping should be disabled on hosts, to remove any chance of a printer driver causing instability issues on the host machine.

  • Preferred method: Use Group Policy with host servers in their own separate OU
    • Computer Configuration –> Policies –> Administrative Templates –> Windows Components –> Remote Desktop Services –> Remote Desktop Session Host –> Printer Redirection –> Do not allow client printer redirection –> Set to "Enabled"

⎕ Do not install any other Roles on a host besides the Hyper-V role and the Remote Desktop Services roles (if VDI will be used on the host).

  • When the Hyper-V role is installed, the host OS becomes the "Parent Partition" (a quasi-virtual machine), and the Hypervisor partition is placed between the parent partition and the hardware. As a result, it is not recommended to install additional (non-Hyper-V and/or VDI related) roles.

⎕ The only Features that should be installed on the host are: Failover Clustering (if the host will become part of a cluster), Multipath I/O (if the host will be connecting to an iSCSI SAN, Storage Spaces, and/or Fibre Channel), or Remote Desktop Services if VDI is being used. (See explanation above for reasons why installing additional features is not recommended.)

⎕ Anti-virus software should exclude Hyper-V specific files using the Hyper-V: Antivirus Exclusions for Hyper-V Hosts article, namely:

    • All folders containing VHD, VHDX, AVHD, VSV and ISO files
    • Default virtual machine configuration directory, if used (C:\ProgramData\Microsoft\Windows\Hyper-V)
    • Default snapshot files directory, if used (%systemdrive%\ProgramData\Microsoft\Windows\Hyper-V\Snapshots)
    • Custom virtual machine configuration directories, if applicable
    • Default virtual hard disk drive directory
    • Custom virtual hard disk drive directories
    • Snapshot directories
    • Vmms.exe (Note: May need to be configured as process exclusions within the antivirus software)
    • Vmwp.exe (Note: May need to be configured as process exclusions within the antivirus software)
    • Additionally, when you use Cluster Shared Volumes, exclude the CSV path "C:\ClusterStorage" and all its subdirectories.
  • For more information: http://social.technet.microsoft.com/wiki/contents/articles/2179.hyper-v-anti-virus-exclusions-for-hyper-v-hosts.aspx

⎕ The default path for Virtual Hard Disks (VHD/VHDX) should be set to a non-system drive, because keeping them on the system drive can cause disk latency and creates the potential for the host to run out of disk space.

⎕ If you choose to save the VM state as the Automatic Stop Action, the default virtual machine path should be set to a non-system drive, because a .bin file is created that matches the size of the memory reserved for the virtual machine.  A .vsv file may also be created in the same location as the .bin file, adding to the disk space used by each VM. (The default path is: C:\ProgramData\Microsoft\Windows\Hyper-V.)

⎕ If you are using iSCSI: In Windows Firewall with Advanced Security, enable iSCSI Service (TCP-In) for Inbound and iSCSI Service (TCP-Out) for outbound in Firewall settings on each host, to allow iSCSI traffic to pass to and from host and SAN device. Not enabling these rules will prevent iSCSI communication.

To set the iSCSI firewall rules via netsh, you can use the following command:

netsh advfirewall firewall set rule group="iSCSI Service" new enable=yes

 

⎕ Periodically run performance counters against the host, to ensure optimal performance.

  • Recommend using the Hyper-V performance counter that can be extracted from the (free) Codeplex PAL application:
  • Install PAL on a workstation and open it, then click on the Threshold File tab.
    • Select "Microsoft Windows Server 2012 Hyper-V" from the Threshold file title, then choose Export to Perfmon template file. Save the XML file to a location accessible to the Hyper-V host.
  • Next, on the host, open Server Manager –> Tools –> Performance Monitor
  • In Performance Monitor, click on Data Collector Sets –> User Defined. Right click on User Defined and choose New –> Data Collector Set. Name the collector set "Hyper-V Performance Counter Set" and select Create from a template (Recommended) then choose Next. On the next screen, select Browse and then locate the XML file you exported from the PAL application. Once done, this will show up in your User Defined Data Collector Sets.
  • Run these counters in Performance Monitor for 30 minutes to 1 hour (during high usage times) and look for disk latency, memory and CPU issues, etc.

 

GENERAL (VMs):

⎕ Ensure you are running only supported guests in your environment. For a complete listing, refer to the following list: http://blogs.technet.com/b/schadinio/archive/2012/06/26/windows-server-2012-hyper-v-list-of-supported-client-os.aspx

 

PHYSICAL NICs:

⎕ Ensure NICs have the latest firmware, which often address known issues with hardware.

⎕ Ensure latest NIC drivers have been installed on the host, which resolve known issues and/or increase performance.

⎕ VMQ should be enabled on VMQ-capable physical network adapters bound to an external virtual switch.
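To check and enable this from PowerShell (Server 2012 NetAdapter module; the adapter name is an example):

# Show which adapters support VMQ and whether it is enabled
Get-NetAdapterVmq

# Enable VMQ on a specific adapter
Enable-NetAdapterVmq -Name "Ethernet 2"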

⎕ TCP Chimney Offload is not supported with Server 2012 software-based NIC teaming, because TCP Chimney offloads the entire networking stack to the NIC. If software-based NIC teaming is not used, however, you can leave it enabled.

  • TO SHOW STATUS:
    • From an elevated command-prompt, type the following:
      • netsh int tcp show global
        • (The output should show Chimney Offload State disabled)
  • TO DISABLE TCP Chimney Offload:
    • From an elevated command-prompt, type the following:
      • netsh int tcp set global chimney=disabled

⎕ Jumbo frames should be turned on and set to 9000 or 9014 (depending on your hardware) for the CSV, iSCSI, and Live Migration networks. This can significantly increase throughput (up to 6x) while also reducing CPU cycles. (See the configuration sketch after this list.)

  • End-to-End configuration must take place – NIC, SAN, Switch must all support Jumbo Frames.
  • You can enable Jumbo frames when using crossover cables (for Live Migration and/or Heartbeat), in a two node cluster.
  • To verify Jumbo frames have been successfully configured, run the following command from all your Hyper-V host(s) to your iSCSI SAN:
    • ping 10.50.2.35 -f -l 8000
      • This command will ping the SAN (e.g. 10.50.2.35) with an 8K packet from the host. If replies are received, Jumbo frames are properly configured.
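Jumbo frames can also be configured from PowerShell on many adapters; treat this as a sketch, since the advanced-property keyword and value vary by driver:

Set-NetAdapterAdvancedProperty -Name "iSCSI NIC" -RegistryKeyword "*JumboPacket" -RegistryValue 9014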


⎕ NICs used for iSCSI communication should have all Networking protocols (on the Local Area Connection Properties) unchecked, with the exception of:

  • Manufacturers protocol (if applicable)
  • Internet Protocol Version 4
  • Internet Protocol Version 6.
  • Unbinding other protocols (not listed above) helps eliminate non-iSCSI traffic/chatter on these NICs.

⎕ NIC Teaming should not be used on iSCSI NICs; MPIO is the best method. NIC teaming can be used on the Management, Production (VM traffic), CSV/Heartbeat, and Live Migration networks.

⎕ If you are using NIC teaming for Management, CSV Heartbeat and/or Live Migration, create the team(s) before you begin assigning Networks.

⎕ If using aggregate (switch-dependent) NIC teaming in a guest VM, only SR-IOV NICs should be used on guest.

⎕ If using NIC teaming inside a guest VM, follow this order:

METHOD #1:

  • Open the settings of the Virtual Machine
    • Under Network Adapter, select Advanced Features.
    • In the right pane, under Network Teaming, tick “Enable this network adapter to be part of a team in the guest operating system.”
  • Once inside the VM, open Server Manager. In the All Servers view, enable NIC Teaming from Server Manager.


METHOD #2:

  • Use the following PowerShell command (Run as Administrator) on the Hyper-V host where the VM currently resides:
    • Set-VMNetworkAdapter –VMName contoso-vm1 –AllowTeaming On
      • This PowerShell command turns on resiliency if one or more of the teamed NICs goes offline.
    • Once inside the VM, open Server Manager. In the All Servers view, enable NIC Teaming from Server Manager.

⎕ When creating virtual switches, it is best practice to uncheck “Allow management operating system to share this network adapter”, in order to create a dedicated network for your VM(s) to communicate with other computers on the physical network. (If the management adapter is shared, do not modify protocols on the NIC.)

Please note: we fully support and even recommend (in some cases) using the virtual switch to separate networks for Management, Live Migration, CSV/Heartbeat and even iSCSI.  For example two 10GB NIC’s that are split out using VLANs and QoS.
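From PowerShell, creating a dedicated external switch like that might look as follows (the switch and adapter names are examples):

New-VMSwitch -Name "Production" -NetAdapterName "Ethernet 3" -AllowManagementOS $false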

⎕ Recommended network configuration when clustering:

Min # of Networks on Host | Host Management | VM Network Access | CSV/Heartbeat   | Live Migration   | iSCSI
5                         | “Management”    | “Production”      | “CSV/Heartbeat” | “Live Migration” | “iSCSI”

** CSV/Heartbeat & Live Migration Networks can be crossover cables connecting the nodes, but only if you are building a two (2) node cluster. Anything above two (2) nodes requires a switch. **

⎕ Turn off cluster communication on the iSCSI network.

  • In Failover Cluster Manager, under Networks, the iSCSI network properties should be set to “Do not allow cluster network communication on this network.” This prevents internal cluster communications as well as CSV traffic from flowing over the same network.
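The same setting can be applied from PowerShell by setting the cluster network’s Role property to 0 (None); a sketch, assuming the cluster network is named “iSCSI”:

Import-Module FailoverClusters
(Get-ClusterNetwork "iSCSI").Role = 0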

⎕ Redundant network paths are strongly encouraged (multiple switches) – especially for your Live Migration and iSCSI network – as it provides resiliency and quality of service (QoS).

VLANS:

⎕ If aggregate NIC Teaming is enabled for Management and/or Live Migration networks, the physical switch ports the host is connected to should be set to trunk (promiscuous) mode. The physical switch should pass all traffic to the host for filtering.

⎕ Turn off VLAN filters on teamed NICs. Let the teaming software or the Hyper-V switch (if present) do the filtering.

 

VIRTUAL NETWORK ADAPTERS (NICs):

⎕ Legacy Network Adapters (a.k.a. emulated NIC drivers) should only be used for PXE booting a VM or when installing non-Hyper-V-aware guest operating systems. Hyper-V's synthetic NICs (the default NIC selection; a.k.a. synthetic NIC drivers) are far more efficient because they use a dedicated VMBus to communicate between the virtual NIC and the physical NIC; as a result, CPU cycles are reduced and there are far fewer hypervisor/guest transitions per operation.

DISK:

⎕ New disks should use the VHDX format. Disks created in earlier Hyper-V iterations should be converted to VHDX, unless there is a need to move the VHD back to a 2008 Hyper-V host.

  • The VHDX format supports virtual hard disk storage capacity of up to 64 TB, improved protection against data corruption during power failures (by logging updates to the VHDX metadata structures), and improved alignment of the virtual hard disk format to work well on large sector disks.

⎕ Disks should be fixed-size in a production environment, to increase disk throughput. Differencing and dynamic disks are not recommended for production, due to their increased disk read/write latency.

⎕ Use caution when using snapshots. If not properly managed, snapshots can cause disk space issues, as well as additional physical I/O overhead. Additionally, if you are hosting 2008 R2 (or earlier) Domain Controllers, reverting to an earlier snapshot can cause USN rollbacks. Windows Server 2012 has been updated to help better protect Domain Controllers from USN rollbacks; however, you should still limit usage.

⎕ The recommended minimum free space on CSV volumes containing Hyper-V virtual machine VHD and/or VHDX files:

  • 15% free space, if the partition size is less than 1TB
  • 10% free space, if the partition size is between 1TB and 5TB
  • 5% free space, if the partition size is greater than 5TB

 

  • To enumerate current volume information, including the percentage free, you can use the following PowerShell command:
    • Get-ClusterSharedVolume "Cluster Disk 1" | fc *
      • Review the "PercentageFree" output

⎕ It is not supported to create a storage pool using Fibre Channel or iSCSI LUNs.

⎕ The page file on the Hyper-V host should be managed by the OS and not configured manually.

 

MEMORY:

⎕ Use Dynamic Memory on all VMs (unless not supported).

⎕ Guest OS should be configured with (minimum) recommended memory

  • 2048MB is recommended for Windows Server 2012 (e.g. 2048 - 4096 Dynamic Memory). (The minimum supported is 512 MB)
  • 2048MB is recommended for Windows Server 2008, including R2 (e.g. 2048 - 4096 Dynamic Memory). (The minimum supported is 512 MB)
  • 1024MB is recommended for Windows 7 (e.g. 1024 - 2048 Dynamic Memory). (The minimum supported is 512 MB)
  • 1024MB is recommended for Windows Vista (e.g. 1024 - 2048 Dynamic Memory). (The minimum supported is 512 MB)
  • 256MB is recommended for Windows Server 2003 (e.g. 256 - 2048 Dynamic Memory). (The minimum supported is 128 MB)
  • 128MB is recommended for Windows XP (e.g. 128 - 2048 Dynamic Memory). (The minimum supported is 64 MB)

 

CLUSTER:

⎕ Set preferred network for CSV communication, to ensure the correct network is used for this traffic. (Note: This will only need to be run on one of your Hyper-V nodes.)

  • The lowest metric in the output generated by the following PowerShell command will be used for CSV traffic
    • Open a PowerShell command-prompt (using “Run as administrator”)
    • First, you’ll need to import the “FailoverClusters” module. Type the following at the PS command-prompt:
      • Import-Module FailoverClusters
    • Next, we’ll request a listing of networks used by the host, as well as the metric assigned. Type the following:
      • Get-ClusterNetwork | ft Name, Metric, AutoMetric, Role
    • In order to change which network interface is used for CSV traffic, use the following PowerShell command:
        • (Get-ClusterNetwork "CSV Network").Metric=900
          • This will set the network named "CSV Network" to 900

image

⎕ Set preferred network for Live Migration, to ensure the correct network(s) are used for this traffic:

  • Open Failover Cluster Manager, Expand the Cluster
  • Next, right click on Networks and select Live Migration Settings
    • Use the Up / Down buttons to list the networks in order from most preferred (at the top) to least preferred (at the bottom)
    • Uncheck any networks you do not want used for Live Migration traffic
    • Select Apply and then press OK
  • Once you have made this change, it will be used for all VMs in the cluster

⎕ The Cluster Shutdown Time (ShutdownTimeoutInMinutes registry entry) should be set to an acceptable number

  • Default is set using the following calculation (which can be too high, depending on how much physical memory is installed)
    • (100 / 64) * physical RAM
      • For example, a 96 GB system would have a 150-minute timeout: (100/64)*96 = 150
  • Suggest setting the timeout to 15, 20 or 30 minutes, depending on the number of VMs in your environment.
    • Registry Key: HKLM\Cluster\ShutdownTimeoutInMinutes
      • Enter minutes in Decimal value.
      • Note: Requires a reboot to take effect
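If you’d rather script the change, something along these lines sets a 20-minute timeout (my example; run elevated, and remember the reboot noted above):

reg add HKLM\Cluster /v ShutdownTimeoutInMinutes /t REG_DWORD /d 20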


⎕ Run the Cluster Validation periodically to remediate any issues

  • NOTE: If all LUNs are part of the cluster, the validation test will skip all disk checks. It is recommended to set up a small test-only LUN and share it on all nodes, so full validation testing can be completed.
  • If you need to test a LUN running virtual machines, the LUN will need to be taken offline.
  • For more information: http://technet.microsoft.com/en-us/library/cc732035(WS.10).aspx#BKMK_how_to_run

 

HYPER-V REPLICA:

⎕ If utilizing Hyper-V Replica, update inbound traffic rules on the firewall to allow TCP port 80 and/or port 443 traffic. (In Windows Firewall, enable the “Hyper-V Replica HTTP Listener (TCP-In)” rule on each node of the cluster.)

To enable HTTP (port 80) replica traffic, you can run the following from an elevated command-prompt:

netsh advfirewall firewall set rule group="Hyper-V Replica HTTP" new enable=yes

To enable HTTPS (port 443) replica traffic, you can run the following from an elevated command-prompt:

netsh advfirewall firewall set rule group="Hyper-V Replica HTTPS" new enable=yes

⎕ Compression is recommended for replication traffic, to reduce bandwidth requirements.
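If you configure replication from PowerShell rather than the wizard, compression can be toggled there as well; a sketch with an assumed VM name:

Set-VMReplication -VMName "contoso-vm1" -CompressionEnabled $true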

⎕ Configure guest operating systems for VSS-based backups to enable application-consistent snapshots for Hyper-V Replica.

⎕ Integration Services must be installed before primary or Replica virtual machines can use an alternate IP address after a failover.

⎕ Virtual hard disks with paging files should be excluded from replication, unless the page file is on the OS disk.

 

⎕ Test failovers should be performed monthly, at a minimum, to verify that failover will succeed and that virtual machine workloads will operate as expected after failover

⎕ Hyper-V Replica requires the Failover Clustering Hyper-V Replica Broker role be configured if either the primary or the replica server is part of a cluster.

 

CLUSTER-AWARE UPDATING:

⎕ Place all Cluster-Aware Updating (CAU) Run Profiles on a single file share accessible to all potential CAU Update Coordinators. (Run Profiles are configuration settings that can be saved as an XML file, called an Updating Run Profile, and reused for later Updating Runs: http://technet.microsoft.com/en-us/library/jj134224.aspx)

 

SMB 3.0 FILE SHARES:

⎕ An Active Directory infrastructure is required, so you can grant permissions to the computer account of the Hyper-V hosts.

⎕ Loopback configurations (where the computer that is running Hyper-V is used as the file server for virtual machine storage) are not supported. Similarly, running the file share in VMs that are hosted on compute nodes serving other VMs is not supported.

 

VIRTUAL DOMAIN CONTROLLERS (DCs):

⎕ Domain Controller VMs should have "Shut down the guest operating system" in the Automatic Stop Action setting applied (in the virtual machine settings on the Hyper-V Host)

  • Important: See “Use caution when using snapshots” under the Disk section for more information regarding snapshots.

 

INTEGRATION SERVICES:

⎕ Ensure Integration Services (IS) have been installed on all VMs. Integration Services significantly improve interaction between the VM and the physical host.

⎕ Be certain you are running the latest version of Integration Services – the same version as the host(s) – in all guest operating systems, as some Microsoft updates make changes/improvements to the Integration Services software. (When a new Integration Services version is installed on the host(s), it does not automatically update the guest operating systems.)

  • Note: If Integration Services are out of date, you will see 4010 events logged in the event viewer.
  • You can discover the version for each of your VMs on a host by running the following PowerShell command:
    • Get-VM | ft Name, IntegrationServicesVersion
  • If you’d like a PowerShell method to update Integration Services on VMs, check out this blog: http://gallery.technet.microsoft.com/scriptcenter/Automated-Install-of-Hyper-edc278ef

 


 

I sincerely hope you find this blog posting useful! If you do, please forward the link on to others who may benefit!

Until next time,

Roger Osborne, Microsoft PFE

Slow Boot Slow Login (SBSL) Hotfix Rollup for Windows 7 and Server 2008 R2 Available Today!


Hey y’all, Mark here with some great news. A massive Slow Boot Slow Login (SBSL) hotfix rollup for enterprises, which includes 90 post-SP1 Windows 7/2008 R2 hotfixes, is available for download. The hotfixes focus on performance (Hooray!) and stability (Hooray again!). There are improvements to the DFSN client, Folder Redirection, Offline Files and Folders, WMI, and the SMB client, to name a few. There are also improvements to Group Policy, which gets blamed for pretty much everything when I’m doing a WDRAP. You can read the full list of improvements here.

A few things I want to point out. First, this applies to both Windows 7 SP1 clients and Server 2008 R2 SP1 servers, as seen here. This is how you can take full advantage of the networking updates.

“To take full advantage of this improvement for Windows 7 clients that log on to Windows Server 2008 R2 servers, install this rollup update on Windows 7 clients. Additionally, install this rollup update on the Windows Server 2008 R2 servers that clients authenticate and retrieve user profiles, policies and script data from during the startup and logon process. You can update your environment by installing this hotfix rollup on both clients and servers in no particular order.”

Second, use Xperf to baseline before and after the update is applied. We’ve been talking about Xperf forever here. Use this as an opportunity to really be the hero. Show the improvement that can affect EVERYONE in the enterprise daily. Multiply that by how many work days there are and that is a nice thing to hang your hat on. Not a bad day at the office.

Post in the comments how much time you saved. We love to hear before and after stories.

Finally, as the article states, “We recommend that you apply this hotfix rollup as part of your regular maintenance routine and build processes for Windows 7 and Windows Server 2008 R2 computers.” That does include putting this in your test lab first right? Right? Right!?

Thanks to all the PFEs, CTS, and Devs that helped collaborate to make this happen. Specifically the Dude Jeff Stokes, BenChrist, and A. Conner for driving this bus.

http://support.microsoft.com/kb/2775511/en-us

 Direct Download http://catalog.update.microsoft.com/v7/site/Search.aspx?q=KB2775511

 

Update 3/13/13- We received a few questions as to why 2775511 resides exclusively on the Windows Catalog vs. WSUS. Windows Catalog distribution allows administrators to distribute the rollup via either WSUS or System Center throughout the enterprise. This allows for increased coverage by companies that normally refrain from downloading ad hoc hotfixes.

 

 Update 3/14/13- Fellow SCCM PFE Michael Griswold has a blog post about importing this into SCCM. If you are having issues take a look at http://blogs.technet.com/b/michaelgriswold/archive/2013/03/13/kb2775511-deployment-for-the-sccm-admin.aspx

 

Mark “Let’s Not Blame Group Policy Just Yet” Morowczynski

MailBag: RODCs – krbtgt_#####, Orphans, and Load Balancing RODC Connection Objects


Dougga here to answer a couple of quick RODC related questions.  I have been the fortunate PFE to perform ADRAPs (Active Directory Risk Assessment Program) that have had more than the average number of RODCs. I have also reviewed environments with only a few RODCs. During these risk assessments a couple of questions have come up regarding RODCs that I would like to share with you (including a bonus on orphaned krbtgt_##### accounts).

1. What is the krbtgt_###### account?

2. How can I verify the connection objects to the RODCs are balanced on the bridgehead servers in the hub site?

You should start off knowing what the krbtgt account is and then that knowledge will help you understand why each RODC has its own special krbtgt account. So let’s do some quick review.

I could try to explain what the krbtgt account is, but here is a short article on the KDC and the krbtgt.

http://msdn.microsoft.com/en-us/library/windows/desktop/aa378170(v=vs.85).aspx

“All instances of the KDC within a domain use the domain account for the security principal "krbtgt". Clients address messages to a domain's KDC by including both the service's principal name, "krbtgt", and the name of the domain. Both items of information are also used in tickets to identify the issuing authority. For information about name forms and addressing conventions, see RFC 4120.”

Likewise a snip for the RODC krbtgt_##### account:

http://technet.microsoft.com/en-us/library/cc753223(v=WS.10).aspx

“The RODC is advertised as the Key Distribution Center (KDC) for the branch office. The RODC uses a different krbtgt account and password than the KDC on a writable domain controller uses when it signs or encrypts ticket-granting ticket (TGT) requests. This provides cryptographic isolation between KDCs in different branches, which prevents a compromised RODC from issuing service tickets to resources in other branches or a hub site.”

So the answer to the first question after reviewing the previous two links is apparent. The krbtgt_##### account is unique to each RODC and minimizes impact if the RODC is compromised. The RODC does not have the krbtgt secret. It only has its own krbtgt_##### secret (and other accounts you have allowed). Thus when removing a compromised RODC, the domain krbtgt account is not lost.

<Bonus material: Orphaned Krbtgt_##### accounts>

I found some orphaned krbtgt_##### accounts in one of my ADRAPs. Orphans would be defined by krbtgt_##### without a backlink to an existing RODC computer account. The ADRAP tool does not look for these so I set off to my lab to find out how to discover them.

I attempted to cause krbtgt_##### accounts to become orphaned. I can create one; however, the normal demotion and metadata cleanup process removes the krbtgt_##### account. So in the end, I don’t see this as a common occurrence, and if it does exist in your environment it is likely from a failed RODC whose metadata has not been properly cleaned up.

The following PowerShell commands can help you identify your krbtgt_##### accounts and if any are orphaned.

Import-Module ActiveDirectory

get-adobject -server <SERVER NAME> -filter { (objectclass -eq "user") -and ( name -like "krbtgt*") } -properties * | FT Name, msDS-KrbTgtLinkBl

clip_image002

In my lab the command shows two accounts. The first is the krbtgt account for the domain; it is not associated via a back link to any RODC and IS NOT an orphan. The second shows my RODC (DC103-RODC) linked to krbtgt_28896, so in this case it is not orphaned either.

Warning: DO NOT delete the domain krbtgt account.

If you find an orphaned krbtgt_##### account verify the RODC is no longer a domain controller before deleting the orphaned account.

You can see the same thing using adsiedit.msc or your favorite LDAP browser:

clip_image004

The window on the left shows the properties of the krbtgt_38896 account. You will likely need to click on the “filter” button and select “BackLinks” to show the back linked attributes.

The window on the right is the properties of the RODC account. The highlighted attribute is the forward linked attribute, so no need to modify the filter to see that attribute.

So hopefully you can see the benefit of using the script to discover the krbtgt_##### accounts.

</End of Bonus material. >

The second question is regarding load balancing the RODC connection objects coming from the RWDCs in a hub site.

What if you have a lot of RODCs connected to a hub site? Are the connection objects load balanced? This is a bridgehead server improvement that is not well known: it removes the need to run any scripts to load balance RODC connections. To verify the connections are balanced you could count them – okay if you only have 4 or 5 RODCs, but what if you have 50, or a hundred, or a couple hundred? Nope, I am not going to count them either.

Let’s count them with PowerShell.

Open PowerShell on a 2008 or 2008 R2 member server; only two commands are needed (if you count importing a module) to see the distribution of RODC connection objects. The command that looks for the RODC connection objects will find the automatically generated connections named “RODC Connection*”. If for some reason someone has modified or manually created these objects, the command will likely not find those connection objects.

Command #1: import the active directory module

Import-Module ActiveDirectory

Command #2: Find, count, and group the RODC connection objects

get-adobject -filter {objectclass -eq "NTDSconnection"} -searchbase 'CN=Sites,CN=Configuration,DC=contoso,DC=com' -properties fromServer | Where-Object {$_.name -like 'RODC Connection*'} | Group-Object fromServer | FT -auto

The result will be a count of connections per bridgehead server, and you will easily be able to see whether the connection objects are evenly balanced across the bridgehead servers.

Cheers!


Is NIC Teaming in Windows Server 2012 supported for iSCSI, or not supported for iSCSI? That is the question…



Whether 'tis nobler in the mind to suffer not knowing the answer to this question or educate thy self…

Enough of Shakespeare... Where’s my PFE, when I need him?


Well, in previous releases of the operating system, customers always depended on vendors to provide a supported solution for NIC teaming software. Even in those days, neither the vendors nor Microsoft supported teaming the iSCSI Initiator, and both recommended using the vendor’s MPIO (multi-path I/O) solution and/or DSM (device-specific modules).

Now with the release of Windows Server 2012, we include our own version of NIC teaming software, also known as Load Balancing/Failover (LBFO).

So here’s where the confusion comes in and a lot of customers have been asking us…
Is NIC Teaming supported for iSCSI, or not supported for iSCSI?

Because when customers read this TechNet article Failover Clustering Hardware Requirements and Storage Options, it states “For iSCSI, network adapter teaming (also called load balancing and failover, or LBFO) is not supported. You can consider using MPIO software instead.” 

So to demystify the question that inspired me to write this blog, I consulted with Don Stanwyck (Sr. Program Manager / Windows Core Networking), who is the authority on this subject, and here is the essence of the message he conveyed:

· The TechNet statement that “iSCSI + NIC Teaming is not supported” is still true for all teaming solutions, with the EXCEPTION of the Windows Server 2012 inbox NIC Teaming solution we provide.

· If iSCSI Initiator is used with dedicated NICs such as in a stand-alone and/or Failover Clustering environment, then NIC Teaming should not be used (because it adds no benefit over MPIO for dedicated NICs).

· If iSCSI Initiator is used in a shared NIC scenario (see figure below) such as in a Hyper-V 2012 environment, then iSCSI Initiator used over the Hyper-V switch (and over NIC Teaming) is supported.

 

image

 

· In a native environment where the NICs are shared between iSCSI Initiator and other uses, the following configuration is also supported.

image

 

· Just to make one point clear, Microsoft’s Windows Server 2012 NIC Teaming (LBFO) solution is also supported for iSCSI targets (such as Microsoft’s iSCSI Target) anytime, not just in the shared NIC scenario.  So even if the iSCSI initiators aren’t using Microsoft’s NIC Teaming (LBFO) solution (and perhaps not MPIO/DSM either), the iSCSI targets can use teamed NICs however you like.

 

KEY IMPORTANT TAKE AWAY

When iSCSI is used with dedicated NICs, then using any teaming solution is not recommended and MPIO/DSM should be used. But when iSCSI is used with shared NICs, those shared NICs can be teamed and will be supported as long as it’s being used with Microsoft’s Windows Server 2012 NIC Teaming (LBFO) solution.
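For reference, creating the inbox team itself is a one-liner. Here is a minimal sketch, assuming two adapters named NIC1 and NIC2 (placeholders; substitute your own adapter names from Get-NetAdapter):

# Create a switch-independent LBFO team from two physical adapters (adapter names are placeholders)
New-NetLbfoTeam -Name "Team1" -TeamMembers "NIC1","NIC2" -TeamingMode SwitchIndependent

In the shared NIC scenarios pictured above, the Hyper-V switch or the iSCSI Initiator would then bind to the resulting team interface.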

In addition, we will always recommend you consult with your iSCSI storage vendor to confirm the support of iSCSI solutions with their storage. When in doubt, you can always search the status of any hardware testing completed on the Windows Server Catalog - Certified for Windows Server 2012.

ADDITIONAL REFERENCES:
NIC Teaming Overview.

Understanding MPIO Features and Components.

Windows Server 2012 NIC Teaming (LBFO) Deployment and Management documentation.

NIC Teaming in Windows Server 2012 Brings Simple, Affordable Traffic Reliability and Load Balancing to your Cloud Workloads.

Windows Server 2012: Creating a NIC TEAM for Load Balancing and Failover.


Hope you find this blog enlightening and that it clears up any doubts you might have heard regarding the ability to use NIC Teaming software with iSCSI.
Off I go to travel the world and see cities where no PFE has gone before…

Enjoy!
Mike Rosado (About me…)

As my lawyer would say…“This posting is provided "AS IS" with no warranties, and confers no rights.”

Four things I like about Active Directory Administrative Center (ADAC) in Windows Server 2012


Active Directory Administrative Center (ADAC) was first introduced in Windows Server 2008 R2 to manage directory service objects alongside Active Directory Users and Computers (ADUC); however, it did not win me over until I saw the enhancements made in Windows Server 2012. It is one of the reasons why I don’t resort to typing dsa.msc to open up ADUC anymore. Instead I type dsac on the start screen to open ADAC to manage my users, computers and much more. Let me share with you four things I like about ADAC in Windows Server 2012.

1. PowerShell History Viewer

This is a new addition to ADAC in Windows Server 2012 and this feature certainly blew me away.

While ADAC provides a graphical interface for administrators to perform common tasks such as creating new groups, users, etc., the PowerShell History Viewer in ADAC lets them review the exact PowerShell commands behind those tasks. Now, that is brilliant! Let’s face it: those who are not well versed in PowerShell, or don’t use it every day, can now use it to learn the cmdlets and build their own scripts. As a test, I created a new user called Tester via the graphical interface, and the equivalent PowerShell command got displayed in the PowerShell History Viewer (screenshot below). I can now copy it to the clipboard via the Copy button and reuse it in a script later. Furthermore, if I want to review all the calls made by PowerShell, I can simply check Show All on the top right corner and it will display all the commands recently executed by ADAC.

clip_image002[8]
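The command ADAC surfaces for a new user is built around New-ADUser. Recreating my Tester user from the copied command looks roughly like this (the container path and domain are placeholders for my lab):

# Sketch of the New-ADUser call behind the ADAC "new user" task (lab path is hypothetical)
New-ADUser -Name "Tester" -SamAccountName "Tester" -Path "CN=Users,DC=contoso,DC=com"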

2. Active Directory Recycle Bin

ADAC in Windows Server 2012 now includes the Active Directory Recycle Bin UI. AD Recycle Bin was introduced in Windows Server 2008 R2 to assist with recovering deleted objects in AD, but it lacked a graphical interface and could only be administered via PowerShell. Windows Server 2012 makes this process much easier via the ADAC GUI. While this blog is not intended to cover AD Recycle Bin in detail, you can learn more about it here. It is important to note that by default AD Recycle Bin is disabled. To enable it, ensure your forest is at the Windows Server 2008 R2 functional level or higher.

The screenshot below shows how to enable Recycle bin for the first time. Keep in mind, once enabled, it cannot be disabled.

clip_image004[7]
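For reference, the PowerShell equivalent of that dialog is a single cmdlet; a sketch, assuming a forest root domain of contoso.com:

# Enable AD Recycle Bin forest-wide (irreversible once enabled)
Enable-ADOptionalFeature -Identity 'Recycle Bin Feature' -Scope ForestOrConfigurationSet -Target 'contoso.com'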

After enabling, refresh ADAC and you will see the Deleted Objects container. As objects get deleted, they are stored in the Deleted Objects container for up to the deleted object lifetime (which defaults to the tombstone lifetime). To recover a deleted object, go to the Deleted Objects container, right-click the object and select Restore. This is a much smoother and more painless operation compared to Windows Server 2008 R2, and it can now be administered easily via ADAC.

clip_image006[7]

3. Fine Grained Password Policy

ADAC in Windows Server 2012 makes Fine Grained Password Policy management much easier and simpler thanks to the graphical user interface. This feature was first introduced in Windows Server 2008 to allow administrators to define different password and lockout policies for different users in the domain. However, it was difficult to manage and was not very visual. Using the Windows Server 2012 version of ADAC, administrators can create a separate password policy for user accounts with different password needs much more easily and quickly. First, ensure that your domain is at the Windows Server 2008 domain functional level or higher. Next, select New Password Settings under the Password Settings Container, which resides under System. Complete the password settings dialog box as desired and then click Add to apply this policy directly to the desired users, groups, etc. And there you have it. You will really like this if you have ever set this up on Windows Server 2008 and felt the pain :)

clip_image008[7]

clip_image010[7]
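If you still prefer the shell, the same result can be scripted; a minimal sketch, with the policy name and settings as example values:

# Create a stricter password policy and apply it to Domain Admins (values are examples)
New-ADFineGrainedPasswordPolicy -Name "AdminPSO" -Precedence 10 -MinPasswordLength 15 -ComplexityEnabled $true -MaxPasswordAge "42.00:00:00"
Add-ADFineGrainedPasswordPolicySubject -Identity "AdminPSO" -Subjects "Domain Admins"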

4. Global Search

I really like the Global Search feature in ADAC to search for objects in AD. This is not new to Windows Server 2012, as it was first introduced in Windows Server 2008 R2. However, it is still one of my favorite features, as it allows me to perform global searches, such as searching all my domains at once without being limited to Global Catalog lookups. If I had more domains in my lab, you would see all of them listed under the navigation nodes.

clip_image012[7]

Another gem is the ability to very easily build LDAP queries. For example, if I want to search for all users who have not logged on for more than a given number of days, I can use the built in criteria (Click Add Criteria) to quickly get that information as shown below.

clip_image014[7]

It gives you the option to select the number of days as shown below

clip_image016[7]

You can also convert it to LDAP and build on that query should you wish.

clip_image018[7]
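If you want the same answer from PowerShell instead of the query builder, Search-ADAccount gets you close; a sketch using a 90-day window as an example value:

# Users with no logon in the last 90 days (the window is an example)
Search-ADAccount -AccountInactive -TimeSpan 90.00:00:00 -UsersOnly | Select-Object Name,SamAccountName,LastLogonDate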

While you can’t transfer FSMO roles via ADAC today, starting in Windows Server 2008 R2 this capability is easily available at your fingertips via PowerShell, using the Move-ADDirectoryServerOperationMasterRole cmdlet of the AD module. I have added a few examples below:

You can view the FSMO role owners with these AD PowerShell commands:
Get-ADForest | select SchemaMaster,DomainNamingMaster
Get-ADDomain | select PDCEmulator,RIDMaster,InfrastructureMaster

Transferring all roles, command syntax:
Move-ADDirectoryServerOperationMasterRole -Identity "Target-DC" -OperationMasterRole SchemaMaster,RIDMaster,InfrastructureMaster,DomainNamingMaster,PDCEmulator

Seizing all roles, command syntax:
Move-ADDirectoryServerOperationMasterRole -Identity "Target-DC" -OperationMasterRole SchemaMaster,RIDMaster,InfrastructureMaster,DomainNamingMaster,PDCEmulator -Force

Well, these are some of the reasons why ADAC has now become a one-stop shop for me when it comes to AD tasks. I am now using ADAC for all my AD administrative needs, including raising the domain and forest functional levels, which is conveniently located under Tasks when the Domain node is selected. I therefore invite you to start using ‘The Tool of the Future,’ as I like to call it :)

Cheers!

Jasmin Amirali

Mailbag: How Often Does the DNS Server Service Check AD for New or Modified Data?


Here’s an interesting question that came to us from one of the readers. If you have a question for us, don't forget that you can contact us using the Contact tile just to the right of this article when viewed from our TechNet blog.

Question:

With an AD-Integrated zone, when a record is added or updated in DNS on one server, how much time is needed for the DNS server service to find this record and load it (assuming that the other DC/DNS server is in the same site and DS replication is working fine)?

This is a great question that often confounds us and we see some people hitting the “Refresh” button every so often and others choosing to close/reopen the DNS MMC, while many others resort to restarting the DNS Server service. What is the correct method?

So here’s a little flow-chart that shows the “workflow” as a new DNS record is updated on a DNS server in an AD-Integrated zone:

image

 

So if all DNS servers are in the same site and AD replication is working fine, the short answer to this question is 180 seconds, or 3 minutes, since that’s how often the DNS Server service polls Active Directory for changes in Active Directory-integrated zones.

And your next question may be: How do I control this behavior? What if I want to reduce it to 2 minutes?

This setting is stored in the registry as “DsPollingInterval” under the subkey: HKLM\SYSTEM\CurrentControlSet\Services\DNS\Parameters.

Before you open regedit, let me show you an easier way to query (or change) this setting - by using dnscmd in an Administrator cmd prompt window:

dnscmd /info /dspollinginterval  - should show you the current setting, and

dnscmd /config /dspollinginterval 120 - would change it to 120 seconds.

Although the range of this setting is 0-3600, if the DNS server is running Windows Server 2008 or above, setting a value of 0 for dspollinginterval will result in the default interval of 180 seconds being configured, and values of 1-29 are not allowed as mentioned in this TechNet article.

While we are talking about dnscmd, I should mention that if you use dnscmd on a Windows Server 2012, you may see this message:

In future versions of Windows, Microsoft might remove dnscmd.exe.

If you currently use dnscmd.exe to configure and manage DNS Server,

Microsoft recommends that you transition to Windows PowerShell.

So you should start using PowerShell for these tasks.  Here’s the equivalent command in PowerShell 3.0:

Get-DnsServerDsSetting

The first setting returned is “PollingInterval(s)”.  To change it to say 120 seconds:

Set-DnsServerDsSetting -PollingInterval 120

And one last thing: this is a per server setting.

So if you are thinking: this is great information, but I have a LARGE number of DNS servers; how about some automation to make it easy to change this setting on all of them? No problem. There are many ways to automate this change; let’s look at two of them. The first is our good ol’ FOR command. Something like this at an Administrator cmd prompt:

for /F %A in (dnsservers.txt) do dnscmd %A /config /dspollinginterval 120

Where dnsservers.txt contains the list of DNS servers.

And for our second example, I have TWO different ways to do this in PowerShell:

First method: Using the dnsservers.txt file that has a list of all DNS servers that need to be modified:

Get-Content .\dnsservers.txt | ForEach-Object { Set-DnsServerDsSetting -ComputerName $_ -PollingInterval 120 }

Second Method: If all your domain controllers are DNS servers, this one will modify the setting on all of them:

Get-ADDomainController -Filter * | ForEach-Object { Set-DnsServerDsSetting -ComputerName $_.HostName -PollingInterval 120 }

Until next time!

Rakesh Chanana

Troubleshooting Windows Performance Issues Using the Windows Performance Recorder


One of the great things about the Windows Assessment and Deployment Kit for Windows 8 is the Windows Performance Toolkit V5.0.

The toolkit, itself, has a subset of tools that produce in-depth performance profiles of Windows operating systems and applications.

If you are familiar with the older versions of the Windows Performance Toolkit obtained from the Windows 7 SDK, you know that obtaining traces with Xperf was, at times, very complex. Knowing which providers and stackwalking flags to enable was a struggle altogether.  There was no built-in UI, and everything was done from a command prompt using xperf.exe or xbootmgr.exe.  We wrote several posts on troubleshooting Windows performance issues using the Xperf utility as part of our Xperf Xpert series:

Becoming an Xperf Xpert: The Slow Boot Case of the NetTCPPortSharing and NLA Services

Becoming an Xperf Xpert Part 2: Long Running Logon Scripts, Inconceivable!

Becoming an Xperf Xpert Part 3: The Case of When Auto “wait for it” Logon is Slow

Becoming an Xperf Xpert Part 4: What Did the WDIService Host Ever Do To You?

Becoming An Xperf Xpert: Part 5 Gaps of Time For Explorer.exe

Windows Performance Toolkit V5.0 introduces WPR (Windows Performance Recorder) as well as WPA (Windows Performance Analyzer) to make our lives easier.  Basically, the recorder gathers the trace and the analyzer opens the trace. 

For today’s topic, we will focus on installing the Windows 8 ADK as well as using WPR for common scenarios.

Installing the Windows Performance Toolkit V5.0

Installing the toolkit is pretty straightforward.  First, download it from here.   Don’t worry, you only have to do this on one machine with internet connectivity, as the install itself will copy down local copies of the Windows Performance Toolkit redistributables that can be used for installation on other machines. 

One thing to point out is that Windows Performance Toolkit v5.0 is compatible only with Windows 7, Windows Server 2008 R2, Windows 8, and Windows Server 2012.

So, let’s get started…. During the install, specify a location or a download path:

  
  

Select Yes to Join the CEIP:


Accept the license agreement:

Since we are only installing the Windows Performance Toolkit, uncheck the other boxes and then click on Install:

Installation should only take a few minutes:

Now, you are done:

 

By default, the Windows Performance Toolkit installs to C:\Program Files (x86)\Windows Kits\8.0\Windows Performance Toolkit.  If you want to install to an alternate machine, you can use the redistributables under C:\Program Files (x86)\Windows Kits\8.0\Windows Performance Toolkit\Redistributables.  One cool thing to notice is the redistributable for ARM :-)  which means you can install the toolkit on tablets\notebooks running Windows RT.

 

Using the Windows Performance Recorder

WPR comes in two flavors, a GUI form (wprui.exe) and a command line version (wpr.exe).  Since I am a GUI junkie, I will be using wprui.exe. 
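(For completeness, if the command line is more your speed, a basic capture with wpr.exe looks roughly like this, with the trace path and description as example values:

wpr.exe -start GeneralProfile

Reproduce the problem, then stop and save the trace:

wpr.exe -stop C:\Traces\Repro.etl "short description of the problem"

Everything below applies equally to traces captured this way.)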

To launch the Windows Performance Recorder, simply double click on the exe or if you are using Windows 8, click on the tile on the start screen (the one with the bar graphs!).

The tracing itself does require administrative privileges, so don’t be surprised if you see a UAC prompt:

Note:  If you are running a 64 bit version of Windows 7 or Windows Server 2008 R2, you will need to disable the paging executive in order to successfully gather event stacks.  To do this, go to a command prompt and run wpr.exe -disablepagingexecutive on.  You can also do this manually by setting DisablePagingExecutive (DWORD) to a value of 1 (hex) under HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management.  This does require a reboot. 

Also, if you have 24 gigs or more of memory, it might be better to use the following general profile:  C:\Program Files (x86)\Windows Kits\8.0\Windows Performance Toolkit\SampleGeneralProfileForLargeServers.wprp.  Systems with large amounts of memory will consume larger amounts of nonpaged pool (buffer).  This profile uses less memory than the default general profile.   

At its simplest form, the GUI will appear like below:

Clicking on Start will launch the General Profile for the recording.  In fact, by default the general profile is loaded for every recording. It is what its name implies: a general profile for recording basic system issues and performance data.   The general profile uses the NT Kernel Logger Collector as well as the WPR_initiated_WPR Event Collector.  You can see this in action when a trace is running by going to Data Collector Sets\Event Trace Sessions.  There you will see those two collectors running. 

Now that the recording is started, the GUI will appear like below:

If you notice, we now show the number of events dropped.  I can’t tell you how many times I’ve reviewed xperf data that had numerous events dropped, which led to an inconclusive analysis.  If you see events being dropped, you can just recapture a trace with WPR and save yourself headaches.  

Clicking on Save will allow you to save the trace, and Cancel will stop the trace.  Once saved, it restarts the tracing so that if you have another issue you could easily save it from the buffer.   

Traces are now saved under the profile of the user conducting the recording, in Documents\WPR Files. This is primarily for security concerns, as outlined by the yellow exclamation. 

Furthermore, we can provide a detailed description of the problem, and the trace is saved in machinename.date.time format. 

Notice there are also folders named in the same format with NGENPDB at the end.  These folders contain all the PDBs necessary to diagnose issues with managed components. 

Now, let’s move on to more advanced recording with WPR.  Clicking on “More options” will bring up additional profiles used for recording. This is where you can get more granular about what you want to capture.  There is overhead associated with ETL tracing, and the more checkboxes you check, the more overhead you will experience.  Never, I repeat, never check every checkbox!!

The Performance scenario drop-down gives you the scenario you are trying to troubleshoot. 

  • General: Records general performance while the computer is running.
  • On/Off - Boot: Records performance while the computer is booting.
  • On/Off - Shutdown: Records performance while shutting the computer down.
  • On/Off - RebootCycle: Records performance during the entire cycle while the computer is rebooting.
  • On/Off - Standby/Resume: Records performance when the computer is placed on standby and then resumed.
  • On/Off - Hibernate/Resume: Records performance when the computer is placed in hibernation and then resumed.

You might ask yourself: what happened to xbootmgr.exe?  I thought that did boot level traces?   Guess what, WPR replaces xbootmgr and will be the preferred tool moving forward for any on/off scenarios.

The Detail level drop-down is where you specify whether the detail is light or verbose.  Verbose is the default, but there might be times, such as analyzing network I/O activity, where you do not want to include every send, receive, and acknowledgement in the trace.  That is where light comes in handy.  For the Logging mode, memory is the default for the General Performance scenario.  I’d suggest setting this to memory for every LONG recording; memory logging mode records logging data to circular buffers in memory.  Furthermore, if you cannot anticipate when the issue will occur, use memory logging mode. 

Logging to a file is generally used for very short recordings.  Do not attempt to open a very large trace in WPA; if you are trying to analyze a trace that runs to several gigabytes, use memory logging mode. 

Note:  For any boot, fast startup, shutdown, rebootcycle, standby/resume, and hibernate/resume scenario, WPR will ALWAYS log to a file.  You cannot change this in the GUI. 

So, what if you wanted to create a custom profile if you already knew the provider you are trying to gather a trace for?  MSDN has some good examples on creating custom profiles.  Also, there is an example wprp file in the C:\Program Files (x86)\Windows Kits\8.0\Windows Performance Toolkit directory.  To add it, click Add Profiles, then click on SampleWPRControlProfiles.wprp. 

Notice, the profile was added under “Custom measurements.”  All custom profiles will be added there. 

Now that we have a recorded trace, what do we do with it?  Well, that is where WPA comes into the picture.  The Windows Performance Analyzer allows you to open the trace for analysis. You mean, it doesn’t analyze it for me?   Nope, not quite.  I will save that thought for a later topic. 

If you are interested in seeing some “live action” videos with WPA and WPR, I would suggest looking at the following Channel 9 videos by Michael Milirud:

Capturing and analyzing performance traces

Introduction to the new WPA user interface

Customizing WPA Trace Views

As with anything, practice makes perfect!  Install the toolkit and get used to it.  Time is your friend.

 

Cheers,

James Klepikow Platforms PFE

How to create an Active Directory Subnet/Site with /32 or /128 and why


While working with a customer this week on their Active Directory (AD) site configuration, I found out they had not heard about using a /32 or /128 subnet mask. In fact, my Bing search did not reveal a lot of information on how and when to use this handy part of AD. The purpose of this article is to explain the what, how and when of using this technique.

AD Sites allow you to attempt to match the logical (AD Sites) to your physical subnets (cables, wires, hardware). This is not an easy task, mostly because networking is hard for most people. Your networks may not have the best documentation, or the documentation may not be available to the AD administrators. Your network may also not be broken down the way you need it for AD purposes.

Let me define some uses for sites before we go any further:

What are Sites for anyways?

1) To help control AD replication, putting domain controllers within a site (intra-site) or farther away in different sites (inter-site).

2) The client logon process uses sites to find the closest domain controller(s).

3) System Center Configuration Manager uses AD sites for its configuration and package distribution.

4) Distributed File System (DFS) uses sites to determine if a given client is closer to one replica or another.

5) Printing can use locations which are an attribute of sites, allowing clients to locate printing devices close to them.

6) Blank sites (without a Domain Controller or DFS Server) force clients to use site costing to find the next closest site. They may also be setup for later use, staged as you will.

What?

What does it mean, /32 or /128? First off, I am talking about creating a logical representation within your AD to match the physical network that already exists. Start by opening the Active Directory Sites and Services snap-in from the Administrative Tools. I am using Windows Server 2012, but any currently supported version of AD will work for this purpose. The /32 or /128 will take precedence over a lesser subnet mask of, say, /24 or /64. The dialog box looks like this:

clip_image002

This is from my test lab where I have only two Domain Controllers (DCs). I wanted to have two sites, but the physical network has only one subnet. /32 or /128 to the rescue!

clip_image004

If you use a /32 on IPv4 that puts all 1’s for the subnet mask (32 of them in fact), leaving only that IP for the range. So a range of 192.168.3.51/32 would only cover 192.168.3.51’s IP.

clip_image006

If you use a /128 on IPv6 that puts all 1’s for the subnet mask (128 of them in fact), leaving only that IP for the range. So a range of fe80::a10c:5fea:47c1:df05/128 would only cover fe80::a10c:5fea:47c1:df05’s IP.

How?

First you need to find your IP addresses. I like to open a Command Prompt and type ipconfig as seen here:

clip_image008

These are static addresses. If you use DHCP to assign the computer an address you will have to reserve it within DHCP to ensure it keeps that address after a reboot and that the lease will never expire.

Next, open the Active Directory Sites and Services snap-in from the Administrative Tools. Highlight Subnets (under Sites) and right-click to create a new subnet. After you have a few /32s or /128s it should look something like this:

clip_image010
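By the way, if you are on Windows Server 2012 you can also create these subnets with the AD replication cmdlets; a sketch, where the site name is a placeholder for one of your own:

# Create a single-host subnet and associate it with a site (site name is an example)
New-ADReplicationSubnet -Name "192.168.3.51/32" -Site "Build-Site"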

When?

So when does this come in handy?

1) Test Lab scenarios.

a. When you need to test AD or applications between AD Sites and you only have 1 subnet. Below I created two logical sites on one physical network using IPv4 and IPv6 subnets. The /32 or /128 allows me to control precisely which site each test machine will be in, even though they are on the same physical subnet in the real world.

b. To allow testing of remote services for a client

clip_image012

2) Build Site.

a. On Windows Server 2012 you have the ability to pick the server you replicate from when you create a new Domain Controller. You may not want to use the same DC that all your clients hit. Creating a new Site and using a /32 or /128 to add a dedicated promotion DC might be just what you need. This is an awesome feature of Windows Server 2012 (first in 2008 R2). I love it!

clip_image014

b. Another use is dedicated application testing: point the testers to your /32 or /128 Build Site and let them have fun. You may have to move the client (by its IP address) into the site before you test.

3) Isolation

a. To isolate a computer from traffic, maybe because you want to decommission it. Taking it out of a used site means only hard-coded applications will continue to use it; you can then monitor to find those and “fix” the hardcoding. You might need a rubber mallet to stop the person from hardcoding.

b. To do maintenance on the machine or software for a lengthy period of time.

4) DFS

a. DFS site costing.  You might find that you have sites with no DCs but you need to shape the DFS link target discovery.  So we create a site, link it up, and then associate a single IP subnet for the site that matches that DFS link target server’s IP.  Lastly, have the client sites linked to the DFS site, then link the DFS site to the site with the DCs.

Conclusion:

As you have seen, the /32 or /128 subnet can be a very handy tool in your Active Directory bag of tricks. There is no need to adjust the weight on DNS SRV records, reboot a machine, or use RRAS to create a new network. This technique can be used for clients and servers alike.

Rod Fournier

Mailbag: How do you set network adapter settings with PowerShell in Windows 8 or Windows Server 2012?


Here’s an interesting question that came to us from one of our readers. If you have a question for us, don't forget that you can contact us using the Contact tile just to the right of this article when viewed from our TechNet blog.

Question:

I am setting up multiple Windows Server 2012 and Windows 8 virtual machines.  How do you set the IP address of a network adapter using PowerShell?

This is a great question.  It makes sense to be able to automate parts of the virtual machine configuration process, especially when creating multiple virtual machines for a virtual lab environment.  While these commands can be pasted into a PowerShell prompt from the Clipboard functions of a Virtual Machine Connection, these could be slipped into the OS install with other tools as well for a more automated experience.  The PowerShell commands referenced below may be used with Windows Server 2012 and Windows 8.  These commands don’t just work within virtual machines.  They work with installations to physical hardware as well.

To configure network adapters in Windows Server 2012 or Windows 8 using PowerShell, you first need to know the names of the network adapters as you would see them in the list of network connections.  The default name for a network connection in Windows Server 2012 or Windows 8 is Ethernet.  If you’ve installed Windows Server 2012 or Windows 8 with just a single network adapter, then the only name you need to be concerned with is Ethernet.  If there are other network adapters detected by the operating system, those will be sequentially numbered starting with Ethernet 2, and so on.  For this example, let’s assume a configuration of two adapters with the following intended IP address configuration:

Adapter       Setting            Value
-------       -------            -----
Ethernet      IP Address         192.168.0.14
Ethernet      Subnet Mask        255.255.255.0
Ethernet      Default Gateway    192.168.0.1
Ethernet      DNS                192.168.0.10
Ethernet 2    IP Address         10.0.0.1
Ethernet 2    Subnet Mask        255.0.0.0
Ethernet 2    Default Gateway    undefined
Ethernet 2    DNS                undefined

 

The following commands may be used to configure the network adapters based on the configuration above:

 

$netadapter = Get-NetAdapter -Name Ethernet

$netadapter | Set-NetIPInterface -Dhcp Disabled

$netadapter | New-NetIPAddress -IPAddress 192.168.0.14 -PrefixLength 24 -DefaultGateway 192.168.0.1

Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses "192.168.0.10"

 

$netadapter = Get-NetAdapter -Name "Ethernet 2"

$netadapter | Set-NetIPInterface -Dhcp Disabled

$netadapter | New-NetIPAddress -IPAddress 10.0.0.1 -PrefixLength 8

 

Using IPConfig /all, you can see the results of these commands:

 

 

If you later wanted to change Ethernet 2 back to DHCP addressing, it is as easy as omitting the New-NetIPAddress calls and changing the -Dhcp parameter of Set-NetIPInterface to Enabled.

 

$netadapter = Get-NetAdapter -Name "Ethernet 2"

$netadapter | Set-NetIPInterface -Dhcp Enabled
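One caveat: re-enabling DHCP does not remove a static address or static DNS servers that were already set. If you want a full revert, a sketch of the cleanup looks like this:

# Remove the static address and reset DNS back to DHCP-assigned (sketch)
Remove-NetIPAddress -InterfaceAlias "Ethernet 2" -Confirm:$false
Set-DnsClientServerAddress -InterfaceAlias "Ethernet 2" -ResetServerAddresses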

 

 

For more details about the New-NetIPAddress PowerShell cmdlet for Windows 8 and Windows Server 2012, as well as links to other related cmdlets you may find valuable, please consult the following reference:

http://technet.microsoft.com/en-us/library/hh826150.aspx

Also, when configuring virtual machines, the Metrics setting can be important.  For more information about the importance of Metrics setting on the network adapters, check out Roger’s post below:

http://blogs.technet.com/b/askpfeplat/archive/2013/03/10/windows-server-2012-hyper-v-best-practices-in-easy-checklist-form.aspx

Until next time!

Martin

One Little Victory – Get Value from the Remote Server Administration Tools on Windows Server 2012 and/or Windows 8 today!


If you've read any of my prior posts, you know I like to be able to do things without a lot of prep, in a short amount of time. Yes, I like the K.I.S.S principle – Keep It Simple, Stan.

I also enjoy the sense of accomplishment that comes when I can leave work and know that I actually made progress in the pursuit of "the well-managed IT infrastructure."

Like the song by Rush, "One Little Victory," sometimes the small wins turn out to be big.

Well, in this post, I'm going to show you how you can get value/benefits from the Remote Server Administration Tools (RSAT) on Windows Server 2012 or Windows 8.

"Frank, what's an RSAT and should I want one?"

Remote Server Administration Tools includes Server Manager, Microsoft Management Console (MMC) snap-ins, consoles, Windows PowerShell cmdlets and providers, and command-line tools for managing roles and features that run on Windows Server 2012. In limited cases, the tools can be used to manage roles and features that are running on Windows Server 2008 R2 or Windows Server 2008, and some of the tools work for managing roles and features on Windows Server 2003.

You can do this today.

This afternoon.

Not next week, next month or next year.

No need to wait for SP1 or R2.

No need for Schema extensions.

No Enterprise Admin membership.

No 2012 DCs required.

Of course, there is no free lunch, though, and there are some requirements:

Now that we've covered that, you can get benefits from the new tools of Windows Server 2012 in short order.

I should state clearly and emphatically that you should NOT circumvent established processes and procedures for deploying a new OS/system into your production environment. Hopefully, a base build "design" is a pre-requisite to deployment of a new OS to your production environment.

However, in terms of learning, ramp-up and proof of concepts, you can deploy a Windows Server 2012/Windows 8 system as a member of your domain as easily as any other Windows OS.

So, download the Windows Server 2012/Windows 8 trial or utilize your Microsoft benefits to obtain the install media for the OS and let's get started!

Provision a VM or maybe re-purpose a physical machine and install the OS.

Configure the system and join the system to your dev/test domain – you do have a dev/test environment, right?

Patch it up via Windows Update, WSUS, SCCM or your company's patching mechanisms.

  • This is another great aspect for your proof of concept – "How will we patch the new OS?"

Windows Server 2012

When you're ready and you've signed in, open Server Manager and run the "Add Roles and Features" Wizard:

  • Click Manage > Add Roles and Features.

    

  • Click Next.

   

  • Verify "Role-based or feature-based installation" is selected and click Next.

   

  • Verify/select your Windows Server 2012 server from the Server Pool and click Next.

   

  • Don't select any option on the "Select server roles" page (we're not adding any Roles). Click Next.

   

  • On the "Select features" page of the Wizard, select 'Group Policy Management' and scroll down to choose any other Remote Server Administration Tools you want and click Next.

 

   

  • Click Install.

   

  • Click Close to complete the install.

Now, let's discuss a few of the awesome tools you just added and how they can help you manage the infrastructure:

AD CMDlets for PowerShell 3.0

  • Don't be late - Automate!
  • Folks, if I can be a functional PowerShell technician, ANYONE can
  • See Mr. Ashley McGlone's AD PowerShell post for some AD-PoSH Joy
  • "Be useful now" examples:
    • Find enabled user accounts in AD with passwords set to not expire (and then set out to answer if/why those accounts need passwords that don't expire)
      • Get-ADUser -Filter {(Enabled -eq $true) -and (PasswordNeverExpires -eq $true)} -Properties PasswordNeverExpires | Select-Object SamAccountName,PasswordNeverExpires,Enabled
    • Enumerate the membership of your Domain Admins group (and then set out to reduce that membership!)
      • Get-ADGroupMember "Domain Admins" -Recursive | select samaccountname,name | format-table -autosize

Server Manager

  • Add some servers into the default "All Servers" Server Group (or create your own custom Server Group)
  • Right-click "All Servers" and choose "Add servers"

      

  • Multi-select a few servers from the domain and add them in…

 

            

  • Recall that in Windows Server 2012, remote management is enabled by default.
  • On prior OSes, you'll need to enable remote management and also install WinRM or you'll see a message like the one highlighted in blue above (SRV2008R2-01 in the screen shot above)
  • See my prior Server Manager Post for more details and links to the updates needed to manage down-level OSes
  • Once that's installed/enabled on your down-level systems, refresh your Server Group and voila!
    • Below, I have two 2008 R2 machines and the local 2012 machine in my Server Group

         

  • "Be useful now" example:
    • Highlight one of the remote servers in the Server Manager console
    • Right-click and choose a function targeted to the remote system

AD Administrative Center (ADAC)

   

Group Policy Management Console

Windows 8

You can achieve these same small victories via Windows 8, too, but you'll need to download the Remote Server Administration Tools for Windows 8 before you'll be able to see/use the Tools discussed here.

  • Here is an excellent link describing all the Tools and a matrix of OS support for each one
  • After you install the RSAT pack on Window 8, there will be a Tile for Server Manager and one for the Administrative Tools folder on the Start screen.

       

  • You can add shortcuts to your "tools of choice" in a few ways:
    • Method 1 – pin the specific tool(s) you want to the Start screen and/or Taskbar
      • Click the Administrative Tools Tile
      • Right-click the shortcut(s) and select "Pin to Start" or "Pin to Taskbar"

           

           

      • Also, from the Start screen, you can 'pin' the tool you want to the Taskbar on the Desktop.
      • Right-click the Tile and select "Pin to taskbar"

                                 

   

                                 

    • Method 2 – select the option to "show Administrative tools" on the Start screen
      • ** WARNING ** - this will put a lot of Tiles on your Start screen and could be considered by some to be Start screen pollution :)
      • From the Start screen, open the Charms bar (WinKey + C or move the mouse to the lower or upper right corner of the screen)
      • Select "Settings"
      • Then "Tiles"

   

   

      • Move the slider for "Show administrative tools" to "Yes"

   

   

      • You'll get them all…

   

   

                              

   

There you have it – useful tools by 4:00 pm. Don't be late for dinner!

Cheers!

Hilde


Mailbag: Problem of the week: DNS Aging and Scavenging (getting the DNS record timestamp) with new Windows Server 2012 DNS cmdlets


Greg here with a quick post where the new DNS PowerShell cmdlets in AD made a task much easier.

 

Many of our customers use Microsoft DNS, and a feature of Microsoft DNS is the ability to remove stale records. By default this feature is disabled; some people never enable it, and others disable it believing it has deleted something it should not. Then years later they find they have thousands of stale records and want to clean up the situation. The problem with our traditional command-line tool DNSCMD is that it does not output the timestamp in a friendly, readable format. There are other blog posts out there with scripts that sometimes work, and sometimes we go onsite to help. Now we have a PowerShell cmdlet that will easily get this information for you. You do not need a Windows Server 2012 DC or DNS server; you just need a Windows 8 or Windows Server 2012 machine with the new DNS cmdlets.

 

Get-DnsServerResourceRecord -ZoneName "demo.local" -RRType "A" | Export-Csv demo.csv

That one-liner will output all of the A records from a zone called demo.local and give us a file we can easily open in Excel to review these records.

This does not look pretty in a blog post so I have attached the file if you are interested in the output.
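And if you want to pre-filter to just the stale candidates before opening Excel, the Timestamp property makes that easy; a sketch, using 30 days as an example staleness window (static records have no timestamp and are skipped by the check):

Get-DnsServerResourceRecord -ZoneName "demo.local" -RRType "A" | Where-Object { $_.Timestamp -and $_.Timestamp -lt (Get-Date).AddDays(-30) } | Export-Csv stale.csv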

 

 

If you are not familiar with DNS aging and scavenging we have plenty of documentation around this.

http://technet.microsoft.com/en-us/library/cc759204(WS.10).aspx

Windows Server 2012 DNS PowerShell cmdlets

http://technet.microsoft.com/en-us/library/jj649850.aspx

 

Greg

Audit Membership in Privileged Active Directory Groups. A Second Look.


Some months ago, I shared a PowerShell script to enumerate the membership of privileged groups (including membership in nested groups) and report membership as well as password ages. Like most scripts, it works well in most environments, but has some limitations. One glaring limitation that I’ve found, for example, is that it searches for privileged groups by name. However, in some environments the groups may have been renamed. Or even more problematic are instances where built-in group names are different in non-English versions of the OS.

Since the built-in privileged groups all have well known SIDs, the logical solution was to re-write the script to search for groups based on SIDs rather than names. So I started by identifying the well-known SIDs for the built-in privileged groups. There’s a KB for that. As it turns out, some of the well-known SIDs are constructed from the domain SID or the forest root domain SID. For example, the SID for Enterprise Admins is the root domain SID with “-519” appended to it.

Consequently, I had to re-work my script to identify the SID for every domain in the forest. Then, I had to construct all the SIDs for the privileged groups and enumerate their memberships.

To add another degree of difficulty, I wrote the entire script without using the AD PowerShell Cmdlets. As I’ve mentioned before, I still run into customers who can’t use the AD Powershell Cmdlets because they still have all 2003 domain controllers (without the AD web services installed). So instead of using one line of PowerShell to generate a list of domain SIDs:

(Get-ADForest).domains | forEach {Get-ADDomain $_} | Select-Object Name,DomainSid

I had to use about twenty lines of code to generate my list of SIDs. Most interesting was the use of .Net methods to convert SIDs to string values:

$RootDomainSid = New-Object System.Security.Principal.SecurityIdentifier($AdObject.objectSid[0], 0)
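For illustration, once you have that SecurityIdentifier, building the Enterprise Admins SID mentioned earlier is just string concatenation (a sketch that builds on the line above):

# Root domain SID + well-known RID 519 = Enterprise Admins
$EnterpriseAdminsSid = "$($RootDomainSid.Value)-519"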

So I began talking to my peers about the beauty of the AD PowerShell Cmdlets and how they’ve saved us from writing lines and lines of code. I thought, “Let me re-write my script and show people how much more succinct the code could be.”

What started out as a noble effort, has turned into my White Whale. While sections of my script can be obliterated with single line Cmdlets, there are holes in the AD PowerShell Cmdlets that are frustratingly difficult to address. So here’s a challenge for you PowerShell junkies out there (and who have environments where the Cmdlets work), tear down my script and replace sections with AD Cmdlets.

I’m already working on my next blog, tentatively titled “Who’s the tool – PowerShell or Me?” where I’ll detail some of the different ways of using PowerShell with AD – including the AD Cmdlets. I’ll point out the differences and some of the relative strengths and weaknesses of each way.

Meanwhile, I’m counting the days until July 14th 2015 when Windows Server 2003 is no longer supported, so I can leverage the AD Cmdlets in every environment I visit.

Since I’ve taken you off on a tangent, let’s get back to the original purpose of this post. You’ll find an updated version of the script on the TechNet Script Center.  As before, it will enumerate membership in privileged groups and report password ages. While it’s not perfect, it’s better than the original in the following ways:

1. It targets groups based on well-known SIDs, so it will work in more environments.

2. It also reports on members that may not be users (computers or managed service accounts)

The syntax is straightforward. Launch PowerShell. No special privileges are necessary, but you’ll have to run as a domain account so the script can read the directory. You’ll also need connectivity to DCs in the forest so it can enumerate group memberships. Simply run the script.

privilegedUsersV2.ps1

clip_image002

It’ll dump output to the screen and in a CSV file (that will dump in the same directory from which you launch the script).

Don’t forget to review the original blog for information on how (and why) to use the script.

Doug Symalla

A note on all of our AskPFEPlat scripts.  We’ve removed all script attachments to our blogs and posted them on the TechNet Script Center.  The blog will contain a hyperlink to the relevant location.  You can find all of our scripts on the Script Center by searching for the keyword AskPFEPlat.

RaaS-Active Directory The Engineers Point Of View


My name is Bryan Zink and I am a Microsoft Premier Field Engineer focused on supporting Windows Server and Active Directory. You’ve probably read the fantastic post by Yong Rhee introducing RAP as a Service. Maybe you even read What is RaaS? Is That a Real Acronym? posted by Doug Symalla. Today I wanted to assure you that yes, it is a real acronym and one you’ll want to fully understand. In this post, we’ll dig into HOW it works and WHY you should jump in.

First a brief history

A long time ago in a galaxy far far away (actually, it was Dallas, TX), a team of 15 PFEs pulled together some tools, wrapped them in a process and called it the Active Directory Health Check (ADHC for short). The intention was to partner with a customer’s IT team to help spot the issues we knew caused pain, avoiding lost productivity and outages.

 

image

 

As the process matured, we found there were two huge benefits. First, outages were dropping as customers better understood how to operate their AD environments. Second, we drove some great changes through the Windows product team resulting in improvements to diagnostic tools and prescriptive guidance.

When we transitioned from the ADHC to the era of the Active Directory Risk and Health Assessment Program (ADRAP for short) we formalized the tools and services development process in many ways. While our goals around assessing the issues and providing remediation guidance were still the same, we wanted to bring a more exciting experience to the customer and leave behind a much nicer toolset.

Now we’re bringing you the next big thing in Active Directory assessments, RAP as a Service for Active Directory (RaaS-AD for short). Essentially we’re combining the best of the best in tools and processes, moving it to the Azure Cloud platform and giving customers a persistent on-demand assessment experience.

HOW RaaS-AD works

First and foremost, RaaS is a service with a few basic components. These components are made up of the following:

· RaaS Client

· Windows Azure Cloud service

· Online Services portal

The front-end (RaaS Client) is downloaded and installed onto a machine in your AD Forest. This tool essentially discovers the Active Directory components in your environment and facilitates the data collection process. Once data collection is completed, the RaaS Client allows you to securely submit an encrypted blob into the Azure Cloud service.

Once you complete data collection and submission, you will fill out an Operational Survey. This survey covers topics we can’t answer with diagnostic tests. Backup and Disaster Recovery, operational processes etc are examples of topics covered in the Survey.

The Azure Cloud service is where the heavy lifting happens. The collected data along with the Survey results are analyzed for the good, the bad and the ugly against our collection of rules.

The Online Services portal is your view of not only the collected data but also the issues that were identified through the analysis process. This portal is your customized and secured dashboard view. You have the ability to control who from within your organization has access to view this information.

In addition to the components described above, you will receive not only a detailed report of the findings and recommendations but also deep-dive knowledge transfer on the top issues in the environment. You also have a couple of options for how this all comes together. We offer a remote delivery option as well as something that includes on-site time. Specifics can be explained in more detail with your Microsoft Technical Account Manager (TAM).

RaaS does have a re-use license whereby you’re able to leverage the persistent on-demand assessment experience. This enables you to track progress against recommended remediation tasks and generally, check-up on your AD environment at whatever frequency makes sense.

WHY RaaS-AD matters

There are numerous benefits for you to leverage with the RaaS platform. Instead of listing the bullet points from the marketing glossy, let’s cut right to the chase.

Customers who have leveraged the power of RAPs in the past have had almost zero exposure to issues such as the time rollback problem so elegantly detailed in Mark’s post Fixing When Your Domain Traveled Back In Time, the Great System Time Rollback to the Year 2000.

Another example of an issue that strikes many environments that have not had the pleasure of an AD assessment is DFS shares either not replicating or seemingly missing data. Here’s a post by the infamous Ned Pyle covering the Top 10 Common Causes of Slow Replication with DFSR.

Finally, have you ever wondered about just how weird the symptoms of Lingering Objects can be? Have a look at this post from David Everett setting us straight on Strict Replication Consistency - Myth versus Reality.

At present, there are roughly 600 health and risk issues we specifically look for in a RaaS-AD assessment, and more are being added weekly. All of this can and should be yours; operators are standing by. If you’re still reading and you’d like a RaaS-AD, feel free to contact us at AskPFEPlat and we’ll get the right people going.

In the event you want to see a more formalized list of value points, keep reading.

Delivered when you’re ready: Faster turnaround time for generating actionable results: data is collected and submitted as soon as you are ready and reports generated by a PFE within a few days of completing the submission tasks.

New/Updated IP always available: Absolute latest rules (IP) and all new IP updates available to customers during their contract without paying for an additional assessment.

Current State Assessment on-demand: Updated view of your environment through the Online Services portal, as often as you would like, helps tracking remediation progress.

A PFE can still go on-site: If you still want on-site assistance in the form of knowledge transfer or remediation assistance, we can still absolutely provide that experience.

Support: ADRAP (actually all RAPs in general) had no support for either the toolset or the process other than what the PFE was able to deliver as part of the engagement. However, RaaS has full support for both the toolset and the end to end process. The only thing not supported is the actual remediation of an identified issue.

Updates to the toolset: Not only do we update the IP (rules and issues) much more frequently but we now also perform updates to the RaaS Client as well as the back end platform.

Reliability, Security and Privacy: Providing a feature rich and usable experience is only part of the solution. Reworking the backend systems that integrate to complete this experience have allowed us to provide a more reliable, stable and secure experience for everyone.

-Bryan

Building a VM in Windows Azure using PowerShell in a few quick steps


Hello folks, it’s Rick here. In a previous post Michael gave you a nice overview of Windows Azure services and helped you set up your own VMs in the cloud using the free trial through the Azure portal. Today I would like to walk you through building a VM in Windows Azure using your favorite management tool, PowerShell, and in doing so you will also unlock 133 cmdlets that will help you manage other services you may have running in Windows Azure. If you have not signed up for a 90-day trial yet, I suggest you give it a try. Also, did you know that if you have an existing MSDN subscription, you may be entitled to up to 1,500 compute hours, that’s $6,500 in annual Windows Azure benefits at no charge?

Let’s get started;

1. Download the Windows Azure PowerShell and install it on a Windows 8, Windows 7, Windows Server 2012, or Windows Server 2008 R2 machine.

image

 

2. If you would like to use the Windows Azure PowerShell snap-in you can launch it directly from your Start Menu/Screen, but if you are like me and would rather import the module into your existing Windows PowerShell, you know the drill.

Set-ExecutionPolicy RemoteSigned

Import-Module "C:\Program Files (x86)\Microsoft SDKs\Windows Azure\PowerShell\Azure\Azure.psd1"     

3. Configuring connectivity between your workstation and Windows Azure: before a connection can be made from your Windows Azure PowerShell module, you first need to download your publish settings from the Azure portal using the cmdlet below.

Get-AzurePublishSettingsFile    

4. Your browser will take you to https://windows.azure.com/download/publishprofile.aspx, where you can sign in to your Windows Azure account and download the publish settings file. Note the location where you save this file. This file has Azure API information, your subscription ID and, more importantly, the management certificate that needs to be imported locally on your machine using another cmdlet.

 

image  
 

5. Now we will go ahead and import the publish settings file we just downloaded using the command below, where <mysettings>.publishsettings is the file that you downloaded in the previous step. You should delete the publishing profile after you import those settings; the downloaded profile contains a management certificate that should not be accessed by unauthorized users.

Import-AzurePublishSettingsFile <mysettings>.publishsettings

image
   

You can optionally view the publish settings file you downloaded in Step 4, in Notepad.

 image

 

6. Let’s take a look at our current subscription; notice the certificate is good for one year from the time you set this up.

 image

 

7. Now let’s take a look what are some of the cmdlets that Azure PowerShell module unlocks for us, focusing on VM management. You can see the rest here.

 image

 

8. Let’s see what I already have running out there..

 image

 

9. We are almost ready to build the VM, but before we do that we have to use the Set-AzureSubscription cmdlet to save our subscription information and set the storage account that was previously set up using the Azure portal. You can use the Get-AzureStorageAccount cmdlet to retrieve the StorageAccountName property.

 image

 

Set-AzureSubscription -SubscriptionName "Windows Azure MSDN - Visual Studio Ultimate" –CurrentStorageAccount portalvhdshmdl42f1wmfd7

You can verify the above command using the Get-AzureSubscription cmdlet again.

Above, “portalvhds*” is the name of my storage account.

10. Let’s build a new VM using the New-AzureQuickVM cmdlet. (Note that, as mentioned in the previous post, Azure provides pre-built images as VHDs; in the following example we have run Get-AzureVMImage and are passing the name of a 2012 image.) You can also use the Test-AzureName cmdlet to verify whether the VM or service name you want to acquire is available.

New-AzureQuickVM -Windows -ServiceName TESTADDC03 -Name TESTADDC03 -ImageName MSFT__Windows-Server-2012-Datacenter-201210.01-en.us-30GB.vhd -InstanceSize 'Small' -Password Password!@# -AffinityGroup Chicago

image

You will see that every time you deploy a VM, it first gets created as a Cloud Service and then the VM itself gets deployed; you will see the progress bar for the VM creation as well.

 image

It took less than a minute to spin up this test VM for this demo, your time may vary.

 image

11. Let’s verify that our VM is up and running.

 image

 

image
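The check behind these screenshots boils down to Get-AzureVM; a sketch using the names from this example:

# Query the service/VM we just created and check its status (expect ReadyRole when it's up)
Get-AzureVM -ServiceName "TESTADDC03" -Name "TESTADDC03"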

 

This wraps up this quick tutorial. Hopefully you can see that once you have your initial subscription registered and at least one storage account set up, you can provision VMs using PowerShell in a matter of minutes. I invite you to try this out yourself and see what other useful cmdlets you find in the Azure PowerShell module.

Until next time, Rick “is there a cmdlet for that” Sheikh!

Some Other PFE Blogs You’ll Want To Be Reading


Hey y’all, Mark here with some quick links for you. We have two other PFE team blogs that have been getting some steam as of late we thought we’d pass along so our fine readership can check them out and spread the word.

A bunch of SQL folks have started to bring an old blog back from the dead. You can find their blog here. Their latest post, SQL 2012 System Health Reporting Dashboard, is the type of great content they are putting out. They are always looking for community feedback, so send in your questions, or if SQL is not your thing (don’t worry, it’s not mine either), pass it on to your favorite DBA. They can owe you one.

Also if you are running Dynamics CRM you should be reading our PFE CRM in the Field blog. If you weren’t before…you’re welcome. Several of them just got back from the Dynamics Convergence conference where they get to mingle with their community which they are highly active in. Reach out to them and tell them we sent you.

As always, send us your questions via email or, if you dare, on Twitter. Doug ‘Grandpa Simpson’ Symalla has just discovered, in his own words, “this new thing called twitter” and is manning our @PFEPlatforms account. He’s in the process of changing the profile picture. You’ve been warned. I can also be found on Twitter @markmorow, don’t be shy. Have a good weekend and, as always, we’ll have a new post up for Monday. It’s a good one. 

 

Mark “SQL DROP TABLE Joke Here” Morowczynski
