
IPv6 for the Windows Administrator: Why you need to care about IPv6


Hey y’all, Mark back with a new topic we haven’t really talked about much here on the blog: IPv6. When I go onsite with customers I tend to have two discussions over and over again. First, RPC ports and firewalls. Ned Pyle has taken care of that one here and here. The second, IPv6. The point of this post is not the deep technical detail of how it all works; the point is to mirror the on-site discussions I have every other week, and it is geared at the Windows/System Administrator. Ray Zabilla and I have more posts planned on the basics and how it all works in the coming weeks. If this is a topic of interest we can keep going from there, go really in depth on some of the transition technologies, and even show you how to roll your own lab. Please let us know in the comments! Now on to a glimpse of the on-site discussions.

“Who cares about IPv6? We got IPv4 working and it’s working just fine.”

I bet you do. It’s a similar logical argument to “who cares about 64-bit computing, we have 32-bit.” Do you want to make that claim as well? On February 3, 2011, the Internet Corporation for Assigned Names and Numbers (ICANN) joined the Number Resources Organization (NRO), the Internet Architecture Board (IAB) and the Internet Society to announce that the pool of public Internet Protocol version 4 (IPv4) addresses had been completely allocated.

On 14 September 2012, the RIPE NCC began to allocate IPv4 address space from the last /8 of IPv4 address space it holds. IPv4 address space is now allocated according to section 5.6 of the IPv4 Address Allocation and Assignment Policies for the RIPE NCC service region. The IPv4 pools of the RIRs (Regional Internet Registries) are nearly exhausted (see the RIPE NCC IPv4 Available Pool). Shortly thereafter the ISPs will exhaust their pools. It is at this point that customers will be impacted by the exhaustion, because there will not be any IPv4 addresses available to give them. They are all gone. Donezo.

Also, there are several limitations to IPv4. I’m not saying you need to roll out IPv6 tomorrow, but let’s not do things today that will make it harder to transition in the future.

“IPv4 Limitations? Like what?”

Well, for starters we are out of addresses, as noted above. Chances are you are getting MORE internet-connected devices, not fewer. But let’s assume you are lucky enough to have an entire class A or B address block to yourself and you don’t need more addresses for the foreseeable future. Do you need IP-level security, or will you need it in the future? I’m guessing so. IPsec is optional in IPv4, but it has been a standard part of IPv6 from day one, which makes IPsec implementations consistent across vendors. What about Quality of Service (QoS)? IPv4 can do that by using the Type of Service (TOS) field, but that doesn’t work when the packet is encrypted. So hopefully you don’t want both SECURITY and QOS at the same time. It’s getting harder and harder to force IPv4 to do what is easily accomplished in IPv6.

“We got NAT working right now so it’s fine”

That’s a whole other ball of wax. Not only does it add complexity to the network, which can make troubleshooting issues even harder, but not every application works with NAT because the client doesn’t have a “real” IP address. Making IPsec work with NAT is also a challenge. NAT can solve some problems, but it can also introduce others. It’s probably not sustainable for the long haul.

“Hmmm all this sounds like you should talk to the Network Team about this, they are up the hall. This is not my problem”

Alright, we’ve arrived at the core of this argument. It is ABSOLUTELY your problem. If you’ve never had to troubleshoot a server not being able to connect to another server, it must be your first day on the job. Connectivity troubleshooting is a critical tool in your troubleshooting bag. If it’s not, add it immediately; you’re welcome. Being able to understand an IPv6 address and what it all means will be helpful and, in reality, a necessity in the future. I’ve had customers where the network team is “testing” IPv6 and the clients now start receiving this “mystery address”. Is that normal? Is it working like it’s supposed to? Am I on the right network? All these questions can be answered today with an IPv4 address, so why would you NOT answer them just because the address looks different? The thought of not having a basic understanding of IPv4 today is unthinkable; having IPv6 skills will not only put you ahead of the curve today, it will set you up for the future. Real life example coming up here shortly.

“Yea but still, I hear IPv6 screws stuff up that’s why I disable it like so”

clip_image002

Of course you have. First off, I’ve yet to hear what IPv6 actually “screws up”. Second, this isn’t disabling IPv6, this is unbinding it from the network adapter. If your goal is to disable IPv6 on the system, you have not done so. It is still running on your system. If you need to re-check that box there is NO PROGRAMMATIC WAY**(see bottom of page) to do so. So if you’ve gone ahead and built that uncheck into your image and you do need IPv6 on that network adapter, you’ll need to log into EVERY MACHINE AND RE-CHECK IT. Oh how fun that will be. If you do need to disable it, follow KB 929852 using the DisabledComponents registry value. I recommend not disabling it, but if you absolutely must, use a GPO so you can easily undo it in the future. As stated in the KB, if you do use the DisabledComponents registry value, that checkbox will still be checked. That is expected behavior.
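
If you are going to deploy the KB 929852 change at scale anyway, scripting the registry value is safer than baking the unchecked box into an image. Here is a minimal PowerShell sketch; the 0xFF value is the one KB 929852 describes for disabling IPv6 on all interfaces except loopback, so verify it (and test!) before rolling it out, and remember that a reboot is required:

# Sketch only - see KB 929852 for the DisabledComponents values and what each one turns off.
$path = "HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters"
New-ItemProperty -Path $path -Name "DisabledComponents" -PropertyType DWord -Value 0xFF -Force

# To undo it later (again, a reboot is required):
Remove-ItemProperty -Path $path -Name "DisabledComponents"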

“This is all great in theory but does this actually happen in the real world?”

We here at AskPFEPlat have a unique perspective: by spending so much time in front of so many customers, we get to see what does happen in the real world. Recently Ray was assisting one of our large enterprise customers in their migration from Windows Server 2003 Active Directory to Windows Server 2008 R2. They had just installed a few 2008 R2 domain controllers, and shortly thereafter Ray received a call from one of the company’s AD architects asking him to explain why he was getting an IPv6 address in response to his “ping” of the 2008 R2 domain controllers. Further, why were there two IPv6 addresses assigned? Why did one address always begin with FEC0 and the other with 2002? What addresses are being registered in DNS?

Now, at this particular customer most of the IT support and administration, including Active Directory, has been outsourced to a third party vendor. So Ray had a meeting with the customer’s in-house AD staff and several members of the third party outsourcer’s AD staff. One of the members from the third party AD support staff announced that this had an easy fix: they would simply uncheck the IPv6 protocol box in the network adapter settings to disable IPv6 and the problems would be resolved.

See the real life problem? Face palm! If a vendor is telling you to disable IPv6 to “fix an issue” or because they “have seen it cause problems”, push back a bit and ask them what it is actually fixing or what problems it is causing. Have them be specific. It’s time to stop letting IPv6 be this great mystery of the universe.

“Ok I’m coming around a bit. What is Microsoft’s stance on IPv6?”

I’ll let the official documentation do the talking on this one. Short answer: Leave it on. Original can be found at IPv6 For Microsoft Windows: FAQ.

“It is unfortunate that some organizations disable IPv6 on their computers running Windows 7, Windows Vista, Windows Server 2008 R2, or Windows Server 2008, where it is installed and enabled by default. Many disable IPv6 based on the assumption that they are not running any applications or services that use it. Others might disable it because of a misperception that having both IPv4 and IPv6 enabled effectively doubles their DNS and Web traffic. This is not true.

From Microsoft's perspective, IPv6 is a mandatory part of the Windows operating system and it is enabled and included in standard Windows service and application testing during the operating system development process. Because Windows was designed specifically with IPv6 present, Microsoft does not perform any testing to determine the effects of disabling IPv6. If IPv6 is disabled on Windows 7, Windows Vista, Windows Server 2008 R2, or Windows Server 2008, or later versions, some components will not function. Moreover, applications that you might not think are using IPv6—such as Remote Assistance, HomeGroup, DirectAccess, and Windows Mail—could be.

Therefore, Microsoft recommends that you leave IPv6 enabled, even if you do not have an IPv6-enabled network, either native or tunneled. By leaving IPv6 enabled, you do not disable IPv6-only applications and services (for example, HomeGroup in Windows 7 and DirectAccess in Windows 7 and Windows Server 2008 R2 are IPv6-only) and your hosts can take advantage of IPv6-enhanced connectivity.”

“What Microsoft products support IPv6?”

Get the official list here. It is a lot.

“Anything else I should know?”

A quote from the Foreword of Understanding IPv6 – Third Edition sums it up very well.

“In the past 24 months, we’ve made immense progress toward the goal of upgrading the Internet. IPv6 is no longer the next-generation Internet Protocol; it has become the now-generation Internet Protocol.

 

The World IPv6 Launch in June 2012 marked a key turning point in this transition. When you read this book, some of the most important web services in the world, not only from Microsoft but from across the technology community, are operational on the IPv6 Internet. Millions of users with IPv6-ready computers are using IPv6 to interact with these services and with one another. The apps, the operating systems, the routing infrastructure, the ISPs, and the services are not merely ready, they're activated.”

-Chris Palmer

IPv6 Program Manager

Microsoft

 

Ok hopefully by this point in the post you’ve come around fully on IPv6 and are ready to dive in. The point of this is that IPv6 is not coming, it is here now. IPv4 is in fact the legacy technology. In our next post we’ll get into more of the innards and making sense of it all. Don’t worry it’s not that scary. As always let us know what you think in the comments. 

-Mark “IPv6 Ready” Morowczyski and Ray “IPv6 Ready” Zabilla

 

Update (6/17/13 5:00 PM CST). One of our readers, MVP Richard Hicks, points out in the comments that there is a way to do this using PowerShell:  Set-NetAdapterBinding -Name MyAdapter -DisplayName "Internet Protocol Version 6 (TCP/IPv6)" -Enabled $true. This is correct, but it will only work in Windows 8/2012. For more info on this command check here
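
If you want to check the current binding state before flipping it, here is a quick sketch (again, Windows 8/Server 2012 only; "MyAdapter" is a placeholder adapter name):

# Show whether IPv6 (component ID ms_tcpip6) is bound on each adapter
Get-NetAdapterBinding -ComponentID ms_tcpip6

# Re-check the box on a specific adapter
Set-NetAdapterBinding -Name "MyAdapter" -ComponentID ms_tcpip6 -Enabled $true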

Part 2 of this series can be found here

Part 3 of this series can be found here


IPv6 for the Windows Administrator: IPv6 Fundamentals


Hey y’all, Mark here again. In the last post we talked about why you should care about IPv6. In this installment Ray Zabilla and I are going to demystify these IPv6 addresses you keep seeing and give you a better understanding of the IPv6 address space and syntax. We’ll also compare different addressing concepts between IPv4 and IPv6. As always, let us know in the comments if posts like these are helpful and you want more IPv6.

Let’s start with a common example you are used to seeing.

clip_image003[26]

There it is, an IPv6 address. Scary isn’t it.

Let’s break this down and compare it to something we know fairly well: IPv4. IPv6 addresses are 128 bits long, whereas IPv4 addresses are 32 bits long. This allows for A LOT more addresses. If you want to get specific, IPv6 allows for 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses. Say bye-bye to NAT baby!

Breaking down the IPv6 Address

IPv4 addresses are divided on 8-bit boundaries, written in decimal and separated by a “.”. From the screenshot we have the address 10.0.1.114.

IPv6 addresses are divided on 16-bit boundaries, written in hex and separated by a “:”. From the screenshot we have FE80::d9e:bed6:4917:C7DF%12.

One of the other significant differences between IPv6 addresses and IPv4 addresses is that IPv6 addresses are expressed as hexadecimal numbers instead of decimal numbers. Depending on your background this may make them easier or more difficult to understand, but stay with us and we will explain the rules of the IPv6 address. If you haven’t had much experience working with hexadecimal numbers, here are a couple of links which provide some more detail if you would like to get a better understanding.

http://www.codemastershawn.com/library/tutorial/hex.bin.numbers.php

http://en.wikipedia.org/wiki/Hexadecimal

The built-in calculator can also convert hex for you as well. Just change it to “Programmer” mode.

OK, back to our IPv6 addresses. What really helped me understand how to read them is recognizing that each boundary should contain 4 hex characters and there should be 8 sets of them. For example, it would look something like “abcd:abcd:abcd:abcd:abcd:abcd:abcd:abcd”. Each character in a group represents 4 bits, also known as a ‘nibble’. So let’s do some math here. Each character is 4 bits, and there are 4 characters per set for a total of 16 bits. We have 8 sets, and 8 x 16 = 128 bits. Everything checks out.
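
If you want to check that math yourself, PowerShell (or any programmer’s calculator) can convert a 16-bit block between hex and binary. A quick sketch using the blocks from our example address:

# Each block of four hex characters expands to 16 bits
[Convert]::ToString(0xFE80, 2)   # -> 1111111010000000
[Convert]::ToString(0xC7DF, 2)   # -> 1100011111011111
[Convert]::ToInt32("FE80", 16)   # -> 65152 (the same block in decimal)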

Now, our IPv6 address in the screenshot doesn’t have the total number of characters and is missing some groups. Let’s write it out the long way and talk about tips on how to shorten the address by compressing zeros.

FE80:0000:0000:0000:0d9e:bed6:4917:C7DF%12

First, a contiguous group of all-zero blocks can be represented by a double colon “::”. You can only use this once per address. So our new address with the zeros compressed can be written as FE80::0d9e:bed6:4917:C7DF%12. My other mental trick is this: I know I should have 8 sets, so I take the number of sets I have and subtract that from 8. That’s how many sets of zeros were compressed. Ok, back to our address.

If you compare our address in the output, FE80::d9e:bed6:4917:C7DF%12, to our new compressed-zeros address, FE80::0d9e:bed6:4917:C7DF%12, we have an extra 0. You can also compress the leading 0s in an address. Thus we have FE80::d9e:bed6:4917:C7DF%12. Let’s do some other examples and it will become clearer.
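
You don’t have to do the compression by hand, either. The .NET IPAddress class (usable from PowerShell) will normalize an IPv6 address into its compressed, lowercase form, which is handy for checking your work:

# Parse the long form and let .NET print the compressed form
$long = "FE80:0000:0000:0000:0d9e:bed6:4917:C7DF"
[System.Net.IPAddress]::Parse($long).IPAddressToString   # -> fe80::d9e:bed6:4917:c7df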

Examples

Let’s start with the original IPv6 address from above.

This is known as colon hexadecimal notation

FE80:0000:0000:0000:0d9e:bed6:4917:C7DF%12

 Binary form

1111111010000000000000000000000000000000000000000000000000000000

0000110110011110101111101101011001001001000101111100011111011111

 Divided along 16-bit boundaries

1111111010000000 0000000000000000 0000000000000000 0000000000000000

0000110110011110 1011111011010110 0100100100010111 1100011111011111

 

Leading zero suppression

 FE80:0:0:0:d9e:bed6:4917:C7DF

 Leading zero suppression with “double colon” suppression

A single contiguous sequence of 16-bit blocks set to 0 can be compressed to “::” (double-colon)

A double-colon can only be used once when compressing an address.

 

FE80::d9e:bed6:4917:C7DF

You cannot use zero compression to include part of a 16-bit block

FF02:30:0:0:0:0:0:5 does not become FF02:3::5, but FF02:30::5

 

More examples of zero compression

2003:00ef:67ea:0000:ffdc:1268:0002:0044

2003:ef:67ea::ffdc:1268:2:44

 

3ffe:0039:ebc0:5600:cda0:0098:bca2:0096

3ffe:39:ebc0:5600:cda0:98:bca2:96

 

2109:de00:b00d:0087:0000:0000:0000:0027

2109:de00:b00d:87::27

 

300f:0000:0000:0096:0000:0000:0054:fdec

300f::96:0:0:54:fdec

Notice we used the “::” one time even though we had multiple blocks of 0.

 

2999:0000:dead:beef:cafe:0000:0000:0001

2999::dead:beef:cafe:0:0:1

 

Overall the summary of the IPv6 address space can be seen as follows.

· 128-bit address space

· 2^128 possible addresses

· 340,282,366,920,938,463,463,374,607,431,768,211,456 addresses (3.4 x 10^38, or 340 undecillion) (undecillion wasn’t even in the MS Word spell checker!)

· 6.65 x 10^23 addresses for every square meter of the Earth’s surface

· 128 bits to allow flexibility in creating a multi-level, hierarchical, routing infrastructure
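
If you want to reproduce that big number yourself, here is a one-liner sketch using the BigInteger type (2^128 is far too large for a normal 64-bit integer):

Add-Type -AssemblyName System.Numerics
[System.Numerics.BigInteger]::Pow(2, 128)   # -> 340282366920938463463374607431768211456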

Now that we’ve defined the format of an IPv6 address let’s move on to some of the other characteristics and features of IPv6.

 

Types of IPv6 addresses

IPv6 has three types of addresses, and each type can be further categorized by scope.

 Unicast - A unicast address identifies a single interface within the scope of the type of unicast address. With the appropriate unicast routing topology, packets addressed to a unicast address are delivered to a single interface.

What this means, for example, is that with a Global Unicast address, which is similar to an IPv4 public address and unique across the Internet, a packet is delivered from a single interface to another single interface. A Link-local Unicast address is similar to an APIPA address: it is unique only to the local subnet, so the packet can only be delivered to a device within that scope. We’ll talk more about IPv6 address scopes later in this post.

The following types of addresses are unicast IPv6 addresses:

· Aggregatable global unicast addresses (think public IPv4)

· Link-local addresses (think IPv4 APIPA 169.254.x.x)

· Unique Local (think IPv4 Private addresses)

· Site-local addresses are formally deprecated in RFC 3879

· Special addresses

· Compatibility addresses

· Transition addresses

Multicast - A multicast address identifies a set of interfaces; a packet sent to a multicast address is delivered to all interfaces in the set (the packet is delivered to multiple interfaces)

Anycast - An anycast address also identifies a set of interfaces, but delivery is to only a single interface in the set. A packet is delivered to the nearest of the multiple interfaces (in terms of routing distance). This one can be a little tricky to understand, but one of the better examples we came up with is something like a proxy server, where you may have multiple servers located across your network but you only want to forward packets to the closest one.

No more broadcast (sort of)

Note: Technically IPv6 does not have a broadcast address, but in practice the special IPv6 all-nodes multicast address, FF02::1, will send a packet to all nodes, which accomplishes the same result.
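
If you want to see that in action, you can ping the all-nodes multicast address on a link; neighbors on that link may answer, depending on their firewall settings. The %12 zone ID below is just the interface from our earlier example, so substitute your own:

ping -6 ff02::1%12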

That’s it, those are the types of IPv6 addresses. Now let’s move on to the scope of them.

 

IPv6 address Scopes

Global Address - Address scope is the entire IPv6 Internet

A Global Unicast address is equivalent to an IPv4 public address. The scope is the entire IPv6 Internet, therefore these addresses are globally routable and reachable on the IPv6 Internet. IPv6 Internet addressing has been designed from the start to support efficient, hierarchical addressing and routing, so unicast addresses are designed to be aggregated or summarized to facilitate an efficient routing infrastructure.

· Global Routing Prefix (part of the Public Routing Topology – along with 001 prefix)

· Subnet ID (Site Topology)

· Interface ID

clip_image005[23]

 

 Link-Local address - Address scope is a single link

IPv6 unicast link-local addresses are similar to the IPv4 APIPA addresses used by computers running Microsoft Windows. Hosts on the same link (the same subnet) use these automatically configured addresses to communicate with each other. A link-local address is required for some Neighbor Discovery processes and is always automatically configured, even in the absence of all other unicast addresses.

· Equivalent to IPv4 APIPA address

· FE80::/64 prefix

· Single subnet, router-less configurations

· Neighbor discovery process

· Link-local addresses are ambiguous so Zone ID is used to identify specific interface

· Zone IDs are only used for link-local addresses since routable addresses are non-ambiguous. Ex. fe80::2b0:d0ff:fee9:4143%3

· Windows Vista and above display the IPv6 zone id of local addresses in the ipconfig output.

clip_image007[23]
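
On Windows 8/Server 2012 and later you can also pull the link-local addresses (and their interface indexes, which become the zone IDs) with PowerShell; older systems can use ipconfig or netsh interface ipv6 show addresses. A quick sketch:

# List only link-local (fe80::/64) IPv6 addresses
Get-NetIPAddress -AddressFamily IPv6 |
    Where-Object { $_.IPAddress -like "fe80*" } |
    Select-Object InterfaceIndex, InterfaceAlias, IPAddress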

 

Unique Local/Site Local addresses – Private addressing alternative to global addresses for intranet traffic

Site-local addresses provide a private addressing alternative to global addresses for intranet traffic. However, because the site-local address prefix can be reused to address multiple sites within an organization, a site-local address prefix can be duplicated. The ambiguity of site-local addresses in an organization adds complexity and difficulty for applications, routers and network managers.

Consequently, site-local addresses have been deprecated, and Unique Local addresses have superseded them with this challenge in mind. The aim is to replace all site-local addresses with a new type of address that is private to an organization yet unique across all the sites in the organization. In other words, Unique Local addresses have global scope, but their reachability is limited by the routing topology and filtering policies at Internet boundaries. Organizations would not advertise their unique local address prefixes outside their organization or create Internet DNS entries for them.

The Global ID (see diagram below) identifies a specific site within an organization and is set to a randomly derived 40-bit value. By deriving a random value for the Global ID, an organization can have statistically unique 48-bit prefixes assigned to their sites. Additionally, two organizations that use unique local addresses that merge have a low probability of duplicating a 48-bit unique local address prefix, minimizing site renumbering. Unlike the Global Routing Prefix in global addresses, the Global IDs in unique local address prefixes are not designed to be summarized.

While ULAs were not intended to be registered in any way, it could still happen that multiple organizations generate or use the same prefix and as such there is still a chance of collisions. As a result, a voluntary ULA registration site has been established at http://www.sixxs.net/tools/grh/ula/ to help minimize any ULA collisions. If everybody uses this registry though, the chance for collisions should be near nil.
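
If you are curious what generating one of these prefixes looks like, here is a simplified PowerShell sketch. Note that RFC 4193 actually derives the Global ID from a timestamp and an EUI-64 hash; this just grabs 40 random bits to illustrate the fd00::/8 + Global ID format, so treat it as an illustration rather than a compliant generator:

# Build a random 40-bit Global ID (5 bytes) and format it as a /48 ULA prefix
$bytes = 1..5 | ForEach-Object { Get-Random -Maximum 256 }
$hex   = ($bytes | ForEach-Object { "{0:x2}" -f $_ }) -join ""
"fd{0}:{1}:{2}::/48" -f $hex.Substring(0,2), $hex.Substring(2,4), $hex.Substring(6,4)
# example output: fd3c:9a2e:77b1::/48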

· RFC 4193 define this unique local address

· Equivalent to IPv4 Private address

· FD00::/8 prefix

· Replacement for site-local addresses

· Global scope, no zone ID required

clip_image009[23]

 

 

Special IPv6 Addresses

· Unspecified Address

· 0:0:0:0:0:0:0:0 or ::

· Loopback Address

· 0:0:0:0:0:0:0:1 or ::1

 

Compatibility or Transition Addresses

Used for transitioning from IPv4 to IPv6. We’ll have an upcoming blog post devoted to transition technologies if it’s of interest to our readers. Let us know. Otherwise, here is a quick overview.

· IPv4-compatible address

0:0:0:0:0:0:w.x.y.z or ::w.x.y.z

The w.x.y.z is the dotted-decimal representation of a public IPv4 address. This form is used by IPv6/IPv4 nodes that are communicating with IPv6 over an IPv4 infrastructure that uses public IPv4 addresses, such as the Internet. IPv4-compatible addresses are deprecated in RFC 4291 and are not supported in IPv6 for Windows Server 2012, Windows Server 2008 R2, Windows Server 2008, Windows 8, Windows 7, and Windows Vista.

· IPv4-mapped address

0:0:0:0:0:FFFF:w.x.y.z or ::FFFF:w.x.y.z

The IPv4-mapped address is used to represent an IPv4 address as a 128-bit IPv6 address

· 6to4 address

2002:WWXX:YYZZ::/48

An IPv6 6to4 address uses the prefix 2002:WWXX:YYZZ::/48, where WWXX:YYZZ is the colon-hexadecimal representation of w.x.y.z (a public IPv4 address); Windows typically assigns the 6to4 adapter the address 2002:WWXX:YYZZ::WWXX:YYZZ. (A conversion example follows this list.)

· ISATAP address

64-bit prefix:0:5EFE:w.x.y.z or 64-bit prefix:200:5EFE:w.x.y.z

An ISATAP address has the form 64-bit prefix:0:5EFE:w.x.y.z, where w.x.y.z is typically a private IPv4 address (the 200:5EFE form is used with public IPv4 addresses), and is assigned to a node for the Intra-Site Automatic Tunnel Addressing Protocol (ISATAP) IPv6 transition technology.

· Teredo address

Prefix of 2001::/32

A global address that uses the prefix 2001::/32. Teredo is designed to work even in the presence of network address translators (NAT).
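
Since the 6to4 format above is really just an IPv4 address rewritten in hex, here is a small sketch that converts a public IPv4 address into its 6to4 prefix (192.0.2.1 is just a documentation example address):

# Convert w.x.y.z into the 6to4 prefix 2002:WWXX:YYZZ::/48
$ipv4   = "192.0.2.1"
$octets = $ipv4.Split(".") | ForEach-Object { [int]$_ }
"2002:{0:x2}{1:x2}:{2:x2}{3:x2}::/48" -f $octets[0], $octets[1], $octets[2], $octets[3]
# -> 2002:c000:0201::/48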

Bringing it Home

clip_image010[24]

I know this has been a lot to process, but let’s go back to the original screenshot and summarize the address FE80::d9e:bed6:4917:C7DF%12.

We now know how long it is (128 bits) and where all the zeros went. We also know the different types and scopes of IPv6 addresses. Bringing it back to the screenshot, the FE80 prefix means it is just a link-local address, which is the equivalent of an IPv4 APIPA address. Next time someone says this FE80 IPv6 address is causing routing issues, you can simply say that it’s nothing more than an IPv6 APIPA address and wow them with your knowledge of IPv6. In our next post we’ll cover some more advanced topics in IPv6 addressing.

Mark “FE80:Chicago” Morowczynski and Ray “FE80:Minneapolis” Zabilla

 

Part 1 can be found here

Part 3 can be found here

PFE Sessions at Tech Ed Europe 2013


Hey y’all, Mark here with a quick post. For all our readers that are attending Tech Ed Europe 2013 make sure you stop by these 2 sessions.

Field Experience: Troubleshooting Long Boot and High Resource Consumption, June 28th at 2:45 PM, presented by Milad Aslaner, who is doing double duty after also presenting at TechEd in New Orleans.

How Many Coffees (New 2013 Edition) Can You Drink While Your PC Starts? June 26th at 3:15 PM, presented by Pieter Wigleven.

 

Make sure you drop in, and fill out your session feedback.

Mark ‘Send Me To Madrid’ Morowczynski

Server 2012 PKI Key Based Renewal Explained


Hello everybody, Randy here. I am new to the PFE role, but have a number of blog posts under my belt from my time as a CTS engineer on the Directory Services team. One of my first tasks as a PFE was to clarify some features available in Windows Server 2012 AD Certificate Services. This blog post focuses on the Key Based Renewal feature, but it can also serve as a refresher on the Certificate Enrollment Policy Web Service (CEP) and the Certificate Enrollment Web Service (CES).

The Windows Server 2012 Key Based Renewal feature offers non-domain joined computers the ability to automatically renew their certificates. You may be asking yourself “Why do we need Key Based Renewal to accomplish this?”

For auto enrollment and renewal of certificates to work, the CA requires a very important piece of information: the identity of the requestor. In an intra-forest or trust scenario, we can leverage Windows integrated security to achieve this goal. Key Based Renewal treats possession of the certificate itself as your identity: with the certificate as the authentication, any subject with access to the private key can renew the certificate prior to its expiration. Now let’s look at the components and their settings to better explain this concept and how it can be secure. First, let’s look at the certificate template requirements.

The Certificate Template: Requirements for Key Based Renewal

· The first requirement is that the template be marked to require “CA certificate manager approval". This is an extra layer of administrative overhead to ensure the auditing of certificates issued with this capability.

image

· A parameter on the certificate template to identify its eligibility for key-based renewal. You (the PKI administrator) choose what templates can produce certificates that renew themselves by selecting this option under the “Issuance Requirements” tab of the template.

image

NOTE: Marking a certificate template as Allow Key Based Renewal does not prevent authenticated requestors from using it with auto-enrollment or auto-renewal. That behavior is controlled by your standard permissions on the template.

· A parameter on the certificate template allowing the existing subject name of the certificate to be used for the auto enrollment renewal request. This option is under the “Subject Name” tab of the certificate template. You cannot leverage AD information to build the subject name because the requestor is not identified by a security principal that holds the certificate.

image

· The certificate itself is the identity of the requestor, therefore the extended key usage must include “Client Authentication.” This setting is on the “Extensions” tab; double-click “Application Policies.”

image

Now that we have our template, we need a way to get to it. We cannot use our default enrollment policy (which is LDAP). LDAP would require authentication, and the CA would just use that to identify the requestor. What we need is a Web Enrollment Policy Service that does NOT use Windows integrated authentication.

Here is a refresher on the concepts of CEP and CES Certificate Enrollment Web Services.

First we will take a look at the requirements for the Certificate Enrollment Policy Service. This service is responsible for querying Active Directory for a list of available certificates the requestor is permitted to enroll for. This service will require some unique settings in order to process a request using only the previous certificate as identification. Fortunately, the setup wizard will guide you through all these settings if you select the CEP for Key Based Renewal.

Certificate Enrollment Policy: Requirements for Key Based Renewal

· Authentication Type cannot be “Windows Integrated.” We are not authenticating in the traditional sense, so we must select either Username \ Password or Client Certificate. Client Certificate in this context means a certificate mapped to a specific security principal in Active Directory and does NOT mean Key Based Renewal. This setting identifies how the CEP will initially validate a connection when configuring a CEP target on a client. I was able to change the password of the account originally used when setting up the policy and was still able to renew my certificate.

image

· We need to enable Key-Based Renewal on the Policy server. Pay close attention to the informational message in the wizard. It indicates that this feature is “all or nothing”, so if you offer Key-Based Renewal on this Policy server, it will only offer those templates where Key-Based Renewal is configured.

image

Certificate Web Enrollment: Requirements for Key Based Renewal

· Renew on Behalf of (ROBO) mode. When setting up a CES server, you can configure it for both enrollment and renewal services, or (for the security conscious) only for servicing certificate renewal requests. I mention the security conscious because CES servers are often public facing or in a DMZ, and therefore exposed to a greater attack surface. By configuring the CES service in ROBO mode, an attacker would only be able to extend the validity of an existing certificate and not obtain fresh ones. Revocation checking can deactivate any certificate that has been compromised. The important point here is that this setting is required, meaning that enabling Key Based Renewal limits the CES to performing ONLY renewal requests.

image

· Selecting a Service account. This step is slightly more important when considering Key Based Renewal because this Service Account will be enrolling on behalf of the owner of the certificate. You will also need to register the http SPN to this account and use the same account for CEP if running both these services on the same machine. See the TechNet articles in the “Additional Resources” section for more information.

image
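
For reference, registering the HTTP SPN against the service account is typically done with setspn. The host name below is hypothetical, and the account name is the one used later in this post; adjust both for your environment:

setspn -s http/cesserver.corp.adatum.com ADATUM\CESSvcKB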

· Authentication Type must be Client Certificate Authentication. Although the Policy server can implement “User Name\Password” or “Client Certificate authentication”, the Web Enrollment Service requires that you use “Client Certificate authentication.” This setting coincides with the requirement of the Certificate Template EKU to have “Client Authentication”.

image

· And Last but not Least, enabling the Key-Based Renewal feature.

image

To summarize our requirements for Key-Based Renewal: this feature is only available through the web enrollment services and requires that both of these services be configured specifically for this purpose. This configuration limits the CEP service to only enrolling requestors for Key-Based Renewal certificate templates, and the CES service to only providing renewal services and only allowing the client certificate to be used in identifying the requestor.

Additional Resources

If you have read this information and feel something is missing, or ready to implement and need a more step-by-step approach, then here are some great resources on TechNet:

Certificate Enrollment Policy Web Service Guidance

Certificate Enrollment Web Service Guidance

Test Lab Guide: Demonstrating Certificate Key-Based Renewal

Certificate Enrollment Web Services Whitepaper

Renewing a Certificate

Now that we have all the configuration out of the way, my next step is comparing the Key Based Renewal Capabilities against a Windows Integrated CEP \ CES Authentication. To do this I set up two separate Servers; one hosting CEP \ CES in regular Windows Authentication mode and the other CEP \ CES server configured for Key Based Renewal. Now I can create two different Web Enrollment Policies on a Domain-Joined member and be able to toggle between the two for testing.

image

The First thing to point out is that my Key-Based enabled Certificates are also available for enrollment when going through the Windows Auth CEP (Adatum AD CEP.) I was able to enroll in the new certificate as well as any number of certificates currently available for a domain-joined computer.

image

So I select my KBWeb (Key Based Renewal certificate template) and enroll using my Windows Auth CES service (remember that the Key Based Renewal CES service can only do renewals). I must issue the certificate and copy it to my client because of the template restriction requiring CA certificate manager approval. This step is a typical PKI trade-off: you are taking the security burdens away from your users and replacing them with an administrative burden on your CA administrators. The explanation of that comment is as follows:

· The website administrator does not need to worry about setting and protecting a secure password, or even about requesting a new certificate before expiration.

· The CA administrator now needs to perform potentially high assurance issuance measures to vet, track and monitor these self-managing certificates.

Now I have my new certificate on my domain joined member and I turn off my Windows Auth CEP / CES server and try “Renew Certificate with New Key…”

image

I ultimately receive an error that the “Windows Auth CES server” is unavailable (Not the Key Based Renewal Server.)

image

Keep in mind that because this machine is domain-joined, it will be able to present its identity and enroll in either of the CES servers that are defined in the msPKI-Enrollment-Servers attribute on the object “CN=<CA Name>, CN=Enrollment Services, CN=Public Key Services, CN=Services, CN=Configuration, DC=corp, DC=adatum, DC=com.” This is where the Policy server knows to direct the client to for Enrollment.
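
If you want to see that list of enrollment servers for yourself, here is a sketch using the Active Directory PowerShell module (substitute your CA name and forest DN):

Import-Module ActiveDirectory
Get-ADObject -Identity "CN=<CA Name>,CN=Enrollment Services,CN=Public Key Services,CN=Services,CN=Configuration,DC=corp,DC=adatum,DC=com" `
    -Properties "msPKI-Enrollment-Servers" |
    Select-Object -ExpandProperty "msPKI-Enrollment-Servers"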

Another interesting point on the error above is the “Continue” button. If I press this it brings up a Certificate Selection Window with all of my certificates that include the EKU for “Client Authentication”.

image

I select this certificate and now it goes to the Key Based Renewal CES server. So ultimately what happened was:

· The Policy server identified the available template and iterated through each of the Web Enrollment Servers in order of Priority.

· The Windows Auth failed because it was turned off, so we went to the Key based Renewal CES and that server was only able to accept the credentials of the certificate.

To verify this behavior, I renewed again with the Windows Auth CES server online and it went through Windows Auth CES to process the request.

This can also be seen by looking in the CA database at the issued certificate requests and the identity of the requestor.

image

The first and last certificates were requested by the computer account that enrolled in the template (Randt38$), and the one in the middle was my Key Based Renewed certificate, where the requestor was identified as the service account running the Key Based Renewal CES (CESSvcKB).

I then wondered if there was any auditing I could implement to identify the requestor for Key Based Renewals. I enabled security auditing and turned up the CEP and CES operational logging, and received nothing that would indicate a name or IP address.

NOTE: To enable verbose logging on the CEP and CES services, add <add key="LogLevel" value="4" /> to the web.config file. For more information read the Certificate Enrollment Web Services Whitepaper

Your solution to audit this would be to query the Request IDs where the requestor name equals the CES service account name and give each certificate issued of this type a unique parameter in the “Subject Name” that would identify where it came from.
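
As a sketch of that query, certutil can filter the CA database by requestor name; the account name below is the CES service account from this lab, and the exact column list may vary by CA version:

certutil -view -restrict "RequesterName=ADATUM\CESSvcKB" -out "RequestID,RequesterName,CommonName,NotAfter"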

Summary

The example above shows how a renewal request can be serviced without supplying credentials to the CA. Certificates can perform a variety of functions, including client authentication, which can now be leveraged in the renewal request. This reduces the administrative overhead involved with supporting certificates issued outside of the Windows security boundary. On a non-domain joined machine, the behavior would be that I would not be offered the option to enroll with the Windows Auth CES, because the machine account is unable to identify itself to the Policy (CEP) service. I would be offered a renewal of the existing certificate through my Key Based Renewal CES by selecting the certificate as my credentials. The auto enrollment services on my client will be able to detect the expiration of the certificate and locate the Policy (CEP) service as defined in the local GPO configuration of the non-domain joined client.

How to Fix Windows Server 2012 Shared Folder Inaccessible on a VM


Hey y’all, Mark here with a post about an issue we’re seeing with many of our customers lately. Hopefully, we’ll save you a couple hours of head-scratching and Bing searches.

The Problem:

You deploy a shiny new Windows Server 2012 virtual machine on Hyper-V or VMware, and then you notice that no file shares are accessible. For example, on a domain controller you can’t access the SYSVOL share. You tend to get an error message like so:

clip_image002

You may notice other puzzling things, such as services failing to start when they are on removable or hot-pluggable drives, and maybe even some SBSL (slow boot, slow logon) issues with logon scripts, loading user profiles, etc. Windows 8 modern apps might throw the error “app didn’t start”. So what is the cause of all these seemingly unconnected things?

The Cause:

You are the unfortunate victim of two specific configurations. First, you have a specific auditing setting turned on. Second, the drive where your shared folder resides, or where the service launches from, shows up as a removable or hot-pluggable drive.

The Auditing settings are as follows:

You have Audit Removable Storage explicitly enabled for Success and/or Failure. This configuration can be found at Windows Settings, Security Settings, Advanced Audit Policy Configuration, System Audit Policies, Object Access, Audit Removable Storage

clip_image003

Or, you have Audit Object Access Policy Success and/or Failure enabled, which implicitly enables auditing for all object access. This setting is found at Windows Settings, Security Settings, Local Policies, Audit Policy, Audit Object Access

clip_image004
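
A quick way to check whether either of these is in effect on the affected VM is auditpol from an elevated prompt (built into Windows Server 2012):

auditpol /get /subcategory:"Removable Storage"
auditpol /get /category:"Object Access"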

The Resolution:

Fantastic, we’ve identified two seemingly innocent configurations. How can we fix our problem? VMware has two KBs that suggest work-arounds: disabling the audit policy and/or disabling the HotAdd/HotPlug capability. These will indeed make the issue go away, but what if you are unable to do either of these two actions?

The recommended solution is actually to apply the hotfix in KB 2811160, which, by the way, is included in the Windows Server 2012 April 2013 update rollup. If you look closely at what’s included in the April 2013 update rollup, you’ll find KB 2811670, “Issues when the Audit object access policy is enabled on Removable Storage in Windows 8 or Windows Server 2012”. Looking through the details of that KB pretty much hits the nail on the head for our issues. (We are reaching out to VMware to have them update their KB as well.)
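
One quick way to check whether the hotfix is already on a box is Get-HotFix; keep in mind that if the fix arrived as part of an update rollup, it may be listed under the rollup’s KB number instead:

Get-HotFix -Id KB2811160   # errors if the standalone hotfix is not installed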

A Friendly Reminder:

For many of you, this might be the first time hearing about update rollups. However, regular readers of the blog (hint: you should subscribe if you haven’t already) know we covered this topic way back in May. Read “Update Rollups for Windows Server 2012 and Windows 8 Explained” by Steve Mathias. His hard work is already paying off on this. And for those of you who are proactively applying the Windows 8/Server 2012 Update Rollups, you’ve already dodged this issue, plus a couple past and future problems. So pat yourselves on the back.

If you found this post helpful please let us know in the comments. It’s what keeps this blog running. Until next time.

Mark “Another Holiday Issue Averted” Morowczynski

IPv6 for the Windows Administrator: More IPv6: Subnetting, Zones, Address Autoconfiguration, Router Advertisements and IPv4 comparisons


Hey y’all, Mark and Ray back again with more IPv6 for the Windows Administrator. So far we’ve discussed why you should care about IPv6 and some basic fundamentals of IPv6 addressing. In this third installment we’re going to discuss setting up an IPv6 addressing scheme, Zone IDs, how clients can potentially get an IPv6 address, a nice comparison of IPv4 and IPv6 differences and equivalents you can print out for your cube notes collection, and finally some additional info. So let’s get right back at it.

We’ll start with a quick summary of some basic IPv6 terminology which should help provide some clarification as we discuss some of the topics.

Additional IPv6 Terminology

 Node- An IPv6-enabled network device that can describe a host or a router.

Host- An IPv6-enabled network device that cannot forward IPv6 packets that are not explicitly addressed to itself. A host is an endpoint for IPv6 communications (either the source or destination) and drops all traffic not explicitly addressed to it.

Router- An IPv6-enabled network device that can forward IPv6 packets that are not explicitly addressed to itself. IPv6 routers also typically advertise their presence to IPv6 hosts on their attached links.

Link- One or more LANs (such as Ethernet) or WANs (such as PPP) bounded by routers. Like interfaces, links may be either physical or logical. Links can also be referred to as subnets or network segments.

Neighbors- Nodes that are connected to the same physical or logical link.

Interface- A representation of a node‘s attachment to a link. This can be a physical interface (such as a network adapter) or a logical interface (such as a tunnel interface).

 

A key thing to note is an IPv6 address identifies an interface, not a node. A node is identified by having one or more unicast IPv6 addresses assigned to one of its interfaces.

IPv6 prefixes and Subnetting

Just like IPv4, you can divide the IPv6 address space using the high-order bits that do not already have an assigned value to create subnetted address prefixes. IPv6 has so much more address space available (18,446,744,073,709,551,616 possible /64 subnet prefixes, to be a little more specific; that’s 18 quintillion, 446 quadrillion, 744 trillion, 73 billion, 709 million, 551 thousand and 616 just in case you’re counting) that there are a few options. I’m sure you get the idea, but the real point is that all those addresses create a lot more options and a lot more flexibility for creating an IPv6 addressing plan, so you may want to be thinking about how you could redesign your current IPv4 addressing plan to take advantage of some of these capabilities.

Creating an IPv6 addressing plan is somewhat analogous to creating an Active Directory OU structure. You can create a subnet plan by geographic location, having different primary subnets for each location to facilitate router optimization. You may create primary subnets by use type, such as Engineering and Accounting, which makes it easier to manage security and policies; you may use a combination of both or come up with something completely different. That’s one of the benefits of having all those additional addresses in IPv6. So let’s go into a little more detail and look at an example.

Just a quick refresher from our previous post: the concept of the host ID in IPv6 is different from IPv4. In IPv4 the host ID can be of varying length, whereas in IPv6 the address is split 50-50, with 64 bits for the subnet prefix and 64 bits for the interface ID. The first 48 bits will always be fixed for both global and unique local addresses. If it’s a global address, the first 48 bits are assigned by an ISP, for example 2001:db8:1234. If it’s a unique local address, the first 8 bits are FD, plus a random 40-bit Global ID that is assigned to a site of the organization.

clip_image002

For most organizations this will typically mean that Subnetting an IPv6 address will consist of dividing the 16 bit subnet ID portion of a global or unique local address prefix to provide for route summarizations and delegation of the remaining address space to different areas of the IPv6 intranet.

In the blog here we are just trying to provide a good overview and some background information to pique your interest and get you thinking about your IPv6 addressing plan. For some more detailed information and guidance on creating an IPv6 subnet plan, check out the article entitled “Preparing an IPv6 Addressing Plan” (March, 2011) by Sander Steffann, RIPE NCC, which was inspirational for some of these examples.

One of the first and more important steps in creating your IPv6 Addressing Plan is to decide how you want to allocate or assign the subnet bits.

OK, hang with us here, we are going to go a bit deep. Let’s look at a theoretical example. I have an assigned global address with a 48-bit prefix from my ISP, let’s say 2001:db8:1234. I have 100 locations around the world and I wish to use router optimization. I have 67 departments. What could my address plan look like?

Summary

Global Address 2001:db8:1234

100 locations around the world (Primary Subnet)

67 departments (Secondary Subnet)

How could I allocate the 16 bits of the Subnet ID for my intranet?

To allow for a minimum of 100 locations I would need 7 bits

Nearest 2^n = 128 or 2^7 - 7 bits

To allow for a minimum of 67 departments I would also need 7 bits, since 2^6 is only 64

Nearest 2^n = 128 or 2^7 - 7 bits

So I would be using a total of 14 bits out of the 16. This would make my address prefix /62 (48 + 14), with 2 bits left unused at this point.

Have we lost you? Let’s try a visual representation.

 

 

2001:db8:1234: LLLLLLL DDDDDDD UU ::/62

 

 

 

Fixed Global Address: 2001:db8:1234

LLLLLLL: 7 bits for Locations - 100 = 2^7 (128)

DDDDDDD: 7 bits for Departments - 67 = 2^7 (128)

UU: 2 bits currently unused

 

So what would an address for location 58, department 27 look like?

Global Address LLLLLLL DDDDDDD UU

2001:db8:1234 0111010 0011011 00

2001:db8:1234:746c::/62
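
If the bit-shuffling is easier to follow in code, here is a small sketch that builds that same subnet ID; the multiplications are just left-shifts by 9 and 2 bits to line the fields up as LLLLLLL DDDDDDD UU:

# Location 58, department 27 under the plan above
$location   = 58
$department = 27
$subnetId = ($location * 512) + ($department * 4)   # shift left 9 bits and 2 bits respectively
"2001:db8:1234:{0:x4}::/62" -f $subnetId             # -> 2001:db8:1234:746c::/62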

Hopefully that makes some sense. Like all things new it may take a little time to get comfortable but in no time at all it will become familiar like the IPv4 subnet masks are today.

Zone IDs

Link-local and site-local addresses can be reused (global addresses cannot). Link-local addresses can be reused on each link. Site-local addresses can be reused within a site of an organization. This capability means that link-local and site-local addresses are ambiguous. To specify the link on which the destination is located, or the site within which the destination is located, an additional identifier is required. This additional identifier is called a zone identifier (Zone ID), sometimes called a scope ID, and this is how we identify the portion of a network that has a specified scope. Zone IDs are only used for link-local addresses, since routable addresses are non-ambiguous.

The syntax for this ID is specified in RFC 4007.

The values of the zone id are defined relative to the sending host. So it is possible that different hosts might determine different zone ids for the same physical zone. As an example, host X might choose a value of 3 to represent a zone, and host Y might choose a value of 4 to represent the same link.

Windows Vista and above display the IPv6 zone ID of local addresses in the ipconfig output. For example, you might see: “Default Gateway . . . . . . . . Fe80::20a:42ff:feb0:5400%6”.

In our first IPv6 address example, “12” is the Zone ID.

FE80::d9e:bed6:4917:C7DF%12

Address Autoconfiguration

Ok Windows Admin, really pay attention to this section, you’ll see why shortly. One of the really neat things about IPv6 is that it has the ability to configure itself even without the use of DHCP! By using a process of router discovery, which involves an exchange of Router Solicitation and Router Advertisement messages, the host determines which method to use to obtain an IPv6 address, as well as the addresses of neighboring routers, additional stateless addresses, on-link prefixes, and other configuration parameters.

Included in the Router Advertisement message are flags that indicate whether an address configuration protocol (such as DHCPv6) should be used for additional configuration. The host decides which method to use based on the configuration of the Router Advertisement message. Link-local addresses are always generated regardless of any other options.

These are the four general methods by which a host obtains an IPv6 address:

· Statically configured

· Stateless Address AutoConfiguration (SLAAC)

· Stateless DHCPv6

· Stateful DHCPv6

 

Router Advertisements

IPv6 hosts are always listening for RAs. Additionally, a host will request an RA by sending a Router Solicitation when the host’s configuration changes (power-up, network configuration change). An RA is usually sent by a Layer 3 device and has specific options available. RAs control both addressing and routing on the host. The most common options are listed below, but there are several more options not covered here.

Router Advertisement Options

· Autonomous flag (A bit) – Hosts will generate an address based on the prefix in this RA if this bit is set.

· Valid Lifetime – a 32-bit number representing the length of time (in seconds) that a prefix will be used in the host’s routing table

· Managed Address Configuration flag (M bit) – Hosts will contact a DHCPv6 server to obtain an IPv6 address if this bit is set

· Other Stateful Configuration flag (O bit) – Hosts will contact a DHCPv6 server to obtain non-address configuration information if this bit is set.

This can create an “interesting” dilemma which does not occur in the IPv4 world. Suppose I have the following Router Advertisement configuration. What will happen?

Autonomous flag =1, Managed Address flag =1, Other=1, Lifetime=86,400

Answer: The host will configure TWO IPv6 addresses!

One autoconfigured, and one from DHCPv6, along with options from the DHCPv6 server. This will also generate a route table entry valid for 24 hours. So you can see that when implementing IPv6, communication and collaboration between Server Administrators and the Network Administrators becomes crucial.
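
On Windows 8/Server 2012 and later you can see exactly where each IPv6 address came from, which makes this dilemma easy to spot. The PrefixOrigin and SuffixOrigin properties show values such as RouterAdvertisement, Dhcp, or Manual:

Get-NetIPAddress -AddressFamily IPv6 |
    Select-Object InterfaceAlias, IPAddress, PrefixOrigin, SuffixOrigin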

Specific autoconfiguration behaviors of IPv6 for computers running Windows Server 2012, Windows Server 2008 R2, Windows Server 2008, Windows 8, Windows 7, or Windows Vista:

· Generate random interface IDs for non-temporary autoconfigured IPv6 addresses, including public and link-local addresses, rather than using EUI-64–based interface IDs.

· Use optimistic duplicate address detection (DAD) which means they do not wait for duplicate address detection (DAD) to complete before sending router solicitations or multicast listener discovery reports using their derived link-local addresses.

· Computers running Windows Server 2012, Windows Server 2008 R2, Windows 8, or Windows 7 attempt stateful address autoconfiguration with DHCPv6 if no router advertisements are received. Computers running Windows Server 2008 or Windows Vista do not attempt stateful address autoconfiguration with DHCPv6 if no router advertisements are received.

· Send the Router Solicitation message before performing duplicate address detection on the link-local address.

· Continue address autoconfiguration, even if the link-local address is a duplicate, upon receipt of a multicast Router Advertisement message containing unique local or global prefixes.

 

In the Field

As a Windows Admin you are probably thinking, who would configure router advertisements when we have DHCP? The most common scenario seen in the field is the network team “testing” some IPv6 stuff. They think they are only affecting routing between the network devices and not the hosts, since hosts get their IP from DHCP and that is only configured for IPv4. Then we start to see routing weirdness and AAAA records in DNS. The knee-jerk reaction is to fix the problem by unchecking the IPv6 check box we detailed in our first post (hint: don’t do that!). This probably seems far-fetched, but I have seen this happen on more than one occasion. If you do start seeing IPv6 addresses assigned and your org hasn’t rolled out IPv6 yet, go to your network team and say “Hey man, I think some of the router advertisements might be leaking into production”. This is generally a good place to start.

 

Comparison and compatibility table of some of the IPv4 and IPv6 features

IPv6 Addresses

IPv6 Unicast Address -> IPv4 Equivalent

Global Address -> Public

Local-use Address (Link-Local) -> APIPA

Unique Local Address -> Private

Specialty (unspecified, loopback) -> Multicast, Loopback, etc.

Compatibility -> n/a

 

IPv4 Address and IPv6 Address Feature Equivalents

Feature: IPv4 -> IPv6

Address length: 32 bits -> 128 bits

IPsec header support: Optional -> Required

Prioritized delivery support: Some -> Better

Fragmentation: Hosts and routers -> Hosts only

Packet size (minimum): 576 bytes -> 1280 bytes

Checksum in header: Yes -> No

Options in header: Yes -> No

Link-layer address resolution: ARP (broadcast) -> Multicast Neighbor Discovery

Multicast membership: IGMP -> Multicast Listener Discovery

Router Discovery: Optional -> Required

Uses broadcasts: Yes -> No

Configuration: Manual, DHCP -> Automatic, DHCPv6

DNS name queries: Uses A records -> Uses AAAA records

DNS reverse queries: Uses IN-ADDR.ARPA -> Uses IP6.ARPA

More Info

Well, hopefully we’ve covered enough substance to start getting you to feel a little more comfortable with IPv6. Like all new technologies it’s not magic; it just takes a little time, and a good blog of course, to understand. If you are a Premier customer we have an IPv6 workshop with tons more info and all kinds of fun labs. Let us know, or tell your TAM, and we’ll get you going. If you are more the lone wolf self-study type, we have http://technet.microsoft.com/en-us/library/gg250710(WS.10).aspx, and the IPv6 book from MS Press is quite good. Please let us know in the comments what you think and what other IPv6 info you’d like to see.

Mark “128 bit” Morowczynski and Ray “128 bit” Zabilla

Part 1 can be found here 

Part 2 can be found here

 

 

 

 

Why You Shouldn’t Disable The Task Scheduler Service in Windows 7 and Windows 8


Hello, Jeff “The Dude” Stokes here for an installment on a very important topic.  Why should I not disable the task scheduler in Windows?

Long, long ago in the annals of IT history, the Task Scheduler was a poorly understood component of Windows.  “What does it do?” We’d wonder…

Fast forward to today and now, the Task Scheduler is still a poorly understood component of Windows.  “What does it do and why can’t I disable it to be secure?” We ask…

We have heard about some changes in Vista and Windows 7 regarding the task scheduler, but really, why not disable the dang thing to be more secure or increase system performance?

Because disabling the task scheduler does not make your system more secure, nor does it increase system performance.  In fact, it makes your system less secure in Windows 8, and in Windows 7 and 8 makes performance worse, especially over time.

In Windows 7 the Task Scheduler is responsible for background health and cleaning processes, such as optimizing Prefetch and ReadyBoot. It also handles light defragmentation runs on the system.
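
If you want to see one of those maintenance tasks for yourself, schtasks works on Windows 7 and later; the path below is the built-in scheduled defragmentation task:

schtasks /query /tn "\Microsoft\Windows\Defrag\ScheduledDefrag" /fo LIST /v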

In Windows 8, it’s even more important. It optimizes the start menu…

Pic1

 

 What else?  File History is task scheduler based.

image

 

Bluetooth device cleanup (when you unpair a device)

image

 

Cleaning up Application Temporary Files as well

image

 

How about making sure the file system is healthy?  Yeah that’s a task, too.

image

 

Run RAID sets on your machine?  You’ll want task scheduler.

image

 

How about Windows Updates?

image

 

So let’s leave the Task Scheduler Service alone in our quest for security hardening and go pick on more interesting things like Anti-Virus and Data Loss Prevention kits. 

So remember: Relax, don’t do it. Don’t disable the Task Scheduler!

 For more information on the Task Scheduler see below:

Task Scheduler Changes in Windows Vista and Windows Server 2008 – Part One
http://blogs.technet.com/b/askperf/archive/2008/06/24/task-scheduler-changes-in-windows-vista-and-windows-server-2008-part-one.aspx

Task Scheduler Changes in Windows Vista and Windows Server 2008 – Part Two
http://blogs.technet.com/b/askperf/archive/2008/10/10/task-scheduler-changes-in-windows-vista-and-windows-server-2008-part-two.aspx

Task Scheduler Changes in Windows Vista and Windows Server 2008 – Part Three
http://blogs.technet.com/b/askperf/archive/2009/03/17/task-scheduler-changes-in-windows-vista-and-windows-server-2008-part-three.aspx

Two Minute Drill - Quickly test Task Scheduler
http://blogs.technet.com/b/askperf/archive/2011/06/10/two-minute-drill-quickly-test-task-scheduler.aspx

What’s New in Task Scheduler for Windows 8 & Server 2012
http://blogs.technet.com/b/askperf/archive/2013/07/05/what-s-new-in-task-scheduler-for-windows-8-amp-server-2012.aspx

 

Jeff “The Dude” Stokes

FAQ on ADFS - Part 1


Hello everyone, Jasmin here again and this time I am writing about Active Directory Federation Server (ADFS). Lately, I have been getting several questions from most of my customers and some of my peers around ADFS deployment, planning, setup, implementation etc. While addressing these questions, I realized that I was answering similar type of queries especially when it was a first time ADFS deployment effort. I have therefore created a list of common Q/A around ADFS in hopes that it would benefit those looking into federation for the first time.

What is ADFS?

ADFS helps you use single sign-on (SSO) to authenticate users to multiple web applications over the life of a single session. This is accomplished by securely sharing digital identity and rights (Claims) across security and enterprise boundaries. Some of the ADFS uses can be found here

What are the different versions of ADFS? Which one is the latest?

There are four versions of ADFS.

  • AD FS 1.0 - released with Windows Server 2003 R2 as part of the operating system and could be installed as a Windows component.
  • AD FS 1.1 - released with Windows Server 2008 and was carried into Windows Server 2008 R2. In both editions, AD FS was installed from the Server Manager as a role. There were minimal changes from AD FS 1.0 to AD FS 1.1.
  • AD FS 2.0 was released after Windows Server 2008 R2. It was released to the web and is free to download. It requires at least Windows Server 2008 SP2 to install. Two versions (x86 and x64) are available for Windows Server 2008, while only the x64 version is available for Windows Server 2008 R2.
  • ADFS 2.1 was released with Windows Server 2012 as part of the operating system and therefore can be installed as a role from Server Manager.

One thing to note is that AD FS 1.x is limited in its standards support, which includes the WS-Federation Passive Requestor Profile (browser) and SAML 1.0 tokens, while AD FS 2.0 extends standards support: it supports WS-Federation PRP, WS-Federation Active Requestor Profile, SAML 1.1/2.0 tokens, SAML 2.0 operational modes, and IdP Lite/SP Lite/eGov 1.5.

What is the benefit of installing ADFS on Windows Server 2012 versus on Windows Server 2008 R2?

In Windows Server 2012, ADFS 2.1 is part of the operating system and is installed from Server Manager as a role. Server Manager provides configuration wizard pages that perform validation checks and automatically install all the services that AD FS depends on. In Windows Server 2008 SP2 or Windows Server 2008 R2, by contrast, ADFS 2.0 must be downloaded and installed from the web, and you will also need to install update rollup 3 for Windows Server 2008 and 2008 R2, which is located here. Furthermore, with Windows Server 2012, the AD FS server role includes new cmdlets that you can use to perform PowerShell-based deployment within your federated identity installations and environments. Detailed cmdlet information can be found here. Lastly, with Windows Server 2012, AD FS can be integrated with Dynamic Access Control scenarios, allowing AD FS to consume AD DS claims that are included in Kerberos tickets as a result of domain authentication. More information on claims can be found here.
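As a quick illustration of the Windows Server 2012 experience, the role itself can also be added from PowerShell rather than Server Manager. This is a minimal sketch; the feature name is as reported by Get-WindowsFeature on Server 2012, and the farm configuration itself is still done afterwards through the configuration wizard or the AD FS cmdlets mentioned above.

# Verify the feature name, then add the AD FS role and its management tools
Get-WindowsFeature *ADFS*
Install-WindowsFeature ADFS-Federation -IncludeManagementTools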

Which AD FS configuration database store should I choose, Windows Internal Database (WID) or SQL?

The AD FS configuration database stores all the configuration data. It contains information that a Federation Service requires to identify partners, certificates, attribute stores, claims, etc. You can store this configuration data in either a Microsoft SQL Server 2005 or newer database or in the Windows Internal Database (WID) feature that is included with Windows Server 2008, Windows Server 2008 R2 and Windows Server 2012. A short comparison of the two options follows:

 

WID Advantages                                                      | WID Disadvantages
Very easy to set up and implement                                   | Supports only five federation servers in a farm
Load balancing and fault tolerance are possible if set up as a farm | SAML artifact resolution and SAML/WS-Federation token replay detection are not available
Supports multiple federation servers in a farm (limited to five)    | Not supported with more than 100 claims provider trusts or more than 100 relying party trusts

 

More info: In a farm with WID as the database, the first server in the farm acts as the primary server and hosts a read/write copy of the database. Secondary servers then replicate the configuration data inbound into their read-only databases. They are fully functional federation members and can service clients just like the primary server; they simply cannot write configuration changes to the WID, which is not something that happens every day anyway.

 

SQL Advantages                                                                    | SQL Disadvantages
Supports multiple federation servers (not subject to the WID limitation)         | Additional setup complexity; requires PowerShell to install
Load balancing and fault tolerance                                               | A SQL cluster introduces another potential point of failure
Easily scalable                                                                  | The SQL server must be performing well to service requests
SAML artifact resolution and SAML/WS-Federation token replay detection supported |

 

 

If the Primary Server in the farm is down, what happens?

Another server in the farm can be configured as the primary server. Below is the PowerShell command to run on the secondary server which you want to make primary:

Add-PsSnapin Microsoft.Adfs.PowerShell

Set-AdfsSyncProperties -Role PrimaryComputer

Once the primary federation server is set, run the following PowerShell commands on the other secondary federation servers in the farm to sync them with the new primary server:

Add-PsSnapin Microsoft.Adfs.Powershell

Set-AdfsSyncProperties -Role SecondaryComputer -PrimaryComputerName {FQDN of the Primary Federation Server}

Is it possible to move from WID to SQL at some point in the future?

Yes, moving from WID to SQL is supported. Detailed steps are documented here.

 

Is SAML artifact resolution and SAML/WS-Federation token replay detection feature required by most Relying Parties?

From my experience, most Relying Parties do not require this feature. However, there are some that do, so it would be wise to check on that before deciding on the configuration database store. If it is a requirement, SQL must be selected.

 

What is the difference between a single ADFS server versus a farm? Which one is better?

ADFS can be setup as a

  • Standalone federation server.
  • Farm Federation Server using WID
  • Farm Federation Server using SQL

A federation server farm is definitely a better option than a standalone federation server for the obvious reasons – scalability and redundancy. A standalone federation server supports only a single server and only stores configuration information in a Windows Internal Database (WID). It is easy to set up and is best for lab environments, but it lacks scalability and redundancy; moreover, you cannot add more than one server to a standalone federation server. With a federation server farm, however, you can start the farm with a single ADFS server and add more ADFS servers to the farm then or at some point in the future. I often get this question: can a federation server farm using WID function with one server? And the answer is YES! But remember that you cannot benefit from load balancing and redundancy since there is only one server in the farm. For more information on federation servers using WID or SQL, please refer to the earlier question on which database to choose.

Which type of certificates does AD FS require?

Basically you need three types of certificates.

  • Service communication certificate
    • AD FS uses this certificate to enable HTTPS, which is a requirement for traffic to and from the federation servers and federation server proxies (to secure communication). It is basically an SSL certificate that needs to be installed in IIS on each federation server and federation server proxy.
  • Token signing certificate
    • AD FS uses this certificate to digitally sign outgoing AD FS tokens. It is not used to secure data; rather, it is used to ensure the integrity of the security tokens as they pass between the federation servers and the application server via the client computer.
  • Token decrypting certificate
    • AD FS 2.0 and above has the ability to encrypt the contents of the AD FS tokens. This is in addition to having these tokens signed by the server's token signing certificate.

Where can I obtain the required certificates from?

There are several options and each have their pros and cons.

  • Server communication certificate
    • This certificate must be trusted by the client computers, so it is recommended that in a production environment this certificate be obtained from a public CA. Another alternative is to use your enterprise CA (PKI) to issue this certificate; however, you will need to ensure that it is trusted by all client computers. You may have to use Group Policy to push this certificate down. Bear in mind that if the client machines are not joined to the domain, they may not trust your internal certificate, which could result in a bad user experience such as security alert prompts when they try to access the federated resources. In your test environment you can easily use a self-signed certificate if you wish, as security is usually not a concern in a lab.

       

  • Token Signing Certificate
    • This certificate can be issued by an enterprise CA, a public CA, or by creating a self-signed certificate. The way it is installed depends on how you create the AD FS farm. All federation servers in the farm are required to use the same token signing certificate, so you can install this certificate from the CA on one federation server and export it along with the private key to the other federation servers in the farm, saving the cost of obtaining a certificate from a public CA. However, the option I personally favor is to let AD FS 2.x do what it does by default, i.e. create a self-signed certificate for signing tokens. I like this option because the maintenance is very low. The certificate is valid for one year, after which it must be renewed; however, AD FS provides automatic renewal (Automatic Certificate Rollover) for self-signed certificates before expiry, and if the relying party trust is configured for automatic federation metadata updates, the relying party will automatically sync the new public key.

       

  • Token Decrypting Certificate
    • Similar to the Token Signing Certificate, AD FS 2.x by default will use another self-signed certificate for the token decrypting/encrypting certificate and, as stated above, it also provides the capability for Automatic Certificate Rollover.

How can I check if my ADFS server is operating successfully?

Check for Event ID 100 under Applications and Service Logs | AD FS | Admin. This event verifies that the federation server was able to successfully communicate with the federation service.
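If you prefer to check from PowerShell, a quick sketch is below. The log name shown is the one used by AD FS 2.0; the exact Admin channel name varies by AD FS version, so adjust it as needed.

# Look for recent Event ID 100 entries in the AD FS Admin log (log name varies by AD FS version)
Get-WinEvent -FilterHashtable @{ LogName = 'AD FS 2.0/Admin'; Id = 100 } -MaxEvents 5 |
    Select-Object TimeCreated, Id, Message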

Is there a checklist that I can follow to setup ADFS in my environment?

ADFS 2.0: http://technet.microsoft.com/en-us/library/dd807086(v=ws.10).aspx

I also found a checklist specifically for Windows Server 2012 which is located at http://technet.microsoft.com/en-us/library/dd807086.aspx

More useful resources:

That's it for now. As I get more questions, I will create part 2 of the ADFS FAQ.

Cheers,    

Jasmin Amirali

 

 


Happy 20th Birthday Windows NT!


Hey y’all, Mark here with a quick history post. Tomorrow July 27th 2013 will be the 20th birthday of Windows NT which is quite the birthday. It can almost drink legally! Just the thought of 20 years makes greybeards like Doug “Bad Cop” Symalla spit out his morning coffee. If you happen to meet Doug have him tell you some stories from the “old days”. They are entertaining.    

For those that are interested in computer history there is a great book by G. Pascal Zachary called “Showstopper! The Breakneck Race to Create Windows NT and the Next Generation at Microsoft” Here is a list of some other computer history books we here at the blog enjoy.

 

Where Wizards Stay Up Late: The Origins Of The Internet

Dealers of Lightning

The Soul of A New Machine

Hackers: Heroes of the Computer Revolution

 

Also a quick reminder, Windows XP will reach end of support on April 8th 2014 which is just a little over 8 months away so if you haven’t started your migration I can’t think of a better time to start. Full panic mode should be just over the horizon. If you are a premier support customer contact your TAM to get any help you need. Don’t forget Windows Server 2003 is not far behind ending support on July 14th 2015.

If we missed your favorite book or you agree with those above let us know in the comments. Until next time.

Mark “the librarian” Morowczynski 

Upgrade Active Directory to Windows Server 2012 - Phase 2: Build Your Plan


**Be sure to read the entire series on the AD Upgrade**

----------------------------------------------------------------------------------------------------------------

Here’s the next step in our AD Upgrade Series – building your plan. Below is your template, just fill in the blanks. Don’t forget that the assessment data you’ve already collected will fill in a lot of the blanks.

Active Directory Health

The health of the Active Directory environment is critical to the overall success of the upgrade. Ideally, you should be monitoring the health of AD real-time 24x7x365. Periodically you may wish to do a more thorough analysis of the health of the environment with an Active Directory Risk Assessment (AD RaaS).

⎕  Check health now, and remediate any serious health issues before proceeding (see the spot-check commands after this list).

o AD Replication

o SYSVOL Replication

o DC Performance (Memory, CPU, Disk)

o Time Synchronization

⎕  Keep checking health as you proceed to make sure you don’t accidentally break something.
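For the checklist above, a few of the standard built-in tools can give you a quick read on health before you dig deeper. Run these from an elevated PowerShell prompt on a DC or on a management workstation with the AD admin tools installed:

# AD replication status and failure counts for every DC
repadmin /replsummary

# SYSVOL readiness and DC advertising checks
dcdiag /test:sysvolcheck /test:advertising

# Time offset of each DC relative to the domain hierarchy
w32tm /monitor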

Proactively Addressing Known Issues

Install these recommended hotfixes for AD-specific issues on new and existing DCs to proactively avoid known issues. They may or may not affect you; however, it’s good practice to be proactive and avoid issues.

On your Windows Server 2003 DCs install these hotfixes:

⎕  RODC Compatibility Pack [KB 944043]

⎕  AD-Integrated Secure DDNS Bloats AD Database [KB 2548145]

On your Windows Server 2008/2008R2 DCs install these hotfixes:

⎕  KDC Update [KB 976442]

⎕  Protect Against RID Pool Exhaustion [KB 2618669]

⎕  AD-Integrated Secure DDNS Bloats AD Database [KB 2548145]

On your Windows Server 2012 Server DCs install these hotfixes:

⎕  All Released Update Rollups

⎕  Be aware, if you host Virtual DCs on Server 2012 Hyper-V, you’ll definitely want to Install the July 2013 Update Rollup on your Hyper-V hosts.

Active Directory Design and Architecture

Upgrading the Active Directory infrastructure is an ideal time for addressing any architectural changes to the forest. This could include changes to sites, site-links, the number of domain controllers in the forest, the sizing of domain controllers or the placement of domain controllers. Prior to upgrading the Active Directory the forest architecture should be reviewed, the desired end-state should be specified and upgrade plans should include provisions to reach the desired state.

⎕  Look at your current AD Architecture. Print out AD Topology Diagram.

⎕  Document your future (desired) Architecture including any changes to:

o Site/site-links

o Domain controller placement

o GC placement

o DNS Server placement

o DNS configuration

Domain Controller Build

Once the architecture/design has been finalized, the build for new domain controllers should be determined. This should include the following considerations:

⎕  Do you have a base OS build for Server 2012?

⎕  What amount of memory/CPU/disk is required to meet the performance profile of the domain controller? Look here for a detailed approach to capacity planning.

⎕  Will virtual domain controllers be deployed? Be sure to review best practices.

⎕  Will RODCs be deployed? Understand the differences between RODCs and RWDCs.

⎕  Have you considered locating DCs in Azure?

⎕  Do you have non-default configurations that need to be carried forward? How will you do so, configuration in the build? GPOs?

Compatibility Testing

The first step towards compatibility testing is generating a list of applications that depend upon Active Directory. While the information may never be perfect, it should include at least those applications upon which the business depends. For each of the dependents, determine if the application/client is compatible with the new OS version on the new Domain Controllers. This could be as simple as contacting the application vendor for a support statement. For other applications, this may require testing in a lab environment.

For Active Directory dependents, make note if the application/client automatically discovers domain controllers, or if it contains static configuration on domain controllers (using hostnames or IP addresses). For those dependents that are statically configured to find domain controllers, re-configuration may be required when new domain controllers are deployed and old domain controllers are retired.

⎕  List of Known AD Dependents. For Each:

o Is it compatible with New Version of AD/New OS of Domain Controllers?

o Can/will you test?

o How does the dependent discover Domain Controllers? If manually configured make a note so you can manually change it when new DCs are deployed.

Address Changes/New OS default configurations

With respect to any configuration changes (whether they are introduced by you or the new OS defaults), consideration should be made to implementing these changes prior to the introduction of new domain controllers. For example, configure your existing domain controllers with these new configurations to ensure the changes will not negatively impact the environment. This will allow the specific changes and pace of changes to be easily controlled. For specifics of these changes, see the table of OS changes in our previous blog.

Consider Implementing These Configuration Changes (if necessary) on Existing DCs Before the Upgrade

⎕  Disable the Storage of LMHash

⎕  Turn on SMB signing

⎕  Disable the Computer Browser Service

⎕  Increment LMCompatibilityLevel to (at least) 3.

⎕  Enable DFS Site-Costed Referrals

⎕  Remove NullSessionShares and trim NullSessionPipes

⎕  Enable EDNS0

⎕  Limit NSPI Connections (requires 2008 DCs or higher)

⎕  Disable NT4Crypto (requires 2008 DCs, or higher)

⎕  Disable DES encryption (requires 2008 DCs, or higher)

⎕  Adopt new RPC dynamic port Range

Database Upgrade (ADPREP)

One of the first milestones in the upgrade process is upgrading the database using what’s known as ADPREP. This is required before any new DCs (with the new OS version) are deployed. The ADPREP process historically was a stand-alone executable, but has recently been integrated into the domain controller promotion process (as of Windows Server 2012). Now you can either run ADPREP by itself, or as part of promoting the first DC. Regardless of the option you choose, upgrading the database is irreversible. Thus, it is strongly recommended to test the ADPREP process and to ensure there is a documented and tested Active Directory forest-recovery plan in place. This will ensure a smooth production AD database upgrade.

⎕  Do you have a Forest Recovery Plan?

⎕  Is it (recently) tested?

⎕  Have you tested ADPREP/the database upgrade in your lab and/or recovery environment?
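If you choose to run ADPREP by hand rather than letting the Windows Server 2012 promotion process handle it, the tool is on the installation media. A minimal sketch (run from an elevated prompt with the appropriate Schema Admins/Enterprise Admins/Domain Admins rights):

# From \support\adprep on the Windows Server 2012 media
adprep /forestprep            # extends the schema; run once per forest
adprep /domainprep /gpprep    # run once in each domain
adprep /rodcprep              # only needed if you will deploy RODCs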

Deploying New Domain Controllers

The next milestone in the upgrade process is the deployment of domain controllers running the new operating system. Prior to deploying the first new domain controller, considerations must be made to determine where, how many and how quickly the new domain controllers will be deployed. Also, administrators must be prepared to manage the new operating system. Additionally, decisions must be made about when applications/clients with static configurations will be (re)targeted to the new domain controller(s). Monitoring should be in place to discover any problems with health or client compatibility. Finally, don’t forget that FSMO roles will need to be moved to the new DCs and forest time synchronization configured to follow the new forest root PDC.

⎕  Decide sequencing for introduction of new DCs.

⎕  When/how will you (re)target applications/clients that statically find DCs?

⎕  When/how will you move the FSMO roles to the new DCs?

⎕  Remember to keep checking health during these changes.
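Moving the FSMO roles mentioned above can be scripted with the Active Directory module on Windows Server 2012; a hedged sketch follows (the target DC name is an example):

# Move all five FSMO roles to the new Windows Server 2012 DC
Move-ADDirectoryServerOperationMasterRole -Identity "NEW-DC01" `
    -OperationMasterRole SchemaMaster, DomainNamingMaster, PDCEmulator, RIDMaster, InfrastructureMaster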

Retiring Domain Controllers

As new domain controllers are deployed, old domain controllers will be retired. To avoid potential issues care must be taken to ensure a smooth transition. This will include, for example, migrating other services that are hosted on existing domain controllers to alternate locations, and re-targeting clients/applications that are manually configured to use specific domain controllers.

⎕  Which applications/services (DHCP, IAS/RADIUS, etc.) need to be migrated from your existing DCs?

⎕  Do you need to retain IP addresses and/or hostnames? If so, when/how will you move these to the new DCs?

⎕ Remember to keep checking health during these changes.

Upgrading Domain and Forest Functional Levels

Once all domain controllers in a domain/forest are running a new OS, the domain or forest functional level can be raised. Prior to doing so, the domain/forest recovery plan should be reviewed and re-tested. Once functional levels have been raised, new features may be implemented such as DFSR for SYSVOL replication or the Active Directory Recycle Bin.

⎕  Upgrade the Domain Functional Level/Forest Functional Level.

⎕  Migrate SYSVOL replication to DFSR.

⎕  Enable the Recycle Bin

⎕  Use Fine-Grained Password Policies.
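Once every DC is on the new OS, the functional level changes and the optional features above can be handled from PowerShell as well. A minimal sketch, using contoso.com as a placeholder name:

# Raise the domain and forest functional levels
Set-ADDomainMode -Identity contoso.com -DomainMode Windows2012Domain
Set-ADForestMode -Identity contoso.com -ForestMode Windows2012Forest

# Enable the AD Recycle Bin (this one cannot be turned off again once enabled)
Enable-ADOptionalFeature 'Recycle Bin Feature' -Scope ForestOrConfigurationSet -Target contoso.com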

How to Determine if Smart Card Authentication Provider Was Used


Hey folks, Keith Brewer here to discuss how to determine how a user has authenticated. Recently I was onsite with a Microsoft Premier Customer and they asked if there was a way for them to determine if a user had used username and password or their issued smart card for logon.

Problem:

IT Organization has been provisioning smart cards for all users but not enforcing its use. The IT organization would like to get a better picture if users are actually using their smartcards for authentication purposes prior to implementing smart card enforcement options.

Solution:

A fellow PFE Fabian Müller has written on this topic on TechNet WIKI here:

http://social.technet.microsoft.com/wiki/contents/articles/11844.find-out-if-a-smart-card-was-used-for-logon.aspx

On Fabian’s WIKI page you will find more information on the different approaches but I will summarize them here:

Server Side:

Using server-side (Domain Controller) auditing alongside centrally managed storage of events, such as System Center Operations Manager Audit Collection Services (ACS). ACS together with System Center Operations Manager 2012 can be deployed to gather, evaluate and report on the occurrences of events specific to user logon.

Client Side:

Authentication Mechanism Assurance (AMA) is an added capability in Windows Server 2008 R2 AD DS that you can use when the domain functional level is set to Windows Server 2008 R2. When it is enabled, authentication mechanism assurance adds an administrator-designated global group membership to a user’s Kerberos token when the user’s credentials are authenticated during logon using a certificate-based logon method.

The last alternative approach discussed was the “quick-and-dirty” approach leveraging data stored within registry on the client machine.

In deploying an enterprise solution I would generally lean towards the server-side solution, as this provides authoritative information on the logon from the domain controller where the logon was processed. However, to implement it effectively you would need to be logging the appropriate events on all domain controllers and centrally storing all the events for analysis. If you do not meet these requirements, then a client-side approach may be warranted.

Heading down the client-side path, the preferred method would be leveraging Authentication Mechanism Assurance (AMA). However, this requires that AMA be set up and configured.

The above 2 methods report with certainty that a Smart Card was used for logon.

The event targeted by the server-side (Domain Controller) solution will identify that PKINIT was used for logon, and as mentioned on the WIKI, currently the only built-in logon method that uses PKINIT is smart card logon.

The AMA method would identify a group in the user’s token that would only be present if the user authenticated with the configured certificate.

If you cannot currently meet these requirements then as an alternative approach we can try to determine the provider used on the client side via the local machine registry.

The challenge with these methods is that the provider information is only written at logon. For example, the provider information in this area of the registry is not updated when a user locks/unlocks their workstation.

The UserTile approach works very well for client machines running Windows 8 or later operating systems. The default behavior of Windows 8 and later is to present the user with the same credential provider used during logon on subsequent attempts (unlock); this is what is stored in the UserTile branch. We can use this information to determine what provider was used the last time the user logged on, and it remains accurate even if a user changes credential providers during an unlock operation, because the new provider is then stored in the UserTile branch.

However this customer’s client machines were mostly Windows 7 and administrators wanted to capture user’s credential providers utilizing terminal servers on both Windows Server 2008 and Windows Server 2008 R2. As such the solution could not consistently rely on the presence of the UserTile registry branch.

Fabian did show us the SessionData sub key of the LOGONUI registry branch. This was crucial to gathering the information we needed both for Windows 7 clients but also for remote logon (Remote Desktop Services).

clip_image002

I have uploaded the following script to the script gallery on TechNet:

http://gallery.technet.microsoft.com/Detect-Authentication-09b0a749

This script gathers the current session of the logged on user to identify the correct sub key in the LOGONUI\SessionData branch.

HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\SessionData
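If you just want to eyeball this interactively, the sketch below walks the SessionData subkeys and resolves any recorded provider GUID to its friendly name. The value name used here (LoggedOnProvider) is an assumption that can vary by OS version; the script linked above is the authoritative version.

$sessionRoot = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\LogonUI\SessionData'
Get-ChildItem $sessionRoot | ForEach-Object {
    # Value name assumed to be LoggedOnProvider; check the key on your OS version
    $guid = (Get-ItemProperty $_.PSPath).LoggedOnProvider
    if ($guid) {
        # Resolve the GUID against the registered credential providers to get a friendly name
        $provKey = "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\Credential Providers\$guid"
        $name = (Get-ItemProperty $provKey -ErrorAction SilentlyContinue).'(default)'
        "Session {0}: {1} ({2})" -f $_.PSChildName, $guid, $name
    }
}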

On a Windows 8 Client:

If the user has logged on using the default Smart Card Authentication Provider the output should show:

clip_image004

If the user has logged on using the default UserName/Password Authentication Provider the output should show:

clip_image006

On Windows Server 2008 R2

If the user has logged on using the default Smart Card Authentication Provider the output should show:

clip_image004[1]

If the user has logged on using the default UserName/Password Authentication Provider the output should show:

clip_image008

Most of the solutions mentioned in this article focus on the use of native Windows credential providers or other Microsoft services. If 3rd-party products are included in the smart card implementation, your mileage may vary.

So to summarize:

Preferred Method:

Centralized Storage of Security Logon Events from all Domain Controllers

Leveraging Authentication Mechanism Assurance

Alternative Solutions:

If gathering from Windows 8 and Windows Server 2012

UserTile Registry Query

If Windows Vista, Windows Server 2008, Windows 7, Windows Server 2008 R2

SessionData Registry Query

Hope This Helps

Keith “Authentication Mecha What??” Brewer

Friday Mailbag: Best practices for DFS-R on Domain Controllers


Greg Jaworski here again…After a lengthy conversation with one of our readers around migrating SYSVOL to DFS-R and some confusion around the various KBs I decided it was worth blogging some best practices and Frequently Asked Questions around SYSVOL and DFS-R.

1. If you have not yet migrated to DFS-R, make sure you have the latest version of robocopy. This applies to Windows Server 2008 R2 RTM and SP1; if you are running Windows Server 2012 then robocopy is already up to date. This ensures your files don’t all end up conflicted, forcing everything to be replicated again. If you missed or forgot this hotfix you won’t lose data; you will essentially just have the files twice, which causes extra replication and storage use. This hotfix requires a reboot due to an update of ntfs.sys: http://support.microsoft.com/kb/2639043.

2. Update to the latest DFS-R binaries. You can do this prior to the migration since DFS-R will already be running on the DC. This is also a good time to check the status of the service and confirm it wasn’t removed or stopped as part of a server hardening procedure. The list of latest binaries is at http://support.microsoft.com/kb/968429.

a. Install http://support.microsoft.com/kb/2780453 and enable content freshness protection on Windows Server 2008 R2 DCs. Also see http://blogs.technet.com/b/askds/archive/2009/11/18/implementing-content-freshness-protection-in-dfsr.aspx.

b. On Windows Server 2008 R2, install http://support.microsoft.com/kb/2663685 and then enable DFS-R autorecovery as outlined in http://support.microsoft.com/kb/2846759. For Windows Server 2012 you just need to enable autorecovery. It is the usual double negative, so the value should be changed from 1 to 0; restart the DFS-R service for the change to take effect (see the sketch after this list). There is conflicting information between these two KB articles; however, after further review we recommend that autorecovery be enabled for domain controllers. For file server workloads we recommend that it be disabled.

3. Start the SYSVOL migration and be patient. DFS-R only polls AD once every 60 minutes and that plus replication means it will take some time for DCs to complete each step. We have lots of great blogs and documentation on that procedure so I won’t repeat that here. Please see the references section below for those.
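For items 2a and 2b above, here is a hedged sketch of the two settings run from an elevated PowerShell prompt; the WMI namespace and registry value are the ones described in the referenced KB articles and AskDS post, so double-check them against those sources for your OS version:

# Content freshness protection: reject replication of content older than 60 days
wmic.exe /namespace:\\root\microsoftdfs path DfsrMachineConfig set MaxOfflineTimeInDays=60

# DFS-R autorecovery on domain controllers: 0 = autorecovery enabled (the double negative mentioned above)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\DFSR\Parameters" /v StopReplicationOnAutoRecovery /t REG_DWORD /d 0 /f

# Restart the DFS Replication service for the change to take effect
net stop dfsr
net start dfsr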

References:

968429 List of currently available hotfixes for Distributed File System (DFS) technologies in Windows Server 2008 and in Windows Server 2008 R2

http://support.microsoft.com/kb/968429/EN-US

2639043 A robocopy command updates DACLs incorrectly in Windows 7 or in Windows Server 2008 R2

http://support.microsoft.com/kb/2639043/EN-US

2780453 Event ID 4114 and Event ID 4008 are logged in the DFS Replication log in Windows Server 2008 R2

http://support.microsoft.com/kb/2780453/EN-US

2846759 DFSR Event ID 2213 is logged on Windows Server 2008 R2 and Windows Server 2012

http://support.microsoft.com/kb/2846759/EN-US

Implementing Content Freshness protection in DFSR

http://blogs.technet.com/b/askds/archive/2009/11/18/implementing-content-freshness-protection-in-dfsr.aspx

SYSVOL Replication Migration Guide: FRS to DFS Replication

http://technet.microsoft.com/en-us/library/dd640019(v=WS.10).aspx

DFSR SYSVOL Migration FAQ: Useful trivia that may save your follicles

http://blogs.technet.com/b/askds/archive/2009/01/05/dfsr-sysvol-migration-faq-useful-trivia-that-may-save-your-follicles.aspx

Until next time….

Greg “Good Cop” Jaworski

Becoming an Xperf Xpert Part 6: RIP Xperf. Time to Learn Windows Performance Analyzer!


Hey y’all, Mark here with a post we should have written a long time ago. I obviously will blame someone else. It’s probably Tom Moser’s fault. Anyway, for those that have been fighting the good fight against SBSL and reading our blog, you are probably very familiar with the tool Xperf. We’ve done five posts on that topic, which can be found here. Xperf is a great tool but it had some areas to improve. Then James Klepikow showed you how to use the new and improved Windows Performance Recorder to capture a trace. So long to remembering a super-long command when you could just check some boxes, hit Start and have your data. Then you opened your data in Windows Performance Analyzer (WPA) and it looked like this.

 

image

 

Then you had no idea what to do next. So you probably moved your trace file back to a system that had Xperf on it and pretended like nobody saw it. You are not alone. I too felt the same way. Some of us PFEs that used WPA hit the ground running with this thing and never looked back. Some did not. I did not. I hit the ground and left a Wile E. Coyote hole shaped like me. Fear not, I have survived the fall and will show you how you probably did things in Xperf and now how to do them in WPA. We’ll have you moving like the Road Runner in no time. Let’s get to a few quick tips.

 

Some Basics and Process Lifetimes

The first thing you probably noticed is our graphs are all on the left. There is no more default view you are used to seeing. As our text indicates drag the graph you want to investigate to the right to start getting more info from that graph. You can order the graphs in the main window any way you want! Here I’ve selected our old friend Process Lifetimes and dragged him over.

clip_image002

A few things to notice. First, the time bar goes across the bottom for all charts; in Xperf, each chart duplicated the same time data. Second, this looks very similar to how the Xperf process lifetimes view looked, but cleaner. I know what you are thinking though. “Hey man, I tried to right-click and pick ‘Select Summary Table’ so I could sort and move columns around and it doesn’t exist. I used that all the time, what gives?” You are correct, you cannot do that. There is a new way to accomplish this though. In the upper right corner there are a few different views you can select. Click the left-most one.

 

image

 

There is your same view but even better. As you switch the columns around, the data above will change how it’s displayed and even highlight the graph as you work. For example, I’ve sorted my table by Start Time and picked the process winlogon.exe. The upper data set changed from the default (Lifetime type) to Start Time to reflect this. It also highlighted the time when this process is running and shows exactly where it starts and ends in the graph, all automagically.

 

image

 

This is obviously awesome.

 

Finding Slow Services

We’ll use the same techniques we used in Xperf, look for services that are long running. To start with, the services graph is on the left, grab and drag it over. It’s located under System Activity.

 

image

 

Now I know what you are thinking, “Hey man, I used to have way more services starting up than this, where did they all go?” They are actually being grouped by service group, which is very similar to Xperf where they would show up as [Group: Name]. If they don’t belong to a service group they are now listed under Group: None. To see them all, expand Group: None and then continue your hunt. Here we can see sftlist is taking a long time to start. We can also see where that is affecting the boot time in the boot phases graph that we’ve also added.

 

image

All the highlighting across all charts happens automatically. This is also very useful and sweet.

 

Finding Disk Usage by Process

This one is a little bit trickier. First start by taking where it says Storage and dragging it to the right. It should look similar to this.

 

image

 

Then you’ll want to change our view to display both graph and table like we did before. Click the left-most view button. Then take the Process column and drag it all the way to the left. The table will then sort by it and the graph will update as well. I’ve clicked on the System process, which gives a tooltip and highlights the process on all charts and tables.

 

image

 

 

You can clearly see which process is taking the most disk time and where on the timeline it is most active. Don’t forget you can continue to drag and add columns next to the yellow bar to keep changing how your view is sorted. Let us know in the comments what you think and look for more posts in the future on how to fully utilize Windows Performance Analyzer.

Mark “beep, beep” Morowczynski

Important Announcement: AD FS 2.0 and MS13-066


 

Hey y’all, Mark here with a quick announcement. There have been reports of customers running into some ADFS issues after applying this past Tuesday’s patches, specifically MS13-066. Our partners in crime over at AskDS have some more info about it and are knee-deep in the weeds working to get this resolved. Keep an eye over there for more info as it develops.

 

Mark “Hear ye hear ye (rings bell) “ Morowczynski

How To Setup Your Own Direct Access Lab With Windows Server 2012


Hello there! Welcome to this edition of the Ask PFEPlat Blog. I’m Tom Daniels with the PFE team, here to show you how to set up a basic DirectAccess server configuration. The instructions below will get you set up to allow Windows 8 clients to connect to your new DirectAccess server. It’s possible to get Windows 7 clients to connect to a Windows Server 2012 DirectAccess server, but there are a few more steps and we’ll cover them another time. First we are going to go through some checklist items you should cover with any DirectAccess install, starting below.

I wanted to build a running list of pre-setup checklist items you will want to do with every DirectAccess install. First and foremost you are going to need a licensed copy of Windows Server 2012 installed. You can choose either Windows Server 2012 Standard or Datacenter Edition; both have the exact same DirectAccess technical feature set. Once you’ve got the OS installed, the next step is to add the Remote Access role. This is the piece that’s going to provide the base components for us to configure DirectAccess later. Go into Add Roles and Features and check the Remote Access role as shown below:

clip_image002

After you select the Role, it will prompt you to install some additional components which you can just select “Add features” to continue :

clip_image003
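(As an aside, if you prefer PowerShell to the wizard, the same role can be added from the console. A minimal sketch; the feature names are as reported by Get-WindowsFeature on Windows Server 2012.)

# Confirm the feature names, then add the DirectAccess/VPN role service and its management tools
Get-WindowsFeature RemoteAccess, DirectAccess-VPN
Install-WindowsFeature DirectAccess-VPN -IncludeManagementTools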

At this point you can keep hitting Next until the Install option becomes available. This will install all the Remote Access components needed to get started with DirectAccess. After these are installed, it’s very important to ensure you download all available Windows Updates for the OS. Not only do we release security updates each month, but starting with Windows 8 and Windows Server 2012 we have also been releasing monthly reliability updates that include updates for many OS components, including DirectAccess. You can refer to the following article for more information:

http://blogs.technet.com/b/askpfeplat/archive/2013/05/13/update-rollups-for-windows-server-2012-and-windows-8-explained.aspx

We release these every single month and it’s very important to include them in your patch installs for Windows 2012 and Windows 8 systems. When building a new DirectAccess server, grab all of the monthly updates as part of the build process.

Once you have your new Windows Server 2012 server fully patched and the Remote Access role installed, there is one final list of DirectAccess server-related hotfixes to grab to avoid hitting known issues with the DirectAccess setup wizards. I would recommend downloading and installing every single one of these hotfixes for any DirectAccess install:

http://support.microsoft.com/kb/2782560
http://support.microsoft.com/kb/2788525
http://support.microsoft.com/kb/2836232
http://support.microsoft.com/kb/2859347
http://support.microsoft.com/kb/2845152
http://support.microsoft.com/kb/2844033
http://support.microsoft.com/kb/2855269

Once you get all the Windows Updates and list of hotfixes installed above, we can begin the basic setup for your new DirectAccess server. Let’s start by opening up the Remote Access management snap-in and then selecting the “Run the Getting Started Wizard” as shown below :

clip_image005

The next option you are presented with asks if you want to run this Remote Access server as a combination DirectAccess & VPN server, just a DirectAccess server, or just a VPN server :

clip_image007

It’s entirely possible to run this server as your central Remote Access solution providing DirectAccess for your domain joined Windows 7 & 8 machines while allowing VPN for other devices. In this scenario, we are just going to cover a DirectAccess deployment only so select option two (Deploy DirectAccess only). After you select your option, the setup wizard will analyze the OS configuration, network stack, and other prerequisites to ensure the server is ready to configure DirectAccess.

The next screen that gets presented will ask you about the network configuration you would like to use with DirectAccess :

clip_image008

It will ask if you want to configure the server on the edge (if your external-facing network card has a public IPv4 address); the second option is to configure the server behind an edge device (if the external-facing network card has a NATed IPv4 address); and the third option is to use a single network card behind the edge. Select whichever network profile best represents the server network configuration. You will also have to either create an external DNS entry and enter it in the box at the bottom, or enter the Internet-facing IPv4 address clients will use to connect.

The final screen presented will give you a chance to review the configuration settings before applying them. I highly recommend you click on the “here” text that’s highlighted in blue:

clip_image009

There are a couple of important items to review. The first is the names of the GPOs that will be created. Two GPOs get created at the root of your domain by default. The first one is called “DirectAccess Server Settings” by default. This new GPO will be linked to the root of your domain but will use security filtering to apply only to the DirectAccess server computer object directly. This GPO has critical settings for the DirectAccess server itself and always needs to be applied.

The second GPO that gets created is called “DirectAccess Client Settings”. Just like the name mentions, this GPO will be linked to the root of the domain but again we use security filtering to scope the GPO to your DirectAccess clients.

Important note is that you can change the name of the GPOs that get created only during creation in this screen. Moving forward these will be the permanent names of the GPOs so feel free to change them to suit your environment at this time.

After reviewing the GPO names, the second item to pay attention to is the Remote Clients section, which includes the AD security group that will be used to security-filter the “DirectAccess Client Settings” GPO. The out-of-the-box default is to apply the DirectAccess client GPO to all Domain Computers that are mobile-class hardware (we use a WMI filter to determine if a machine is a mobile computer). I would HIGHLY advise changing the scope to a different security group. Best practice is to create a new security group in AD and use it as your DirectAccess Remote Clients scope. You will just need to remember to add new DirectAccess clients to this AD security group when you want to push out DirectAccess settings. Be sure you add computer accounts to this newly created AD security group, not user accounts, since DirectAccess GPO settings are computer-specific:

clip_image011
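If you want to create and populate that security group from PowerShell, a hedged sketch is below (the group name, OU path and computer names are examples only):

# Create the group that will scope the "DirectAccess Client Settings" GPO
New-ADGroup -Name "DirectAccess Clients" -GroupScope Global -GroupCategory Security -Path "OU=Groups,DC=contoso,DC=com"

# Add computer accounts (not user accounts) that should receive the DirectAccess client settings
Add-ADGroupMember -Identity "DirectAccess Clients" -Members (Get-ADComputer LAPTOP01), (Get-ADComputer LAPTOP02)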

Now you can hit the finish button to create the GPOs and finalize the DirectAccess server and clients. A progress screen will pop up and give you the current status. You can click on the “more details” section to see what’s happening under the covers as shown here :

clip_image012

Make sure this finishes up all green and you will be set! One final fun fact about this progress screen is that you can right click on the bottom pane and expose an option called “copy script” :

clip_image013

This will actually give you the exact PowerShell command that was run to configure DirectAccess!

clip_image015

This is great in case you ever need to set up DirectAccess again quickly using PowerShell. It’s also possible to run DirectAccess on Server Core, and in that case this would be the only way to configure a new DirectAccess server.

Now you will need to open up TCP/443 on your edge firewall to the DirectAccess server, and then you should be ready to have your Windows 8 DirectAccess clients connect. We walked you through using the quick setup wizard, which is great for a quick install for Windows 8 DirectAccess clients only. This is fine for a lab or a small pilot, but I would caution against using it for a production install of DirectAccess. The full setup wizard is much better suited for production as it asks many more questions needed for a proper install.

Hopefully these steps will get you started with DirectAccess. For more in-depth articles you can refer to my DirectAccess blog at www.DirectAccessGuide.com

Tom “Mr DirectAccess” Daniels


Finding User Groups, Finding Speakers and Project Aware


Hey y’all, Mark here again with some info I think our readers will want to get involved with, and one that is near and dear to my heart: IT user groups and the community as a whole. I know I’m a sap like that. Back before I was a “blue badge” (aka an MS employee) I used to attend the Chicago Windows Users Group (CWUG) each month. It was great. I really enjoyed it. I’d get to meet fellow people in the community and, best of all, get some good free info from Microsoft speakers. It could be a PFE, an evangelist or even someone from the product group. There would be demos, I could ask questions and sometimes there would even be some swag. I mean, I’ll never wear those t-shirts in public, and sometimes my brother takes them or friends of mine ‘borrow’ them never to be returned, but that’s not the point. Back before it was called hoarding it was called collecting, and I get that. So that was an added bonus. Now I know what you are thinking, “Hey man, I’d join a user group if there were any near me, how can I find these?” Glad you asked, but first...

 

Project Aware

There is a lesser-known initiative (hence this blog post) at Microsoft called Project Aware. Its goal is pretty simple: connecting Microsoft speakers with user groups, communities and events. This is great! It allows me to search for user groups and allows you to search for Microsoft speakers who are willing to participate. For more info and a quick video, check out our own PFE Eric Harlan’s blog.

 

Finding a User Group

It’s actually really easy. Go to https://www.technicalcommunity.com and register. Eric’s video has the goods on this if you get stuck, so I won’t repeat it all here. Once in, there is a nice big button that says “Communities”. It will show you the communities in your area. If you want to do a bigger search, click “Communities Worldwide”. Below I searched for Chicago and it displays all the groups in that area.

clip_image002

(you can see my apartment from here)

 

Finding a Speaker

Let’s say you have the first part figured out and you want to get someone to speak at your next user group meeting/conference/sweet 16 party. The last one is not recommended. Just click the huge button that says “Speakers”. Input your search criteria and see who pops up. Then you are able to reach out. As you can see, we have some members of AskPFEPlat who are willing to be speakers…

 

image

 

 

You get the point. Search for whatever your group needs or is interested in and reach out to that speaker. If you have some great conference and are looking for your dream speaker you’ll probably find them here. No middle man to go through, no fuss, you can get connected to that speaker right there.

 

Community

I started this post talking about how much I enjoyed being part of my local tech community. Now that I frequently travel, I’m always looking for new groups to join while I’m in town. I’ve personally reached out to several groups that I will be near while traveling to see where I can get involved. If you are a member of the Topeka, Kansas PC Users Club, I’ll see you Sept 5th. I know I’m not the only PFE doing this, and some of our evangelists like Joey Snow have been doing this for some time. If you are not involved in these things you are truly missing out. Finally, don’t forget we here at this blog are also a community; reach out to us through the comments, our Twitter account @PFEPlatforms, and the individual accounts many of us have listed below. Until next time….

 

Mark Morowczynski - @markmorow - Chicago, IL

Jeff Stokes - @stokesmsft - Atlanta, GA

Rick Sheikh - @ricksheikh - Chicago, IL

Greg Jaworski - @gjaworski - Chicago, IL

Tom Moser - @Milt0r - Metro Detroit

Keith Brewer - @msft_KBrewer - Washington, DC

Charity Shelbourne - @PlatformsGal - DFW, TX

Roger Osborne - @RogOsb - Washington, DC

Victor Zapata - @viczaptx - DFW, TX

Jasmin Hashmani - @jasminATX - DFW, TX

 

Mark ‘Coming to a city near you’ Morowczynski

Common Mistakes When Troubleshooting Critical System Issues


Hey y’all Mark here again. A lot of the content we provide here on this blog is to keep you guys ahead of potential issues that can arise and also explain how things work in the field. Doing education like this gives you the ability to investigate "possible hiccups" before they become "giant eruptions of sleepless agony" in your environment. We here in PFE call that type of work “Proactive”.  The vast majority of the engagements we provide are proactive in nature. Our famous RAPs and now RaaS are pretty much focused on this. 

Sometimes, though, things break and PFE is here to help with that, too. This type of work is classified as “Reactive.” Sometimes, these issues are small and sometimes they are the huge problems. When they fall into the "huge problems" category, we have a name for them - you've likely heard of them - we call them Critical Situations or “CritSits.” Usually we are talking massive, "business down" outages affecting company-wide or highly critical systems. Think along the lines of a major cluster outage that won’t come online, nobody can login, extremely slow performance on a system that is affecting the business, etc. You get the picture. While you are working with our amazing support folks, a PFE will sometimes get dispatched to come help if the customer requests it. Recently a few of us here on the blog worked a CritSit and thought it might be a good idea to document some common mistakes that take place in these types of situations. So without further ado….

 

Stay Calm

I know this is easier said than done, especially when your environment is down and management is asking you for a status update (I’ll get to those) every 20-30 seconds, but this is not the time to let things fall apart. When things get tough, stiffen up. Look at the floor and take a few deep breaths. Things can be made much, much worse by a quick, rash decision. These types of decisions tend to get made closer to when an SLA is about to be missed. I’ve made a non-scientific graph to illustrate this:

 

image

 

The red line is the SLA you need to meet before stuff gets seriously bad, or the time the bar closes. The blue line is the likelihood you’ll make a bad decision and everything gets worse. All joking aside, I’ve heard some pretty off-the-wall suggestions when things get desperate. “Let’s bounce the data center.” Yikes. How about we don’t do that. Management will typically push harder to do something as you get closer. Remember: stay calm. The horse is out of the barn; let’s not have him run clear across the continent by doing something rash.

 

Bring all parties to the table

This is not usually the best time for in-fighting or pointing the index or middle finger. Often, though, this is when you see it the most. Right now you are in a pickle, and the faster this gets resolved the better it will be for everyone involved, period. Make sure all the teams who own the impacted systems are represented and available to troubleshoot the issue, including your related vendors. You’d be surprised how many times we just sit everyone down, describe the issue, and someone says, “Oh, I made a change sort of around this a few days ago, could that have something to do with it?”

 

Have a schedule rotation

Many times these issues go long into the night and into the next day. This is not the time to pull a marathon session of 40 hours straight. Work your normal 8, 10 or 12 hour shifts as much as you can. A fresh set of eyes and a sharp mind are critical to getting this thing solved. This goes for all teams involved. Even something as simple as having someone from another team “on call” can be a lifesaver when you need them at 3 AM. To address something people say all the time, ‘I’m fine working 24 hours straight with no issue’: would you want the person flying your plane to be up for 24 hours when you hit some turbulence, or the person who got 8 hours of sleep? That’s what I thought. Usually, once the issue has been determined and you are making a change, the issue could be open for 24, 48 hours or even longer. Plugging in the wrong cable, working on the wrong server, and similar mistakes are easy to make when you’ve been up way too long.

 

Death by Status Update

We’ve all been on these calls. We talk about what we just did for 20 minutes and what the results are, then we talk about what we think it might be for 20 minutes. Then we talk about what the next steps are for 20 minutes. Then we have to update everyone else on what we are going to do for 20 minutes. Then we do the actual thing we are going to do for 20 minutes and have to stop because we need to start preparing for our next status update. OK, it might not be this bad, but it’s probably not that far off. Management needs to know what’s going on and that’s fine, but spending time, every time, explaining the same thing to different people is a real waste. Having the information in one central spot for everyone to read or hear really saves time. It also helps if there is one person in charge of this who is not part of the core troubleshooting team. That way, if people are late, shifts are changing, etc., they can get caught up and the troubleshooting train keeps on a-rolling. Another idea is to set up two conference rooms and phone bridges - one for the tech folks and another for the management folks.

 

The Art of Evidence Gathering

One of the most important things to do after determining there’s a problem is being able to define it.  One critical component for being able to troubleshoot an issue, as well as better defining it, is gathering data…or what I like to call, evidence.  When troubleshooting challenging server issues, we become more like detectives sometimes when the cause isn’t so obvious.  Whenever there is a significant issue, management wants root cause.  How can you determine root cause 3 weeks after a problem occurs if the data is no longer around to be gathered?  Someone reading this is asking “Really? 3 weeks later?”  That happens quite often for a variety of reasons.  Could be that others have already attempted to find root cause for some time before contacting Microsoft.  Could be that someone noticed on a management report that a server had an issue weeks ago and now someone wants to know what happened.  Event log data and other logs should be gathered as soon as possible after the actual incident.  Some logs can be chatty, may have size limits or are circular, and may wrap around and lose history as significant time passes.   This is definitely true when troubleshooting issues on server clusters.  It is no fun to try to determine root cause with missing data.  It is also no fun to try to restore logs from backups weeks later.   Management tools that periodically gather event or performance data can be quite helpful as well.  Gathering good data in a timely manner can be a great precursor to gathering the right parties to troubleshoot the issue or find root cause.

 

Use Solid Troubleshooting Techniques and Start with the Basics

This could really be its own whole post, but there is already a lot out there on troubleshooting. Our own Hilde has written two posts on the topic, part 1 and part 2, and CTS has done a great job with this post. Guess when you need to rely on these more than ever? What does the evidence point to? Don't go with your gut, go with solid troubleshooting techniques. Making lots of changes at once "to see what happens" is a surefire way to waste time and probably make things worse. You start to end up like this. One of the things that usually gets overlooked is documenting what you are changing and doing. You think you'll remember, but that test you ran was 16 hours ago and a full pizza plus 4 fully leaded Mountain Dews are between then and now. Also, start with the basics. I know we immediately like to jump to some crazy, in-the-weeds advanced topic and turn the debug level up to a Spinal Tap 11, but resist this urge. Maybe there is an issue with how the application warms up its cache on first start up and every hour, but only on the even hours, and on hour 7… or maybe the tab on the network cable is broken and it is ever-so-slightly ajar? You get the point. It's often a good use of time to spin up efforts in parallel; get someone to begin the process of recalling the tapes from the recent backups, start building the OS on a recovery server or VM, get it patched, etc. If it's needed, the restore option is now closer at hand.

 

Backups! Backups! Backups!

Much like the elusive Beetlejuice, but probably far more troublesome for customers, we've arrived at that sensitive topic: backups. There have been many times over the years where, during a critical server outage, we could have had things back online within minutes (time = $$$) by restoring a recent backup. In those situations, valuable root cause data could often have been captured, the backup restored, and the crisis mitigated – if only they could restore a backup. I remember one particular incident about a decade ago where an administrator was calling me over and over from a bathroom stall, so as not to let anyone know he was having to call for help, because there was no backup available and they were having an outage. As a result it took many hours to resolve the situation. Not having a functional backup to restore is a common mistake that can be a very unpleasant surprise on top of an already unplanned outage.

Typical reasons a backup might not be available for restore include:

· Nobody ever tested the ability to restore…and restore doesn’t work

· Backups weren’t capturing what they thought they were

· The person with the ability to access and restore backups is in the Caribbean somewhere with no phone.

· Scheduled backups weren’t actually running so there is no backup

· They thought that since the data was on a RAID set or SAN that one wasn’t needed

· Backups are stored offsite for safe keeping and the facility is closed

 

 

That's it for now. What did we miss? What are your tricks? Send us your questions and comments about this topic and anything else.

 

Mark ‘I need a status update’ Morowczynski + The AskPFEPlat blog team. Much like a bad date, everyone has a troubleshooting horror story.

Office 365 & Single Sign-On: How to Handle Different UserPrincipalName (UPN) Values


Hey folks, Keith Brewer here to discuss an issue I encountered while working with a Microsoft Premier Customer. As PFEs we are often asked to assist our Premier customers with a specific technology. In this instance I was asked to assist with Active Directory Federation Services (ADFS). Fellow PFE Jasmin Amirali recently blogged an ADFS FAQ on ASKPFEPLAT here. Now you ask what this has to do with Office 365 and UPN values. Well, after ADFS was up and running and had been validated with a sample claims-aware application in the cloud, an issue arose while finalizing Single Sign-On (SSO) with their email provider, O365. While not an ADFS issue, the information below may be helpful if you find yourself in the same situation.

Their particular configuration is described as Scenario 2 here:

How to pilot single sign-on in a production user forest http://community.office365.com/en-us/wikis/sso/357.aspx

Scenario 2: The organization has decided initially not to use single sign-on (identity federation). Instead the organization’s users are using Microsoft Online cloud IDs (i.e. non-federated IDs) to sign in to Office 365 services. At some point later the organization decides that they want to start using single sign-on, by converting their existing users from standard Microsoft Online cloud IDs to federated IDs.

While the ADFS infrastructure was validated, another problem was lurking, waiting to rear its ugly head shortly after the online domain was converted from Standard (Managed) to Federated.

Problem:

Users within the organization receive error 8004786C from O365 shortly after the online domain is converted from Standard (Managed) to Federated.

In troubleshooting you may come across:

Troubleshoot Active Directory user accounts that are piloted as Office 365 SSO-enabled user IDs http://support.microsoft.com/kb/2392130

Now you may or may not have piloted these users but the key for this scenario was:

• The on-premises Active Directory user account should use the federated domain name as the user principal name (UPN) suffix

• The UPN of the on-premises Active Directory user account and the Office 365 user ID must match.

In this customer's scenario, the on-premises user accounts' UserPrincipalName values mapped to the internal Active Directory domain, which was different from their online domain within O365.

It is highly recommended to run through some readiness checks before enabling SSO (whether creating new federated IDs or converting existing ones) to ensure SSO for Office 365 works as expected.

Microsoft Premier Customers can engage with their TAM and have an Office365 Migration Readiness Assessment (OMRA) completed.

Additionally there are some other utilities and articles that can help expose situations like this one prior to encountering an issue.

Office 365 OnRamp Tool

https://onramp.office365.com/onramp/

Office365 Advanced Deployment Guide

http://technet.microsoft.com/en-us/library/hh852483.aspx

Solution:

There are a number of potential solutions to this issue, as discussed in the articles above.

This customer decided to change all users' on-premises UPN values to match the online domain within O365.

*** This is not a subtle change and can have massive repercussions to user authentication and/or 3rd party applications leveraging the current UPN value ***

Now, on to how to change the UPN value for a large number of users quickly.

I have written a PowerShell script to evaluate the current UPN suffix for a specific administrator provided string (OldUPNString) and if present to replace it with an administrator provided string (NewUPNString). It will perform this operation on all user accounts found starting at an administrator provided location (TargetDN).

This script is available here:

http://gallery.technet.microsoft.com/Change-On-Premise-Active-93d5cc2d


Notes:

The value provided for OldUPNString is case sensitive, which means a value of Contoso.com will not match contoso.com. This is by design.

Additionally, line 55 of the script contains the –WhatIf switch:

[Screenshot: line 55 of the script showing the –WhatIf switch]

This switch ensures that the script, in its default configuration, will make no changes to Active Directory. The –WhatIf switch must be removed from the script in order for any changes to be made.
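For those who just want to see the shape of the logic, here is a rough sketch of the same idea. To be clear, this is not the published gallery script: the cmdlets are the standard ActiveDirectory module ones, and the parameter names simply mirror the description above, so treat it as an illustration only.

# Sketch only - not the published gallery script. Test with -WhatIf first.
param(
    [Parameter(Mandatory=$true)][string]$TargetDN,      # e.g. an OU distinguished name
    [Parameter(Mandatory=$true)][string]$OldUPNString,  # suffix to find (case sensitive)
    [Parameter(Mandatory=$true)][string]$NewUPNString   # suffix to put in its place
)
Import-Module ActiveDirectory

# Default search scope is Subtree, so sub-OUs under $TargetDN are included
Get-ADUser -SearchBase $TargetDN -Filter 'userPrincipalName -like "*"' | ForEach-Object {
    $pattern = [regex]::Escape($OldUPNString) + '$'
    # -cmatch / -creplace keep the comparison case sensitive, as described above
    if ($_.UserPrincipalName -cmatch $pattern) {
        $newUPN = $_.UserPrincipalName -creplace $pattern, $NewUPNString
        Write-Output "$($_.SamAccountName): $($_.UserPrincipalName) -> $newUPN"
        Set-ADUser -Identity $_ -UserPrincipalName $newUPN -WhatIf   # remove -WhatIf to make the change
    }
}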

I will re-state the earlier warning here:

*** Changing a user’s UPN value is not a subtle change and can have massive repercussions to user authentication and/or 3rd party applications leveraging the current UPN value ***

Script in use:

Some Information on the users being queried:

[Screenshot: the test user accounts before the change]

Run the script targeting the UPNChangeTest OU

[Screenshot: running the script against the UPNChangeTest OU]

The screen output shows that all user accounts, including those located in sub-OUs, are processed; the script writes each user account and the action taken to the screen.

Here is the resulting configuration:

[Screenshot: the user accounts after the change]

In addition to the screen output, the script will create a log file with the SAMAccountName of each user whose UserPrincipalName suffix was changed.

If you are adding a new suffix, it is also recommended to add it to the Active Directory configuration.

See:

HOW TO: Add UPN Suffixes to a Forest

http://support.microsoft.com/kb/243629
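If you have the Active Directory PowerShell module handy, the suffix can also be added without stepping through the GUI described in that KB. A quick sketch (the suffix value is made up for the example):

# Add an alternate UPN suffix to the forest (value is illustrative)
Import-Module ActiveDirectory
Set-ADForest -Identity (Get-ADForest) -UPNSuffixes @{Add='contoso.com'}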

Hope this helps

Keith "Do you know your UPN?" Brewer

Why you want to be using VHDX in Hyper-V whenever possible and why it’s important to know your baselines


Hello, Jeff “The Dude” Stokes here for another post.

The other day I was implementing a pure Microsoft VDI solution in my lab (blog coming soon), leveraging the latest and greatest Hyper-V on Server 2012 R2. While doing so, I noticed I was getting abysmal performance from a four-disk enclosure of RAID-class prosumer hard drives. The enclosure was configured to use hardware RAID 5 and connected over USB 3.

My disk performance was looking something like this:

[Screenshot: Performance Monitor disk counters with the VMs running]

 

 

(Note: on Windows Server 2012 R2, to enable the disk performance counters you must type diskperf –Y from an elevated command prompt.)

For the unseasoned performance analysts out there, I'll point out that my USB 3 enclosure was going slow: 2.396 seconds to complete a transaction. Holey Moley! We expect enterprise-class hardware to generally complete transactions in 15 ms or less; consumer grade, maybe 20 ms max. Also, we're getting about 1.7 MB/sec of throughput while doing this. Why is it so slow?!

At the time I was running 3 VMs at once, each with static 40 GB VHDs, deploying Windows 7 and some other stuff from an MDT 2013 share on another LUN. This is not a light load, but it should go a bit quicker than that. Funny enough, when I stopped the Hyper-V VMs residing on the enclosure, I got something like this when running a speed test:

 

[Screenshot: Performance Monitor disk counters with the VMs stopped]

 

As you can see, this looks much better, more um, sane? Now we are pushing 154 MB/sec writes at an average of 566ms, as opposed to 1.7 MB/sec total with 2.4 seconds (2400ms) of latency.

So, I did some troubleshooting:

o Swapped USB 3 cable

o Tried the on-board USB 3 ports, and two different brands/chipsets of Add-on cards

o Updated BIOS on MB

o Updated Firmware on the enclosure

o Updated USB 3 drivers

o Searched the web for super-secret reg keys to make USB 3 ‘go faster’ (doesn’t exist so save yourself the time)

o Talked to developers of the USB 3 stack for Microsoft who traced it and saw no issues

o Tore my hair out and complained to the wife about it

By the way, for those interested, USB 3 tracing is wicked easy; see this blog:

http://blogs.msdn.com/b/usbcoreblog/archive/2012/08/07/how-to-trace-usb-3-activity.aspx

So after verifying it wasn't USB 3 (I was pretty sure about that), I thought to myself, what the heck is going on here?... I then started tinkering in the lab some more, had an epiphany on VHD vs. VHDX, and searched and found this document:

http://msdn.microsoft.com/en-us/library/windows/hardware/jj248719.aspx

It says, AND I QUOTE, Page 157:

“The VHDX format also provides the following performance benefits (each of these is detailed later in this guide):

· Improved alignment of the virtual hard disk format to work well on large sector disks.

· Larger block sizes for dynamic and differential disks, which allows these disks to attune to the needs of the workload.

· 4 KB logical sector virtual disk that allows for increased performance when used by applications and workloads that are designed for 4 KB sectors.

· Efficiency in representing data, which results in smaller file size and allows the underlying physical storage device to reclaim unused space. (Trim requires pass-through or SCSI disks and trim-compatible hardware.)

When you upgrade to Windows Server 2012, we recommend that you convert all VHD files to the VHDX format due to these benefits. The only scenario where it would make sense to keep the files in the VHD format is when a virtual machine has the potential to be moved to a previous release of the Windows Server operating system that supports Hyper-V.”

These two points stuck in my head, “Improved alignment of the virtual hard disk format to work well on large sector disks.” And “4 KB logical sector virtual disk that allows for increased performance when used by applications and workloads that are designed for 4 KB sectors.”

Huh. How ‘bout that.

So I fired up Fsutil and verified my disk format type (4k, 512e or 512 bytes per sector, more on this here: http://en.wikipedia.org/wiki/Advanced_Format)  by doing the following:

fsutil fsinfo ntfsinfo F:

[Screenshot: fsutil fsinfo ntfsinfo output]

And hey, it's NOT Advanced Format according to fsutil. But the spec sheet for the drive says otherwise (512e, to be exact). Who is correct here? It turns out drivers can return data that looks like garbage to fsutil, which then defaults to 512 when that happens. So I know the drives SHOULD show 4096 for Physical Sector and 512 for Sector but don't, and I should proceed as if they do. That's the 512e standard, which means I need to upgrade the VHDs to VHDX!

Easy peasy in PowerShell 3.0: Convert-VHD on Server 2012/2012 R2 will do it, or you can use the GUI (a worked example follows the screenshots below).

Parameter Set: Default

Convert-VHD [-Path] <String> [-DestinationPath] <String> [-AsJob] [-BlockSizeBytes <UInt32> ] [-ComputerName <String[]> ] [-DeleteSource] [-ParentPath <String> ] [-Passthru] [-VHDType <VhdType> ] [-Confirm] [-WhatIf] [ <CommonParameters>]

or

[Screenshots: converting the VHD to VHDX using the Hyper-V GUI]

 

Then just reconfigure the VM to talk to the new file instead of the old one and blamo, now you’re cookin with gas here!
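If it helps to see it end to end, here's roughly what that looks like. The VM name, paths, and controller numbers below are made up for this example and assume the disk is the VM's IDE boot disk:

# Convert the old VHD to VHDX (the VM should be off while you do this)
Convert-VHD -Path 'E:\VMs\VDI01.vhd' -DestinationPath 'E:\VMs\VDI01.vhdx'

# Point the VM at the new file instead of the old one
Set-VMHardDiskDrive -VMName 'VDI01' -ControllerType IDE -ControllerNumber 0 -ControllerLocation 0 -Path 'E:\VMs\VDI01.vhdx'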

After converting from VHD to VHDX, my performance (again, with RAID 5, so there is a serious write penalty) looked like this:

 

[Screenshot: Performance Monitor disk counters after converting to VHDX]

 

Above you can see that we are running at a total of about 42.4 MB/second. The response time is still a little slow, but the throughput has gone from 1.7 MB/second to 42.4 MB/second. Much better. Latency has just about halved.

If you were wondering if that’s all that VHDX is good for, search no further!

· Support for virtual disk storage capacity of up to 64 TB

· Protection against data corruption during power failures, by logging updates to the VHDX metadata structures

· Improved alignment of the VHD file to work well on large sector disks

· Support for TRIM on direct-attached/SCSI hardware that supports TRIM, which results in smaller file sizes and allows the underlying physical storage device to reclaim unused space

Dr. Jeff’s Deep in the Weeds Section

Note that if you are in this boat and want to REALLY figure out what's going on under the hood, Neal Christiansen at Microsoft was kind enough to give this advice on attaching WinDbg or LiveKD to the Windows kernel and figuring it out:

“You want to break in: nt!FsRtlGetSectorSizeInformation and follow what happens when the PhysicalGeometry is queried.

Note that there is a lot of validation of the returned information because we have seen garbage come back from this call. If incorrect information is returned then we fall back to a 512b sector device.”

There you have it. Our performance greatly increased just by using the latest Hyper-V disk type. We didn't even have to do anything else. We also discovered this performance problem by knowing what our performance baselines should be. Without that we'd have to rely on the users to complain about how slow the new VDI environment is, which would probably get some folks in some hot water on this brand new solution… if this were production. So the next question is… do you know your baselines :) ? That's it for now.
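If you're wondering how to start capturing a baseline like the counters above, Get-Counter is a quick and dirty way to do it from PowerShell. The counter paths below are the standard PhysicalDisk counters; the sample interval and count are arbitrary, so treat this as a sketch:

# Sample disk latency and throughput every 5 seconds for a minute and keep it as a baseline
$counters = '\PhysicalDisk(*)\Avg. Disk sec/Transfer', '\PhysicalDisk(*)\Disk Bytes/sec'
Get-Counter -Counter $counters -SampleInterval 5 -MaxSamples 12 |
    Export-Counter -Path "$env:TEMP\disk-baseline.blg" -FileFormat blg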

 

Jeff ‘the Dr. is in’ Stokes

Clarifications on KB 2853952, Server 2012 and Active Directory error c00002e2 or c00002e3


Hey y'all, Mark and Tom here to clear up some confusion on MSKB 2853952, which describes the corruption of Active Directory databases in virtualized domain controllers running on Windows Server 2012 Hyper-V host computers.

The article was released in July 2013 with the title "Active Directory database becomes corrupted when a Windows Server 2012-based Hyper-V host server crashes" but has since been renamed to "Loss of consistency with IDE-attached virtual hard disks when a Windows Server 2012-based Hyper-V host server experiences an unplanned restart." Confused already? Please continue reading!!

 

 The Problem

Following “hard” shutdowns (i.e. the plug is pulled) on Windows Server 2012  Hyper-V hosts, virtualized Domain Controller role computers may experience boot failures with error 2e2.

2e2 boot failures have occurred for years on DCs running on physical hardware when certain guidelines (we'll get to those in a minute) were not being followed. Deploying Active Directory – and therefore AD databases, which are really just Jet databases (as discussed in our AD Internals post) – in a virtual environment introduces an additional root cause, which is mitigated by MSKB 2853952.

The KB tells us that Jet databases placed on virtual IDE drives in virtual guests are vulnerable to corruption when the underlying Windows Server 2012 Hyper-V host experiences an unplanned shutdown. Possible causes for such unscheduled shutdowns include a loss of power to the data center, or simply the intern tripping over the power cable in the data center. It has happened before and it will happen again.

Domain controllers whose log files or database files are damaged by an unscheduled shutdown may experience normal-mode boot failures with a stop c00002e2 or c00002e3 error. If auto-reboot is enabled on your domain controllers following a blue screen, DCs may continually reboot once their Hyper-V host restarts.

Text and graphical examples of the c00002e2 error are shown below:

"c00002e2 Directory Services could not start because of the following error: %hs Error Status: 0x%x. Please shutdown this system and reboot into Directory Services Restore Mode, check the event log for more detailed information."

 

[Screenshot: stop c00002e2 error at boot]

Figure 1 - Uh oh...

The KB goes on to explain that this behavior occurs because the Hyper-V virtual IDE controller incorrectly reports "success" when the guest requests that the disk cache be disabled. Consequently, an application like Active Directory may think an I/O was written directly to disk when it was actually written to the disk cache. When the power was lost, so were the contents of the disk cache.

 

The Fix

There are four fundamental configuration changes that lessen the possibility of this occurring (whether DCs are deployed on physical or virtual machines):

1.) Make sure you are running on server-class hardware. That means the physical hard drives hosting Active Directory databases and other Jet-dependent server roles (DHCP, FRS, WINS, etc.) reside on SAS drives as opposed to IDE drives. IDE drives may not support the forced unit access that is needed to ensure that critical writes by VM guests get transitively committed through the virtual hosts to the underlying disk.

2.) Drive controllers should be battery-backed caching controllers so that Jet operations can be replayed when the Hyper-V hosts and guests are restarted.

3.) If Hyper-V hosts can be configured with UPS devices so that both the host and the guest enjoy graceful shutdowns in the event of power losses, all the better.

4.) If you feel that the auto-reboot behavior masks the 2e2 or 2e3 boot errors, disable the "Automatically restart" option on the Advanced tab of System Properties, under Startup and Recovery (a scripted way to do this follows this list).
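For item 4, if you'd rather not click through the GUI on every DC, the same setting lives under the CrashControl registry key. A one-liner sketch you can push out however you like:

# 0 = do not automatically restart after a bugcheck (same as unchecking "Automatically restart")
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\CrashControl' -Name 'AutoReboot' -Value 0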

 

Next, MSKB 2853952, or the July 2013 cumulative rollup 2855336 (we've detailed these rollups in an earlier post) which includes the standalone QFE 2853952, should be installed on Windows Server 2012 Hyper-V hosts and Windows Server 2012 guests.

A pending update, currently scheduled for release today (September 10th, 2013), will update 2853952 to apply to:

  • Windows Server 2008 R2 Hyper-V hosts.
  • Windows 7 and Windows Server 2008 R2 virtual guests running on either Windows Server 2008 R2 or Windows Server 2012 Hyper-V hosts.

 

In summary, the updated version of KB 2853952 should be installed on both Windows Server 2008 R2 and Windows Server 2012 Hyper-V hosts (using the existing version of KB 2853952), and Windows 7 / Windows Server 2008 R2 virtual guests utilizing a jet-based store like Active Directory.

A workaround that can be deployed NOW is to place Jet databases, including the Active Directory database and log files, on virtual SCSI drives when Windows Server 2008 R2 and Windows Server 2012 virtual guests reside on Windows Server 2012 virtual hosts.

The reason SCSI or Virtual SCSI is recommended is that SCSI controllers will honor forced unit access or requests to disable write cache. Forced Unit Access (FUA) is a flag that NTFS uses to bypass the cache on the disk – essentially writing directly to the disk. SCSI has supported this via the t10 specification but this support was not available in the original t13 ATA specifications. While FUA support was added to the t13 ATA specifications after the original release, support for this has been inconsistent. More importantly, Windows does not support FUA on ATA drives.

Active Directory uses FUA to perform un-buffered writes to preserve the integrity of the database in the event of a power failure. AD will behave this way on physical and virtual platforms. If the underlying disk subsystem does not honor the FUA write, there could be database corruption and/or a “USN Bubble”. Further, some SCSI controllers feature a battery backed cache, just in case there are IOs still in memory when power is lost. (Thanks to fellow PFE Brent Caskey for doing some digging on this)

Applying the July update rollup and the pending September updates on the relevant Hyper-V hosts and virtual guests will greatly reduce the likelihood of damage to jet files when Hyper-V guests reside on virtual IDE disks. However the recommendation is still to use virtual SCSI disks for jet-based workloads and other critical data.

 

FAQ about this update

This update probably sent many an admin's spidey sense tingling, and for good reason. Let's try to answer the questions you are probably thinking about.

Does this only affect Active Directory?

By reading the actual problem description you'll notice it's not a problem with Active Directory itself, so the answer is no. The title of the KB has been updated to reflect this and hopefully provide some clarity. The problem is with applications that require I/O guarantees; IDE doesn't provide an I/O guarantee and neither does virtual IDE.

How Should I Be Configured?

You are going to want to have your data stored on Virtual SCSI (vSCSI) disks for the reasons stated above.

What about physical machines on IDE drives, are they at risk too?

Yes. If you still have physical machines that are running on IDE drives, you will want to try to move the server data to SCSI disks as well.

I have all my data on the boot drive, can I boot off Virtual SCSI?

You cannot. In Server 2012 R2 we actually have Virtual SAS which you can use for both boot and data. For now you’ll need to use a separate virtual SCSI disk for data.


Is only Server 2012 affected by this?

No, this also affects 2008 R2. However, the new update now covers both 2008 R2 and 2012.

Where do I apply this update, host, guest or both?

The update should be applied to Windows Server 2012 hosts, and in a post July 2013 update, Windows Server 2008 R2 Hyper-V hosts, and Windows Server 2012/Windows Server 2008 R2 / Windows 7 virtual guests.

Anything else we should be doing for this?

You’ll want to make sure any operational and configuration changes are in place to avoid any unscheduled down time until you are able to move the data to a virtual SCSI disk and apply the appropriate updates.

I have a lot of DCs that are set up improperly, a little help?

Tom recently helped out a customer with moving their DB and logs to SCSI disks. Thanks to PowerShell and his powershell-fu, this is all pretty simple but it does take the AD service down on the target DC for a period of time.

First, on the Hyper-V host, you’ll need to attach a new disk to the virtual machine. Launch PowerShell as an admin on the host. Pre-identify the VM name and the physical location where you’ll create the new VHDX file. Then run:

$vhd = New-VHD -Path [PATHTOVHDX] -SizeBytes 10GB -Dynamic:$false

 

Add-VMHardDiskDrive -Path $vhd.Path -ControllerType SCSI -ControllerNumber 0 -VMName [VMNAME]

 

Obviously, replace the bracketed parameters with your own values. Also modify the disk size to something appropriate for your database; 10GB will cover most customers.
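Filled in with illustrative values (the path and VM name here are made up), the two lines above might look like this:

# Create a 10 GB VHDX and attach it to the DC2 virtual machine's SCSI controller, mirroring the commands above
$vhd = New-VHD -Path 'D:\VHDs\DC2-NTDS.vhdx' -SizeBytes 10GB -Dynamic:$false
Add-VMHardDiskDrive -Path $vhd.Path -ControllerType SCSI -ControllerNumber 0 -VMName 'DC2'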

After that, you need to log on to the guest VM to create the volume and move the DB. In the example below, we've used drive letter E. Modify this based on your company standards or configuration.

First, check to see if the disk is offline, and set it to online if it is.

Get-Disk | Where { $_.OperationalStatus -eq "Offline" } | Set-Disk -IsOffline:$false

 

Once it’s online, you just need to create the volume. PowerShell makes this very easy on Windows Server 2012. If you’re using 2008, you will need to replace this part with diskpart commands. For the sake of brevity, we’ll just cover PowerShell.

Get-Disk | Where { $_.PartitionStyle -eq "RAW" } | Initialize-Disk -PartitionStyle GPT -PassThru | New-Partition -UseMaximumSize -DriveLetter E | Format-Volume -Full:$false -FileSystem NTFS -NewFileSystemLabel "NTDS" -Force

 

Ok, that doesn’t look easy, but it’s all one line, making use of the PowerShell pipeline. As we complete each task, we pass the result to the next cmdlet. Finally, we end up with an E drive. Next, we need to move the database and logs. We’ll use ntdsutil to do this.

First, stop NTDS. Then, run ntdsutil. Modify the paths below to fit the drive letter you chose above.

#stop NTDS
Stop-Service NTDS -Force

#use NTDSutil to move logs/db
ntdsutil
activate instance ntds
files
move db to e:\NTDS
move logs to e:\NTDS
quit
quit

 

Verify the output from ntdsutil. If you're scripting this, I recommend extensive testing ahead of time. You may be able to use Test-Path to determine whether the database and logs moved successfully. Assuming everything checked out, run Start-Service NTDS to restart NTDS. Congrats, you've made it to SCSI disks.
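If you do script it, a minimal sanity check along those lines might look like this; the file names assume the default ntds.dit/edb.log names and the E:\NTDS path used above:

# Only restart NTDS if the database and a log file actually landed where we expect
if ((Test-Path 'E:\NTDS\ntds.dit') -and (Test-Path 'E:\NTDS\edb.log')) {
    Start-Service NTDS
} else {
    Write-Warning 'ntds.dit or edb.log not found under E:\NTDS - do not restart NTDS yet.'
}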

Any questions please let us know in the comments.

Mark “Crash test dummy #1” Morowczynski and Tom “Crash test dummy #2” Moser
