
New Features in the Netlogon Parser (v1.1.4) for Message Analyzer


Hi all! Brandon Wilson here again talking to you about some new features added in the updated Netlogon parser (v1.1.4) for Message Analyzer. This parser was developed with fixes that are specific to Message Analyzer 1.2; however, this version is backwards compatible with Message Analyzer 1.1 (with some caveats that are explained below). As such, Message Analyzer 1.2 will be the focus of this blog…

The interface for Message Analyzer 1.2 has changed a bit, and I’ll try to touch on the areas pertinent to the Netlogon parser here. Outside of the GUI changes, though, the methods for troubleshooting and parsing with the Netlogon parser are the same ones we’ve gone over in the previous blog posts. If you haven’t reviewed those posts, they are essential reading for proper usage of the Netlogon parser: the Introduction blog and the Troubleshooting Basics for the Netlogon Parser for Message Analyzer blog are prerequisites, and they cover the main features and troubleshooting techniques that were available in v1.0.1 of the Netlogon parser (the initial public release). It would also be a good idea to get a handle on Netlogon error codes from the Quick Reference: Troubleshooting Netlogon Error Codes blog and on troubleshooting MaxConcurrentApi issues in the Quick Reference: Troubleshooting, Diagnosing, and Tuning MaxConcurrentApi Issues blog, both of which can help guide you to proper troubleshooting and root cause analysis for Netlogon related issues.

I talk about versions a lot when it comes to the Netlogon parser, but in reality, as of the date of this post, they are all named Netlogon.config, and the only way to truly know which version you have is to open the file and look at the version table at the top. The initial version released to the public, as you can probably tell by the previous blog posts, was v1.0.1. Today we are talking about v1.1.4, which has had many new features added to help you understand and diagnose Netlogon issues. This updated parser is provided in the installation package for Message Analyzer 1.2. If for some reason you are unable to upgrade to Message Analyzer 1.2, then the Netlogon parser v1.1.4 is available to download in this blog.

As with all of our parsers, this is provided as is; however, there are a few different feedback mechanisms for you to use. You can use the feedback link/button in Message Analyzer, reach out in the Message Analyzer forum, send an email to MANetlogon@microsoft.com with ideas and any problems or buggy behavior you run across, or of course leave a comment here. I highly recommend reaching out through one of the available methods to provide your suggestions for additional detections to add, problems you encounter, etc.

You can also read up more on Message Analyzer itself at http://blogs.technet.com/MessageAnalyzer

In this new features walkthrough, we will cover the following:

GUI changes in Message Analyzer 1.2

Updates and New Detection Features in the Netlogon Parser v1.1.4

Known issues

How to update the Netlogon parser manually if you are unable to upgrade to Message Analyzer 1.2

Reference links

GUI changes in Message Analyzer 1.2

The first thing you might notice when opening Message Analyzer 1.2 is that the view has changed and now looks like the screenshot below. It’s a bit different than what you’re used to, but in my opinion it’s actually a little easier to work with, and you can still customize it to your liking.

image

The major components tied to the Netlogon parser that have changed are the view filter and the “Hide Operations” button. The view filter is now found towards the top right corner, which is much easier than having to drill through the tabs on the bottom right to get to it. It’s there, it’s ready to use, and you’re probably going to use it! In the below screenshot, I’ve highlighted the “View Filter” area in a green square.

image

The second component that you may use with the Netlogon parser that has gotten a little hidden (ironically) is the “Hide Operations” button. In Message Analyzer 1.2, this is now found in the bottom right corner in the Viewpoints tab. In there, you will see an Operations dropdown button that includes the options: Show, Hide, and Exclusively (all of which are pretty self-explanatory). To make this a bit easier, I highlighted this area in a green square in the below screenshot:

image

Updates and New Detection Features in the Netlogon Parser v1.1.4

As I mentioned, there are numerous new features and updates added to v1.1.4 of the Netlogon parser. Before I show you the guts of the new features, I want to give you an idea of the updates:

1. Significantly improved performance!

2. Modified wording in the summary to better translate the lines being parsed

3. Added some additional syntaxes for NO_CLIENT_SITE detection

4. Corrected an issue with the timestamp being offset by the UTC time difference

5. Changed the hard coded year in the output to 1601 (I’ll explain why…)

6. Added “sysvol not ready” detection

7. Added detection of specific error codes to detect issues without reviewing individual authentication calls

8. Combined SChannel and Kerberos PAC validation calls into their own groupings

So, let’s go over these additions in a bit more detail:

1. Significantly improved performance!

Performance with the parser has been improved by more than 2x! I won’t get into the nitty-gritty details of how, because that’s more of a development talk and doesn’t really serve a purpose here, but suffice it to say, the performance has been improved dramatically! Stay tuned though because more performance enhancements and features will be coming out in the future. Keep the input coming, so we can keep helping you achieve your goals!

2. Modified wording in the summary to better translate the lines being parsed

There was some confusion in some of the wording around Kerberos PAC validation that external users noticed and brought to my attention (thank you for that!), as well as some other random wording that needed some improvements to be more straightforward. So as a result, this wording was updated to reflect the proper verbiage to ensure consistency. A small change, but it makes a difference when you’re reading your log output!

3. Added some additional syntaxes for NO_CLIENT_SITE detection

There was an issue identified where certain unexpected syntaxes for NO_CLIENT_SITE detection on domain controllers came up. The Netlogon parser v1.1.4 has these syntaxes added to ensure proper detection with all known formats of these lines in the Netlogon log.

4. Corrected an issue with the timestamp being offset by the UTC time difference

Beginning in Message Analyzer 1.1, the timestamp would show the year, followed by the time. However, the time being shown wasn’t the time actually seen in the Netlogon log. Instead, it was the time in the Netlogon log with the current UTC offset subtracted (-4 hours, for example). This resulted in a little confusion from time to time when reviewing log files. As a result, the parser has been updated to reflect the timestamp shown in the Netlogon log to eliminate confusion. NOTE: This issue is only resolved in Message Analyzer 1.2. If you are using the manual update method to update the Netlogon parser to v1.1.4 to use in Message Analyzer 1.1, this timestamp issue will still exist!

5. Changed the hard coded year in the output to 1601

In previous versions of the Netlogon parser, the year in the timestamp field always displayed as “2013” (the year of the initial release of Message Analyzer). This was a bit confusing, so we thought it would be better to use something we use in Active Directory as a standard for attributes that require dates but have not yet been set, which is the year 1601 (ok you caught me, we use 1/1/1601 in AD attributes). Remember that the Netlogon log does NOT contain a year in it, so a year was picked to allow the TimeElapsed field to operate properly.

6. Added “sysvol not ready” detection

Recently an issue came across my plate that involved failed authentication and trust establishment. The problem was sysvol not being shared on all, or nearly all, of the domain controllers. While fairly easy to spot in the Netlogon log directly, it still involved reviewing a lot of logs to find the errors and determine the cause. As a result, the “sysvol not ready” detection was added into the parser to streamline this type of troubleshooting (and besides that, I think you probably want and NEED to know that Sysvol isn’t being shared on your domain controller!).

7. Added detection of specific error codes to detect issues without reviewing individual authentication calls

Long ago when we started down the road to creating a Netlogon parser, I felt it was extremely important to help the general public resolve their problems outright and not just provide a new way to read the logs. We came a long way in that regard with the Netlogon parser v1.0.1, and have expanded on that significantly in v1.1.4. In v1.0.1, we streamlined identification of things like no client site detection, MaxConcurrentApi issue detection, and RPC port exhaustion…3 very common things that can cause some very major and very expensive problems.

With the Netlogon parser v1.1.4, we’ve expanded on that significantly to include detection of all of the potential issue causing error codes (plus the aforementioned sysvol not ready detection). Note that it does not look for all known error codes, only those that are likely to cause you some headaches. The error code detection evaluates outside of the authentication calls themselves to allow authentication attempts to still be properly grouped together. Here is a list of the new error code detections added into the Netlogon parser v1.1.4 (note that these error detections are grouped together and tell you what the error code means; you will see this in the upcoming screenshots):

Status/Return Code    Technical Meaning
Sysvol not ready      Sysvol not ready
0xC0000234            STATUS_ACCOUNT_LOCKED_OUT
0xC003000C            RPC_NT_BAD_STUB_DATA
0xC0020050            RPC_NT_CALL_CANCELLED
0xC000018C            STATUS_TRUSTED_DOMAIN_FAILURE
0xC0000192            STATUS_NETLOGON_NOT_STARTED
0xC0000017            STATUS_NO_MEMORY
0xC000005E            STATUS_NO_LOGON_SERVERS
0xC000018A            STATUS_NO_TRUST_LSA_SECRET
0xC000009A            STATUS_INSUFFICIENT_RESOURCES
0xC00000DC            STATUS_INVALID_SERVER_STATE
0xC0000022            STATUS_ACCESS_DENIED
0x00000005            ERROR_ACCESS_DENIED
0xC0020008            RPC_NT_INVALID_NET_ADDR -or- ERROR_NOT_ENOUGH_MEMORY
0xC0020017            RPC_NT_SERVER_UNAVAILABLE

8. Combined SChannel and Kerberos PAC validation calls into their own groupings

Since what you can get out of SChannel authentication and Kerberos PAC validation is limited to the domain being called and any return code, it was decided to free up some space in the analysis grid for more useful information. These types of calls are still parsed, however they are each now added to a corresponding larger operational group. If there were errors, they should still be detected by the aforementioned error code detections above.

So now with some of the explanations of the updates out of the way, let’s take a look at the new detections available. The Winsock/RPC port exhaustion and the MaxConcurrentApi detections were discussed in my previous posts for the Netlogon parser v1.0.1, so I will not go into those in this blog. Please review the Introduction blog and the Troubleshooting Basics for the Netlogon Parser for Message Analyzer blog if you want to see details on those particular detection mechanisms. Other than a slight change of wording, that detection is unchanged in the Netlogon parser v1.1.4.

Normally, I would provide you a breakdown with screenshots of each individual detection mechanism, but since this is a new features post, I am trying to minimize that a bit. Along those lines, I’ve created a sample Netlogon log file that includes only lines that are applicable to the new features to show those to you. As a heads up, these screenshots were taken using a custom view layout I created (which in plain English means I removed the columns that aren’t of any use for the Netlogon parser).

image

It wouldn’t be right if I didn’t expand that view a little more to show you the calls within these groupings…

In this screenshot, we can see the “sysvol not ready”, the MISC and MAILSLOT (which were covered in previous blogs), account lockout, RPC bad stub data, RPC call cancellation, trusted domain failure, no memory status, and finally, the no logon servers available detections. As you can see, the wording doesn’t change *much* under each of these error detection groupings.

image

In this next screenshot, you can see the invalid server state, insufficient resources, no LSA secret, access denied calls, Netlogon not started, invalid network address (or out of memory), and the RPC server is unavailable detections. There’s also a glance of the SChannel authentication grouping, which I’ll show you in just a second. As with the first screenshot, you can see the wording doesn’t change that much between the operational grouping and the individually parsed line.

image

In this last screenshot, you can see the changes made to the SChannel and Kerberos PAC validation authentication groups. Again, this was done because the information provided in these calls is minimal, and is basically limited to the user’s calling domain, the proxying server/machine, and the return result. If there is an error on the authentication return call, then that should be detected by the specific error code detection mechanisms put into place. But, when all else fails, you can still filter what you’re looking for as discussed in the Troubleshooting Basics for the Netlogon Parser for Message Analyzer blog.

image

Known issues

As with all new toys, there are always some cool new features, but there’s also usually some sort of issue (in this case though, it’s *mostly* not the parser’s fault!). So in this section, I want to recap some of the known issues:

1. Message Analyzer performance

a. There are known issues with using Message Analyzer on single core virtual machines where the CPU can (and will) spike up to 99-100%.

b. Message Analyzer, when used with the Netlogon parser, can have a decent memory footprint. I recommend having at least 4GB of RAM, but as we all know, the more RAM the better!

2. Netlogon parser performance

a. There are known performance issues when the Netlogon log file being reviewed is larger than 100MB.

b. Netlogon parser performance and functionality can be impacted if there are non-contiguous timestamps within the log file being reviewed. Put another way, if you have temporarily enabled Netlogon logging in the past, and then re-enable it 6 months later (as an example), you may impact performance and functionality due to the differing timestamps.

i. In this situation, you can stop the Netlogon service, delete or rename Netlogon.log, then start the Netlogon service once again to start from scratch with your file (example commands are shown after this list)

3. Authentication groupings

a. There are known issues with authentication groupings that tie back to certain versions of Netlogon.dll. In order for the authentication grouping to work properly, the entered and returned lines must each contain a timestamp and must be on their own lines.

i. In situations where an authentication grouping shows a large TimeElapsed value and the lines in the grouping are a long distance apart in the log file (ex: the entered line on frame 1500 and the returned line on frame 98000), you can check the operational grouping with the summary “The lines grouped here are typically not useful for troubleshooting! Please expand grouping for details.”

ii. In high resource utilization times, such as lsass spiking the CPU, there could be gaps in data that lead to missed lines that can also lead to authentication grouping/TimeElapsed mismatches.

4. Timestamps (only when used with Message Analyzer 1.1)

a. When using the Netlogon parser v1.1.4 with Message Analyzer 1.1, the timestamp/UTC offset issue that is corrected when the parser is used with Message Analyzer 1.2 still exists. You do, however, still gain the additional functionality.
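
For the Netlogon.log reset mentioned in item 2.b.i above, here is a minimal PowerShell sketch (it assumes the default %windir%\debug\netlogon.log location):

# Reset the Netlogon log so a fresh, contiguous log is generated
Stop-Service Netlogon
Rename-Item "$env:windir\debug\netlogon.log" "netlogon.old.log"
Start-Service Netlogon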

How to update the Netlogon parser manually if you are unable to upgrade to Message Analyzer 1.2

If for some reason you are unable to upgrade to Message Analyzer 1.2, but still want to take advantage of the new features introduced in the Netlogon parser v1.1.4, then you can follow the below 4 steps to implement the updated Netlogon parser version for Message Analyzer 1.1. Please keep in mind that the Netlogon parser v1.1.4 is written for Message Analyzer 1.2 and beyond, so there may be bugs that were not identified in testing and are not covered in the above known issues list!

NOTE: No version of the Netlogon parser will function on any Message Analyzer version less than Message Analyzer 1.1. It is highly suggested to find a way around your deployment blocker so that you can upgrade to Message Analyzer 1.2 as soon as possible!

With that being said, here’s how you manually update the parser:

1. If Message Analyzer 1.1 is running, please shut it down and ensure the process is no longer listed in Task Manager

2. Download the Netlogon-config.zip file in this blog (this is v1.1.4 of the Netlogon parser)

3. Unzip Netlogon-config.zip to a location of your choosing

4. Copy the Netlogon.config file that you unzipped into %userprofile%\AppData\Local\Microsoft\MessageAnalyzer\OPNAndConfiguration\TextLogConfiguration\AdditionalTextLogConfigurations (when prompted to overwrite the file, select the option to replace the file in the destination)

After following the above 4 steps, the Netlogon parser v1.1.4 should now be implemented and available for use once you reopen Message Analyzer.
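
If you would rather script steps 1 and 4, here is a minimal PowerShell sketch (it assumes the zip was extracted to C:\Temp and that the Message Analyzer process name is MessageAnalyzer):

# Make sure Message Analyzer is closed (process name is an assumption)
Stop-Process -Name MessageAnalyzer -ErrorAction SilentlyContinue

# Copy the updated parser into place, overwriting the existing file
Copy-Item "C:\Temp\Netlogon.config" "$env:USERPROFILE\AppData\Local\Microsoft\MessageAnalyzer\OPNAndConfiguration\TextLogConfiguration\AdditionalTextLogConfigurations\Netlogon.config" -Force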

Reference links

Message Analyzer v1.2 download (highly recommended!)

http://www.microsoft.com/en-us/download/details.aspx?id=44226

Introducing the Netlogon Parser (v1.0.1) for Message Analyzer 1.1 (By: Brandon Wilson)

http://blogs.technet.com/b/askpfeplat/archive/2014/10/06/introducing-the-netlogon-parser-v1.0.1-for-message-analyzer-1.1.aspx

Troubleshooting Basics for the Netlogon Parser (v1.0.1) for Message Analyzer (By: Brandon Wilson)

http://blogs.technet.com/b/askpfeplat/archive/2014/11/10/troubleshooting-basics-for-the-netlogon-parser-v1-0-1-for-message-analyzer.aspx

Quick Reference: Troubleshooting Netlogon Error Codes (By: Brandon Wilson)

http://blogs.technet.com/b/askpfeplat/archive/2013/01/28/quick-reference-troubleshooting-netlogon-error-codes.aspx

Quick Reference: Troubleshooting, Diagnosing, and Tuning MaxConcurrentApi Issues (By: Brandon Wilson)

http://blogs.technet.com/b/askpfeplat/archive/2014/01/13/quick-reference-troubleshooting-diagnosing-and-tuning-maxconcurrentapi-issues.aspx

Message Analyzer Forum

http://social.technet.microsoft.com/Forums/en-US/home?forum=messageanalyzer

Message Analyzer blog site

http://blogs.technet.com/MessageAnalyzer

Just to recap: please send us any suggestions or problems you identify through the comments below, the Message Analyzer forum, via email to MANetlogon@microsoft.com, or using the integrated feedback button in Message Analyzer as seen below!

image

Thanks, and talk to you folks next time!

-Brandon Wilson


Mailbag: Superbowl Superbag (Issue #6)


Hey y'all, Mark, Tom and Lakshman are back for another mailbag. All of our NFL teams are out of the playoffs, but I promise you this mailbag meets the proper inflation requirements for a blog post. Sorry, Boston, I had to. OK, let’s jump into it.

 

Web Application Proxy Cert Renewal

Static IPs on Azure

Adding Additional O365 Directory in Azure

Workstation Deployments with RODCs

Stuff from the Interwebs

 

Question

I followed your ADFS series and my Web Application Proxy cert is going to expire. I don’t see a GUI way to update the cert like in ADFS. How do I do this?

Answer

In the ADFS server you have this nice menu.

image

But nothing today for WAP in the Remote Access panel.

image

We'll need to turn to PowerShell. First run the "Get-WebApplicationProxySslCertificate" command to get the current certificate hash.

image

Then we will want to copy the new cert (if you got it from a 3rd party) into the Local Machine Personal certificate store. Then get the thumbprint for this new cert from the Details tab.

image

Then run the Set-WebApplicationProxySslCertificate -Thumbprint "NewThumbprintWithNoSpaces"

image

Then re-run our first command to verify the correct thumbprint is listed.

image

That's all there is to it.
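
Putting the whole renewal together in one pass, here is a minimal sketch (the thumbprint is a placeholder, and it assumes the new certificate is already imported into the Local Machine Personal store):

# Check the currently bound certificate
Get-WebApplicationProxySslCertificate

# Bind the new certificate by thumbprint (no spaces)
Set-WebApplicationProxySslCertificate -Thumbprint "0123456789ABCDEF0123456789ABCDEF01234567"

# Verify the new thumbprint is now listed
Get-WebApplicationProxySslCertificate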

Question

Can I specify a static internal IP address for a VM within Azure IaaS? If so how?

Answer

The answer is yes. One of the more recent additions (or enhancements, if you will) to Azure IaaS is the ability to specify a static IP address for an Azure VM. Prior to this enhancement, server roles (such as domain controllers that typically use static IP addresses) deployed in Azure IaaS could only use dynamic IP addresses, albeit with an extremely large lease lifespan in excess of 130 years. There are essentially two steps to assigning a static IP address:

  1. Ensure that the IP address is actually available in the virtual network using the Test-AzureStaticVNetIP PowerShell cmdlet
  2. Assign a static IP address using the Set-AzureStaticVNetIP cmdlet

The following link discusses how to assign a static IP address to a newly created VM or to assign one to a previously created VM.
Configure a Static Internal IP Address for a VM

http://msdn.microsoft.com/en-us/library/azure/dn630228.aspx
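
Here is a minimal sketch using the Azure (Service Management) PowerShell module; the virtual network, cloud service, VM name, and IP address are all placeholders:

# 1. Confirm the address is free in the virtual network
Test-AzureStaticVNetIP -VNetName "ContosoVNet" -IPAddress 10.0.0.10

# 2. Assign it to an existing VM and push the update
Get-AzureVM -ServiceName "ContosoSvc" -Name "DC01" |
    Set-AzureStaticVNetIP -IPAddress 10.0.0.10 |
    Update-AzureVM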

-Lakshman Hariharan


Question

Is there a way I can view my O365 directory in my Azure Portal with my Microsoft account?

Answer

Yes! This one was troubling for me as well. The AD team blogged about this a while ago. Follow the second example and you should be all set. Subscribe to their blog while you are at it.

 

Question

My workstation deployments are done by a vendor in a remote location where we only have an RODC. How can we join workstations to the domain without opening up the firewalls to permit RWDC access?

Answer

You'll need to pre-create the computer objects on an RODC and use a script for the deployment. Ingolfur has a detailed article about the requirements here: http://blogs.technet.com/b/instan/archive/2008/08/13/troubleshooting-rodc-s-troubleshooting-domain-joins-against-rodc-s.aspx

 

Stuff from the Interwebs

-The X-files might be coming back. Nerds everywhere rejoice and cringe at the same time. The truth is still out there…on Netflix streaming so go catch up.

-Marvel has figured out a way to make the universe even more confusing with Secret Wars.

-Next Tuesday (Jan 27th) myself (Mark) and fellow nerd friends will be seeing Neil DeGrasse Tyson, which I'm sure will be awesome. Tom is going Wednesday (Jan 28th) in Detroit. Say hi if you see us there. If you live in the USA, do yourself a favor and see if he's coming to a city near you.

-Finally, two of our own are leaving the PFE ranks and joining the Surface product group. I want to send a big good luck to Joao Botto and Milad Aslaner. Don't forget our little blog all the way in Redmond. The phrase "The fox is in the hen house" comes to mind with those two. Good luck guys!

Mark 'made of stardust' Morowczynski, Tom 'More Scully than Mulder' Moser,  and Lakshman 'secret superhero' Hariharan

ADFS Deep Dive: Certificate Planning


The last blog was about planning for ADFS and what questions you should be asking when deploying it.

http://blogs.technet.com/b/askpfeplat/archive/2014/11/24/adfs-deep-dive-planning-and-design-considerations.aspx

I said that the next blog would be about what conversations and questions you should have with the application owners. After some thought, I’ve changed my mind and decided to write about certificate planning. During almost every ADFS deployment I’ve been a part of, most of the conversations and planning revolve around certificates, so I figured we should take some time to talk about this. ADFS relies heavily on public/private key certificates, so if you’re not already familiar with certificates, deploying ADFS will quickly get you re-acquainted. Like I’ve mentioned before, ADFS is a service that will need to grow with your organization’s needs, so proper planning is also required for certificates to ensure they will meet your growing needs and requirements.

The funny thing about certificates is that almost anything goes. For example, installing ADFS is really black and white – you either install it or you don’t. With certificates, there are so many options for deploying them that many customers forget the basics about public/private certificate signing and encryption. Like most things, certificates are 90% planning and 10% execution.

ADFS requires three certificates to be properly installed.

  • SSL certificate
  • Token Signing Certificate
  • Token “Decryption” Certificate

You can use the same certificate for all three of these purposes or use separate certificates for each purpose; the choice is really yours. You can acquire them from an internal CA, a public CA, or use the self-signed ones. Remember I said that all these options befuddle many customers. Smile

 

Misconceptions

Before we dive into more details about each of these certificates and requirements, let’s first clear up some misconceptions about how these certificates work. To properly do this, we’ll have to take a step back and refresh our memory on how public/private certificates work.

Digital Signatures

When we want to digitally sign tokens, we will always use the private portion of our token signing certificate. When a partner or application wants to validate the signature, they will have to use the public portion of our signing certificate to do so. This doesn’t protect the data from someone viewing it but does ensure that if the data gets modified somehow, the signature verification will fail. What this means is that each ADFS server will only have one digital signature certificate. Digital signatures are required for ADFS.

image

Key Takeaway: The token signing certificate is considered the bedrock of security in regards to ADFS. If someone were to get ahold of this certificate, they could easily impersonate your ADFS server.

Mega Takeaway: What this also means is that every SaaS application must have a copy of the public portion of your ADFS token signing certificate.

 

Digital Encryption

When we want to encrypt something, we will always use the public portion of our partner’s encryption certificate. When that partner needs to decrypt the data, they will have to use the private portion of their encryption certificate to do so.

image

Consequently, just because you see the token decryption certificate in the ADFS console under the certificates container doesn’t mean encryption of tokens is actually being performed. There is A LOT of confusion because of how the ADFS management console displays the self-signed certificates: it says Token-decrypting above the certificate, but the CN on the certificate says ADFS Encryption.

image

If you want to verify whether token encryption is enabled for a specific relying party application, you will have to go and look at the encryption tab on that specific relying party application. As you can see here, I don’t have an encryption certificate installed for this relying party application, so encryption will not be used.

image

 

Digital Decryption

I know what you’re thinking – We just covered digital encryption so why are we covering it in the reverse – decryption. Because in this scenario, rather than our ADFS server sending tokens to a SaaS application, we’re receiving tokens from a partner Identity Provider (IDP). I know this one seems obvious but many customers get hung up on this. When a partner wants to send us something encrypted, they will use the public portion of our “decryption” certificate. To decrypt it, we will use the private portion of our “decryption” certificate to do so. Since we are the only person that has the private key, no one else can decrypt the data.

image

Mega Takeaway: The easiest way to remember which certificate is being used is by asking yourself the following two questions:

  • What direction is the token flowing? Are we on the receiving end or the sending end?
  • Is the token being digitally signed or encrypted? 

If you’re sending a partner application a token, ADFS will use the private key of your token signing certificate and perhaps the public key of your partner’s token encryption certificate. If you’re receiving tokens from a third-party identity provider because you are a SaaS provider, they’ll be sending you tokens signed with their token signing certificate’s private key, and they’ll encrypt the token with the public portion of your token encryption certificate.

 

Now, with all that being said, who can explain the signature tab on each relying party application? Rarely does this ever get used but if I’m going to explain certificates, I might as well cover this one too:

image

If you remember from my blog back in November 2014, I compared sign-in protocols:

http://blogs.technet.com/b/askpfeplat/archive/2014/11/03/adfs-deep-dive-comparing-ws-fed-saml-and-oauth-protocols.aspx

In this blog, I showed you what a SAML Protocol Sign-In request looked like. Well, the above signature tab is used if the SaaS application provider also wants to digitally sign the SAML Sign-In request when the request is sent over to our ADFS server.

Why would you/they want to do this? Well, for the same reasons we digitally sign anything – to ensure the SAML request doesn’t get modified somehow. There typically isn’t anything really important in the SAML request, but there are times when the SaaS application owner or you may want to enforce a certain authentication type by hardcoding it into the SAML request. If they also digitally sign the SAML request, they can ensure no one modifies the request and bypasses what they’re trying to enforce.

BTW, it’s also possible to encrypt the SAML request but I don’t think I’m personally ever seen this done. In most cases when the SAML request is obfuscated, it’s just Base-64 encoded.

 

Drawing out the Requirements

Now that we’ve covered the basics of digital signatures and encryption, let’s cover the questions you should be asking about the certificates required for ADFS. Once you answer these questions, they myriad options for certificates that I mentioned above will become clear.

1.) Will external customers, partners, or non-domain-joined devices be using your ADFS service in some capacity?

Why is this important: This question leads to whether you should use publically issued SSL (think Verisign) certificates or not. Since you may not be able to anticipate how ADFS will be used in the future, I would just recommend publically issued SSL certificates every time.

 

2.) Will external customers, partners or your users need to access ADFS from outside the corporate network?

Why is this important: This leads to two important requirements:

  • whether you’ll need an ADFS Proxy/WAP server and corresponding SSL certificates.
  • whether you should pick a Common Name (CN) or Subject Alternative Name (SAN) on the certificate(s) that has a publically available DNS namespace.

For example, if you plan for users to access ADFS from outside the corporate network, you definitely wouldn’t want to pick adfs.root.local as the name on the SSL certificate since this name is not a publically routable DNS name. If you plan to publish an ADFS Proxy/WAP server to the internet, make sure that the name of the certificate is a publically routable DNS namespace.

 

3.) Will you be using Outlook or other thick client applications with O365 for Exchange Online?

Why is this important: If you remember from my first blog on ADFS back in August 2014, when thick clients like Outlook authenticate to O365, O365 will contact your ADFS Proxy/WAP to acquire a SAML token on behalf of the user. Consequently, if you plan to use Outlook with O365, the SSL certificate on your ADFS Proxy/WAP must be publically trusted. If the SSL certificate on the ADFS Proxy/WAP is not publically trusted, O365 will not be able to obtain a SAML token for users to access Exchange Online (EXO).

 

4.) If you must use an internal PKI infrastructure for the certificates, make sure your CA has the right certificate templates and other requirements in place.

Why is this important: If for some reason you must use internally-issued certificates from your own PKI infrastructure, the SSL certificate used by ADFS must have the Server Authentication Enhanced Key Usage (EKU). The token signing and token decryption certs can have any EKU. Make sure you have the right certificate templates published and you have rights to request said certificate templates.

 

5.) If you plan to use the self-signed certificate that ADFS generates for token signing and token decryption, are you a domain admin?

Why is this important: When you use the self-signed certificates for token signing and decryption, the private keys are stored in Active Directory in the following container:

CN=ADFS,CN=Microsoft,CN=Program Data,DC=domain,DC=com

Consequently, for the ADFS installation to install the private keys into this location, you must be a domain admin to install ADFS or have the appropriate rights assigned to this container.

 

6.) Do you have a policy about using Next Generation (NGen) certificates?

Why is this important: No version of ADFS supports Next Generation (NG) certificates. I was onsite with a customer helping them install and configure ADFS and couldn’t figure out why their SSL certificate wouldn’t work; they didn’t tell me they requested Next Generation certificates. Doh. Smile

 

7.) Will you be using the device registration feature in ADFS 2012 R2? If so, how many UPN suffixes do you have in your Active Directory forest?

Why is this important: Clients will enroll in device registration and/or workplace join by typing their UPN into the workplace join wizard on their device. If you have multiple UPN suffixes in your enterprise, all those UPNs must resolve to the ADFS Proxy/WAP servers. To achieve this, you will typically want to ensure the SAN on the SSL certificate on the ADFS Proxy/WAP server(s) contains all enterprise-wide UPN suffixes. For example, let’s say you have both contoso.com and fabrikam.com UPN suffixes in your enterprise. You’d want to ensure that both enterpriseregistration.contoso.com and enterpriseregistration.fabrikam.com are in the SAN of the SSL certificate installed on the ADFS Proxy/WAP server(s). Another option is to go with a wildcard SSL certificate.

 

8.) What name have you decided on for the ADFS service? Example – sts.domain.com or sso.domain.com? Have you ensured that the name isn’t already in use internally and publically?

Why is this important: If you must use a publically available DNS namespace for the SSL certificate name, ensure the name is not already in use somewhere else in the enterprise. Perform a DNS lookup internally and externally for the name you want to use before you purchase any certificates. While you’re at it, run a quick check to ensure the SPN isn’t already in use in the enterprise:

Get-ADObject -filter 'ServicePrincipalName -eq "Host/sts.domain.com"'

If the name isn’t already in use, unless your enterprise has specific naming conventions or policies, the name you pick for the SSL certificate is mostly arbitrary.

 

9.) Do you have any policy about using the same SSL certificates on the ADFS servers as you do the public-facing ADFS proxy/WAP servers?

Why is this important: While you can use the same SSL certificate across the ADFS servers and ADFS Proxy/WAP servers, some companies may have a security policy against using the same SSL certificate on the publically facing ADFS Proxy/WAP servers as on the internal ADFS servers. While the internal ADFS servers have to use the same SSL certificate, the ADFS Proxy/WAP servers can use separate certificates as long as the Common Name (CN) or Subject Alternative Name (SAN) on the SSL certificate contains the same ADFS service name.

 

10.) Do you have any policy that dictates the use of wildcard SSL certificates?

Why is this important: While wildcard SSL certificates are supported by all versions of ADFS, make sure their use complies with your corporate policy. I don’t typically recommend the use of wildcard certificates on the ADFS Proxy/WAP servers. If you have many UPN suffixes within the enterprise and plan to do device registration/workplace join, you may want to consider a wildcard certificate.

 

11.) Are you required to use a physical, virtual, or networked HSM for storing the certificates?

Why is this important: HSM devices are supported with ADFS but may cause performance issues since the private keys are stored on these devices, so make sure to test ADFS performance before going live.

 

12.) Do your monitoring devices support Server Name Indication (SNI)?

Why is this important: If your monitoring devices don’t support SNI and you’re using ADFS 2012 R2, then you will need to install the August 2014 Windows Update rollup so that you won’t need to modify the Subject Alternative Name (SAN) of all the ADFS certificates to ensure your HTTP probes work properly:

http://blogs.technet.com/b/applicationproxyblog/archive/2014/10/17/hardware-load-balancer-health-checks-and-web-application-proxy-ad-fs-2012-r2.aspx

 

13.) If you are onboarding SaaS applications to your ADFS infrastructure, will the SaaS application want the SAML token encrypted and if so, do you know whether their encryption certificate is publically trusted?

Why is this important: If token encryption needs to be enabled, make sure to get a copy of your SaaS partner’s public encryption certificate. Additionally, you’ll want to verify whether this is a publically trusted certificate because ADFS, by default, tries to check the validity of all encryption certificates. You can reconfigure ADFS to change this behavior by changing the EncryptionCertificateRevocationCheck property using the Set-ADFSRelyingPartyTrust PowerShell cmdlet.
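
For example, here is a sketch of relaxing the revocation check for a single relying party (the trust name is hypothetical):

Set-AdfsRelyingPartyTrust -TargetName "Contoso SaaS App" -EncryptionCertificateRevocationCheck None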

 

14.) If a partner Identity Provider (IDP) will be sending you SAML tokens, do you know whether their token signing certificate is publically issued?

Why is this important: Similar to the question above, by default, ADFS will check the validity of a partner’s token signing certificate. You can reconfigure ADFS to change this behavior by changing the SigningCertificateRevocationCheck property using the Set-ADFSClaimsProviderTrust PowerShell cmdlet.
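
For example, a sketch of relaxing that check for one claims provider trust (the trust name is hypothetical):

Set-AdfsClaimsProviderTrust -TargetName "Partner IDP" -SigningCertificateRevocationCheck None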

 

15.) Do all your ADFS servers and ADFS Proxy/WAP servers have outbound TCP 80 open to the internet to perform certificate revocation checking?

Why is this important: To perform certificate revocation checking, all ADFS and ADFS Proxy/WAP servers must have TCP 80 outbound access.

 

ADFS Server SSL Certificate Guidelines

All of the back-end ADFS servers must use the same SSL certificate. The ADFS configuration contains the thumbprint of the SSL certificate in its database, so the ADFS service across all servers will try to find the same certificate based on this thumbprint. If you need to confirm which SSL certificate needs to be installed on all the ADFS servers, compare the thumbprints on the certificates. All you have to do is install the same SSL certificate into the machine certificate store on all back-end ADFS servers, and this includes a wildcard SSL certificate if you plan to use one.
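
A quick way to compare thumbprints across farm nodes is a couple of lines of PowerShell; this is just a sketch (the service name filter is an assumption, and Get-AdfsCertificate requires ADFS on Windows Server 2012 R2 or later):

# Certificates in the machine store that match the ADFS service name
Get-ChildItem Cert:\LocalMachine\My | Where-Object { $_.Subject -like "*sts.domain.com*" } | Select-Object Subject, Thumbprint, NotAfter

# The service communications certificate ADFS is configured to use (compare this thumbprint on every node)
Get-AdfsCertificate -CertificateType Service-Communications | Select-Object Thumbprint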

Get a Publically Trusted SSL Certificate

You may have special circumstances, but I typically just recommend that customers acquire a publically trusted SSL certificate for use on all the ADFS servers. This ensures that all clients or partners that may use ADFS will inherently trust the SSL certificate. If you must use a SSL certificate from your internal CA, make sure it has the Server Authentication EKU.

Pick a solid ADFS Service Name and Confirm Uniqueness

Next, you’ll want to determine what to call your ADFS service. Whatever you pick, make sure the domain suffix you want to be put on the SSL (sts.domain.com) certificate is notalready in use. If you plan to expose your ADFS to the internet via the ADFS Proxy/WAP, you’ll have to pick a domain suffix that is a publically routable DNS name. Your ADFS SSL certificate doesn’t have to match your Active Directory forest or domain name.

Be aware of DRS Naming Guidelines

The only time naming restrictions come into play is when you plan to do device registration on mobile devices, since users will have to type in their UPN, which will need to resolve to the ADFS Proxy/WAP. So if you plan to do DRS, make sure the Common Name (CN) or Subject Alternative Name (SAN) contains all internal UPN suffixes.

After you’re done with these considerations, just pick a name – sts.domain.com, sso.domain.com, federation.domain.com, etc. It really doesn’t matter beyond the considerations above.

 

ADFS Proxy/WAP Server SSL Certificate Guidelines

While you could install the same SSL certificate on all of the ADFS Proxy/WAP servers as you did on your ADFS servers, I typically don’t recommend it. The ADFS Proxy/WAP servers are supposed to be installed into the DMZ and not domain joined, for security reasons.

Get a Publically Trusted SSL Certificate

Once again, I recommend you get a publically trusted SSL certificate for your ADFS Proxy/WAP servers.

Be Aware of Internally Issued SSL Certificate Caveats

If you installed an internally issued SSL certificate on your backend-ADFS servers, your ADFS Proxy/WAP servers, by default, won’t trust them. Consequently, you’ll have to either install the issuing CA certificate or the non-trusted SSL certificate into the Trusted Root certificate store on the Proxy/WAP servers so you can complete the installation wizard. The way to confirm whether the certificate is trusted is to open Internet Explorer on your Proxy/WAP server and navigate to the backend ADFS server and see whether you get any untrusted SSL prompts:

https://<sts.domain.com>/federationmetadata/2007-06/federationmetadata.xml

Get a Separate SSL certificate for your ADFS Proxy/WAP Servers

Due to security concerns with the ADFS Proxy/WAP server, I typically recommend that customers install a separate SSL certificate on their ADFS Proxy/WAP servers. The only thing you must ensure is that the Common Name (CN) or Subject Alternative Name (SAN) contain the same ADFS service name. You can install the same SSL certificate on all your ADFS Proxy/WAP servers though.

Don’t Install your Corporate Wildcard Certificate

I also don’t recommend you install your wildcard SSL certificate on your ADFS Proxy/WAP servers.

Be aware of DRS Naming Guidelines

Just remember that if you plan to do Device Registration for mobile devices, the Common Name (CN) and/or Subject Alternative Name (SAN) must contain all UPN suffixes within your enterprise. If you’re doing DRS, make sure to run the following PowerShell cmdlet on each ADFS Proxy/WAP to ensure it is aware of and responding for all enterprise UPN suffixes:

Add-AdfsDeviceRegistrationUpnSuffix –UPNSuffix Contoso.com

Add-AdfsDeviceRegistrationUpnSuffix –UPNSuffix fabrikam.com

Mark the SSL Certificate Private Keys as Non-Exportable

Being that the ADFS Proxy/WAP servers are public facing, I typically recommend that the SSL certificates’ private keys are marked as non-exportable.

 

Token Signing Certificate Guidelines

It’s OK to use the Self-Signed Token Signing Certificate

Out of the box, ADFS generates some self-signed certificates for the token signing certificate. These self-signed certificates, by default, are good for one year. The token signing certificate will be used every time that a user needs to gain access to an application.

It’s also OK to get a Token Signing Certificate from your internal CA

You can also get a certificate from your internal CA, and the Enhanced Key Usage (EKU) on the certificate does not matter.

But be willing to get a Publically Trusted Token Signing Certificate

If the applications that you plan to federate with will perform certificate revocation on the public portion of your token signing certificate, I would just recommend you use a publically trusted certificate from VeriSign or another trusted certificate issuer.

Be aware of the Security Ramifications of BYOC

If you plan to not use the self-signed token signing certificate and bring your own certificates from your internal CA or get a publically trusted certificate, be aware that the private keys are no longer stored in Active Directory and are just installed into the computer certificate store on each ADFS server. You will want to restrict who can log onto the ADFS servers and may want to restrict the private keys from being exported as well.

Consider Extending the Validity Period

The self-signed certificates, by default, are good for one year. Most customers get SSL certificates that are good for 3 or more years. If you plan to use the self-signed token signing certificate, this means you’ll have to send the new public portion over to all SaaS applications - with the default setting, you’ll have to do this every year. Something you may want to consider is extending the validity period on the self-signed certificates to match your SSL certificate, so you can update them on the same schedule. To do this, you can run the following PowerShell to change the validity period on the self-signed certificates:

Set-AdfsProperties -CertificateDuration integer-number-in-days

Example for a 3-year certificate duration

Set-AdfsProperties -CertificateDuration 1095

If you want to force ADFS to immediately generate new self-signed certificates, you can run the following. Ensure that you only run this when you plan to change over your certificates (like a weekend). You’ll also have to send out the new token signing certificate to all relying party application owners.

Update-AdfsCertificate –Urgent

Have a Solid Plan when the Token Signing Certificate is Changing

When the token signing certificate needs to change, have a well-documented plan about how you’re going to notify the owners of all of your relying party applications. There is no way around this, as it’s just the nature of the beast. Tell them that if they don’t update to the new token signing certificate, users will fail to gain access to the application. If you are federated with O365 and have updated the token signing certificate, you can run the following to update the configuration in O365:

Update-MSOLFederatedDomain –Domain domain.com

Or use the following script to update the configuration in O365 with the new token signing certificate:

https://gallery.technet.microsoft.com/scriptcenter/Office-365-Federation-27410bdc

Rely on the Federation Metadata, if possible

Most SAML applications don’t support updating their configuration based on the federation metadata, but ask the vendor whether they support it. As long as you have a publically available ADFS Proxy/WAP, give them the following URL:

https://<sts.domain.com>/federationmetadata/2007-06/federationmetadata.xml

Be Aware of the Token Format Required for each Application

If the application is based on the Windows Identity Foundation (WIF) like SharePoint or other .Net web applications, all they need is the thumbprint of the token signing certificate. Just be aware that copying the thumbprint for a certificate can be tricky because there is a hidden character at the beginning, which won’t even display in notepad and may render the thumbprint invalid. A way to test whether this hidden character is present is to paste the thumbprint in a command prompt. As you can see here, the ? represents the hidden character:

image

Make sure to delete all spaces when sending the thumbprint to the application owners, like this:

‎8c7410dfdfbf4dd9cae23351a27ad86b5be42476

Some SAML relying parties will claim to need the token signing certificate in .PEM format. You can achieve this by exporting the token signing certificate as Base-64 Encoded X.509 (.Cer):

image
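
If you’d rather script the thumbprint cleanup and the Base-64/PEM export, here is a sketch (the output path is a placeholder, and it assumes the ADFS PowerShell module is available):

# Strip spaces and any hidden characters from a thumbprint copied out of the certificate MMC
$thumbprint = "8c 74 10 df df bf 4d d9 ca e2 33 51 a2 7a d8 6b 5b e4 24 76"
($thumbprint -replace '[^0-9a-fA-F]', '').ToLower()

# Export the primary token signing certificate in Base-64 (PEM) form
$cert  = (Get-AdfsCertificate -CertificateType Token-Signing | Where-Object IsPrimary).Certificate
$bytes = $cert.Export([System.Security.Cryptography.X509Certificates.X509ContentType]::Cert)
"-----BEGIN CERTIFICATE-----`r`n" + [Convert]::ToBase64String($bytes, 'InsertLineBreaks') + "`r`n-----END CERTIFICATE-----" |
    Set-Content C:\Temp\adfs-tokensigning.pem -Encoding Ascii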

Don’t use the SSL certificate as your Token Signing Certificate

I’ve seen customers actually do this to simply their deployment but I don’t recommend this.

 

Token Decryption Certificate Guidelines

These guidelines are actually just a subset of the token signing guidelines from above.

It’s OK to use the Self-Signed Token Decryption Certificate

Out of the box, ADFS generates some self-signed certificates for the token decryption certificate. These self-signed certificates, by default, are good for one year. Unless you have partner Identity Providers (IDP) sending you tokens that require encryption, this certificate will rarely be used.

It’s also OK to get the Token Decryption Certificate from your internal CA

You can also get a certificate from your internal CA, and the Enhanced Key Usage (EKU) on the certificate does not matter.

But be willing to get a Publically Trusted Token Decryption Certificate

If you have partner IDP’s sending you tokens that require encryption and they plan to also perform certificate revocation checking on the public portion, I would just recommend you use a publically trusted certificate from VeriSign or another trusted certificate issuer.

Be aware of the Security Ramifications of BYOC

If you plan to not use the self-signed token decryption certificate and bring your own certificates from your internal CA or get a publically trusted certificate, be aware that the private keys are no longer stored in Active Directory and are just installed into the computer certificate store on each ADFS server. You will want to restrict who can log onto the ADFS servers and may want to restrict the private keys from being exported as well.

Don’t use the SSL certificate as your Token Decryption Certificate

I’ve seen customers actually do this to simply their deployment but I don’t recommend this.

 

Dave “?“ Gregory

As you know, on this blog, we assign ourselves fun middle names after each blog. Can anyone figure out what my middle name is? There is a clue in the blog. +10 points for the first one to figure it out. Smile

How to Restrict DNS Zone Scavenging When Hosting Multiple Zones on Multiple Servers


 

Dougga here – PFE (or “poofy” as one of my customers likes to call us). The DNS scavenging topic never dies - bear with me and I will reveal a not so obvious configuration to control which servers can scavenge a zone.

Let's go with a simple multi-domain forest named Contoso.com that has 3 child domains and AD integrated DNS configured to replicate as shown in the table below, and try to not have more than 1 or 2 scavenging servers per DNS zone.

 

Domain                DNS replication scope            Scavenging server
Contoso.com           Domain replicated in contoso     ContosoDC1
_msdcs.contoso.com    Forest replicated                ContosoDC1
Child1.contoso.com    Domain replicated in child1      Child1DC1
Child2.contoso.com    Domain replicated in child2      Child2DC1
Child3.contoso.com    Domain replicated in child3      Child3DC1

In this example, each child zone has a scavenging server, and since the _msdcs.contoso.com zone is replicated forest wide, that zone will have a total of 4 scavenging servers. This breaks the goal of only 1 or 2 scavenging servers per zone. If you have more domains, or DNS zones stored in custom application partitions, this only gets worse.

In a complex environment, you may not be able to prevent over-scavenging by just using the GUI. By now you hopefully have read other posts on scavenging covering the basics; if you haven't, take a few minutes to review them. I am going to cover a setting that is not in the GUI that gives us a way to solve the problem.

How to configure the setting

This is a zone setting (so it will be replicated) that is configured using DNSCMD or PowerShell.

DNSCMD <Server> /ZoneResetScavengeServers <DNS zone> <IP address(es)>
Set-DnsServerZoneAging <DNS zone> -ScavengingServers <IP address(es)>

The normal configuration of setting up zone aging and choosing a scavenging server still must be done (see the link above). By default, any DNS server hosting the zone can scavenge it if that server is configured to scavenge in the properties of the DNS server in the DNS management console. So, let’s start digging into how this works.

In this case I am showing you how to prevent the child domain controllers/DNS servers from scavenging _msdcs.contoso.com.

What does the default look like?

To see what is currently configured, use DNSCMD or PowerShell. I will be showing both DNSCMD and the equivalent PowerShell 4.0 commands.

DNSCMD /zoneinfo _msdcs.contoso.com
Get-DnsServerZoneAging _msdcs.contoso.com

<DNSCMD>

C:\> dnscmd /zoneinfo _msdcs.contoso.com

Zone query result:

Zone info:
ptr = 000000F9FE14D110
zone name = _msdcs.contoso.com
zone type = 1
shutdown = 0
paused = 0
update = 2
DS integrated = 1
read only zone = 0
in DS loading queue = 0
currently DS loading = 0
data file = (null)
using WINS = 0
using Nbstat = 0
aging = 1
refresh interval = 168
no refresh = 168
scavenge available = 3629660
Zone Masters NULL IP Array.
Zone Secondaries NULL IP Array.
secure secs = 3
directory partition = AD-Forest flags 00000019
zone DN = DC=_msdcs.contoso.com,cn=MicrosoftDNS,DC=ForestDnsZones,DC=contoso,DC=com
Command completed successfully.

<END OF DNSCMD OUTPUT>

<POWERSHELL>

C:> Get-DnsServerZoneaging _msdcs.contoso.com

ZoneName : _msdcs.contoso.com
AgingEnabled : True
AvailForScavengeTime : 1/21/2015 8:00:00 AM
RefreshInterval : 7.00:00:00
NoRefreshInterval : 7.00:00:00
ScavengeServers :

<END OF POWERSHELL OUTPUT>

Making the change - what it looks like if it is restricted

In my example the IP address of ContosoDC1 is 192.168.2.52.

You can restrict which servers are allowed to scavenge by using DNSCMD or PowerShell.

DNSCMD /ZoneResetScavengeServers _msdcs.contoso.com 192.168.2.52
Set-DnsServerZoneAging _msdcs.contoso.com -ScavengingServers 192.168.2.52

C:\> dnscmd /zoneinfo _msdcs.contoso.com

Zone query result:

Zone info:
ptr = 000000FAC8E2D100
zone name = _msdcs.contoso.com
zone type = 1
shutdown = 0
paused = 0
update = 2
DS integrated = 1
read only zone = 0
in DS loading queue = 0
currently DS loading = 0
data file = (null)
using WINS = 0
using Nbstat = 0
aging = 1
refresh interval = 168
no refresh = 168
scavenge available = 3629660
Zone Masters NULL IP Array.
Zone Secondaries NULL IP Array.
secure secs = 3
directory partition = AD-Forest flags 00000019
zone DN = DC=_msdcs.contoso.com,cn=MicrosoftDNS,DC=ForestDnsZones,DC=contoso,DC=com
Scavenge Servers

Ptr = 000000FAC8E2EAF0
MaxCount = 1
AddrCount = 1
Server[0] => af=2, salen=16, [sub=0, flag=00000000] p=0, addr=192.168.2.52

Command completed successfully

<END OF DNSCMD OUTPUT>

<POWERSHELL>

C:\> Get-DnsServerZoneaging _msdcs.contoso.com

ZoneName : _msdcs.contoso.com
AgingEnabled : True
AvailForScavengeTime : 1/21/2015 8:00:00 AM
RefreshInterval : 7.00:00:00
NoRefreshInterval : 7.00:00:00
ScavengeServers : 192.168.2.52

<END OF POWERSHELL OUTPUT>

Resetting to Default - If no IP addresses are defined, any DNS server can scavenge

To reset to the default to allow any server to scavenge, the IP address(es) need to be removed. This can be done with DNSCMD or PowerShell.

DNSCMD /ZoneResetScavengeServers _msdcs.contoso.com
Set-DnsServerZoneAging _msdcs.contoso.com -ScavengingServers $NULL

What does scavenging look like when this is configured correctly?

When a DNS server attempts to scavenge, because you triggered it or because it is scheduled, event ID 2502 is logged. For scavenging to actually delete stale DNS records, these conditions need to be met:

1) Server properties of a DNS server configured to scavenge.

AND

2) Zone configured to age records

AND

3) Records in the zone are stale (greater than no-refresh and refresh combined)

AND

4) Non-GUI Configuration

a. By default, all zones are configured to allow all DNS servers hosting the zone to scavenge.

OR

b. If scavenging servers have been restricted, the IP address configured on the zone matches the IP address of the server performing the scavenging (remember, this is not visible in the GUI).

When this should be used

Use this setting to keep a particular zone from being over-scavenged, for example a forest replicated zone hosted by multiple DNS servers that also host other DNS zones that are not forest replicated.

When this should NOT be used

This should not be used in environments that do not need to minimize the number of scavenging servers. A perfect example is a single domain with a single DNS namespace. Another example would be if all DNS zones are replicated in the same scope, such as forest replicated zones.

Risks

Be careful to understand this configuration because it can get confusing. If an administrator looks only at the GUI configuration, they may be left wondering why the zone is not scavenging. This has been the issue in several cases I have helped with.

Nothing in the GUI indicates that this is configured, so it is very easily missed and likely unknown.

If scavenging is moved to a different server, or the IP address of the scavenging server changes, scavenging will quietly stop doing its job. This will drive you crazy because everything will look fine.
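Because the GUI never shows the scavenge server restriction, a quick way to audit it is to query every zone with PowerShell. Here is a minimal sketch, assuming the DnsServer module (Windows Server 2012 or later) and using ContosoDC1 from the example above:

# List every primary zone on the server along with its aging state and any scavenge server restriction.
Get-DnsServerZone -ComputerName ContosoDC1 |
    Where-Object { $_.ZoneType -eq 'Primary' -and -not $_.IsAutoCreated } |
    ForEach-Object { Get-DnsServerZoneAging -Name $_.ZoneName -ComputerName ContosoDC1 } |
    Select-Object ZoneName, AgingEnabled, ScavengeServers

Any zone that shows a value under ScavengeServers will only be scavenged by a server with a matching IP address.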

Dougga "You know how to pronounce it" Gabbard

Mailbag: Happy Birthday Ronald Reagan (Issue #7)


Mark and Tom here again with Mailbag Issue #7!

We’re light on overall questions, but in-depth on one of them this week. Let’s get to it!

RODC DNS Records

Johnny Mnemonic

Site Topology

From The Interwebs

  

Question

When I run nslookup contoso.com my RODCs don't show up in the list of DNS records! What gives?! Should I add them?

Answer

NO! Well, probably not. This is by design. RODCs, by default, don't register any of the generic DNS mnemonics. The reason for this is that RODCs generally exist to serve branch locations where physical security is a concern. You probably wouldn't want apps that aren't DCLocator aware (it's 2015 and you don't support DCLocator, Mr. AppDev?) to bind to your domain name to find a DC and discover an RODC in a remote branch. That could cause a slew of issues ranging from WAN saturation and RODC performance problems to application compatibility issues between the app and the RODC.

This configuration is something we recommend even for RWDCs as part of the branch office deployment guide. Also make sure to check out the RODC BODG (SRSLY). In most cases, you should just leave the default behavior alone.

 

 

Question

OK. So what's a DNS mnemonic?

Answer

It's got nothing to do with the Yakuza, a data package, or Keanu Reeves. DNS mnemonics are all of the various "types" of DNS records that your domain controllers register to provide specific services to sites or the domain in general. The list is (from the BODG):

Mnemonic            Type    DNS Record
Dc                  SRV     _ldap._tcp.dc._msdcs.<DnsDomainName>
DcAtSite            SRV     _ldap._tcp.<SiteName>._sites.dc._msdcs.<DnsDomainName>
DcByGuid            SRV     _ldap._tcp.<DomainGuid>.domains._msdcs.<DnsForestName>
Pdc                 SRV     _ldap._tcp.pdc._msdcs.<DnsDomainName>
Gc                  SRV     _ldap._tcp.gc._msdcs.<DnsForestName>
GcAtSite            SRV     _ldap._tcp.<SiteName>._sites.gc._msdcs.<DnsForestName>
GenericGc           SRV     _gc._tcp.<DnsForestName>
GenericGcAtSite     SRV     _gc._tcp.<SiteName>._sites.<DnsForestName>
GcIpAddress         A       _gc._msdcs.<DnsForestName>
DsaCname            CNAME   <DsaGuid>._msdcs.<DnsForestName>
Kdc                 SRV     _kerberos._tcp.dc._msdcs.<DnsDomainName>
KdcAtSite           SRV     _kerberos._tcp.<SiteName>._sites.dc._msdcs.<DnsDomainName>
Ldap                SRV     _ldap._tcp.<DnsDomainName>
LdapAtSite          SRV     _ldap._tcp.<SiteName>._sites.<DnsDomainName>
LdapIpAddress       A       <DnsDomainName>
Rfc1510Kdc          SRV     _kerberos._tcp.<DnsDomainName>
Rfc1510KdcAtSite    SRV     _kerberos._tcp.<SiteName>._sites.<DnsDomainName>
Rfc1510UdpKdc       SRV     _kerberos._udp.<DnsDomainName>
Rfc1510Kpwd         SRV     _kpasswd._tcp.<DnsDomainName>
Rfc1510UdpKpwd      SRV     _kpasswd._udp.<DnsDomainName>

And if you enter any of those under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters\DNSAvoidRegisterRecords, delimited by newlines, your domain controller will no longer register those specific mnemonics. For example, if you don't want a specific DC to be globally available as a KDC, you'd set that value to contain Kdc. The domain controller will deregister its _kerberos._tcp.dc._msdcs.contoso.com SRV record and no longer be discoverable as a KDC domain-wide. Only clients in the same site will find it. You can also set these in Group Policy as outlined, again, in the Branch Office Deployment Guide.
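For illustration, here is a minimal PowerShell sketch of setting that value on a DC, assuming you only want to suppress the Kdc mnemonic (the value is a REG_MULTI_SZ, one mnemonic per line):

# Add the mnemonic(s) to suppress, then restart Netlogon so the DC re-registers its records with the exclusion applied.
$params = 'HKLM:\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters'
Set-ItemProperty -Path $params -Name DnsAvoidRegisterRecords -Type MultiString -Value @('Kdc')
Restart-Service Netlogon

As noted above, the supported way to do this at scale is the corresponding Group Policy setting rather than direct registry edits.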

 

Question

I've seen links to guides to sizing hardware for AD, but I couldn't find any good design examples for AD. Could you possibly post examples of designs from real world deployments as a yardstick example or minimum recommended based on size or scale of implementation?

Answer

Since we're skipping the hardware sizing discussion, which you can learn all about here, let's talk about design. And let me start off with a big, fat it depends. Here are some questions you should probably ask when designing your site topology. We don't have a lot of real world examples to post, since our customers get a little weird about posting their site topologies on the blog (whatever that's all about), but we can talk about some discussions we would typically have while planning.

Do I need a separate AD site?

This TechNet reference says it a lot better than I will:

A site is defined as a set of IP subnets connected by fast, reliable connectivity. As a rule of thumb, networks with LAN speed or better are considered fast networks.

Have another location that isn't part of the LAN? Make it a site. Usually this will result in a hub-and-spoke topology where all of these remote branch sites connect back to a headquarters location. Multi-hub-and-spoke topologies are not uncommon, though.

Do I need a DC in each site?

This is a good place to start. Identify all of the sites in your organization that require a domain controller. Generally, these would be branch offices or sites that are physically separate from your datacenter site. A good example of this would be a headquarters in Detroit, where you've got a bunch of users and a local datacenter. The users next door to the datacenter quite likely don't need a domain controller in their building. However, those remote users and servers in the Las Vegas office would greatly benefit from having a local domain controller. Benefits of a local DC include the ability to withstand a WAN outage, as well as performance improvements due to lower latency. That said, I have some customers who put a DC in every single remote site, and those who pick and choose based on business requirements. That brings us to the next question…

How many DCs do I need in each site?

I'm just going to keep this on the clipboard now… but, it depends. You should always shoot for at least two just in case one goes down. If that's a hard sell to management, you need to decide if failing over to the next site over the WAN link is good enough. If you're connected with reliable, high speed, low latency links, that might be good enough.

The other half of this question depends on how many users you have in the site. A big campus with 25,000 users is going to need many more DCs than a branch office with 100 users. I can't tell you how many you'll need. You need to use Performance Monitor, measure performance, and add capacity if needed. Remember, if you have two servers at 50% capacity and one dies, you're going to have a real problem on your hands. Make sure to maintain enough available capacity so that if one, or even two, nodes go offline you still have enough capacity to service all of your clients. Also keep in mind that developers and users aren't always nice to AD and like to do things like run super-generic LDAP queries against the entire directory. Nobody likes it when you query an OU with 500,000 objects with something like (samaccountname=hahahaha) as your LDAP filter. Stop it.

How many global catalogs do I need?

This is a subject for a different day, but in the vast majority of situations you should just make all of your domain controllers a global catalog and call it a day. In a future mailbag we'll talk about some non-GC scenarios. In four years at Microsoft and seeing dozens of customer environments, I've seen one situation where the customer had a legitimate reason not to make every DC a GC (dougga knows who I'm talking about!). Just check the box. Or, rather, don't uncheck the box during DCPromo / Install-ADDSDomainController.

How about DNS?

Well, if you're running DNS on your DCs, you're in luck. You'll have nicely distributed, multi-master DNS. If you aren't running Microsoft DNS, and are running something third-party, you'll need a DNS server in every site where a DC exists. Active Directory depends on DNS. No DNS, nobody can find a DC, DCs can't find each other, and everybody is generally unhappy.

So back to your original question, it depends.

Stuff from the Interwebs

Tom "Detroit" Moser vs. Mark "Everybody" Morowczynski 


 

How to Reduce the Size of the WinSxS directory and Free Up Disk Space on Windows Server 2012 R2 and Windows 8.1 or do we even need to?


When discussing a specific .NET framework issue a few months back, several people commented that they were unable to uninstall an update as the new /resetbase command was run against the image after the update was already installed.

So what is this command? Why were they unable to uninstall this update? What other new servicing enhancements were added to Windows 8.1 and Windows Server 2012 R2? Keep reading to find out.

Let’s start by discussing the latter question in the title of this blog. Do we still need to clean up the WinSxS directory?

In short, maybe.

The operating system will now automatically do it for you and you do not have to do anything, but if you want to, you still can. What do I mean by automatically doing it for you? Check out this greatness:


Yes, you are seeing things correctly. That is a scheduled task built in to Windows Server 2012 R2 and Windows 8.1 to automatically clean up the component store.

What's the component store? It's that "pesky" and "misunderstood" WinSxS directory everyone on Windows Server 2008 R2 and Windows 7 complained about that took up too much space. For background information on WinSxS and the need to clean up the WinSxS directory to free up disk space in previous versions of Windows, see my prior posts:

How to Clean up the WinSxS Directory and Free Up Disk Space on Windows Server 2008 R2 with New Update:
http://blogs.technet.com/b/askpfeplat/archive/2014/05/13/how-to-clean-up-the-winsxs-directory-and-free-up-disk-space-on-windows-server-2008-r2-with-new-update.aspx

Breaking News! Reduce the size of the WinSxS Directory and Free up Disk Space with a New Update for Windows 7 SP1 Clients:
http://blogs.technet.com/b/askpfeplat/archive/2013/10/07/breaking-news-reduce-the-size-of-the-winsxs-directory-and-free-up-disk-space-with-a-new-update-for-windows-7-sp1-clients.aspx

How to Reduce the Size of the Winsxs directory and Free Up Disk Space on Windows Server 2012 Using Features on Demand:
http://blogs.technet.com/b/askpfeplat/archive/2013/02/24/how-to-reduce-the-size-of-the-winsxs-directory-and-free-up-disk-space-on-windows-server-2012-using-features-on-demand.aspx

But back to the scheduled task.

30 days after installing an update or hotfix, we automatically kick off this bad boy to remove previous versions of the updated files. Is that greatness or what? And so easy! You could literally just let Windows do its job and safely know that the WinSxS directory isn’t going to chow down on all your free disk space! We automatically clean things up for you!

You can kick it off manually anytime by running the scheduled task. By default, it runs for an hour. However, what if it doesn’t complete? Well, it will pick back up where it left off the next time or you could also kick it off via command line by running the following command from an administrative command prompt:

Dism.exe /online /Cleanup-Image /StartComponentCleanup
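If you would rather trigger the built-in scheduled task itself from a command line, a minimal sketch (assuming the default task location) would be:

schtasks.exe /Run /TN "\Microsoft\Windows\Servicing\StartComponentCleanup"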

For those of you with the Desktop Experience installed or on Windows 8.1, you can still use the Disk Cleanup Wizard as well and select the Clean up system files button.

If you run this and check the scheduled task afterwards, you’ll notice that the last run time for the StartComponentCleanup task was approximately the same time clean up system files was kicked off from the Disk Cleanup Wizard.

So that’s a start, but what else has Microsoft done?

Compression of Unused Binaries

Well, for starters, we now compress any unused binaries in the component store. That means that we compress all those features and roles you haven't installed, but that are there in case you decide to install them at any point in the future. You can still remove these in Windows 8.1 or Windows Server 2012 R2 using Features on Demand.

Want to reduce the size even further and cleanup even more?

/ResetBase

This is a great command added with Windows 8.1 and Windows Server 2012 R2. Essentially, it's the mother of all commands. It cleans up and removes all the old superseded stuff from every component in the component store.

Knowledgeable engineers focusing on reducing the size of their images often run this command to tidy up prior to rolling the image into production. It’s a great thing and really does have an impact. However, after running the /resetbase command, all existing updates cannot be uninstalled. It doesn’t block the uninstallation of future updates that are installed after running this command, but all prior updates are made permanent and cannot be removed. The command is as follows:

Dism.exe /online /Cleanup-Image /StartComponentCleanup /ResetBase

/AnalyzeComponentStore

If you would like to see what impact this command has, you can run the following command to display the “true” size prior to cleanup:

Dism.exe /online /Cleanup-Image /AnalyzeComponentStore

What all does this mean? Most of it is self-explanatory, but here’s a quick rundown:

  • Windows Explorer Reported Size of the Component Store – As you may have guessed, this shows the size File Explorer reports. As seen above, it's not completely accurate. This is due to the operating system's use of hard links.

  • Actual Size of the Component Store – This is the true size of the component store.

  • Shared with Windows – This is the size of files that are hard-linked into the Windows installation itself; this space would be consumed by Windows whether or not the component store existed.

  • Backups and Disabled Features – This is the size of the previous versions we store in the component store, as well as the binaries of any roles or features you may wish to install in the future.

  • Cache and Temporary Data – Just as it sounds.

As shown above, as part of the analysis, it will even tell you whether component store cleanup is recommended. In this case, it is recommended.

Now that we know the true size, let’s run the cleanup and check our results. For this first run, we’re going to use the following command which is what Windows does automatically behind the covers for you with the scheduled task:

Dism.exe /online /cleanup-image /StartComponentCleanup

It does take a while. Be patient. But the good news is, it doesn’t require a reboot. After it completes, check it again:

A little over 2GB smaller! Nice!

Now, what if we reset the base?

A little bit more, but nothing drastic.

Now, if you don't want to go to this degree and make all your hotfixes, security updates, etc. permanent, you can simply let the scheduled task do its job or run /StartComponentCleanup without the /resetbase switch. This will clean up previous versions of the updates installed, but still allow you to uninstall a security update or hotfix if needed. Just keep in mind that when you uninstall an update after the cleanup, you don't have the prior version to roll back to; instead you will roll back further, potentially even back to RTM.

In my .NET post, if you've read through the comments, several users ran /resetbase before encountering problems. What options do they have at this point? In short, they need to use an updated source. Check back in a couple of weeks when we'll discuss all the available options for sources and how to keep that source up to date.

Until then,

Charity “Keep up with the cleanup” Shelbourne

KRBTGT Reset Script Now Available at the Script Gallery


Tom here with a quick Friday update...

Here's something that we hope you'll never need, but has become an unfortunate necessity. Jared Poeppelman, one of our colleagues over in Microsoft Consulting Services has built and tested a great PowerShell script for resetting your KRBTGT password.

You can find the post covering the topic over at the CyberTrust blog. The script is over at the Script Gallery: https://gallery.technet.microsoft.com/Reset-the-krbtgt-account-581a9e51

I'll let you review the CyberTrust blog to dig in to why you'd need such a thing (we hope you don't). If there's one thing you take away from that blog, make it this:

  • It is important to remember that resetting the krbtgt is only one part of a recovery strategy and alone will likely not prevent a previously successful attacker from obtaining unauthorized access to a compromised environment in the future.

-Tom "I'm on vacation that weekend" Moser

Attending Microsoft Ignite? A Local’s Guide to Chicago


Hey y'all, Mark back again, this time with a completely non-technical post. As you've probably heard by now, Microsoft Ignite is taking place in Chicago May 4-8th. This is great news, as many of us from this blog actually live in Chicago and hope to see some of you folks there! I also know that when people attend a conference they sometimes stick around and make it a three day weekend to see the sights. We thought it might be helpful to give our community some tips on things to do, see, and drink while they are in Chicago, from folks that live there. This post is NOT meant for you to go all Ferris Bueller and skip the conference. You won't want to miss it anyways.

 

Why Go

Transportation

Entertainment

Food

Breweries

 

Why Go?

We want you to attend so badly we've already got a nice pre-written email for you to use. Honestly, before I joined Microsoft I went to as many of these things as I could. I always learned an amazing amount of information, got my difficult questions answered, and would meet awesome people in the field. I was even one of the early members of the TheKrewe community, which has now grown to 1,000+ strong. I've always had a great experience. I won't spend too much time on this, but if you are reading this post you are either going or thinking about going. Just go Smile.

 

Transportation

You probably do not need a rental car. Really. The MS Ignite hotel section says there will be shuttles between the hotels and the McCormick Place Convention Center, so you are good to go for that. But what about getting from the airport to your hotel? You can take a cab, which are plentiful around those parts, or you can rely on Chicago's wonderful and simple transportation system. I'll make it really easy for you. If you are landing at O'Hare airport (ORD, and my home away from home) you can take the Blue Line into the Loop. If you are landing at Midway (MDW) you can take the Orange Line into the Loop. Great news: both airports are the last stop on their lines, so you can only go one way. Not even Hilde could screw that up. If you need to, once you are in the Loop you can take a cab to your hotel, which will be much cheaper. Best of all, the train ride will cost $5 or less. More info here.

 

Entertainment

There are obviously LOTS of things to see and do in Chicago. I’m only going to hit a few that tend to get overlooked.

The Museums- Everyone always goes to the Field Museum, the Shedd Aquarium and the Art Institute. All of these are fantastic, no question. However, two museums that everyone always overlooks are the Adler Planetarium and the Museum of Science and Industry. Lots of places have dinosaur bones; not everywhere has a planetarium. Adler is located right near the Field and Shedd museums. If you can't get excited about the universe I can't help you. The Museum of Science and Industry is located a bit farther than the others but has lots of unique things. Ever want to get inside a submarine? It's great.

Really Tall Buildings- Everyone always wants to check out the Sears/Willis tower. They have a skydeck where you can step out into an enclosed glass box. You can see pictures on the site. I've only ever gone here with people that are out of town. As a local though, I know another spot you might want to check out. The Signature Room at the 95th in the John Hancock building offers similar views, but with the added bonus of being able to drink an adult beverage of your choice at the same time. If the line for the skydeck is way too long, or if you just want a drink while you stare out over the city, check out the Signature Room. Another place to be outside is the Roof on the Wit. It's a great spot to be, but be warned there might be a line.

Music- Chicago, home of the Blues Brothers, is known for just that: blues. Kingston Mines and Chicago B.L.U.E.S Bar are located down the street from each other on the north side of the city. If you are staying downtown though, there is Buddy Guy's Legends, located basically in the Loop. Most of these places offer music seven nights a week, but check the calendar of each.

Comedy- Chicago is known for being an improv comedy town. The big place everyone has heard about is the world famous Second City. Many comedians and members of Saturday Night Live got their start at Second City. The shows are hilarious and are typically sketch-like, think Saturday Night Live. There is usually some improv thrown in for good measure. If you want pure improv (and really, you are in Chicago, so who doesn't?), I recommend you check out Improv Olympic. They take one suggestion from the audience and do an entire 45 minute show around it, with different story lines all tying back to one another. It's great and amazing. Check it out.

Food

Again only going to hit a few here.

Pizza- The big ones are Gino’s East, Giordanos, and Lou Malnatis. All good in their own way. It’s actually hard to get bad pizza in Chicago. Just stay away from the chains. If deep dish isn’t for you we do still have thin crust in the city limits. I hope you were sitting down for that last part. All these places also make great thin crust pizza. Some other favorites that all the locals know about would be Piece Pizza and Pequods. Piece has great “white” pizza aka no sauce and garlic. Check these out. You are in Chicago though, get deep dish and then take a nap.

Hot Dog- The best place WAS Hot Doug's, but they closed. The line could be 4-6 hours, not even joking, which isn't probably how you want to spend your Saturday. However, Portillo's is actually quite great, so just go there. They also have beef sandwiches if that is more your thing, or a "combo" which is Italian sausage with roast beef on top. It's delicious. Here is how you order one of those: you have to indicate if you want it dunked in the sauce (wet) or not (dry), and if you want hot or sweet peppers. If you can't make it to Portillo's, Al's Beef is also really good. You can find them in a few spots around town.

Burger Experience- A heavy metal bar that makes great burgers. Go to Kuma's Corner. The lines can be quite long, so try to go off meal hours. Enjoy the atmosphere, it is unique. For those old timers and comedy nerds, be sure to check out Billy Goat Tavern. If you remember the SNL sketch of "Cheezborger! Cheezborger! Cheezborger!", that's where it came from. Just make sure you get fries or they have a different name for you.

Popcorn- I know what you are thinking: how is this on the list? When everyone talks about Chicago food they always skip over Garrett's popcorn. You get the Garrett Mix, Caramel and Cheese Corn combined. If you are flying in and out of O'Hare they have a shop in terminal 1 and terminal 3. Get 2 bags to go. One for you to eat on the plane, one for you to bring home to someone else. They will love it. Trust me.

Breweries

Goose Island-I’m well aware of who recently bought them but the place is exactly the same. Check out Goose Island which has been in Chicago forever. When you order the 312, don’t call it three hundred twelve. It is called three one two, like the Chicago area code. See you sound like a local already!

Half Acre- I’ve seen Daisy Cutter in more and more places which is a good sign for Half Acre. I think they’ve been around since the mid 2000’s and have been going strong ever since. Check em out.

Revolution- This place is great. It's huge and they have a ton of great beers. Revolution is probably my favorite in the city. You can also find their beer in cans all over the city.

Three Floyds- Technically not in Chicago; it's actually in Indiana (about 35-40 minutes away), but Three Floyds is insanely popular and sought after outside the Midwest. I wanted to at least make everyone aware that you are within driving distance of it.

Those are our tips. If you are looking for more stuff check out the Microsoft Ignite Countdown show on Channel 9 with friends of the blog Joey Snow and Rick Claus. They don’t even live in Chicago and they nailed the Portillo’s pick already so you know these guys know what is up. Keep watching and hopefully we’ll get to see some of you at Microsoft Ignite!

Mark “not that windy” Morowczynski


Guidance on Deployment of MS15-011 and MS15-014


Hi, my name is Keith Brewer and many of you will know of me from my other Active Directory related posts. A few folks have recently approached me about the recent security updates (the other week we released MS15-011 & MS15-014). Most of the questions were general in nature, but a few were specifically in relation to the interoperability between updated and non-updated systems. In this post I am hoping to cover some of the FAQs we have encountered and help you to better understand and deploy this important set of updates.

As you know, these updates harden Group Policy and address network access vulnerabilities that can be used to achieve remote code execution (RCE) in domain networks. The MS15-014 update addresses an issue in Group Policy update processing which can be used to disable client-side global SMB Signing requirements, bypassing an existing security feature built into the product. MS15-011 adds new functionality, hardening network file access to block access to untrusted, attacker-controlled shares when Group Policy refreshes on client machines. These two updates are important improvements that will help safeguard your domain network.

Read more about these updates here:

http://blogs.technet.com/b/srd/archive/2015/02/10/ms15-011-amp-ms15-014-hardening-group-policy.aspx

MS15-011 is not turned on by default. It requires administrators to turn on a Group Policy setting to harden specific SYSVOL and NETLOGON shares in order to protect enterprise deployments from the RCE vulnerability.

After the MS15-011 update is installed, the following new Group Policy setting can be used to harden specific shares:

Computer Configuration/Administrative Templates/Network/Network Provider: Hardened UNC Paths

Complete details on configuring the setting can be found here.
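For reference, here is a minimal sketch of what the recommended hardening looks like once applied to a client. The registry location and value format come from the MS15-011 guidance; in production you would deploy this through the Group Policy setting above rather than writing the registry directly:

# What the "Hardened UNC Paths" policy ends up writing on a client (sketch, for testing only).
$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths'
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name '\\*\SYSVOL'   -Value 'RequireMutualAuthentication=1, RequireIntegrity=1'
Set-ItemProperty -Path $key -Name '\\*\NETLOGON' -Value 'RequireMutualAuthentication=1, RequireIntegrity=1'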

Frequently Asked Questions (FAQs):

Do I need to install the update only on Windows client operating systems, or on Windows Server operating systems as well?

We recommend you update all Windows client and Windows Server operating systems, regardless of SKU or role, in your entire domain environment. These updates only change behavior from a client (as in "client-server distributed system architecture") standpoint, but all computers in a domain are "clients" to SYSVOL and Group Policy, even the Domain Controllers themselves.

Do all my clients and servers need to be updated before configuring/enabling the UNC Hardened Access feature?

The feature and the configuration settings live completely on the client side. Configuration is applied via Group Policy, but the settings do not take effect until after the GPO containing the settings is applied to the client. The update was designed in an interoperable way so that mixed-mode environments with updated servers and non-updated clients (or vice versa) continue to work as before.

What are the potential impacts of rolling out the GPO at the domain level to require hardened access to the \SYSVOL and \NETLOGON shares? There may be some clients that are not updated initially, or that are updated but don't get the new GPO settings before the Domain Controllers are set to require them.

Test first before performing a broad deployment. If your Windows domain controllers accidentally have a firewall policy set that blocks incoming Kerberos traffic, then clients will be unable to mutually authenticate the domain controller and will be unable to apply future Group Policy updates until the firewall policy is corrected or the UNC Hardened Access configuration is manually removed. Similarly, clients that attempt to connect to a Domain Controller through a WAN link may have issues when SMB traffic over the WAN link is forced through a 3rd party SMB WAN Accelerator that does not support Kerberos or SMB Signing.

What about a Windows client that has the settings enabled, but the UNC shares are located on a network device that is not Windows based?

For Windows clients communicating with 3rd party SMB Servers, compatibility depends on the policy settings configured by the system administrator and the protocol version and/or optional protocol features supported by the 3rd party SMB Server:

· If the administrator has configured a path to require Integrity, but the 3rd party SMB server is an SMB1 server that does not support SMB Signing, Windows will disallow the connection (SMB Signing is an optional protocol feature in v1, but is required in v2+)

· If the administrator has configured a path to require Privacy, but the 3rd party SMB server does not support SMB Encryption, Windows will disallow the connection (SMB Encryption is only supported in SMB v3+)

· If the administrator has configured a path to require Mutual Authentication, but the 3rd party SMB Server does not support Kerberos (or the client is unable to find appropriate Kerberos credentials), Windows will disallow the connection.

How will the updated Windows clients behave when communicating with Windows Server 2003 or Windows Server 2003 R2 Domain Controllers or Windows Server 2003 or Windows Server 2003 R2 file servers (Non-Domain Controllers)?

By default, all Windows domain controllers are configured to require SMB signing on all shares hosted on the Domain Controller via the Default Domain Controllers policy. Updated or "hardened" clients, while being protected, will still be able to apply policy from a Windows Server 2003 or Windows Server 2003 R2 Domain Controller.

If you have shares hosted on Windows Server 2003 or Windows Server 2003 R2 (Non-Domain Controllers), then you must ensure the policy "Digitally sign communications (always)" is enabled on these servers for the updated clients to be able to access the shares.

Will an updated Domain Controller experience issues when replicating the SYSVOL replica set when the partner is Windows Server 2003 or Windows Server 2003 R2 (or vice versa)?

No. Domain Controllers mixed between updated and non-updated will not experience FRS or DFSR replication issues related to the application of this update; SYSVOL replication uses RPC, not SMB.

What if we have disabled SMB signing requirements on the Domain Controllers?

That is against best practice and certainly not recommended. See “What about a client that has the settings enabled, but the UNC shares are located on a network device that is not Windows based?” FAQ earlier for expected behavior.

Configuring the policy as recommended in MS15-011 when (if) the Domain Controllers are configured with SMB signing disabled could cause you to lose control over the machines through Group Policy. If the clients can't access NETLOGON/SYSVOL, they won't get the new modified or reverted policies, and the clients will need to have the update uninstalled or the registry pruned manually.

What if an application only supports NTLM authentication and accesses data kept on the SYSVOL or NETLOGON share?

Once the client is updated and hardened as recommended in MS15-011 (specifically, Mutual Authentication = 1), Kerberos will be required to make a successful connection to the NETLOGON/SYSVOL shares.

If you or your application access shares by using the IP address of the server, NTLM will be used and the connection will fail.

How do we get the Group Policy settings into the central store?

Copy the modified networkprovider.admx(l) files from the system on which the update is installed to the central store location.

The admx files are available on the machine in c:\Windows\policydefinitions and the corresponding adml files are available in c:\Windows\policydefinitions\<lang>\

(For en-us it is: C:\Windows\PolicyDefinitions\en-US)
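A minimal sketch of that copy, assuming the en-US language, the contoso.com domain, and that a central store already exists (adjust the paths for your environment):

Copy-Item C:\Windows\PolicyDefinitions\NetworkProvider.admx \\contoso.com\SYSVOL\contoso.com\Policies\PolicyDefinitions\
Copy-Item C:\Windows\PolicyDefinitions\en-US\NetworkProvider.adml \\contoso.com\SYSVOL\contoso.com\Policies\PolicyDefinitions\en-US\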

What kind of wildcards are supported when configuring the new Group Policy Setting?

· \\<Server>\<Share> - The configuration entry applies to the share that has the specified name on the specified server.

· \\*\<Share> - The configuration entry applies to the share that has the specified name on any server.

· \\<Server>\* - The configuration entry applies to any share on the specified server.

· \\<Server> - The same as \\<Server>\*

Note: It is not supported to use wildcard combinations (or regular expressions) in path names, such as:

· \\*share*\*

· \\*\*share*

A specific server or share name must be specified. All-wildcard paths such as \\* and \\*\* are not supported.

If we were going to add a file server to the UNC list, would we need to add both the FQDN and the NetBIOS name to be protected? For example, would we need to add:

\\fileserver\* RequireMutualAuthentication=1, RequireIntegrity=1

\\fileserver.contoso.com\* RequireMutualAuthentication=1, RequireIntegrity=1

No, only one of the two configuration entries is necessary (either FQDN or NetBIOS).

\\fileserver\* alone would protect accesses to \\fileserver, \\fileserver.fabrikam.com, and \\fileserver.contoso.com

\\fileserver.fabrikam.com\* alone would protect access to \\fileserver (because it might be \\fileserver.fabrikam.com) but not \\fileserver.contoso.com (because the latter is clearly not the configured UNC path).

If the new group policy setting is configured at the domain level and also at the OU level, which will take precedence?

The OU policy would override the domain policy. The order of precedence is given here:

First local, then site, then domain and the OU (LSDOU). That last one wins.

The policy is not cumulative. It overrides the existing ones. In this particular case, you will need to add the NETLOGON & SYSVOL shares to the OU policy as well.

Many thanks to Supportability Program Manager Ajay Sarkaria for helping put together this information.

Keith Brewer and Ajay Sarkaria

Mailbag: Hildebrand from Azure Land (Issue #8)


Hey y'all, Mark, Tom and Hilde back for another mailbag. Astute readers, and/or those that have access to a calendar, will notice we may have missed a week in our regular posting schedule. I'm not one to point fingers, but I will say that Tom was supposed to send over his questions before he left for vacation. You'll be able to read those questions in Issue #9 or whenever Tom sends them over, whichever comes first. In his defense, I also got "busy" with "customers" and didn't have much ready to go. So we missed a week; it will probably happen again. On to this week's questions.

Sysvol migration required after AD upgrade?

ADFS 3.0 monitoring?

Confusion around connecting to Azure VMs?

Moving VMs between Virtual Networks in Azure?

Where are there some good test lab guides?

Stuff from the Interwebs

 

Question

I'm doing an AD upgrade from 2008 R2 to 2012 R2. My DFL/FFL is at 2008 R2. My Sysvol is still being replicated using FRS. When I get to 2012 R2 does my Sysvol automatically convert to DFS-R?

Answer

Nope, nothing automatic. You would still need to go through the normal process. Ned Pyle wrote about this many moons ago and it can be found here. If you are running at 2008 R2 DFL and are going through the migration, make sure you check out Greg's guide with some useful updates beforehand.

Question

I want to monitor my ADFS 3.0 environment and I can only find the SCOM management pack for ADFS 2.0. Is there something newer?

Answer

Yep, you can grab the ADFS 3.0 mgmt pack here. If you have Azure AD Premium you can also check out the snazzy new Azure AD Connect Health. It has all kinds of pretty graphs that managers like.

Question

I'm confused about connecting VMs in Azure via Cloud Services and/or Virtual Networks so they can easily communicate. Where can I find some clarifying information?

Answer

Here is a post that helps clear that up - http://azure.microsoft.com/en-us/documentation/articles/cloud-services-connect-virtual-machine/

-Hilde

Question

Ok, I didn't do it the way I should have - how can I move VMs around between Virtual Networks with minimal impact?

Answer

It will likely have some impact on your existing VM but here's a solid walk-through... http://blogs.msdn.com/b/walterm/archive/2013/05/29/moving-a-virtual-machine-from-one-virtual-network-to-another.aspx

-Hilde

Question

MAN OH MAN - how I would love some guidance about setting up some test/lab infrastructures to better understand. "Where's the beef?"

Answer

We have some good stuff here in the blog about setting up dev/test proof-of-concept environments. Here is a great link for end-to-end scenarios - http://social.technet.microsoft.com/wiki/contents/articles/1262.test-lab-guides.aspx

-Hilde

Stuff from the Interwebs

-SNL 40th anniversary show was a few weeks back. As someone who has watched a lot of SNL, I have two sketches that seem to have slipped past everyone. Mr. Belvedere Fan Club and Japanese Game Show always kill me.

-A cool behind the scenes look on how they did the Stay Puft Marshmallow Man in the original Ghostbusters.

-Bruce Campbell is talking details about the new Starz show Ash vs Evil Dead.

-Spider-Gwen is a thing! Also, if anyone knows where I can get a hoodie like that, please let me know; I think I would need to buy some for friends and myself.

 

Mark "Spider-Gwen’s unofficial bf” Morowczynski, Tom “Shop Smart Shop S-Mart” Moser and Michael “Don’t Cross the Streams” Hildebrand

ADFS Deep-Dive: Onboarding Applications


I'm back with the onboarding of applications post I promised. Of all the ADFS work I've performed over the last several years, the one recurring pain point customers have is onboarding applications to ADFS. The reason this typically happens is that the ADFS admins don't usually know what the application owners need, and vice versa. Being the guy that can translate the requirements between the ADFS team and the application owners really is a valuable skill, and that's why I wrote this blog. My goal here is to get you well-versed in what is required by ADFS and what is required on the application side, regardless of whether the application is on-premises, off-premises, SAML, or WS-Fed. If you need a primer on the differences between SAML and WS-Fed, please check out one of my prior posts:

http://blogs.technet.com/b/askpfeplat/archive/2014/11/03/adfs-deep-dive-comparing-ws-fed-saml-and-oauth-protocols.aspx

Breaking it Down

If you break down a browser-based SSO transaction, you will quickly see some of these things are required. If you read my previous ADFS Primer post, then most of this should feel familiar:

http://blogs.technet.com/b/askpfeplat/archive/2014/08/25/adfs-deep-dive.aspx

image

This is a fiddler trace of a typical SSO transaction involving ADFS:

Frame 1: I navigate to https://claimsweb.cloudready.ms. It performs a 302 redirect of my client to my ADFS server to authenticate.

Key Takeaway: For this initial redirection to occur, the application needs to know the ADFS login URL.

Frame 2: My client connects to my ADFS server https://sts.cloudready.ms/adfs/ls/?wa=wsignin1.0&wtrealm=https://claimsweb.cloudready.ms&wctx=rm%3d0%26id%3dpassive%26ru%3d%252f&wct=2013-12-09T08%3a05%3a07Z

Key Takeaway: Since ADFS may have multiple relying party applications, it needs a piece of identifying information to know which relying party application to invoke. Consequently, the application must send an application identifier. ADFS must also know whether this is a SAML or WS-Fed application. If the request is signed, ADFS must also have the public portion of the application's signing certificate, but request signing is optional.

Frame 3: If the request is signed and the signature passes, I will authenticate to ADFS. The ADFS server will ensure I’m authorized for this RP application via the issuance authorization configuration on the RP and then process the claims via the Issuance configuration on the RP. The ADFS server will then send the client browser some HTML with a SAML token and a JavaScript that tells my client to POST it over to the original claims-based application.

Key Takeaway: For all of this to happen, you would need issuance authorization rules on the RP (if they apply), claims rules on the RP, and the public token encryption certificate (if it applies). Lastly, ADFS needs to know what URL to have the client POST the token back to on the application side; this is called the Assertion Consumer Service endpoint.

Frame 4: My client sends that token back to the original application: https://claimsweb.cloudready.ms. Claimsweb reads the ADFS identifier, verifies the signature on the token, decrypts the token (if it applies), reads the claims, and then loads the application.

Key Takeaway: For all of this to happen, the application will need the ADFS identifier and the public portion of the token signing certificate (it already has its own token decryption certificate), and it needs to know what claims are in the token.

 

Summary

So here is a breakdown of what is needed. The following table is a summary of all the things needed to build a relying party trust and what things the relying party trust owner will need as well.

Key:

X   Signifies who needs this key piece of information

<--> Signifies who is responsible for providing this information

A Blue column signifies whether this is a required piece of information

chart

Condensed Version

If you want a condensed version of this information, here it is. To create just about any relying party trust, regardless of platform, on-premises, off-premises, SAML, WS-Fed, whatever, you will need the following information from the relying party application owner:

  • Does the application support RP-Initiated Sign-on?
  • The application metadata if they have one. (Optional)
  • Is the application SAML or WS-Fed?
  • Identifier of the application. This can be a URL or URI.
  • A SAML request signing certificate if there is one. (Optional)
  • If they want to authorize that certain users can use this application or not. (Optional)
  • What claims, claim types, and claims format should be sent. (Optional)
  • If token encryption is desired, the public portion of the token encryption certificate. (Optional)
  • The URL/endpoint that the token should be submitted back to.
  • The supported secure hash algorithm, SHA-1 or SHA-256. Most SAML applications are SHA-1 while most WS-Fed applications are SHA-256.
  • Whether the application needs RelayState support. (Optional)

You will need to provide the relying party application owner with the following information:

  • The ADFS logon URL (https://<sts.domain.com>/adfs/ls/).
  • The public portion of the ADFS token signing certificate.
  • The ADFS identifier.
  • The ADFS logout URL.

The biggest problem you'll run into when setting up your relying party trusts is that rarely is there someone who knows exactly what is needed to configure both sides, and even if there is, everyone calls these components something different. Consequently, we have created an ADFS Onboarding document that we recommend be used by customers. Not only does it help them document all relying party applications that will be created, but it also helps drive the conversation of what exactly is needed from each party.

Click Here to Download Onboarding Document

Warning: Stop now, if you don’t want any further detail on each of these components. Continue to read on if you have some free time and are interested in understanding each of these. Smile

 

RP-Initiated Sign-on Support

Type: Required

Who Needs to Know This:  The ADFS owners

RP-initiated sign-on is typically a topic we reserve for advanced ADFS discussions, but it is something that has to be known when you're onboarding applications to ADFS. Simply put, RP-Initiated Sign-On means the user can navigate to the application first to gain access to it. I know this sounds like a simple concept and the answer should typically be Yes, but why do we need to ask this?

We have to ask because some SaaS application providers might be in a multi-tenant configuration, and because of this, if you send all users from many customers to the same application URL, the application now needs to figure out where to redirect them for authentication; Microsoft calls this Home Realm Discovery (HRD). Some SaaS providers don't know how, or don't want, to be responsible for redirecting users from many customers to different ADFS environments. Perhaps it's a legal issue or a technical issue, but if you know this, then you'll have to start thinking about something called IdP-initiated sign-on, which is simply providing your users with special ADFS URLs so they kick off the SSO transaction with ADFS first, which then logs them into the application.

Another reason to ask this question is that if the application doesn't support this and your users bookmark pages within the application, those bookmarks will fail to log them in. Hence the requirement for special ADFS URLs that can log them in instead. We'll cover this in another post down the road.

 

Federation Metadata

Type: Optional

Who Needs to Know This: Both parties, if possible.

If you can, use this to configure both sides. ADFS publishes metadata that can be consumed by some relying party applications to configure them with all the parameters they need. From my experience, most SSO and SAML applications don't support importing metadata, though. Nonetheless, you can access your ADFS metadata from https://<sts.domain.com>/federationmetadata/2007-06/federationmetadata.xml. If your partner cannot access this URL for some reason, you can also download the .xml document from your browser and email it to them.

Some SSO applications may also have metadata that you can import to create the relying party trust in ADFS. If it is available from a URL, you can just type that metadata URL into the RP creation wizard or import the downloaded .xml document:

image

From my experience though, many application vendors don't support publishing metadata. Another thing to note is that ADFS may not support all the options that are present in the metadata. If so, you will either have to strip those elements out of the metadata or manually create the relying party trust.

Key Takeaway: Ask the relying party trust owner if they have metadata that you can import from a file or URL. Also let the relying party trust owner know that you have metadata available at the above URL, or that it can be emailed to them. These metadata files can configure both sides of the trust and make your life much easier.
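If the application does publish metadata at a URL, you can also create the trust from PowerShell instead of the wizard. A minimal sketch (the trust name and metadata URL are placeholders; use -MetadataFile instead if you only have a downloaded copy):

Add-AdfsRelyingPartyTrust -Name 'Claimsweb' -MetadataUrl 'https://claimsweb.cloudready.ms/FederationMetadata/2007-06/FederationMetadata.xml'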

 

ADFS Logon URL

Type: Required

Who Needs to Know This: Application Owners

If the application supports RP-initiated sign-on, the application owners will need to know the URL to redirect users to on ADFS so they can authenticate. The application will need the following information:

URL: https://<sts.domain.com>/adfs/ls/

Method: POST or Redirect

 

Application Identifier

Type: Required

Who Needs to Know This: ADFS Owners

After the login request has been sent over to ADFS, because ADFS may have many configured relying party trusts, it needs to know which one we're trying to access. This is achieved by sending the ADFS server an "identifier" in the request. It can be a URL or URI, but both sides have to be configured with the same identifier.

The following are examples of how WS-Fed and SAML application will send their identifier in the logon request:

WS-Fed Sign-On Protocol:

This is probably a little deeper than I need to cover this, but I really want you to understand how the relying party application includes this identifier. Many Microsoft applications, including SharePoint, O365, or anything based on the Windows Identity Foundation (WIF), may use the WS-Fed sign-in protocol. WS-Fed applications will send a URL parameter called wtrealm indicating their identifier.

For example, I captured the following URL after going to https://claimsweb.cloudready.ms. Since this is a WS-Fed request, we'll be looking for the wtrealm URL parameter:

https://sts.cloudready.ms/adfs/ls/?wa=wsignin1.0&wtrealm=https://claimsweb.cloudready.ms&wctx=rm%3d0%26id%3dpassive%26ru%3d%252f&wct=2013-12-09T08%3a05%3a07Z

On my ADFS server, you can see that my relying party trust for claimsweb has the same identifier:

image

SAML Sign-On Protocol:

The process for finding the identifier for a SAML application is a little different and requires a decoding tool, since the SAML request is Base64 encoded.

For example, I captured this URL after going to https://shib.cloudready.ms/Secure.

Copy the SAMLRequest value from the redirected URL:

https://sts.cloudready.ms/adfs/ls/?SAMLRequest=jZFRT4MwFIX%2FCun7KC3OjWaQ4PbgkqlkoA%2B%2BmAKdNCkt9hZ1%2F14GmkwfFl%2Fv%0APfc7p6cr4K3qWNq7Ru%2FFWy%2FAeZ%2Bt0sDGRYx6q5nhIIFp3gpgrmJ5erdj1A9Y%0AZ40zlVHISwGEddLotdHQt8Lmwr7LSjzudzFqnOuAYQyNLP1Kmb62gtdHvwWc%0AD6PSKOEaH8DgE5ni7CEvkLcZokjNT9AzhIM%2FBF4fACvAyNtuYvRSRSIiZXlN%0AwrlY0CriSxKGhNLDFeXhYjkfZAC92GpwXLsY0YCEM0JnQVQESxaEjCyekZd9%0AP%2BxG6lrq18stlJMI2G1RZLMp%2FJOwMAYfBChZnbpko7E9a%2Fcylv9UipJ%2FFbjC%0AZy6TZcfuB%2Bx2kxklq6OXKmU%2B1sOpEzEiCCfTye%2FfT74A%0A&RelayState=cookie%3A29002348&SigAlg=http%3A%2F%2Fwww.w3.org%2F2000%2F09%2Fxmldsig%23rsa-sha1&Signature=M0xoWQfcN3Yp94T2HiqIdJzEkxYqGc6hhopqi8xOI%2B2BtPSLufFDdQIF7z6Xjm6XdLq1MH9Av5xz2QWYs84ZYhlG3fHtZCjjaoI2wZqplRszHla%2BjtZoW20NGDepDsCRT0AKNkhe%2B4Yj3LshrM6EX5O3obx2Mypy8EcsoURkTF3kf1dwKqsGA3ka7ehbRmUQGJUXD0u4iFBog7YgkL4Q9FYMTanZeRo2X4%2FkAeNxT8ormKWJfYnAzg0F4Ku60zDd5N7jYu4XeyOsXDthEFI5H4WYucAprREl2hgSUI21J782kKzrslalIaJ5BKPIO50NPCIb5Sf6Zw4maLpZrFEfrw%3D%3

Go to https://idp.ssocircle.com/sso/toolbox/samlDecode.jsp

Paste in the SAML value and ensure that redirect is selected. It will spit out the following SAML Request XML and we can determine the identifier:

<samlp:AuthnRequest Version="2.0" ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST" IssueInstant="2013-12-09T08:03:17Z" ID="_c9e91bb6135e72c9a8133122f42a3785" Destination="https://sts.cloudready.ms/adfs/ls/" AssertionConsumerServiceURL="https://shib.cloudready.ms/Shibboleth.sso/SAML2/POST" xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"><saml:Issuer xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">https://shib.cloudready.ms/Shibboleth</saml:Issuer><samlp:NameIDPolicy AllowCreate="1"/></samlp:AuthnRequest>

Once again, on my ADFS configuration, you can see that my SAML relying party trust has the same identifier:

image

Key Takeaway: The identifier is really just a URL or URI value that has to be the same on both sides of the configuration so ADFS knows which RP to invoke for the logon request.

 

Request Signing Certificate

Type: Optional

Who Needs to Know This: The ADFS owners need this.

Rarely do you find that the requests need to be signed, but if so, the application owner will sign them with their private certificate and you'll need their public certificate to check the signature. If the requests do need to be signed, it's typically with SAML applications. As you can see here, my WS-Fed application doesn't have a certificate configured, hence no request signing will be enforced here:

image

By default, the request signing certificate must pass revocation. If you're having issues and you suspect the request signing certificate isn't publicly trusted, you can run the following command against the certificate (once you copy it to a file) to check its revocation status:

certutil -urlfetch -verify <RequestSigningCert.cer>

 

Issuance Authorization

Type: Optional

Who Needs to Know This: Both parties should discuss who should perform AuthZ.

While I usually recommend that AuthZ occur on the application side, I think it really depends on the nature of the application. Before any claims or tokens are issued back to the client's browser, the ADFS server will perform authorization to determine whether the user has access to the specific relying party application they are trying to access. This authorization can be based on specific groups you belong to or other user claims information pulled from AD, AD LDS, or a SQL database. You can build authorization rules based on the Exists, NotExists, or AND operators, so these rules can be very complex:

For example, we could ensure that CloudReady.ms\HRAdmins is the only group permitted to access this claimsweb.cloudready.ms application:

image

If this is my only authorization rule and I'm not in the HRAdmins group, ADFS will never get around to processing the claims and sending my browser a token, because I'll receive Access Denied right up front:

image
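For reference, an authorization rule like the one shown above can also be set from PowerShell using the claim rule language. This is a minimal sketch; the trust name and group SID are placeholders, and note that -IssuanceAuthorizationRules replaces the entire existing rule set:

# Permit only members of a specific group (identified by SID) to receive a token for this RP.
$authzRules = @'
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/groupsid", Value == "S-1-5-21-0-0-0-1234"]
 => issue(Type = "http://schemas.microsoft.com/authorization/claims/permit", Value = "true");
'@
Set-AdfsRelyingPartyTrust -TargetName 'claimsweb.cloudready.ms' -IssuanceAuthorizationRules $authzRules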

 

Claims Issuance

Type: Required

Who Needs to Know This: The ADFS owners

If I do belong to the HRAdmins, ADFS now moves on to processing the claims that will be returned to my browser in a SAML token. The configuration of the relying party trust in ADFS and the application must be configured with the same claims information, although if you send them more claims than the application requires, they'll probably just ignore them.

image

 

SAML: Claims, Claim Type & Format

Type: Optional

Who Needs to Know This: The ADFS owners

Many SAML applications will want the claims sent with a certain claim type and format. The claim format is an additional piece of metadata that tells the application what type of information is being sent. I have only found this requirement with SAML applications, and most require that the email, UPN, or sAMAccountName claim be sent with the NameID claim type and one of various claim formats.

The first claims rule would get the email attribute from Active Directory:

image

The second claims rule would then transform it into a NameID claim type with a format of unspecified, although you can pick whichever format the application desires from the Outgoing name ID format drop-down:

image
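For reference, here is a minimal sketch of what those two rules look like in the claim rule language when set from PowerShell. The mail attribute, the unspecified NameID format, and the trust name are assumptions for illustration, and -IssuanceTransformRules replaces the entire existing rule set:

$transformRules = @'
@RuleName = "Get email from Active Directory"
c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]
 => issue(store = "Active Directory", types = ("http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"), query = ";mail;{0}", param = c.Value);

@RuleName = "Transform email to NameID"
c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/emailaddress"]
 => issue(Type = "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/nameidentifier", Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = c.Value, ValueType = c.ValueType, Properties["http://schemas.xmlsoap.org/ws/2005/05/identity/claimproperties/format"] = "urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified");
'@
Set-AdfsRelyingPartyTrust -TargetName 'shib.cloudready.ms' -IssuanceTransformRules $transformRules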

 

Token Encryption Certificate

Type: Optional

Who Needs to Know This: The ADFS owners

Next, ADFS needs to know whether the application requires the SAML token to be encrypted. If so, the application owner must provide you with the public portion of their token encryption certificate. Most applications don't require token encryption, since the token is protected by SSL and has a digital signature associated with it, but if encryption is required, your relying party trust must have the certificate configured:

image

 

Application Token Endpoints

Type: Required

Who Needs to Know This: ADFS Owners

Now, ADFS returns a SAML token to the client's browser and some JavaScript instructs my browser to post that token back to a URL on the application side. The endpoint is just the URL on the SSO application side that is listening and waiting for a SAML token. When you configure this on the relying party trust in ADFS, you must indicate whether it is a WS-Fed or SAML application. Consequently, this is your opportunity to configure the token endpoints and the sign-in protocol type:

image

You can see my claimsweb relying party trust is configured with WS-Fed and SAML endpoints. I configured both just for demonstration purposes as you would normally only have one or the other configured. The binding method for each is typically POST.

image

 

Secure Hash Algorithm

Type: Required

Who Needs to Know This: ADFS Owners

The relying party trust in ADFS must be configured with the correct secure hash algorithm. Most SAML applications will support SHA-1, while most WS-Fed applications will support SHA-256. Go to the properties of the relying party application in ADFS, then the Advanced tab, and pick the correct hash algorithm from the drop-down:

SAML: Typically SHA-1

WS-Fed: Typically SHA-256

image

 

Token Signing Certificate

Type: Required

Who Needs to Know This: Application Owners

One of the first things the application does is check the digital signature of the token, not only to ensure who the token came from but also to ensure it wasn't modified in transit. To properly do this, the application must have the public portion of the ADFS server's token signing certificate. Every application must have a copy of the ADFS server's token signing certificate. Some applications will accept this certificate in .cer format while others require .pem format. Applications based off of the Windows Identity Foundation (WIF) only need the thumbprint of the certificate pasted into their web.config. To export the token signing certificate from ADFS, open up the Certificates container, go to the properties of the token signing certificate, then to the Details tab, and at the bottom you will see "Copy to File":

image

Do not export the private key:

image

If they want it in .CER format, select the DER encoded binary X.509. If they want it in .PEM format, select the Base-64 encoded X.509.

image

 

ADFS Identifier

Type: Required

Who Needs to Know This: Application Owners

Next, after the signature check passes, the application will check the issuer to ensure the token came from the right identity provider, so it checks the identifier value in the token.

If I search the token that ADFS sent the application, I will find an issuer attribute, and it identifies the ADFS server that sent the token.

<saml:Assertion xmlns:saml="urn:oasis:names:tc:SAML:1.0:assertion" IssueInstant="2013-12-09T08:13:11.633Z" Issuer="http://sts.cloudready.ms/adfs/services/trust" AssertionID="_aacc5dc2-64fb-44c5-bc03-a144faa8efe4" MinorVersion="1" MajorVersion="1">

In this case, the issuer was http://sts.cloudready.ms/adfs/services/trust. The SSO application must have this ADFS identifier in its configuration. You can view your ADFS identifier by left-clicking the Service container at the root of the ADFS console and then right-click > Edit Federation Service Properties:

image

image

Mega Takeaway: The lack of https on this ADFS identifier doesn’t matter because it is just a string value that must match on both sides. I know a lot of engineers who couldn’t configure SSO because they accidentally typed https on the application identifier side and couldn’t figure out why it didn’t work.

 

RelayState

Type: Optional

Who Needs to Know This: The ADFS owners

RelayState is some application state that needs to be maintained throughout the SSO transaction. You’ll need to ask the application owner whether they will need RelayState support. If so, you’ll need to ensure that all ADFS servers and ADFS Proxy/WAP servers have this enabled:

https://technet.microsoft.com/en-us/library/jj127245(v=ws.10).aspx

All ADFS 2.x servers and ADFS Proxy servers:

  • %systemroot%\inetpub\adfs\ls\web.config

ADFS 2012 R2 Servers:

  • %systemroot%\ADFS\Microsoft.IdentityServer.Servicehost.exe.config
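A sketch of the change itself, per the linked TechNet article (the element goes inside the <microsoft.identityServer.web> section of the file listed above for your ADFS version, followed by a restart of the ADFS service):

<microsoft.identityServer.web>
    <useRelayStateForIdpInitiatedSignOn enabled="true" />
</microsoft.identityServer.web>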

 

ADFS Logout URL

Type: Required

Who needs to know this: Application owners

Officially logging out of the application isn’t strictly required, but for your deployments it should be. You’ll need to provide the application owners with your logout URL. The logout method differs depending on whether the application is WS-Fed or SAML. If the application is WS-Fed, just provide them with the following URL:
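The standard ADFS WS-Fed sign-out endpoint takes this form (substitute your own federation service name):

https://<sts.domain.com>/adfs/ls/?wa=wsignout1.0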

For SAML applications, this gets a little more tricky. SAML applications are supposed to log out by sending the user to ADFS with a SAML Logout Request (SLO) like the one below.

This is a sample SLO request and is not intended to be used.

https://<sts.domain.com>/adfs/ls/?SAMLRequest=hZJLS8QwFIX3gv%2BhZD9t0ql9hJkBYVQGxvGJCzdySW600CQ1NwX993bqExe6y318h3sOWRDYrpdb%2F%2BiHeI3PA1JMXmznSE6TJRuCkx6oJenAIsmo5M3x%2BVbmKZd98NEr37EfyN8EEGGIrXcs2ayX7GJ3sr042%2BwecqFNI6BuqrKqq8YobiptShBFw1XJOVe1KItCs%2BQOA438ko1yowjRgBtHEVwcW1wUM17NRH3LhRRzyct7lqxHT62DOFFPMfYyyyhSqjo%2F6ICgX1NLGWhDWUcZWx0eJMlib0VO6mH1yfQpvoDtO0yVt5lG60VmMYKGCGn%2F1C%2Byn9S3zG6MYbNObi73j6sButa0GL5v%2BU%2BXJac%2BWIh%2FR7vvtHpmplUZAzhq0UW2%2BgrZNLlSop4XOOU7x0bxxhR1bhTWnM8rKPMjgdWHj%2FezRx%2Fv9a9fsnoD

Key Takeaway: You’ll find many SAML application owners just recommend using the WS-Fed logout URL you see above. While this is a workaround that typically works, it is NOT the correct way to perform SAML logout.

 

Advanced: SAML & AuthnContextClassRef

Type: Optional

Who Needs to Know This: The application owners need to know what is supported.

One of my customers had a SAML application that worked while users were in the corporate network but wouldn’t work when they were outside the corporate network going through their ADFS proxy. It took me a while to figure this one out.

When the application redirects the user to ADFS, it can indicate in the request which authentication type it wants ADFS to enforce. For example, a SAML application can send a parameter in the SAMLRequest that requires ADFS to perform integrated Windows authentication by sending an AuthnContextClassRef of urn:federation:authentication:windows like:

image

The only problem with this is: what if the user is outside the corporate network and is going through the ADFS Proxy/WAP for access to the application? The ADFS Proxy/WAP can’t perform integrated Windows authentication and can only perform Forms-Based Authentication (FBA). Consequently, the ADFS Proxy was bombing out each time for this application while users were external. We had to configure the SAML application to send two authentication types in the AuthnContextClassRef of the SAML request – one for integrated Windows authentication and one for username/password authentication, separated by a semi-colon:

urn:federation:authentication:windows;urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport
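For reference, the semi-colon-separated string above is how this particular application’s configuration expressed the pair; in standard SAML 2.0, multiple acceptable authentication context classes are carried as separate entries under RequestedAuthnContext, roughly like this sketch:

<samlp:RequestedAuthnContext>
  <saml:AuthnContextClassRef>urn:federation:authentication:windows</saml:AuthnContextClassRef>
  <saml:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:PasswordProtectedTransport</saml:AuthnContextClassRef>
</samlp:RequestedAuthnContext>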

Note: This setting was required in the configuration of Cisco Jabber.

David “I write long blogs for some reason and don’t know why, damn, there I go again, ok I’m done” Gregory.

Common Troubleshooting Issues Encountered When Configuring MBAM 2.5


Hey! Bill Spears here. I'm a Microsoft Premier Field Engineer based in North Carolina and I specialize primarily in Windows Deployment and Client technologies. After completing many MBAM deployments and helping a client or two troubleshoot various MBAM setup issues, I wanted to share some of the most common things that I run into on a regular basis and point out how to troubleshoot and resolve those issues in order to achieve a successful MBAM setup.

Note that everything necessary to achieve a successful MBAM deployment is documented on MSDN at the link below. If you follow these guidelines to ensure you have met all the prerequisites, created the correct Active Directory groups and user accounts, installed the MBAM components as described in the documents, created the correct group policies, and followed the guidance in each document, then your MBAM implementation should go smoothly and be up and running in no time.

Deploying MBAM 2.5

http://msdn.microsoft.com/en-us/library/dn645316.aspx

But what if things aren’t working? Now what? Hopefully these tips will help you overcome some of the common pitfalls that many people run into when deploying MBAM. After successfully deploying the server components of MBAM, which will most commonly be distributed among separate servers for SQL/SSRS, IIS, and optionally SCCM integration, the most common problem encountered is ensuring that the MBAM clients are properly communicating with the server so that they adhere to the MBAM group policies given to them, escrow their recovery keys, and report compliance status. In order to accomplish this, all we need to do is install the MBAM client on the machine and apply the MBAM group policy settings to the machine.

A good first step would be to check Gpresult to ensure that your policy is applied. Detailed instructions on which policies are necessary are outlined in the following MSDN document:

Planning for MBAM 2.5 Group Policy Requirements

http://msdn.microsoft.com/en-us/library/dn645338.aspx

If the policy successfully applied, you will see the settings in this location in the registry:

HKLM\Software\Policies\Microsoft\FVE\MDOPBitLockerManagement
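A quick way to confirm the settings actually landed is to read that key with PowerShell; a minimal sketch (the exact value names depend on which MBAM policies you configured):

# List whatever MBAM policy values have been applied to this machine
Get-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\FVE\MDOPBitLockerManagement" | Format-List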

In order to verify that MBAM Client software was properly installed, you can check Services to ensure that the following service is running:

image
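You can also check this from PowerShell; a minimal sketch, assuming the default MBAM agent service name MBAMAgent (display name "BitLocker Management Client Service"):

# Confirm the MBAM agent service exists and is running
Get-Service -Name MBAMAgent | Select-Object Name, DisplayName, Status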

Once the MBAM Client is installed, the MBAM Event log will be the place to find all the answers. This will be located here:

Event Viewer – Applications and Services Logs – Microsoft – Windows – MBAM (Admin and Operational)

A common failure would be that we are unable to reach the remote endpoint, such as in the example screenshot below:

image

“An error occurred while sending encryption status data” errors may specify “The remote endpoint was not reachable” or “Access was denied by the remote endpoint”.

There are several reasons that the MBAM client may be having trouble reaching the endpoint. My first step would be to visit the registry key mentioned earlier (HKLM\Software\Policies\Microsoft\FVE\MDOPBitLockerManagement) and copy the value from KeyRecoveryServiceEndPoint (this is what you configured in your group policy) and paste this URL into an Internet Explorer window. If you get a page not displayed error, then let’s verify that you have correctly set the URL.

http(s)://<MBAM Server Name>:<the port the web service is bound to>/MBAMRecoveryAndHardwareService/CoreService.svc.

Example: http://mbamserver.contoso.com:80/MBAMRecoveryAndHardwareService/CoreService.svc.

So things to ask yourself are:

1 – Should it be http or https? (Did you supply a certificate when you installed MBAM)

2 – Did you specify FQDN or Hostname when you installed MBAM?

3 – Are you using the default port (80 or 443) or did you change this during MBAM setup wizard?

4 – Any other typos in the URL?
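Once the URL looks right, one quick way to sanity-check it from an affected client is a minimal sketch like the one below; it just reads the endpoint the client is actually using from policy and requests it, so any response from the web service (rather than a connection failure) tells you it is at least reachable:

# Read the configured endpoint and try to reach it with the machine's credentials
$policy = Get-ItemProperty -Path "HKLM:\SOFTWARE\Policies\Microsoft\FVE\MDOPBitLockerManagement"
Invoke-WebRequest -Uri $policy.KeyRecoveryServiceEndPoint -UseDefaultCredentials -UseBasicParsing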

If you are getting prompted for credentials when you paste the URL into Internet Explorer or if you are seeing Access Denied by remote endpoint in your event log, then we would want to check the following:

1 - Is your SPN properly set? The following TechNet document explains how to use the setspn command. Also be sure to take into account if you are using hostname or FQDN.

MBAM 2.5 Server Prerequisites for Stand-alone and Configuration Manager Integration Topologies
https://technet.microsoft.com/en-us/library/dn645331.aspx
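A sketch of how you might review and register the SPN with setspn (the account and host names below are placeholders; use the FQDN or hostname that matches your endpoint URL):

setspn -L CONTOSO\MBAMAppPoolSvc
setspn -S http/mbamserver.contoso.com CONTOSO\MBAMAppPoolSvc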

2 - Have you set delegation on your Web Service Pool Application account?

Go to Active Directory Users and Computers – Find your MBAM Web Application Pool Account – Right Click – Properties – Delegation Tab – Select “Trust the user for delegation to specified services only” – “Use Kerberos only” – Add – Browse to your Application Pool Credentials – Select your http SPN. See screenshot below:

image

3 - Does your URL fall under your Intranet Zone? For example, if your URL uses servername.contoso.com and you do not have an entry for *.contoso.com in Internet Explorer (Internet Options – Security – Local Intranet – Sites – Advanced), Windows will think this URL is on the internet, which would break Kerberos.

4 - Is your Web Service Pool Application account a member of your MBAM Database Read/Write group? Complete explanation of required Active Directory group and user accounts needed for MBAM are described in the following TechNet document:

Planning for MBAM 2.5 Groups and Accounts
http://msdn.microsoft.com/en-us/library/dn645328.aspx

Hopefully, this blog will save you some time if you find yourself trying to figure out how to troubleshoot your MBAM 2.5 deployment. Remember to always check the MBAM Event Log as your first point of troubleshooting, as this will lead you down the correct troubleshooting path.

Happy Encrypting.

-Bill Spears

Mailbag: Starting To Get The Hang Of This (Issue #9)


Hey y’all Mark, Tom and Hilde back for another mailbag Friday. Keep the questions coming and we’ll keep answering them. This week we are getting back into the Hyper-V pool and always some ADFS goodness. Let’s get into it.

FREE Security & The Cloud Virtual Event

Domain Admin credentials while installing ADFS

.NET versions and support life cycle

Hardened OS with Hyper-V cluster

Querying VMs and determine if they are running in Azure

Stuff from the Interwebs

 

Question

Are there any free Security and the Cloud events taking place that I need to know about?

Answer

There happens to be one right around the corner. March 25th, 2015 is an online virtual event. You can register here.

Question

I run a tight ship with my Domain Admin credentials. If ADFS needs DA to install it must be changing something in AD. What is it?

Answer

Two things. First, we create the DKM container to protect the keys that allow sharing of the token signing & token decryption certs when you are using self-signed certs. Second, we also set the SPN on the service account with HOST/adfs.contoso.com for Windows integrated authentication to work.

Question

I am trying to get a handle on .NET versions and support lifecycle - got any tips?

Answer

Here is a FAQ for .NET versions and OS support: http://support.microsoft.com/gp/Framework_FAQ

Question

I want to run Hyper-V on a Cluster but I'm running into issues with our 'hardened' OS build. Any insight?

Answer

I recently worked a couple of tripping points for Hyper-V and Clustering with some common hardening steps:

  • The "Create symbolic links" User Right is often restricted and set to <blank> or no one.
    • For a Hyper-V host, the following needs to have that user right:
    • “NT VIRTUAL MACHINE\Virtual Machines”

clip_image001

 

  • The "Deny access to this computer from the network" User Right is often set to include the "Local account" group to restrict local accounts from accessing the computer remotely. There is a non-administrative local account created by Failover Clustering and it needs this right (due to the Failover Cluster Virtual Adapter that provides cluster communications).

 

  • You CAN restrict this user right to local accounts that are also local admins via a new group added to 2012 R2 called “Local account and member of Administrators group”

clip_image002

clip_image003

Question

We have a large deployment of Azure VMs domain-joined to our on-prem AD. How can I query VMs and determine if they are running in Azure?

Answer

Here are a couple of methods...

1) Use the script here to query for a specific DHCP option that is used in Azure -

2) If you're looking for something a bit more lightweight, you can query for some aspect of the VM Agent (this assumes the VM Agent is installed on the guest).

  • Query for "Windows Azure" services on the VM:

 

  • Query for the existence of this folder on the VM: "C:\WindowsAzure\"
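A minimal sketch of both checks from PowerShell (the service display names and folder path below are the usual VM Agent defaults, so adjust if your image differs):

# Look for the Windows Azure guest agent services
Get-Service | Where-Object { $_.DisplayName -like "Windows Azure*" }

# Check for the VM Agent folder
Test-Path -Path "C:\WindowsAzure"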

 

Stuff from the Interwebs

-There is a Mexican wrestling league that has 3, that’s right 3, different groups of Teenage Mutant Ninja Turtles feuding with each other.

-Marvel’s Avengers: Age of Ultron trailer came out if you missed that.

-It’s almost baseball season here in America which means teams are at spring training. Will Ferrell is playing all 9 positions in 8 games.

-Daylight savings started this past Sunday which explains why everyone is sort of in a bad mood. John Oliver on Last Week Tonight, which is probably my favorite show on Sundays, asks “How is this still a thing?”

 

Mark “perpetually tired” Morowczynski, Tom “farm people” Moser and Michael “Cowabunga” Hildebrand

SHA-1 Deprecation and Changing the Root CA’s Hash Algorithm


Hi, Rick Sasser here, with what was intended to be a quick blurb on security that back references one of my original posts on Choosing a Hash and Encryption Algorithm for a new PKI? and somehow turned out to be the labor equivalent of about a week, counting everyone who chipped in on it, and a lot of back and forth on Crypto.

So, that said, I need to extend some thanks to our PKI team and the people that work on our PKI. Specifically Vic Heller, Larry Talbot, Sergey Simakov, Roger Grimes, Phil Hallin, Wes Hammond, Chris Ayres and Laura Robinson all had contributions. IMHO, we have the best PKI Product in the world, and the people that work on it are amazing folk (and amazingly tolerant of my questions).

I received a PKI Infrastructure request about increasing the Crypto on downlevel Certificate Authorities recently. Essentially, I was asked if you could increase the crypto on a lower tier hierarchy Certificate Authority. The short answer is you CAN, but it is not a matter of simply spinning up a new certificate authority and submitting a new request. Aside from the question of CAN there is the question of SHOULD.

Let’s provide a little context first:

Why is this blog post important?

Any entity/object/account/OS is only as secure as the things that control it. A corporation’s public key infrastructure is usually trusted by the entire client base joined to Active Directory. An Enterprise Certificate Authority by default publishes its certificate to some very, very key locations in Active Directory. Those locations are:

· CN=Certification Authorities,CN=Public Key Services,CN=Services,CN=Configuration,DC=…

· CN=Enrollment Services,CN=Public Key Services,CN=Services,CN=Configuration,DC=…

· CN=NTAuthCertificates, CN=Public Key Services,CN=Services,CN=Configuration,DC=…

· CN=AIA,CN=Public Key Services,CN=Services,CN=Configuration,DC…

Your clients trust your PKI without any reservation (exception being qualified subordination). The extent to which they trust them is exemplified by the red text.

From Guidelines for enabling smart card logon with third-party certification authorities.

The smart card logon certificate must be issued from a CA that is in the NTAuth store. By default, Microsoft Enterprise CAs are added to the NTAuth store.

· If the CA that issued the smart card logon certificate or the domain controller certificates is not properly posted in the NTAuth store, the smart card logon process does not work. The corresponding answer is "Unable to verify the credentials".

· The NTAuth store is located in the Configuration container for the forest. For example, a sample location is as follows: LDAP://server1.name.com/CN=NTAuthCertificates,CN=Public Key Services,CN=Services,CN=Configuration,DC=name,DC=com

· By default, this store is created when you install a Microsoft Enterprise CA. The object can also be created manually by using ADSIedit.msc in the Windows 2000 Support tools or by using LDIFDE. For more information, click the following article number to view the article in the Microsoft Knowledge Base: 295663 How to import third-party certification authority (CA) certificates into the Enterprise NTAuth store

· The relevant attribute is cACertificate, which is an octet String, multiple-valued list of ASN-encoded certificates. After you put the third-party CA in the NTAuth store, Domain-based Group Policy places a registry key (a thumbprint of the certificate) in the following location on all computers in the domain: HKEY_LOCAL_MACHINE\Software\Microsoft\EnterpriseCertificates\NTAuth\Certificates This is refreshed every eight hours on workstations (the typical Group Policy pulse interval).
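For reference, the usual way to publish a third-party CA certificate into the NTAuth store (per KB 295663) is certutil’s -dspublish verb; a sketch, assuming the CA certificate has already been exported to a .cer file:

certutil -dspublish -f ThirdPartyRootCA.cer NTAuthCA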

To summarize, the certificate authorities in the NTAUTH store are TRUSTED FOR AUTHENTICATION. That means they can issue certificates I can use to logon into the domain.

Additionally Certificate requirements when you use EAP-TLS or PEAP with EAP-TLS documents that

You can configure clients to validate server certificates by using the Validate server certificate option on the Authentication tab in the Network Connection properties. When a client uses PEAP-EAP-MS-Challenge Handshake Authentication Protocol (CHAP) version 2 authentication, PEAP with EAP-TLS authentication, or EAP-TLS authentication, the client accepts the server's certificate when the certificate meets the following requirements:

· The computer certificate on the server chains to one of the following:

o A trusted Microsoft root CA.

o A Microsoft stand-alone root or third-party root CA in an Active Directory domain that has an NTAuthCertificates store that contains the published root certificate.

These are probably some of the more obvious places where the trust of PKI in an Enterprise Active Directory is illustrated, but think of this as well – your clients trust your PKI for SSL. Impersonate any web site? Not a problem. Your clients will trust your PKI for https://mybank.com. No problem.

In case it isn’t clear, your PKI is the cornerstone of your security infrastructure. It isn’t a place where one can scrimp on care/feeding/design.

Collision Attacks

Public Key Infrastructures depend on layering cryptographic primitives: hashes and signing. A certificate is a piece of data with a signed hash. A collision occurs when two pieces of data generate the same hash value. The nature of hashing means that there will be collisions. The real trick is to generate a meaningful collision attack, which usually means appending some arbitrary data to the data I actually want. Let’s look at an example:

This is a file, rick.txt

Rick Sasser

PFE Rocks!

My key is Bill.

This is the hash of that file.

C:\Users\rsasser>certutil -hashfile rick.txt

SHA1 hash of file rick.txt:

77 87 7e 84 08 a1 78 e2 2b 86 c2 75 e8 4a fe e6 f4 63 c4 68

It isn’t enough just to generate a piece of data that has the same hash value. The nature of a fixed-length hash guarantees that if the size of the data exceeds the size of the hash value, there will be some piece of data that generates the same hash. (See the hair-counting example at the fun link here.) The real trick is to be able to APPEND some data to predictably manipulate the hash. If you looked at rick.txt and it was meaningless binary data, it would not have value. However, if it was

Rick Bergman

PFE is the Bomb

My key is Ted

<Binary Data to manipulate the hash>

If I can make this generate the same highlighted hash value, then that is VERY useful. This type of attack means that I can mint certificates that appear to be issued from a higher level certificate authority because the hash of the certificate matches and seems to chain to a proper authority.

Using weak crypto is what allows these types of attacks – as an example, Flame leveraged an MD5 collision.

Back to the Question

Now that I’ve set the stage for why this is important, let’s look at a thousand words:

image

Figure 1

Figure 1 is a diagram that everyone familiar with Public Key Infrastructures should be familiar with. Root Certificate Authorities are more trusted, and less available, and issuing Certificate Authorities are less trusted, and more available. As a general rule, most Public Key Infrastructures I have seen use decreasing crypto strengths further down the chain.

Without any change, the result of installing this is illustrated below in figure 2. The CA’s signing algorithm is SHA-512. The CA’s certificate is SHA-1 signed.

image

Figure 2

Now, here’s where things get a little counter-intuitive. The signing algorithm of the root CA is, well, largely unimportant. There is zero cryptography used for a root trust. Instead it is an ACL trust.

All certificates issued under a root derive their trust via signature cryptography. The hash algorithm used for all CA certificates and the end certificate is relevant. Since the root’s key is used to sign the top-most CA, the root’s key strength is relevant for that CA’s signature. The same applies for each lower CA in the chain.

So from a crypto perspective, I would be in good shape if my SHA-1 self-signed Root signed the certificates it issued with a newer hash algorithm.

Right?

Now we're in the CAN vs SHOULD realm. It’s important to remember that like all things security, this is a piece of the puzzle. RSA keys are being deprecated at lower lengths. So while I’m going to discuss changing the signing algorithm of the root CA, it is not the end of this discussion, by any means. If you’re using a 1024 bit RSA key on your self-signed SHA-1 root, it is entirely possible that you might be having a similar conversation about deprecated crypto two or three years from now.

Aside from crypto concerns, some browser vendors have also announced that their browsers will deliver warnings if SHA-1 certificates exist anywhere else in the chain. So while you CAN do this, I emphasize, it is a delaying action. One of the contributors had this to say "There’s security issues and there’s app compat issues, and they aren’t always the same."

Changing the Signing Algorithm of the Root CA

Here is where I tell you that you cannot spin up your Windows Server 2003 Root CA, change the hash algorithm, create a new subordinate and shut it down. Changing the hash algorithm can only be done if the key is stored via a Cryptography Next Generation Key Storage Provider (CNG KSP), such as the Microsoft software KSP: Microsoft Software Key Storage Provider, which means a minimum of Windows Server 2008. For information on migrating a key to the new KSP look here. Now, that’s for the Software KSP. A production PKI should be following best practices and use an HSM. At this point, you need to refer to your vendor and TEST THOROUGHLY.

Presuming that your key IS stored in a CNG KSP

certutil -setreg ca\csp\CNGHashAlgorithm SHA256

This command will not take effect until the CA is restarted. If you have a problem restarting your CA after making this change, I note that the registry entry is case sensitive.

Now I can issue SHA256 certs from a SHA-1 root (and it's worth noting that if you dig around on a few https pages, you'll find that this is what the major vendors are doing). The resulting SubCa Certificate after I submit a new request looks like:

image

So now I'm good to go. Right? No. You changed the signing algorithm of the Root CA. The CRL once republished looks like:

image

So if you're trying to maintain down-level compatibility for clients that don't necessarily understand SHA2 algorithms, you'll need to change the signing algorithm BACK to SHA1 prior to issuing a CRL.

That looks like:

PS C:\windows\system32\CertSrv\CertEnroll> certutil -setreg ca\csp\CNGHashAlgorithm SHA1
SYSTEM\CurrentControlSet\Services\CertSvc\Configuration\2008R2ROOTCA-CA\csp:

Old Value:
CNGHashAlgorithm REG_SZ = SHA256

New Value:
CNGHashAlgorithm REG_SZ = SHA1
CertUtil: -setreg command completed successfully.
The CertSvc service may need to be restarted for changes to take effect.
PS C:\windows\system32\CertSrv\CertEnroll> net stop certsvc
The Active Directory Certificate Services service is stopping.
The Active Directory Certificate Services service was stopped successfully.

PS C:\windows\system32\CertSrv\CertEnroll> net start certsvc
The Active Directory Certificate Services service is starting.
The Active Directory Certificate Services service was started successfully.

certutil -crl

CertUtil: -CRL command completed successfully.

And again, if you browse a couple of publicly trusted roots that are issuing SHA-256 or better certificates, you'll see that there are a couple of SHA-1 signed CRLS.

It is worth noting that trusting a CRL also relies on trust derived via signature. So presuming viable collision attacks exist, reverting back to SHA1 for CRL signing may not meet auditing needs.

Summary

Ok, so that was a lot. Let me summarize:

 

  • You CAN change the signing algorithm of a SHA-1 Root MSPKI and issue a SHA-256 intermediate and be cryptographically secure.
  • You must change it back if your client base cannot tolerate SHA2 signed CRLS (and then you might not be cryptographically secure).
  • While it's possible to tack on a SHA2 SubCA to your internal PKI, it is entirely PROBABLE that it is not a good idea.

Additional Considerations

I was going to call this section recommendations, but that’s a loaded gun. So here are some “things to consider”.

· Get a PKI Health Check from Premier Field Engineering. They can help you solve the common issues that plague most Public Key Infrastructures and provide a good starting point if you determine you need to design a new one.

· Design a Suite B PKI from the ground up. Microsoft Services can assist you with that if you need help. I don’t recommend that your first Public Key Infrastructure be a production design. Leverage an expert. Public Key Infrastructures are (were, I’m not so sure this is a great idea anymore) long lived security implementations with expiries in the 10 – 20 year range. It is entirely probable that your design will live on longer than your employment and be used for purposes you did not envision. Leverage an expert.

· I was at a security presentation recently where a noted personage stated that “Malware is increasingly signed”. You should control your client’s trust base. In other words, domain joined computers should not be able to add CA’s to their trust store. More on this later presuming that blog attempt does not turn into this blog attempt.

· A Public Key Infrastructure, crypto aside, is no good without good processes and procedures in place. It does no good to require Administrators to use Smart Cards or Virtual Smart Cards when the issuance policy for those smart cards isn’t tightly controlled. If you issue them to everyone without any governance, you end up with Security Theater (something that looks like security and provides a false sense of it).

Rick Sasser

You Use Storage? We Want To Hear From You


Hey y’all, Mark here asking all our great readers for a real quick favor. Friend of the blog/the internet’s punching bag Ned Pyle on the Windows Server team has just posted a quick 12 question survey looking for feedback around storage solutions. You use storage don’t you? Of course you do! To be clear this doesn’t have to be Microsoft storage solutions, just any storage in general. So really, let’s be honest you have no excuse. Please take a few minutes to run through this or pass it along to someone who you think would love to really tell Microsoft what they think. What you write could help influence future storage solutions which is pretty cool if you ask me.

Mark “1 IOU” Morowczynski


Getting started with the Graph API with the Graph Explorer


Hi Folks. Lakshman Hariharan here with a post on a cool tool from our good friends on the Azure team called Graph Explorer. In a nutshell, the Azure AD Graph API provides programmatic access to Azure AD through REST API endpoints. Applications can use the Graph API to perform create, read, update, and delete (CRUD) operations on directory data and objects. For example, the Graph API supports the following common operations for a user object:

· Create a new user in a directory

· Get a user’s detailed properties, such as their groups

· Update a user’s properties, such as their location and phone number, or change their password

· Check a user’s group membership for role-based access

· Disable a user’s account or delete it entirely

Since I am not a programmer, even if one were to apply the most generous interpretation of the word, this feature called Graph Explorer that I came across recently piqued my interest. Graph Explorer, as the name suggests, allows you to explore or browse your Azure AD with absolutely no programming skills required. Several blogs discuss what Graph Explorer is, so I intend to use this post to show you how you can, if you have an Azure AD tenant set up, start using Graph Explorer. The post is broken down into four steps.


Step 1: Log in to your Azure AD tenant using Azure Active Directory Powershell
Step 2: Create the Service Principal (MsolServicePrincipal) and allow access to read and modify data
Step 3: Login to Graph Explorer
Step 4: Run queries using Graph Explorer

So that being said, let’s get started. Before you can follow along with this step by step, here are a few things you will require:

1. An online Azure AD tenant setup with at least a handful of users populated either via DirSync or AADSync from your on-premises Active Directory environment.

2. An online Service Principal (MsolServicePrincipal) that has permissions to access your online Azure AD tenant.

Step 1: Log in to your Azure AD tenant using Azure Active Directory Powershell

My online Azure AD tenant is called lhazure.com, so I used the Connect-MsolService cmdlet to connect and authenticate to Azure AD using an account that is a Global Administrator.
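A minimal sketch of that step (assuming the MSOnline / Azure AD PowerShell module is installed):

# Load the MSOnline module and sign in with a Global Administrator account
Import-Module MSOnline
Connect-MsolService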

Step 2: Create the Service Principal (MsolServicePrincipal) and allow access to read and modify data

a. Once logged in using an account that is a Global Administrator, execute the following PowerShell cmdlet to create the new Service Principal

New-MsolServicePrincipal -DisplayName GraphExplorer -Type symmetric

This will result in something similar to the following screenshot

Since I didn’t specify a value for the symmetric key, one was automatically generated for me.

clip_image002

Important: Make a note of this key and the AppPrincipalID because you will need it to log in to Graph Explorer. Also make a note of the ObjectID since you will need it to provide the Service Principal rights to Azure AD.

b. Execute the following cmdlet to give the Service Principal you created in the previous step rights in Azure AD. At the risk of stating the obvious, replace the value for RoleMemberObjectID with the value of the ObjectID created by you. This should return a successful result.

Add-MsolRoleMember -RoleName "Company Administrator" -RoleMemberType ServicePrincipal -RoleMemberObjectId ee4d6241-9b84-4a64-af08-b7d429090497

Step 3: Login to Graph Explorer

Open Internet Explorer and navigate to https://graphexplorer.cloudapp.net. This will result in landing at the page depicted in the screenshot below

Under “Resource”, enter the following, replacing <yourAzureADTenant> with the actual name of your Azure AD tenant. In this case I am interested in getting a list of users:

https://graph.windows.net/<yourAzureADTenant>/users?api-version=2013-04-05

When replaced with my Azure AD tenant of lhazure.com it looks like the following screenshot

clip_image004

Now click “Get” on the right of the Resource URL.
This will bring you to the login prompt where you will enter the AppPrincipalID and Symmetric key generated in Step 2.

As you can see, you also have the option of using the Demo Company as well but in this case I am demonstrating using an actual Azure AD tenant.

clip_image006

Once successfully logged in, you will see output similar to the following screenshot.

clip_image008

Step 4: Run queries using Graph Explorer

For a list of common Graph API queries refer to this article. Now let’s walk through a few examples using lhazure.com.

In this first query I am interested in seeing the properties of a user named John Doe that has a UserPrincipalName of johndoe@lhazure.com. So I enter the following request:

https://graph.windows.net/lhazure.com/users/johndoe@lhazure.com?api-version=2013-04-05

This results in the following output. Note some of the properties highlighted.

{
  "odata.metadata": "https://graph.windows.net/lhazure.com/$metadata#directoryObjects/Microsoft.WindowsAzure.ActiveDirectory.User/@Element",
  "odata.type": "Microsoft.WindowsAzure.ActiveDirectory.User",
  "objectType": "User",
  "objectId": "1c2260b0-41a6-4e32-a5ea-eb7f4ce46103",
  "accountEnabled": true,
  "assignedLicenses": [],
  "assignedPlans": [],
  "city": "Fictionland",
  "country": null,
  "department": null,
  "dirSyncEnabled": true,
  "displayName": "John Doe",
  "facsimileTelephoneNumber": null,
  "givenName": "John",
  "jobTitle": null,
  "lastDirSyncTime": "2015-03-07T17:01:32Z",
  "mail": null,
  "mailNickname": "johndoe",
  "mobile": null,
  "otherMails": [],
  "passwordPolicies": null,
  "passwordProfile": null,
  "physicalDeliveryOfficeName": null,
  "postalCode": null,
  "preferredLanguage": "Fictional Language",
  "provisionedPlans": [],
  "provisioningErrors": [],
  "proxyAddresses": [],
  "state": "FI",
  "streetAddress": "123 ABC Lane",
  "surname": "Doe",
  "telephoneNumber": null,
  "usageLocation": null,
  "userPrincipalName": "johndoe@lhazure.com"
 

}

If I am interested in only returning the street address for John Doe, I use the following query (the response is the Edm.String value of that single property):

https://graph.windows.net/lhazure.com/users/johndoe@lhazure.com/streetAddress?api-version=2013-04-05

{
  "odata.metadata": "https://graph.windows.net/lhazure.com/$metadata#Edm.String",
  "value": "123 ABC Lane"

}

If I am interested in querying what groups John Doe is a member of, then I run the following query. As you can see, John Doe is a member of the group All Full Time Employees.

https://graph.windows.net/lhazure.com/users/johndoe@lhazure.com/memberOf?api-version=2013-04-05

{
  "odata.metadata": "https://graph.windows.net/lhazure.com/$metadata#directoryObjects",
  "value": [
    {
      "odata.type": "Microsoft.WindowsAzure.ActiveDirectory.Group",
      "objectType": "Group",
      "objectId": "7ffb6db2-e41c-4b67-8170-f959a1d3f2ca",
      "description": null,
      "dirSyncEnabled": null,
      "displayName": "All Full Time Employees",
      "lastDirSyncTime": null,
      "mail": null,
      "mailNickname": "06f92982-41f9-4c96-b6a8-865ed4e2b82c",
      "mailEnabled": false,
      "provisioningErrors": [],
      "proxyAddresses": [],
      "securityEnabled": true
    }
  ]

}

Well, that’s it from me, for now. Hope you find this post as useful and the feature as cool as I did. Happy exploring...

Lakshman Hariharan

How to Force a Diagnostic Memory Dump When a Computer Hangs


Matthew Reynolds here. My job is to make Windows sing (figuratively) in large enterprises.

If you have a machine which freezes you may need to generate a memory dump in order to find the cause. If you can generate the memory dump before calling Microsoft support you might speed up your diagnosis.

Use this technique if…

· The machine becomes unresponsive (but doesn’t crash to a blue screen) such that you cannot use other diagnostic tools

· The problem is likely to happen again in the future so you have a chance to configure the machine for next time

If you are thinking to yourself now, “what about live remote kernel debug?”, or “what about subtle differences between binary versions”, or “page file sizes are a many-nuanced topic” you are not wrong—you are just reading the wrong post. Exhaustive documentation exists at https://support.microsoft.com/en-us/kb/969028 and linked friends. These cover many more options, edge cases, virtualization and so on. I am writing this post because I recently found that my customers and I needed a quick “try this first” reference for ordinary PCs and servers (https://youtu.be/pjvQFtlNQ-M).

Step 1: Configure the Automatic (or Kernel) memory dump setting and page file

Of the various memory dump styles “Kernel” is often the best balance between size and usefulness.

Starting with Windows 8 / Server 2012 the “Automatic” option is a great way to get a Kernel memory dump. The automatic option is described here. http://blogs.technet.com/b/askcore/archive/2012/09/12/windows-8-and-windows-server-2012-automatic-memory-dump.aspx. Essentially you just choose the Automatic options for both memory dump configuration and page file size.

For Windows 7 / Server 2008 R2 use “Kernel” option instead with either system managed page file size or page file size > size of RAM.

image
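If you would rather set this with a script than click through the System Properties UI, a minimal sketch using the documented CrashControl registry value (7 = Automatic on Windows 8 / Server 2012 and later, 2 = Kernel on Windows 7 / Server 2008 R2; a reboot is required for the change to take effect):

reg add HKLM\SYSTEM\CurrentControlSet\Control\CrashControl /v CrashDumpEnabled /t REG_DWORD /d 7 /f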

Other dump modes such as Mini or Full might be used in consultation with a support engineer.

Step 2: Trigger the crash dump

Option A – NMICrashDump (good for remotely managed server class hardware)

Some server hardware provides the ability to trigger a crash (to get a memory dump) using a hardware interrupt. Typically this would be triggered using a hardware level remote management interface. 

This approach is described here: https://support.microsoft.com/en-us/kb/927069.

Essentially you set the NMICrashDump registry value and then use the hardware specific remote management interface to trigger the crash.
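A sketch of that registry value from the KB above (needed on Windows 7 / Server 2008 R2 and earlier; Windows 8 / Server 2012 and later honor NMI-initiated crashes without it), followed by a reboot:

reg add HKLM\SYSTEM\CurrentControlSet\Control\CrashControl /v NMICrashDump /t REG_DWORD /d 1 /f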

Option B – CrashOnCtrlScroll (good for laptops and PC / workgroup-server class hardware)

“CrashOnCtrlScroll” (https://msdn.microsoft.com/en-us/library/windows/hardware/ff545499(v=vs.85).aspx) is a technique where the keyboard driver and kernel conspire to crash the machine (to get a memory dump) when a magic key sequence is detected. This is like a Windows Internals version of up, up, down, down, left, right, left, right, B, A… (http://en.wikipedia.org/wiki/Konami_Code).

image
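Enabling the default right Ctrl + Scroll Lock + Scroll Lock sequence is a single documented registry value per keyboard driver; a sketch (i8042prt covers PS/2 keyboards, kbdhid covers USB keyboards; reboot afterwards):

reg add HKLM\SYSTEM\CurrentControlSet\Services\i8042prt\Parameters /v CrashOnCtrlScroll /t REG_DWORD /d 1 /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\kbdhid\Parameters /v CrashOnCtrlScroll /t REG_DWORD /d 1 /f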

Some keyboards and KVMs prevent the default Control + Scroll Lock + Scroll Lock sequence from working. Where the heck is Scroll Lock on my tiny tablet keyboard?

Fortunately you can change the magic keys. The CrashOnCtrlScroll article linked above alludes to this but leaves much of the implementation to the reader’s imagination. I typically start with examples that others have figured out like http://random-tutorials.blogspot.com/2012/08/manual-crash-dumps-on-windows.html which looks as follows in my registry. Be careful. Control + D + D as configured here is much more likely to be hit accidentally than Control + Scroll Lock + Scroll Lock

image

Step 3: Retrieve the file and get it to an expert for analysis

Copy or move the memory dump file (located by default at %SystemRoot%\memory.dmp) as needed. If the original hang was blocking boot or logon you may have to use an alternative boot path such as Safe Mode to get there. In my world the target audience for the memory dump is usually an escalation level expert deep inside Microsoft support: https://support.microsoft.com.

In case you decide to have a go at debugging it using windbg.exe or other tools (https://support.microsoft.com/en-us/kb/315263) keep in mind that the cause of your crash is already known. You triggered it manually. I stress this because many debugging tools or guides (e.g., !analyze) assume that you are trying to learn the cause of the crash and will simply report that the crash was triggered by whichever method you used.

Instead your goal is to use the memory dump to find the cause of the unresponsiveness which began prior to the crash. This is going to involve looking for locks, IRPs, critical sections, hung threads, etc. If only there were a cheat code…

Up, up, down, down, left, right, left, right, B, A (and call us)!

-Matthew “Glamour Shots” Reynolds

Mailbag: Opening Day (Issue #10)


 

Hey y’all Mark and Tom here. Bet you thought we’d miss this week too.  We’ve been a bit busy over here so that would mostly explain it. That and I’m in two fantasy baseball leagues this year. My nerdiness extends into sports as well thank you very much. Tom was doing something nerdy as well…..probably. Anyways keep sending the questions and we’ll keep answering them. Let’s jump in.

 

ADFS in Azure

ADFS Login page customization

Automatically joining a workplace join device

Password not required

Stuff from the Interwebs

 

Question

We want to host ADFS servers in Azure. Is there any documentation around this? Can we do this?

Answer

Yes, you can do this, and here are some links to get you started: https://technet.microsoft.com/library/dn509539.aspx

 

Question

We want to customize our ADFS login page. How do we do this?

Answer

You’ll be using PowerShell to do that: https://technet.microsoft.com/en-us/library/dn280950.aspx. If that doesn’t meet your requirements, you can take a look at https://technet.microsoft.com/en-us/library/dn636121.aspx

 

Question

I’m starting to use workplace join a lot and we want to take away the steps where the user has to manually join the device. Is there a way to automatically do this on a domain joined device?

Answer

Yes, you can do this. For Windows 8.1 you can use Group Policy to set this configuration (https://technet.microsoft.com/en-us/library/dn720812.aspx). For Windows 7 you’ll need to download a package that runs as a scheduled task (https://technet.microsoft.com/en-us/library/dn609827.aspx).

 

Question

I noticed a bunch of user accounts in my domain have "password not required" set. What gives? Should I fix it?

Answer

Yes, you should.

Certain provisioning software (including dsadd) will create the accounts with the user account control attribute set to 0x220 hex, or 544 decimal. That indicates PASSWD_NOTREQD and NORMAL_ACCOUNT. The default value for a standard user created via ADUC, with no other options enabled would be 0x200, or 512 decimal.

While having this set isn't the end of the world, as users will still have to enter a password in the UI while changing the password, it IS possible that an administrator could reset the user's password to blank, not requiring a password at all for logon. Obviously we don't want that.

Fixing this is pretty easy with PowerShell. If you want to discover all of the users, you can consult this handy TechNet KB for the list of values: https://support.microsoft.com/en-us/kb/305144/

After that, we need to construct the filter… we can use a bitwise AND to find out if the UAC attribute contains the value we're looking for:

Get-Aduser -Filter {UserAccountControl -band 0x020}

And that should return all of the accounts with password not required set. You'll probably want to scope that down to a specific OU, as the above syntax will get ALL accounts. There might be a valid reason to leave it in place. If we pipe that to Set-ADUser with a few switches, we can remove the value and security will stop complaining.

 Get-Aduser -Filter {UserAccountControl -band 0x020} | Set-Aduser -PasswordNotRequired:$false

Stuff from the Interwebs

-True Detective season 2 teaser trailer just appeared and it’s awesome like you’d expect. Watch season 1 if you missed it.  

-Hockey playoffs are here for both professional, college and high school. However Minnesota takes their high school hockey extra seriously with the all hockey hair team.

-Also the NHL needs to get rid of the "Loser Point". Tom and I both co-sign on this decision.  

-Baseball has just started, the best time of year. Listen to Domingo, a 7-time Infielder of the Year and 6-time Outfielder of the Year award winner (two years overlapping when he played both SS and LF in order to hit twice in the lineup), get you prep'd on Opening Day. Watch his other videos unless you are Semi-Pro or worse….Sunday League.

 

Mark “that one got too much of my bat” Morowczynski and Tom “cage bombs” Moser

Becoming an WPA Xpert Part 12: Timing User Login Credentials (Sometimes it IS the user)


Hi everyone, Randy Reyes here with a much needed update to our slow boot slow login (SBSL) series. It’s been a while since our last entry, and I ran into an interesting customer question.

Customer: Can we see how long it takes an employee to type their user name and password?

Randy: Thanks to WPT the answer is yes.

The customer provided me with the trace from the last known time the user logged in.
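If you ever need to capture a comparable boot trace yourself, the Windows Performance Toolkit’s xbootmgr tool is the usual route; a minimal sketch (this reboots the machine and records boot plus post-boot activity to the folder you specify):

xbootmgr -trace boot -resultPath C:\Traces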

So let’s get to it.

The Before

PreSMSS: 3.973 seconds

SMSSInit: 7.433 seconds

WinlogonInit: 45.502 seconds

ExplorerInit: 0.998 seconds

Post Boot: 18.800 seconds

Boot to Post Boot Activity ended: 72.734 seconds (72 seconds and 734 milliseconds), or roughly 1 minute and 12 seconds

image

Now you might be saying to yourself, 1 minute and 12 seconds is not too bad. Since I only received the trace and no other information, I didn’t have any idea how much memory, CPU, or disk speed this particular host had. I decided to check the specs.

In order to start looking at the specs we go to the tab Trace, then System and then General.

image

Next, Storage

image

What if I told you it was an SSD (solid state drive)? Would you consider 1 minute and 12 seconds to be an optimal value? I’ve discussed some optimal times in a previous post, “Becoming an Xperf Xpert Part 7: Slow Profile Load and Our Very First Stack Walk”. Based on the hardware specs, it looks like this machine should be booting faster.

The major delay in the boot trace can be identified in the Winlogon Phase (45 seconds). Many operations occur in parallel during WinLogonInit. On many systems, this phase is CPU bound and has large I/O demands. Services like PnP and Power, network subsystem, Computer and User Group Policy processing, CAD (CTRL+ALT+DEL) screen and credentials input can all lead to a delay. Good citizenship from the services that start in this phase is critical for optimized boot times.

To start, we are going to expand the System Activity graph group and add the Generic Events graph using table only.

image

After arranging the tables (Provider Name, Task Name) and the golden bar, the first issue detected was under the Microsoft-Windows-Winlogon provider. The Task Name Display Welcome Screen, aka CTRL+ALT+DEL, was available to the user at 8.764 seconds into the trace. But the user didn’t enter the key combination until 18.055 seconds into the trace.

Subtracting these times, we get 9.295 seconds spent just waiting for the user to press CTRL+ALT+DEL.

image

The next issue detected in this particular trace is located under the Task Name Request Credential. It looks like the user entered the user name and password three different times: first from 18.692 to 39.59 seconds into the trace, again from 40.951 to 48.160, and finally from 48.958 to 51.012.

image

It looks like either the username, the password, or both were incorrectly typed and access was denied.

At this point I explained to the customer that between the 9.295 seconds spent waiting to press CTRL+ALT+DEL and the 32.392 seconds spent on possibly mistyped credentials, we had most likely found the reason for the user’s long delay.

The solution was a simple one: ask the user to log in again with the proper credentials. The results are in the picture below.

The After

Boot to Post Boot Activity ended: 39 seconds and 373 milliseconds

image

We probably have a bit more work to do to continue to optimize but we are heading in the right direction.

All the previous SBSL articles can be found at http://blogs.technet.com/b/askpfeplat/archive/tags/sbsl/

If you are really excited and want to run this tool on the Windows 10 Preview, here is another blog from our good friend Yong Rhee:

WPT: Updated version of “Windows Performance Toolkit” from Windows 10 Technical Preview ADK or SDK

http://blogs.technet.com/b/yongrhee/archive/2015/03/21/wpt-updated-version-of-windows-performance-toolkit-10-technical-preview-from-the-adk.aspx

Randy “Why does this keep happening to me” Reyes

How to Manage Surface Pro 3 UEFI Through PowerShell


Hi, Kyle Blagg here. I’m a Premier Field Engineer who works with enterprise customers for everything Surface. Recently the Surface Engineering team released a firmware update that enabled some new capabilities in the UEFI that are of significant importance for a lot of customers. We now allow you to enable/disable features like the Front and/or Rear Camera, Wireless, Bluetooth, Network Boot as well as some other nifty features.

If you’re trying to deploy or manage hundreds, thousands or even tens of thousands of Surface Pro 3 devices, the last thing you want to have to do is manually set a password in the UEFI or manually modify those settings for all of your devices. As a result of the Surface Engineering team’s hard work, you can now utilize a Powershell script to control the UEFI settings.

What are the requirements?

First, let’s discuss the requirements:

· Surface Pro 3

· UEFI Firmware v3.11.760.0 (Download Here or download via Windows Update)**

· Surface Pro 3 Firmware Tools MSI (Download Here)

· Administrative Rights on your Surface

** This version of UEFI should already be installed if you use Windows Update. If you use WSUS/SCCM for updates, then you'll need to push out the latest drivers/firmware by using our new MSI (Link)

Now that we know the requirements, now what?

Now let’s get into the details. On our TechNet site (Link) we have some documentation and some sample scripts of how to identify and configure the settings. We’ll cover some of the same information here to provide a good base, but also provide some suggestions to make the process easier.

Before we can leverage any of the PowerShell scripts, we need to install the Surface Pro 3 Firmware Tools MSI on the device that you wish to configure. You can push out that MSI through your normal software distribution processes (i.e. System Center Configuration Manager).

image

If you’re installing it locally, just continue following through the Install prompts to complete the installation. If you need to do a silent install, you can get the supported switches via command line by running: “Surface Firmware Tool.msi” /? . That will give you all of the options available.

In our example, let's suppose we want to install it silently via command line without the installer forcing a restart.

image
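A sketch of what that command line might look like, using the standard msiexec switches and assuming the MSI file name shown above:

msiexec /i "Surface Firmware Tool.msi" /qn /norestart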

 

Now that we have the requirements installed, now what?

Now that we have the Surface Firmware Tool installed, let’s see what we can do with it. Go ahead and open up the Powershell ISE to begin developing your script that we’ll use to configure your Surface Pro 3 devices.

The first thing that we’ll need to do is load the Extension that will allow us to access the UEFI options. We do that by running the command below:

[System.Reflection.Assembly]::Load("SurfaceUefiManager, Version=1.0.5483.22783, Culture=neutral, PublicKeyToken=20606f4b5276c705")

If your device is already configured to use an Administrator Password, you’ll need to provide the current UEFI Administrator password. If you don’t have a password currently assigned, then this option will be ignored if you try to run it. You’ll just need to run the line below and substitute 1234 with your currently configured Password.

[Microsoft.Surface.FirmwareOption]::Unlock("1234")

At this point, you should now have access to the UEFI via Powershell, but now what? Thankfully, we can now access that information via a simple PowerShell script. If you’ll take a look at the TechNet page, you’ll see a few script samples to give you some ideas of what you can do. One thing that I like to do when scripting in Powershell is creating Functions so it’s easy to execute it on demand. Here’s what that would look like if you decide to go down that road:

 
Function Get-UEFIOptions
{
    # Get the collection of all configurable settings
    [Microsoft.Surface.FirmwareOption]::All() | ForEach-Object {
        [PSCustomObject]@{
            Name              = $_.Name
            Description       = $_.Description
            CurrentValue      = $_.CurrentValue
            DefaultValue      = $_.DefaultValue
            ProposedValue     = $_.ProposedValue
            AllowedValues     = $_.FriendlyRegEx
            RegularExpression = $_.RegEx
        }
    }
}

If I execute that function in PowerShell, I can get all of the available options and their allowed values. In order to keep things short, I’ve only provided a partial screenshot of the available options.

image

So how do I interpret the data that it gives me? In the screenshot, you can see an option for Password and TPM. We can see that the allowed values for a Password is that it has to be alphanumeric and must be between 4 and 20 characters in length. TPM can be enabled or disabled by setting the value to either 1 or 0.

Now that we know our options, how do we actually configure the options?

Now that we know what we can set and the values that we need to set, how do we actually set them? I’m glad that you asked. There’s a command for that too. The TechNet article shows you a way of being able to set the password so we’ll leverage that, but what if we want an easy to use Function that we can use for all of the different UEFI Options and minimize the amount of scripting that we have to do. One thing you may notice between my scripts and the sample scripts on the TechNet site is the lack of the loading of the extension and the password as part of the function. That is because those are the first two lines of my PowerShell script. That way those steps are completed as soon as the script is executed rather than be called each time I try to set a setting.

Wouldn’t it be great if you could set the password and other options using a PowerShell function using parameters? Here’s how:

Function Set-UEFISetting
{
    param(
        [Parameter(Mandatory=$true)]$Setting,
        [Parameter(Mandatory=$true)]$Value)

    # Look up the UEFI option by name and stage the new value (applied at the next restart)
    $UEFISetting = [Microsoft.Surface.FirmwareOption]::Find($Setting)
    $UEFISetting.ProposedValue = "$Value"
}
 

Let’s take a look at what we’re doing. We’ve created a PowerShell function that allows you to set the UEFI options by using parameters. The function has two parameters that are mandatory in order for the UEFI options to be set correctly. The first is the actual name of the Setting and the second is the Value that you want to apply. Earlier I showed how to get all of the available options. Once we run that, we’ll see that one of the fields returned is Name. That is what we’ll use as the Setting parameter. One of the other fields returned is AllowedValues; these are what you’ll use as the Value parameter.

Here’s what it will look like if you want to set many of the current options available on the SP3:

Set-UEFISetting -Setting "Password" -Value "Password"
Set-UEFISetting -Setting "FrontCamera" -Value "00"
Set-UEFISetting -Setting "TPM" -Value "0"
Set-UEFISetting -Setting "PxeBoot" -Value "FE"
Set-UEFISetting -Setting "SideUsb" -Value "FE"
Set-UEFISetting -Setting "DockingPorts" -Value "00"
Set-UEFISetting -Setting "RearCamera" -Value "00"
Set-UEFISetting -Setting "WiFi" -Value "00"
Set-UEFISetting -Setting "Bluetooth" -Value "00"
Set-UEFISetting -Setting "Audio" -Value "00"
Set-UEFISetting -Setting "SdPort" -Value "00"
Set-UEFISetting -Setting "AltBootOrder" -Value "2"
 

After you run the commands above, you’ll need to restart for the settings to take effect. If you accidentally apply the wrong setting or need to revert back to the default values, there is a sample script on the TechNet page that shows how to do that.

So there we have it, an easy to use PowerShell function to be able to modify the UEFI values for the Surface Pro 3. Feel free to add additional logic and/or error handling to your script. Kudos to the Surface team for adding this new functionality.

-Kyle Blagg
