
Windows RT Tablets, x86 applications and Windows Server Remote Desktop Services. A better together Microsoft Experience.


Hello, Jeff ‘the dude’ Stokes back with a post to help you fully utilize that BYOD experience. That of course would be the Windows Surface RT and Windows Surface 2 tablets. They have a place and that place is Modern Apps. But what about legacy x86 apps you say? You can run those too via RemoteApp, and as Montell Jordan would say…this is how we do it.

How does one enable a mobile workforce with affordable devices and simple to construct backend environments, one might ask? This post is a primer, a guide for a proof of concept, a tantalizing tale of tranquil steps aka a guide for you to set this up in a test lab. So read and learn as the dude guides you through this step by step. A lab we will build, a proof of concept Remote Desktop Services node, with Remote Applications. If you don’t have a Windows RT Tablet, fear not, you can reproduce the steps in a Windows 8.1 VM as well.

Installing the RDS Server

1. Install Windows Server 2012. Run Windows Updates. Pick everything. Install. Reboot. Do the normal stuff (time zone, name, domain join, etc.).

2. Enable the RDS Role. This is an easy step, but an important one. It goes a little something like this:

image

3. We add a role and feature, through the wizard of course!

image

4. And then hit Next. For our lab/proof of concept/playground, we just pick “Quick Start”.

image

5. This is not a VDI Post (nor a Love Song) so pick Session-based desktop deployment.

image

6.  And then we need to of course verify we are installing this role and feature set on PICKLE! Well, that’s my name for the demo, you picked your own I’m sure for your lab. I also named my lab based on the movie I watched most recently...

image

7. No funky rights needed here, because the wizard is going to take care of it for us! That’s something the PG did an awesome job on. Standing up a (simple) VDI or RDS setup is quite easy in this wizard, kudos to them!

image

8. Check the box for “restart automatically if required” and click “deploy” and away we go…

image

It’ll reboot and then continue in progress…if you run into issues the event log is a good place to start, but I haven’t had a wizard deployment fail yet myself.

image

9. You’ll know it’s done when you can see this screen. Do note the bottom, where it gives you the URL to access your RDS farm. You can click it right there in the wizard. You’ll also need it later if you want to test connectivity with other folks, etc.

image

10. By the way, normally you’d also need to set up licensing and so on. I’m not really wanting to get into licensing in this post though, sorry. After all, this is a proof of concept. So instead I get this:

image

Note: Configuring RDS for High Availability, Security, Internet-Facing Gateways, etc, is beyond the scope of this guide. Thank you.
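Side note: if you would rather script this than click through the wizard, the RemoteDesktop PowerShell module can stand up a comparable session-based deployment. This is just a rough sketch with placeholder server names, not literally what the Quick Start wizard runs behind the scenes:

    # Put all three RDS roles on one server (lab/proof of concept only)
    New-RDSessionDeployment -ConnectionBroker "pickle.contoso.lab" -WebAccessServer "pickle.contoso.lab" -SessionHost "pickle.contoso.lab"
    # Create a collection to hold the published RemoteApps
    New-RDSessionCollection -CollectionName "QuickSessionCollection" -SessionHost "pickle.contoso.lab" -ConnectionBroker "pickle.contoso.lab"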

11. Now that PICKLE has the role we can go into the management console for it. See the green plus signs? We can click those and easily add a server for that role. It is quite a slick setup really. An RD Gateway is mainly used to publish RDS apps to the Internet, extranet or intranet, so we don’t get into that with this post either. Again, simple, easy setup here.

image

12. Note that almost everything is wizard based or guided setup. The only thing we need to do, well, we don’t HAVE to, but the only thing I’m going to do here is make sure my apps are published:

image
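Another aside: publishing an additional app from PowerShell looks roughly like this. The collection name below is the Quick Start default, so adjust it to whatever your deployment actually created:

    # Publish Notepad as a RemoteApp in the existing collection
    New-RDRemoteApp -CollectionName "QuickSessionCollection" -DisplayName "Notepad" -FilePath "C:\Windows\System32\notepad.exe" -ConnectionBroker "pickle.contoso.lab"
    # List what is currently published
    Get-RDRemoteApp -CollectionName "QuickSessionCollection" -ConnectionBroker "pickle.contoso.lab"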

13. Note it made Domain Users the default user group for this demo. It also was kind enough to publish 3 apps…but how do we get to them on a Surface RT (or anything else for that matter)?

Connecting With a Domain Joined Client

Let us start simple. For our non-Surface/Mobility friends, I’ll first connect from a domain joined machine, a VM running Windows 7 x64 SP1.

1. The first time a user hits the logon page, they are prompted to run the ‘Microsoft Remote Desktop Services Web Access Connector’ add-in. That’s fine, it’s needed.

image

2. Then you simply logon to the webpage using a Domain User account and select if you want the browser to be able to save credentials (the radio button at the bottom):

image

3. After doing so a balloon will appear that notes you are connected to work resources. This should be a common sighting for you with this exercise.

image

4. And then your web page will look like this. It’s the default suite of applications, just to demo that the thing works for you. Nothing too harmful, nice easy apps. Any x86 app should work though from my experience.

image

5. Launch an app, and voila you get a prompt, awfully similar to the mstsc window….note the publisher is unknown because we haven’t trusted the certificate.

image

6. After you connect, you are running the app, but remotely, as the icon on the taskbar tells us… to exit out, simply click the X like you would any other window. Easy enough right?

image

Connecting With a Surface RT/Surface 2

Ok so great. We’ve got RemoteApp working. How does mobility come into this, and specifically, what’s the angle on the Surface RT? Well, the limitation (or advantage, depending on your view) of Windows RT is that it doesn’t run legacy x86 applications locally. But if you need long battery life plus x86 legacy apps, publish the application in RDS and then connect from RT.

1. So log on to a Windows RT device (or, if you don’t have an RT tablet yet, a non-domain-joined Windows 8.x x86/x64 install can be used here) and install the Remote Desktop (RDC) app from the Store!

image

2. Now that you have it installed, there is a workaround needed to trust the server (the device isn’t domain joined, so the client doesn’t know who PICKLE is). So open Internet Explorer, go to the RD Web Access (IIS) address, and download the certificate so you can install it into the machine’s certificate store:

image

3. Warning, danger Will Robinson! Untrusted Cert (No, for a demo/poc, I did not get a real SSL cert)

image

4. View the certificate and, on the Details tab, export it by copying it to a file.

image

 

5. Pick the top option as seen below:

image

6. I then save it somewhere easy to recall, in this case, on the desktop.

image

7. And voila, we’ve exported the cert successfully.

image

8. Now we run the command certutil -f -urlfetch -verify "cert.cer" > certverify.txt and we can see in the resulting text file that the URLs are correct, the chain should work, and so on.

image

9. Here are the results. We can see the cert info and it looks good to me (I am not a cert guy, but bear with me; if you have to troubleshoot certificates, this is one way to do it).

image

10. Now, install the certificate. Just right click the .cer file and select “Install Certificate”.

image

11. Here is the Certificate Import Wizard. Remember: Local Machine here, folks, not Current User.

image

12. Now, don’t let it guess, place the certificate into the “Trusted Root Certification Authorities” store.

image

13. Tada! It’s done.

image

image
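If you prefer a command line over clicking, steps 10 through 13 can also be done from an elevated prompt; a quick sketch, assuming the exported cert.cer is sitting on the desktop as above:

    # Import the exported certificate into the Local Machine Trusted Root store
    Import-Certificate -FilePath "$env:USERPROFILE\Desktop\cert.cer" -CertStoreLocation Cert:\LocalMachine\Root
    # Or the certutil equivalent
    certutil -addstore -f Root "$env:USERPROFILE\Desktop\cert.cer"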

14. Now launch the Modern RDC client!

You place the web site name of your RDS farm into the modern app:

image

15. Then you are prompted for credentials (recall you aren’t domain joined here):

image

16. At which point you connect and get this!

image

17. So you click the OK button and are rewarded with a selection of awesome apps, the same list we saw on the website from the desktop. Click an app you want to run.

image

image

We’re now running Calc remotely on a non-domain-joined RT/8.x device from our server in an RDS session. YAY!

Conclusion

So, one might ask, ‘Dude, I followed all these steps, and I see this, but why? What for? How? Huh?’ I’ll tell you: we just stood up a proof-of-concept RDS farm and connected securely from non-domain-joined assets to domain apps (in this case, Calc…which can calculate gas mileage, did you know?!)

image

Now some have been awfully critical of the RT OS, saying it can’t run x86 / x64 applications so what’s the point. The point is a new model of application design, Modern Apps. But if you have an application, a legacy application, that you need to run, and you want longer battery life and a thin device and touch and all that, you can still run your legacy app provided you have some network connectivity. That’s not a bad deal in my opinion.

Some follow-on questions. What about a real PKI solution? The dude is not a PKI guy, but yes, you could do the same and trust the CA, I imagine. Or use a real SSL cert. You would still have to import a certificate in any situation. Does Intune do this? Looks like it does. See http://technet.microsoft.com/en-us/library/jj884158.aspx for more information.

What about licensing, can we do this without paying more? It looks like we need RDS CALs for this, but check with a licensing specialist for your account. You may already own the licenses to roll this out and not even know it! See this for more information though. http://download.microsoft.com/download/3/D/4/3D42BDC2-6725-4B29-B75A-A5B04179958B/WindowsServerRDS_VLBrief.pdf

Jeff “I got 99 problems but legacy apps aren’t 1” Stokes.


Alternative Microsoft Training Options, Many Free!


Written by Dan Cuomo & Scott Simkins | Microsoft Platforms Premier Field Engineers and 1st Time AskPFEPlat Bloggers

Hi Fellow IT Pros,

All too often we hear a common phrase from our customers: “I need to get spun up on…” IT professionals often struggle to keep up with, or try to stay ahead of, the ever-evolving trends that keep their environments and careers healthy. Training budgets are shrinking, and sending staff to brick-and-mortar training facilities to digest an enormous stream of information within a brief period of days can deplete an entire training budget very quickly. It is also difficult (and sometimes nearly impossible) for many to schedule time to attend a class during their normal work week. The good news is that there is an increasingly available wealth of high-quality, on-demand training options. Many of the options we mention here are completely free, while the others can be obtained through our Premier and/or MSDN offerings, which we will discuss as well.

One of our goals as Microsoft Premier Field Engineers (PFE) is to help make all the offerings more widely known by educating people on where to educate themselves. We understand that, sometimes, it is difficult to know where to begin. We also know that lots of folks want to be hands-on, see more demos, deep-dive on specific topics, learn from experts, and have access to lab systems they can configure for their own training needs without emptying their wallets to do so. That is why we are trying to be vigilant about spreading the word and letting as many people know about the robust set of learning options that they can begin accessing today, so long as they have a PC, an internet connection, and a desire to learn. No matter what your preferred learning style, this article has something for you. So, let’s get started!

Microsoft Virtual Academy

One of our personal favorites is the Microsoft Virtual Academy (MVA). This site provides high-quality, expert training from Microsoft employees, like our own Microsoft PFE Milad Aslaner (shown in the image below), and partners, covering a variety of technologies from Windows 8 deployment to Windows Phone development. The majority of this training is provided through videos and live events and is typically accompanied by a transcript, documentation, and assessments at the end of each module. You can even create custom training plans to help you work through topics and track your progression. If you’re on-the-go and want to take a presentation with you, the site allows you to download content in a variety of formats and play it on a range of devices. MVA is a “provider” to degreed.com, so you can update your micro-credentials once you’ve completed a course.

Here’s the best part… The MVA is completely free and requires only a Microsoft Live ID (also free) to access the content.

image

Channel 9

Channel 9 is another great resource for technical training. Channel 9 provides shows, blogs, forums, projects and information on upcoming and past events (like TechEd). This site is a great place to get up-to-date content on your preferred topics. It also allows for the download of content in a variety of formats. They even provide modern apps through the windows store for Channel 9 and the TechNet Edge show. According to their site, “Channel 9 is a community. We bring forward the people behind our products and connect them with those who use them.”

image

TechNet Virtual Labs

If you want to try out the technologies and see how to actually perform a task or work with the solutions, check out the TechNet Virtual Labs. The labs provide a risk-free virtual environment for testing purposes. Typically, you can complete each lab in 90 to 120 minutes and there is no complex setup or installation required. When the labs open up, they include a detailed lab guide. Best of all, there’s no limit to the number of attempts at a lab and you don’t have to tear down and rebuild your own systems. Instead, you can tear down ours. We recycle, so go for it!

image

Azure Benefits

Azure is cool…really cool. We are talking “almost as cool as a ninja,” folks…almost. If you want to give it a go, you can try it for a month, for free. Of course, you should probably do a little reading prior to starting the trial so you can get the most out of it. So, we recommend you start out in the Documentation Center. If you’re looking for the offline version of Azure, get your documentation here –> Azure Pack.

But wait, there’s more! If you’re an MSDN subscriber you most likely have free Windows Azure credits as part of your subscription that are reloaded each month while your subscription lasts. In order to take advantage of this benefit for MSDN, you just need to activate it through your account settings under the MSDN subscription section. If you’re tossing and turning at night trying to figure out which version of MSDN to purchase, you should check the Compare the MSDN benefits page to see how many credits come with each subscription level.

image

Education as a Service (EaaS)

Suppose one of your New Year’s resolutions was “NYR 173: Eliminate scheduling headaches.” Problem is, your calendar typically looks like a multi-colored wall of unavailability. Enter Education as a Service, one of the numerous benefits available to Premier customers: an on-demand portfolio of training courses. It’s a library of on-demand videos featuring Microsoft engineers who are subject matter experts in their fields. They discuss and teach a range of IT technical subjects designed to increase your IT staff’s depth and breadth of knowledge. If you’re interested in adding this to your Premier contract, please speak to your Microsoft Technical Account Manager (TAM), who can arrange your subscription to our library of content. Instead of bringing a trainer to you, this training is available remotely and at the convenience of each of your colleagues. Go ahead and check the box next to #173.

image

Chalk-Talks and Brown Bags

Chalk-Talk and Brown Bag sessions are a great way to get personalized instruction on a topic of your choice. If you’re a Premier customer and are able to find time for a group from your team to attend some onsite training, a Microsoft Premier Field Engineer (PFE) can come to you to provide instruction in a variety of settings and formats. For example, one of our joint customers decided they wanted bi-weekly, hour-long sessions of custom PowerShell “brown-bags” delivered over Lync with screen sharing. If you’re interested in finding out more about what’s available and have a Premier contract, please speak with your Microsoft TAM.

Supplementary Learning Materials

There are a variety of other great learning materials out there too. Most of you are probably aware that Microsoft Press creates books and references for different skill levels across the range of technologies. Did you know that they also provide a gallery of free eBooks and a blog to keep everyone updated on the latest info? Speaking of blogs, there are numerous technology blogs led by Microsoft employees in their respective fields. Ask DS (Directory Services), SQL Team and MVP, System Center Team Blog, Windows Blog, and of course our own Ask PFE Plat (Platforms) are just a few of the many blogs available. For an additional list of Microsoft team blogs, check here.

For those of you who would either want to share some of your knowledge or troubleshoot some of your more challenging issues, check out the TechNet Forums.

If you’re looking for a topical launching point for some of our key technologies, look no further than the Survival Guides. Using the survival guides can be far simpler than searching the internet, and they provide a means of finding what you need on a topic, especially if you don’t know exactly what you’re seeking. You might even find related information to pique your interest and lead you down an educational journey.

Another trove of knowledge can be found in the Infrastructure Planning and Design Guides. This series helps to clarify and streamline the design process for a variety of technologies.

Whether you’re building applications for the desktop, the Web, the cloud, or for mobile devices, MSDN Magazine provides up-to-the-minute, comprehensive coverage of Microsoft technologies. MSDN Magazine connects you with the industry’s leading voices on establishing practical solutions to real-world problems.  You can still get a physical copy or “go-green” like us and our virtual labs.

In addition, MSDN and TechNet both provide a “flash newsletter” that can be customized to your liking.

TechNet Flash: The biweekly newsletter delivers the latest alerts of new resources to help you be more successful with Microsoft products and technologies. When you subscribe, or if you already receive TechNet Flash, you can customize your own version to receive the information that's most important to you.

MSDN Flash: MSDN Flash delivers critical developer news to you in one information-dense, compact newsletter. Stay up to date with the latest development news from Microsoft by subscribing today. Learn about the latest resources, SDKs, downloads, partner offers, security news, and national and local developer events. Every other week you'll get an email containing pointers to all of the new articles, samples, and headlines from MSDN Online, the MSDN Library, the Knowledge Base, the Developer Centers, and other Microsoft websites. In addition, look for announcements of Microsoft and industry events, training opportunities, chats, and webcasts.

Finally, an oldie but goodie. Don’t forget about “the TechNet Library which contains technical documentation for IT professionals using Microsoft products, tools, and technologies.” One of my favorite features of TechNet (and a golden nugget for all those who made it to the end of the post ;-)) is the export capability. If you want to “organize a custom set of articles with only the information you really want,” check out J.C. Hornbeck’s article. You can export this custom set of articles in a translated or its original language, specify the example code language (for example, PowerShell, HTML, C#, etc.), and create and title relevant chapters.

Summary

No matter what your technology, there is an à la carte menu of training options that is waiting to be consumed. We encourage you to tell your co-workers about them, talk about them with your friends, put up flyers, or share a link to this blog, so your pals can see for themselves. Thanks for reading.

-Dan Cuomo & Scott Simkins

Becoming an Xperf Xpert Part 8: Long Service Load, Never Jump to Conclusions


Hi everyone, Randolph (Randy) Reyes here with another SBSL blog post. In this particular engagement we were able to be more proactive: our job was to check and verify a Windows 7 image prior to mass deployment all around the world.

When we arrived onsite we noticed that 1,000 Windows 7 machines were already deployed and 7,000 more were scheduled for deployment in 3 weeks.

So let’s get to it.
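(For context, the timings below come out of a boot trace taken with the Windows Performance Toolkit. If you want to capture a comparable trace in your own lab, something along these lines with xbootmgr will do it; the flags and result path here are only an example, not necessarily what was used in this engagement.)

    # Capture a boot trace: reboots the machine once and saves the ETL to C:\Traces
    xbootmgr -trace boot -traceFlags BASE+CSWITCH+DRIVERS+POWER -numRuns 1 -resultPath C:\Traces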

The Before

PreSMSS: 3.037 seconds

SMSSInit: 8.067 seconds

WinlogonInit: 144.139 seconds

ExplorerInit: 17.896 seconds

Post Boot: 74.000 seconds

Boot to Post Boot Activity ended: 247 Seconds and 539 Milliseconds = 4 Minutes and 7 Seconds

Looking at these numbers, our boot time is definitely not within good values for a traditional hard drive (non-SSD).

What’s considered a good value?

Here is an article previously posted with those values. “Becoming an Xperf Xpert Part 7: Slow Profile Load and Our Very First Stack Walk”

http://blogs.technet.com/b/askpfeplat/archive/2014/02/03/becoming-an-xperf-xpert-part-7-slow-profile-load-and-our-very-first-stack-walk.aspx

The major delay in the boot trace can be identified in the Winlogon phase. Many operations occur in parallel during WinLogonInit. On many systems, this subphase is CPU bound and has large I/O demands. Services like Plug and Play and Power, the network subsystem, computer and user Group Policy processing, and the CAD (CTRL+ALT+DEL) screen and credential handling can all contribute delay here. Good citizenship from the services that start in this phase is critical for optimized boot times.

After adding the graph for Boot Phases and also Generic Events, we can see that most of the Winlogon initialization phase time is being consumed by the Profiles subscriber under the Microsoft-Windows-Winlogon provider. In the picture below we can see that the Profiles subscriber started at 30.772 seconds and ended at 154.169 seconds into the trace.

image

Before jumping to any conclusion about why profiles are taking so long to load, I decided to look at all the different graphs that the Windows Performance Toolkit provides us.

The graph that caught my eye is Services. So let’s add the Services graph to the analysis view.

image

Highlighting the start and stop time of Profiles, we can see a service called SVCHost32.exe starting before Profiles. It looks like Profiles initialization is being extended by the SVCHost32.exe service.

Services = Services can declare dependencies or use load order groups to ensure that they start in a specific order. Windows processes load order groups in serial order. Service initialization delays in an early load order group block subsequent load order groups and can possibly block the boot process.

Long delays during any service initialization can increase the time that the boot transition requires. If you can do so, set services to demand start or trigger start. Demand-start and trigger-start services start after the boot process is complete and therefore reduce overall boot time.

Important reminder: a service should take 300 milliseconds or less to initialize.

After adding the graph for Boot Phases and also Services (“Display Graph and Table”), we can see that the service SVCHost32.exe started at 28.350 seconds and ended at 125.809 seconds in the trace.

image

Wow, that’s a long time for a service to initialize; it should only take around 300 milliseconds. If we add the service initialization time and container initialization time we get 97.458 seconds (1 minute and 37 seconds).

I know what you are thinking, this is a critical service for the company and no matter what we need to leave it alone. :) Here is a conversation I had.

Randy: Why is this service taking this long? Do you know what software is using this service?

Customer: Yes, this is an internal application that needs to be in every system in the environment

Randy: What is this software supposed to do? Is the software needed while the machine is booting, or is it used after the user logs in?

Customer: (Thinking) Honestly, not sure. I will need to ask the programmers.

15 Minutes later…..

Customer: The software scans user data that is copied to the file servers, and the version installed in the image includes a new update. The systems didn’t take this long before this update was deployed.

Randy: It sounds like this service does not need to start at boot time, and we also need to check whether the update is the cause of this long delay in the service.

At this point, for testing purposes, we opened the Services Microsoft Management Console, found the service hosted by SVCHost32.exe, and changed its Startup Type from Automatic to Automatic (Delayed Start).

The Fixes

Step 1 = Changed the service’s Startup Type from Automatic to Automatic (Delayed Start)
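If you need to push the same change to machines that are already deployed, it can be scripted. A sketch, where "ContosoScanSvc" is only a placeholder, since the post refers to the hosting executable (SVCHost32.exe) rather than the real service name:

    # Set the service to Automatic (Delayed Start); note the required space after "start="
    sc.exe config "ContosoScanSvc" start= delayed-auto
    # Confirm the new startup type
    sc.exe qc "ContosoScanSvc"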

The After

Boot to Post Boot Activity ended: 68 Seconds and 004 Milliseconds = 1 Minute and 8 Seconds

image

(Note: In this particular engagement there are still other areas that will improve boot time on these systems, but those will be covered in another blog post.)

At the end of this engagement I was really happy! Not only because we shortened the boot time on these systems, but also because I got an e-mail from the company’s programmer two days later.

Programmer: Thank you so much for the work you did. After your visit and the recommendations on the SVCHost32.exe service, we detected a bug in the application. The bug is now fixed and the service is no longer affecting boot time on our systems.

Before

1,000 computers x 4 minutes and 4 seconds ≈ 4,000 minutes every working day

Potential full environment

7,000 computers x 4 minutes and 4 seconds ≈ 28,000 minutes every working day

After

1,000 computers x 1 minute and 8 seconds ≈ 1,000 minutes every working day

Recommended Articles

Troubleshooting Windows Performance Issues Using the Windows Performance Recorder

http://blogs.technet.com/b/askpfeplat/archive/2013/03/22/troubleshooting-windows-performance-issues-using-the-windows-performance-recorder.aspx

Here is another blog post from a good friend and fellow blogger, Mark Morowczynski, that also talks about a service delay, in case you missed it.

Becoming an Xperf Xpert Part 4: What Did the WDIService Host Ever Do To You?
http://blogs.technet.com/b/askpfeplat/archive/2012/09/24/becoming-an-xperf-xpert-part-4-what-did-the-wdiservice-host-ever-do-to-you.aspx

I also want to send a special thank you to Yong Rhee, Mark Morowczynski and Antonio Vasconcelos.

Randy “How much boot time can you save today?” Reyes

Connect an On-premises Network to Azure via Site to Site VPN and Extend Active Directory onto an IaaS VM DC in Azure


As we all know, the Cloud is here, it's here to stay and its benefits are forcing businesses to consider it. We are in a transition period in Information Technology and I'd say we're far down the road to nearly every IT infrastructure having some sort of Cloud interoperability, service or connection. These changes are evolving the IT Pro career, too. We all know, you can't stop progress, so embrace the changes and evolve your skillset to include "cloud technologies."

I know, I know … I can already hear you: "Nice…now I have yet another thing to ramp up on and maintain for my skillset." This is true, but it has always been true in IT. We gotta keep learning/re-learning in this field we've chosen.

My own cloud journey has run the gamut … a while ago, I tried to ignore it because I didn't understand it. Then, I dabbled in a few Azure services/scenarios. After attending a recent internal training event with a lot of content for Windows Azure, I've jumped into the deep end of the pool and I'm trying to stay afloat in this rapidly evolving space.

Lately, I've been exploring some interesting Azure-blend ideas:

  • Domain Controllers on Azure-hosted VMs, using "Infrastructure as a Service" VMs (IaaS, spoken 'EYE-as')
    • DCs for DR - hosted in an Azure tenant? Maybe.
  • Hyper-V Recovery Manager (an Azure based service) and DR management
    • Use an Azure-based service to help orchestrate your VM DR failover between your HQ and one (or more) of your regional datacenters
    • Store your DR plans online in OneDrive for Business (formerly SkyDrive Pro) so that critical information is available both on-site and online in the event of a DR (real or exercise)
  • Back up your systems to Azure (using integrated Windows Server Backup >> Azure services) instead of shipping tapes off-site to a data storage warehouse
    • How much faster could you recover data if you didn't have to call back tapes?

One scenario I worked through recently was a cross-premises Site to Site VPN to Azure but only now did I take the time to document it and write it up.

Herein, I share with you my 'dummies guide' to setting up a site-to-site VPN between Azure and an on-premises site (corporate network or basement lab in my case). After I get the VPN up and running, if you're still reading, I cover a high-level process of creating an Azure IaaS VM and promoting it to be a replica DC for an on-prem Active Directory environment.

Let's roll…

I created the Visio below to help those visual learners (like me) and I've also included a "worksheet" to help you plan/prep for this endeavor.

  • NOTE - There are many screen shots of the CURRENT Azure UI in the post but this Cloud business changes frequently, so the UIs and features/specs may - and likely will - change.

Figure 1: Site To Site VPN Visio Diagram

Pre-notes

  • ALL PUBLIC IP ADDRESSES HERE ARE JUST NUMERIC VALUES and are only included to help illustrate the steps and components of this walk-through
  • Azure costs can add up
    • I did this all via credits included w/ my MSDN subscription
    • Review the free trial details below - be sure to watch how you setup the trial so you don't automatically roll over into an unexpected credit card bill
    • Check out the Azure pricing calculator and have a look at the 'sliders' for estimating charges in Azure - http://www.windowsazure.com/en-us/pricing/calculator/?scenario=full
    • Generally, Azure charges for data going out of Azure (egress), but not data coming into Azure (ingress) – keep that in mind
      • Virtual Network gateways are charged for each hour the gateway connection is provisioned and available (per the calculator above):
        • "Setting up a Virtual Network is free of charge. However, we do charge for the VPN gateway that connects to on-premises. This charge is based on the amount of time that connection is provisioned and available."

Worksheet

On-prem RRAS Router – public IP Address (VPN Device IP Address entered in the Azure Local Network setup): _ 23.23.23.22 __

On-prem RRAS Router – private IP Address (used as the gateway for on-prem systems): __ 192.168.0.20 ___

On-prem IP Address Spaces (and added to AD Sites): _____ 192.168.0.1/24 _______

On-prem DC/DNS Server name and IP Address (as will be entered in the Azure Local Network setup): _192.168.0.22 : ONPREM-DC-01 ___

On-prem AD Site name: __ ONPREM __

Site Link name, cost and replication interval/schedule: _OnPrem-Azure : 100/15 min_

Azure Local Network Name: __OnPrem-PNet-192Dot___

Azure Virtual Network Name: __Azure-VNet-01__

Azure Virtual Network IP Address Space(s) and Subnet Name(s): __10.10.10.0/27 : Subnet-1 __

Azure Virtual Network Gateway IP Subnet: _10.10.10.32/29 __

Azure Virtual Network Gateway IP Address: _ 23.23.23.23 _

Azure IaaS VM DC name(s) and IP Addresses (discovered after they rec'd IP from Azure DHCP): __AZURE-DC-01 : 10.10.10.4___

Azure AD Site name: __AZURE __

Azure AD IP Subnet(s): __10.0.0.0/24__

Now, on to the rest of the blog …

On most days, I have a typical home network with a cable modem, then a wireless router, and then PCs.

Before I started this work, I filed a change-control request with my family because they were going to suffer an 'enterprise-wide' Internet outage while I did this. I bypassed all of my typical home network gear for this lab and plugged one leg of a dual-homed physical server (2012 R2) directly into my cable modem and let it pull a public-facing IP. I did an IPCONFIG on the router/server and made a note in my worksheet of the public IP.

  • SECURITY NOTE– this act has some significant security implications and is NOT a recommended practice. This was an isolated learning experiment, though, and there was no other connectivity beyond these two transient physical systems.
  • NOTE - Azure Site to Site VPN connectivity requires a non-NAT'd public IP

Next, per my worksheet planning, I configured the private leg of my dual-homed physical server with a static IP address of 192.168.0.20 but left the gateway blank. With gateways, as with Highlanders, there can be only one.

  • I used the Windows 2012 R2 RRAS system ONLY as a router.
    • Apart from RRAS configurations, I didn't RDP or perform other management tasks of the Azure or on-prem systems in my lab from this system
    • I don't know enough about routing or routers to even know what sort of functional network connectivity I had/didn't have
    • It was a headless, non-domain joined stand-alone system.

 

Before getting too far, I'll try to clarify some Azure terminology. The vocabulary is often my first stumbling block when I'm learning something new and Azure was no different.

What does Azure mean by 'Local Network?' What is a 'Gateway subnet?' What does a 'DNS Server' in an Azure Virtual Network refer to and what should I put in there? I heard Azure doesn't support static IP addressing, so how do I assign IPs to my systems? How do I define other NIC settings for Azure-based VMs? Can I use WINS? No, you can't use WINS. :)

Allow me to try to simplify/clarify/paraphrase:

An Azure Virtual Network consists of numerous settings, including DNS Server(s), Local Network(s), VPN settings and TCP/IP v4 address space(s).  I think of an Azure Virtual Network as a sort of 'branch office network' in my mind.

Create a Virtual Network

Let's go through and cover each section of creating a Virtual Network in Azure, starting with DNS Servers.

Sign in to the Azure portal and Click "NETWORKS" then "DNS SERVERS" then "REGISTER A DNS SERVER" to get started:

Figure 2: Getting started with Azure Virtual Network "DNS SERVERS"

DNS SERVERS

  • In order for effective and fault-tolerant name resolution to work between on-prem systems and Azure IaaS VMs, you should define multiple DNS servers within your Azure Virtual Network:
    • Local, on-prem DC/DNS server(s)
      • You should already know what their names and IPs are (per the planning worksheet above)
    • Remote, Azure-based DC/DNS servers that are in your Azure Virtual Network
      • Once you create Azure-based VM DC/DNS server(s), they'll get an IP from Azure DHCP. After that, you can edit the DNS server list in the Azure Virtual Network, adding them in as needed.
      • The IaaS VMs only process newly added/removed DNS servers to their vNIC settings upon re/boot of the VM.
      • Failure to include one or more Azure-based DC/DNS servers may (and likely will) impact name resolution for your Azure-based systems if/when the VPN isn't connected/working
  • Don't worry if you can't connect to these at the moment – these are just part of the 'definition' of the Virtual Network
  • NOTE - Azure IaaS VMs do not support static IP addresses, but they DO support 'persistent' IP addresses (think "an Azure-based DHCP reservation", which lasts as long as the VM is provisioned)
    • However, all that persists is the IP address. Other TCP/IP settings directly assigned to the vNIC will not persist – which is why you must define your DNS servers within the Azure Virtual Network
  • Why is this unique DNS server definition needed? A good question.
    • One reason is VM servicing and portability in Azure. There can be cases where your VM is torn down and re-assembled from the VHD file. When that happens, the vNIC from the original VM is gone (along with its TCP/IP settings) and a new vNIC is plug'n'play'd into the new VM.
    • When that new VM comes up, the new vNIC doesn't have any awareness of the prior vNIC settings. Azure automation assigns your new vNIC the same IP you had before and it sets the DNS server values you've defined here on the new vNIC.
    • If you want more detail on this, have a look here: http://msdn.microsoft.com/en-us/library/windowsazure/jj156088.aspx#bkmk_BYODNS
  • I used a naming standard which matches the AD computer account names for my DNS server entries – i.e. "OnPrem-DC-01"

Figure 3: Register a DNS SERVER in the Virtual Network Wizard

 

Figure 4: On-prem DNS Servers are defined. I can edit this list later and add in my IaaS DC/DNS servers and IP addresses to provide DNS local to the Azure Virtual Network for fault tolerance in the event the VPN has issues. You can add up to 9 entries here.

 

LOCAL NETWORK

  • An Azure Local Network is an Azure-based reference to your on-prem IPv4 address space and is used to automagically create routing rules from Azure to the "on-prem side" of the VPN.
  • The ADD A LOCAL NETWORK wizard begins with a field for a name for your local network in Azure. Choose a descriptive name from your naming standard (I called mine "OnPrem-PNet-192Dot")
    • As with anything in IT, give some thorough considerations to your naming standards/conventions for various Azure elements
  • The Local Network definition also includes the public IP of your on-prem VPN end-point/router/server
    • It is called the "VPN DEVICE IP ADDRESS" in the Azure UI
    • In my example, per my worksheet, this is my imaginary value - 23.23.23.22
  • Next, you get to enter an address space(s) for your local on-prem IPv4 network(s)
    • In my example, per my worksheet, this is the 192.168.0.1/24 address space.
  • There you have it - a 'Local Network' in Azure

Figure 5: Add a LOCAL NETWORK

 

Figure 6: Local Network details

 

Figure 7: Specify the on-prem network address space(s)

 

Figure 8: An Azure Local Network has been defined

VIRTUAL NETWORK

  • A Virtual Network in Azure combines the other elements discussed above (DNS Servers and a Local Network) with a few other settings and establishes an IPv4 network for your use in Azure-land
  • As with the Local Network above, enter a descriptive name based on a consistent naming standard for your Azure Virtual Networks
    • I called mine "Azure-VNet-01"
  • Check the box to enable "site-to-site VPN" and select the Local Network you defined/created above from the drop-down list
  • You assign the Virtual Network a non-routable (private) IPv4 address space.
    • Click the "Starting IP" drop-down and notice this can ONLY be a 10., 192. or 172. address space.
    • Then, you create one or more subnets within that address space and give the subnet a name
      • I used 10.10.10.0/27 and called it "Subnet-1" (these were defaults on this page)
      • This is where VMs that you create in Azure will live and be DHCP'd via automatic Azure mechanisms (including the DNS servers you defined above).
    • The UI here won't let you over-lap with the on-prem address space that you defined (which makes sense if you think about basic TCP/IP networking)
  • Click the green "add gateway subnet" button to assign the Virtual Network a "gateway subnet" which is a small subnet which will automatically acquire a public-facing "gateway IP" (later, when you create the gateway)
    • I used 10.10.10.32/29 for my gateway subnet and called it "gateway" (again, these were defaults on this page)
  • For more information about these Virtual Network settings and such, see this link to a great Azure doc

Figure 9: Create a Virtual Network

 

Figure 10: Name the Virtual Network and create an Affinity Group in a regional Azure Datacenter

 

Figure 11: Add the DNS Servers you defined before, enable site-to-site connectivity, select the Local Network

 

Figure 12: Establish the Address Space, create a subnet within the Address Space and define the Gateway Subnet

 

Figure 13: Address space, subnet and gateway subnet review. The "Network Preview" graphic helped me visualize all the pieces.

 

Figure 14: An Azure Virtual Network has been created
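Side note: everything defined above (the DNS servers, the Local Network, and the Virtual Network with its subnets) lives in a single network configuration document that the Azure PowerShell module can also manage. A sketch, assuming the module is already connected to your subscription with Add-AzureAccount or a publish settings file:

    # Export the current network configuration (DNS servers, local networks, virtual networks) to XML
    Get-AzureVNetConfig -ExportToFile "C:\Azure\NetworkConfig.xml"
    # ...edit the XML as needed, then push the updated definition back up
    Set-AzureVNetConfig -ConfigurationPath "C:\Azure\NetworkConfig.xml"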

 

Once my Local Network, DNS Servers and Virtual Network were all defined/created in the Azure portal, I clicked CREATE GATEWAY with Dynamic Routing (below).

  • Several minutes of "behind the scenes magic" happens up in Azure-land (about 10 minutes in my experiences).
  • This includes setting up static routes in the Azure Virtual Network so it can 'find a path' to the local, on-prem IP subnets (which were defined in the 'Local Network' portion of the Azure Virtual Network).
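If you prefer PowerShell for this step as well, the gateway can be created and checked from the Azure module; a rough sketch using the names from the worksheet above (cmdlet parameters vary a bit between module versions):

    # Create a dynamic routing gateway for the virtual network (this takes a while to provision)
    New-AzureVNetGateway -VNetName "Azure-VNet-01" -GatewayType DynamicRouting
    # Retrieve the pre-shared key the on-prem RRAS connection will use
    Get-AzureVNetGatewayKey -VNetName "Azure-VNet-01" -LocalNetworkSiteName "OnPrem-PNet-192Dot"
    # Check the connection state for the local network site
    Get-AzureVNetConnection -VNetName "Azure-VNet-01"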

Figure 15: Create the Gateway – I chose Dynamic Routing

 

Figure 16: VPN Gateway is being created

 

Figure 17: VPN Gateway created but not yet connected (IP Address is for illustration purposes only)

 

Ta-da! The gateway was created (the blue GATEWAY in the graphic above) but note, it shows "DISCONNECTED" on the OnPrem side. Before the connection will work, I need to configure the on-prem Windows Server as a VPN RRAS router.

  • You can download a PowerShell script from the Azure portal that will set up the 2012/R2 RRAS aspects for you
  • Download the file onto the WS 2012 R2 router/server – note, it has a .CFG extension
  • Open it with Notepad and review it

    • Notice the "plain text" passphrase/key for the connection (this is the equivalent of the 'password' for the VPN connection) as well as the public IP addresses. Take care to secure this file.
  • I changed the extension of the file to .PS1 and then ran it from an elevated PowerShell console on the WS 2012 R2 router/server
    • The script installed the Remote Access Role/Features/services and configured everything for me (including static routes to my Azure Virtual Network)
  • I opened up the RRAS console and reviewed the settings/statistics and noticed:
    • The connection for the VPN showed as 'disconnected'
    • The connection with the gateway IP showed just a few bytes of traffic
    • A static route for the 10. subnet (the Azure Virtual Network address space I created) pointing to the gateway IP
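For reference, the downloaded script essentially boils down to something like the following. This is a simplified sketch only; use the actual script from the portal, since it contains your real gateway IP and pre-shared key:

    # Install RRAS and configure it for site-to-site VPN
    Install-WindowsFeature RemoteAccess -IncludeManagementTools
    Install-RemoteAccess -VpnType VpnS2S
    # Define the S2S interface to the Azure gateway (values are illustrative, per the worksheet above)
    Add-VpnS2SInterface -Name "Azure-VNet-01" -Destination "23.23.23.23" -Protocol IKEv2 `
        -AuthenticationMethod PSKOnly -SharedSecret "<key-from-the-downloaded-script>" `
        -IPv4Subnet @("10.10.10.0/27:100")
    # Bring the tunnel up
    Connect-VpnS2SInterface -Name "Azure-VNet-01"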

After the Gateway has been successfully created and the WS 2012 R2 RRAS server is configured, I chose to CONNECT the Gateway

  • This also took a few minutes and I refreshed the portal a time or two before it hooked up and connected.

 

Figure 18: VPN is connected and data is passing. HOO-HAH!!

 

Once the Azure VPN gateway connected to my on-prem 2012 R2 VPN/router, I reviewed/refreshed the RRAS console again. At that point, it showed "Connected" with substantial incoming and outgoing traffic.

If you're following along at home and this is all working for you, congratulations!!

Trust me … this ain't no Little Feat (shout-out to one of my favorite bands)

 

Figure 19: VPN Interface "Connected"

 

Figure 20: Incoming and outgoing traffic

 

Figure 21: Static route settings

 

BONUS BLOG CONTENT

Once I connected my on-prem environment to Azure via VPN, I built out a replica DC on an Azure IaaS VM.

A few links with some GREAT content with much more detail for this:

I don't walk through each step of the VM build process here (it's really very simple) but notice the VM was created on the IP subnet in Azure that I created within the Azure Virtual Network back at the top of this post ("Subnet-1").

Figure 22: New VM attached to the 10.10.10.0 "Subnet-1" created in the Virtual Network above.

 

Once the VM was provisioned, I used the Azure Portal to connect to the new workgroup VM (below) …

 

Take a moment and think about this…

  • Within about an hour, I created an Internet-scalable Cloud-based extension to my on-prem network and spun up a licensed, patched WS 2012 R2 Server.
  • Does anyone else see that as magical, or is it just me?
  • Remember how long it used to take to get a subnet allocated, a physical server spec'd out, priced, budget-approved, ordered, shipped, built, racked, IP'd, OS built, and logged in? I recall MONTHS.
  • I just did it in about an hour. That, my friends, is the speed of cloud technology.

Since I had VPN connectivity at this point, I could have connected directly via RDP from an on-prem system using the VM's private IP address (which can be obtained from the Azure Portal):

               

Recall, when the IaaS VM re/boots, it picks up its "Azure reserved" IP address and the DNS servers I defined in the Azure Virtual Network.

  • Since the VM wasn't domain-joined yet, it didn't register in my on-prem DNS, so I was not able to resolve the Azure VM by name from my on-prem systems
  • However, DNS resolution was working from Azure back to on-prem and I was able to resolve my on-prem DNS records from the Azure VM.

With DNS working from the Azure VM to on-prem, I successfully joined the IaaS VM to the on-prem domain and rebooted it.

  • Do an IPCONFIG /ALL on the IaaS VM to view/verify/validate the desired DNS server settings
  • After the domain-join and reboot, I saw a dynamic registration for an A record in my on-prem DNS for my IaaS VM
    • At this point, I had DNS resolution working in both directions and I could have used the FQDN to RDP to it from on-prem (or vice-versa)

    Figure 23: DNS console showing the Azure IaaS VM successfully registered in DNS

It was really quite exciting to get this all set up and working ... but we ain't done yet!

Next, I added the AD DS Role to the IaaS VM and promoted it into AD as a replica DC in the HYBRID.LAB forest (my demo/lab).

I had already created the proper AD Sites, Site Link and Subnets for my Azure world and my on-prem world in my lab AD forest configuration.
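If you want to script that pre-work, the ActiveDirectory module on a 2012-level DC can create the site, subnet and site link in a few lines; a sketch using the worksheet names above (I've used the Virtual Network subnet for the Azure AD subnet, so adjust to your own address plan):

    # Create the Azure AD site, its subnet, and a site link back to the on-prem site
    New-ADReplicationSite -Name "AZURE"
    New-ADReplicationSubnet -Name "10.10.10.0/27" -Site "AZURE"
    New-ADReplicationSiteLink -Name "OnPrem-Azure" -SitesIncluded "ONPREM","AZURE" `
        -Cost 100 -ReplicationFrequencyInMinutes 15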

As a result of that pre-work, during the promotion wizard, I was able to choose the target AD Site for the new Azure-based DC, as well as select the on-prem DC for initial replication:

Figure 24: Selecting the destination AD Site (Azure) for the new Azure IaaS VM DC

 

Figure 25: Selecting the source DC (OnPrem-DC-01) for initial AD replication
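If you would rather script the promotion than click through the wizard, the equivalent run on the IaaS VM looks roughly like this. It's a sketch using this lab's names (HYBRID.LAB, the AZURE site, ONPREM-DC-01), so swap in your own, and review the Azure-specific DC guidance in the links above first:

    # Add the AD DS role, then promote the VM as a replica DC into the AZURE site
    Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
    Install-ADDSDomainController -DomainName "hybrid.lab" -SiteName "AZURE" `
        -ReplicationSourceDC "ONPREM-DC-01.hybrid.lab" -InstallDns `
        -Credential (Get-Credential)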

After the DC promotion and reboot, I saw my shiny new DC up in Azure-land (AZURE-DC-01):

Figure 26: AD Sites and Services Console showing the on-prem and Azure DCs, AD sites, subnets and Site Link

 

Figure 27: Active Directory Users and Computers Console showing both the on-prem and Azure IaaS DCs

 

Whew.

There are a lot of new concepts here. If you're struggling, give yourself a break ... take a deep breath; hum a tune; go chase some squirrels or something.  Then come back to this - it took me more than just a bit of patience before it all clicked and I got the VPN dance to succeed.

Like I said at the outset, the Cloud is here; the time is now to start thinking about how Cloud technologies will fit into your IT career and the solutions you design/operate/support.

Thanks for reading and I hope you found it useful.

Cheers!

Another Troubleshooting Adventure: More Real Life Memory Pool Leaks


Hello all,

Jesse Esquivel here again with another post I hope you find useful. This post is a great complement to Jerry Devore's post on diagnosing a leak in non-paged pool using Event Viewer, poolmon, and perfmon! Today I’m going to talk about analyzing a leak in paged pool kernel memory and the tools and methods used to diagnose the issue and find its root cause. A memory leak occurs when software (drivers) makes kernel memory allocations and never frees them; over time this can deplete kernel memory. Though paged pool depletion takes considerably more effort on an x64-based system, it’s not impossible or unheard of for it to happen and cause a server to go down or into a hard hang state. Sometimes a low virtual memory condition can cause the operating system to become unstable and hang.

The Victim

The server we are looking at here is a virtual machine running Windows Server 2008 R2 SP1. After about 90 hours, or 4-5 days, the server would become unresponsive, go into a hard hang state, and the services it was hosting would be unavailable, necessitating a reboot to restore functionality. Rinse and repeat until doomsday. Like clockwork, this would happen every 5-7 days and the server would need to be rebooted again.

First things first. Since the server was in a hard hang state, you couldn’t actually get into it until after it was rebooted, so the investigation started when the server wasn’t actually exhibiting the problem…yet. All we had to go on was the fact that the box would go belly up almost weekly, like clockwork. Task Manager is a great place to start and get a very quick at-a-glance view of the health of the server. Having seen this strange behavior before, I suspected a leak in kernel memory, but alas we are not in the business of speculation. My suspicion zeroed me in on kernel memory consumption and handle count, as seen here (this is just a shot of a random VM for reference):

image

I took note of the values for kernel memory and the overall handle count of the system. Since everything appeared to be operating normally at the moment, I decided to do some post-mortem investigation. I reviewed the event logs around the time just before the server was rebooted. What I found was an entry in the System event log from the Resource-Exhaustion Detector, event ID 2004. This indicated that the server was low on virtual memory (kernel) during the time it was in the hard hang state. This appears to back up my suspicion that kernel memory on the box was being depleted.

A few hours later I checked back on the server and saw that paged pool kernel memory had increased, as had the overall handle count for the system. To see which process has the highest handle count, we can add the “Handles” column to Task Manager by clicking View | Select Columns and then sorting by it.

image

Lsass.exe had the highest number of handles of any process, but it was still a fairly low count. As a general rule of thumb, any process with a handle count higher than 10k should be investigated for a possible leak. I took note of the system time and the number of handles for lsass.exe. At this point we have a server with increasing paged pool consumption, an increasing overall handle count, and Event ID 2004 entries in the System event log during the time the server was in a hard hang. These are all classic indicators of kernel memory depletion.

Now to find out what is consuming paged pool. There are two tools that we will use to analyze kernel memory consumption, Poolmon and Windows Performance Recorder.

Tools of the Trade

Poolmon

Poolmon is a tool that can be used to view kernel memory pools and their consumption. A great explanation of poolmon and how it works can be found here. In the aforementioned poolmon link, note the explanation of how pool tags work: a pool tag is a four-character string that’s used to label the pool allocation and should be unique to each driver making the allocation (keep this in mind; more on this later). Poolmon is included in the Windows Driver Kit. Essentially, using poolmon we can identify which pool tags are making paged pool allocations and not freeing them. Here is a sample screen shot of poolmon:

image

Windows Performance Recorder

The Windows Performance Recorder (WPR) is part of the Windows Performance Toolkit, whose latest version is available in the Windows 8.1 SDK. It’s a very powerful tool that can be used for performance analysis, debugging, and even for diagnosing memory leaks! Windows Performance Recorder takes event trace logs based on profiles. The Pool usage profile can be used to log pool consumption in WPRUI:

image

Data gathering and Analysis

Now armed with poolmon and WPR, we can set up the logging we need to investigate further. I set poolmon to continuously log on the server in question. Here is what the poolmon data looks like for this scenario at the start of logging: we can see that the system has only been up for a few minutes, has an overall 25k handle count, and is at 120MB of paged pool consumption, so all is well at the moment.

image

Now we wait for some time to pass as we log the increases in paged pool kernel memory. After some time, while the server is still responsive, I log in and see that the handle count for the lsass.exe process has increased steadily along with the paged pool consumption; we’ll say it was at ~26,000 when I checked. Doomsday arrives and the server goes into a hard hang. It’s rebooted and we review the poolmon logs after it comes back up. After 89 hours of uptime we have 64,592 handles on the system and, wait for it…we are at 2.9GB of paged pool consumption! Note the pool tag with the highest paged pool consumption is “Toke” with a note in the mapped driver column of “nt!se Token objects.” These are security token objects, which are created and managed by the lsass.exe process.

image

There is a lot going on with this data; here’s how to make sense of the columns in Poolmon. We aren’t interested in legitimate allocations that are eventually freed; what we are interested in is outstanding allocations, that is, allocations that have not yet been freed. The Diff column gives us exactly that: it tells us how many outstanding allocations we have, in this case 1,394,724 to be exact. The “Diff” column is simply the “Allocs” column minus the “Frees” column, so Allocs (12,166,567) - Frees (10,771,843) = Diff (1,394,724). For the Toke row, the Bytes column shows that this pool tag has consumed 2.3GB of paged pool. So of the 2.9GB total paged pool consumed on this system, 2.3GB is outstanding allocations made by the “Toke” pool tag.

The plot thickens

So now we know we have a problem with paged pool kernel memory depletion. We know that the Toke pool tag is the top consumer of paged pool. But why? We know that the lsass.exe process has a very large number of leaked handles. We can certainly draw a line and connect the Toke pool tag to the leaked handles in the lsass.exe process (since they are security token objects), but how can we verify this? Thanks to Mark Russinovich for creating the very useful handle.exe utility. Handle.exe will display the open handles for all processes on the system. As an example, if we run handle -s it will give us a count summary of all open handles; note the Token handles.

image

So let’s confirm that all of these leaked handles in lsass.exe are in fact for security token objects. Again we wait until enough handles have leaked in lsass but the server is still operational, so we can investigate. You can do a summary count by process with handle.exe; here it shows we have 24,238 handles in lsass.exe to security token objects:

image

We can also use handle.exe to dump all of the handles for the lsass process:

image
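For reference, the commands behind those screenshots look roughly like this, assuming handle.exe from Sysinternals is sitting in the current directory of an elevated prompt:

    # Summary count of open handles on the system, by object type
    .\handle.exe -s
    # Dump the handles owned by lsass.exe
    .\handle.exe -p lsass.exe
    # Quick-and-dirty count of Token handles held by lsass.exe
    (.\handle.exe -p lsass.exe | Select-String "Token").Count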

As you can see, we have a large number of handles to security token objects, which confirms what these leaked handles are for. So we have now confirmed the leaked handles in lsass are for security token objects, which match up to the Toke pool tag as the top consumer of paged pool. At this point we need to find out why lsass is leaking so many handles for security token objects. Note the identity on the Token objects in the above screen shot; this is a domain service account for a third-party software product that runs under this identity. Turn off the software and the leak goes away. Is the third party the culprit? First instinct is yes, but there’s more than meets the eye here. This software runs against every server in the enterprise, but only this one leaks. That is enough to rule out the software for now, and we keep digging.

Switch gears to Windows Performance Recorder

Since we now have this powerful tool in our arsenal, we fire it up and begin logging pool usage with it. To start an ETW trace, first we need to install the Windows Performance Toolkit on the server, or optionally install it on another machine or workstation, copy the “C:\Program Files (x86)\Windows Kits\8.0\Windows Performance Toolkit\Redistributables” folder to the server, and install the appropriate MSI file. Then we launch WPRUI as an administrator, click the Pool usage profile, set the options like so, and start logging:

image

Here is an excellent post on how to set up WPR to log pool usage and create custom profiles; in fact it’s the one I used to fire up the tracing! Now we wait for some time to pass as we log the increases in paged pool kernel memory. Since we are now armed with a lot more data and have confirmed there is a leak, I stop the trace after about 20 minutes. The data is located wherever you chose to save the ETL file:

image

Note: There is a difference between the Save and Cancel buttons in WPRUI. The Save button above saves the ETL, but the trace continues to run. In order to actually stop the trace on the screen above, you need to click Cancel.
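If you prefer the command line over WPRUI, wpr.exe can start and stop the same kind of trace; a sketch, assuming the built-in Pool profile name hasn't changed in your version of the toolkit:

    # Start logging pool usage to a file-backed trace
    wpr -start Pool -filemode
    # ...let the leak accumulate for a while, then stop and save the ETL
    wpr -stop C:\Traces\PoolUsage.etl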

We use Windows Performance Analyzer (WPA) to view the data in the event trace log (ETL) file. You can copy the file off of the server to a machine that has internet access and load it into WPA to analyze it there. Be sure to click Trace | Load Symbols in order to view stack data. Please see the MSDN WPR blog post on how to configure symbols; your analysis machine will need internet access.

image

There is a lot going on in WPA when you open an ETL, and it can be overwhelming at times. It’s easy to get lost in the large amount of data in there! I’ve arranged my data according to the good folks over on the MSDN blog. Arranging your data in the table is key to analyzing it in WPA. In the graph explorer, drag the cumulative Count by Paged, Tag graph over to the right pane for analysis. Everything that is important to you should be to the left of the gold bar in the table, and in the order you want to sort it by. Here we have Type, Paged, Pool Tag, Stack, and Size.

image

A couple of things here. First, note the graphical representation of paged pool allocations – a steady increase that never drops. Now we start from the left inside the table. AIFO = Allocated Inside, Freed Outside – this is what we want; these are outstanding memory allocations that are either never freed or are freed outside of the WPR trace. The leak is in paged pool, so we move forward. The next column is Pool Tag, and thanks to poolmon we know it’s “Toke”, so we key on that. The stack column is empty for some reason in this trace, so we won’t be able to see the functions that are called to allocate pool (we’ll look at a different graph later). Then I added the Size column; take note of the highlighted size value of 1664. If you remember our poolmon data from earlier, we know that the outstanding allocations are 1664 bytes:

image

So this data correlates to past behavior that we have seen with poolmon. Back to the WPA graph. Notice the count column and row 4. We have 6,189 outstanding allocations. Count column row 8 shows that we have 6,043 allocations at a size of 1664 bytes. Essentially the majority of the outstanding allocations are of size 1664 bytes, which indicates that these are likely the leaked security token objects. Let’s look at some more data in the analyzer pane. Expand the System Activity collection in the graph explorer pane on the left. Click on the “Stacks” graph and drag and drop it onto the right pane:

image

In the right pane at the top, click the “Display graph and table” button:

image

In the middle pane just to the left of the gold bar right click and select the following columns: Provider Name, Process Name, Stack, Event Name, Count Pool: Allocate, Count Pool: Free, Version.

image

Now to the left of the gold bar arrange the columns in the following order starting from left to right: Provider Name, Process Name, Stack, Event Name, Count Pool: Allocate, Count Pool: Free, Version. Remember we want everything that is important to us to the left of the gold bar so that we can sort on it the way we like. This is a very important rule of using WPA and xperfview! So now we have everything arranged like so:

image

From the left we have Provider Name, Process Name, Stack, Event Name, and Count Pool: Allocate. We expand into the stack to find ntkrnlmp.exe!SepDuplicateToken and ntkrnlmp.exe!SepDuplicateToken <itself>, followed by 580 pool allocations in the event column. Walking down into the stack you see the vicious pattern repeating itself with no end in sight:

image

Indeed we can see that instead of re-using existing security token objects, a new one is created each time something connects to the machine. Correlating this with the data we found from dumping all of the lsass.exe handles, we know that each time the service account authenticates to the server a new security token object is created; coupled with a frequent connection interval, you can see how this can get out of hand quickly!

The Culprit

Lsass.exe binaries up to date. Check. Third party software up to date. Check. Increasing the third party software’s connection interval only puts off the inevitable. The third party software is agentless; it is merely authenticating to the box to gather information at an interval. We know that some driver is making these pool allocations, so we set out to review all software installed on the system, specifically asking questions about any security-focused software since, after all, we have leaked token objects via lsass.exe. After talking with the administrators for a bit it was revealed they were running some security software. They turned off the software and the leak vanished, even with the high connection rate of the third party software. We engaged the vendor and provided the data to them. Don’t be afraid to engage third party vendors and send them your ETL file so that they can take a look at it with their private symbols. They found the problem was in one of their drivers, and some time later they provided an updated driver. Due to the nature of these types of software it can be difficult to find the smoking gun or the driver that is bringing the pain; to the untrained eye it certainly looked like a leak in the lsass.exe process, especially since the Toke pool tag was the top consumer and ntkrnlmp.exe was making the pool allocations. So if debugging dumps is not your thing, there are plenty of tools to put in your arsenal to help you diagnose a pool memory leak. Having a diverse toolset lets you gather different types of data, correlate and reinforce data points, and uncover things that you normally wouldn’t with just one tool. Using the right tools to uncover forensic data can sometimes lead you to the culprit without ever having to debug a dump!

Until next time!

Jesse "Hit-Man" Esquivel

How to Build Your ADFS Lab on Server 2012 Part 3: ADFS Proxy


Hey y’all Mark and Tom back after a small hiatus. We (well mostly Mark because this was his responsibility) got distracted by other things like “helping customers”, “the Olympics” and “I’m tired I’ll do it tomorrow, what’s on Netflix streaming?”. Hopefully so far you’ve followed along and have an ADFS server built and have a sample Web SSO so you can start messing around or if your manager asks, “learning” about ADFS. By now the security group has probably gotten wind of this experiment and come to you with “Hey man, these ADFS servers can make claims about anyone, they are just like a DC, no way we’re putting these in the DMZ.” Technically this person is correct. You will want to protect your ADFS servers the same way you protect a DC. If you wouldn’t put a DC in your DMZ (hint you probably shouldn’t) then same goes for ADFS. However you can now sit back and tell your security group “Don’t worry, we got an ADFS Proxy”.

Installing the ADFS Proxy Role

When you see how simple this is you will be ashamed it took me this long to write this post. You’ll need a machine that can talk to the ADFS server on port 443; it can be domain joined, but it doesn’t have to be. Since this is a test lab, I do have my ADFS Proxy joined to the domain. Start by clicking Add Roles and Features in Server Manager.

image

Click on Active Directory Federation Services

image

Accept all the requirements by clicking Add Features.

image

Select Federation Services Proxy and hit next.

image

You’ll then click Finish and let it install.

Configuring the SSL and DNS

The next thing you are going to need to do is put an SSL cert on your ADFS Proxy. The name on this SSL cert should ALSO match the SSL cert on your ADFS server. This shouldn’t be too much of a shock when you think about what role it’s providing. External users, or applications that are external, will resolve the same ADFS service name – in my case sts.markmorow.com – and connect to the proxy instead of the internal ADFS server. Which brings us to DNS. It’s up to the administrator to configure the internal DNS servers to point at the internal ADFS server and the external DNS servers to point at the proxy.

image

This is the error message you’ll get if you thought you were slick and skipped that last paragraph about SSL and DNS.

image

Right click the default website, edit bindings, Add port 443 and select a certificate. For this lab you can use an internal cert as well but in the real world you will want to use a public cert.

image

After the SSL and DNS has been configured we are now ready to run the ADFS Proxy Configuration wizard.

image

It’s a pretty short wizard as you can see, click Next.

image

You’ll add the federation server you want to connect to. Remember your DNS configuration from before – you should be pointing to the INTERNAL ADFS server, which for me is sts.markmorow.com. You’ll then enter your domain credentials to establish the trust.

image

Click Next

image

Alright, you are done. The proxy is configured. A good way to “simulate” the proxy in a lab environment is to change your hosts file to resolve the ADFS service name to the proxy instead of your internal ADFS server (see the example entry below). So after this post you should have a fully functional test lab that includes an ADFS server, ADFS Proxy and sample claims app. Don’t worry folks, we are just getting started on this topic; if you have specific ADFS things you’d like to see covered, or any comments, let us know below.
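For example, a test client’s hosts file entry might look like this (a sketch; 192.168.1.50 is a made-up address for my proxy, and the file lives at C:\Windows\System32\drivers\etc\hosts):

# Resolve the ADFS service name to the proxy instead of the internal ADFS server
192.168.1.50    sts.markmorow.com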

Mark ‘didn’t even medal’ Morowczynski and Tom ‘coach’ Moser

Becoming an Xperf Xpert Part 9: Where’s My Network? (With Stack Walk)


Hi everyone, Randolph (Randy) Reyes here with another slow boot/slow logon (SBSL) blog post. In this particular engagement we went onsite to deliver a workshop on how to utilize the Windows Performance Toolkit (Xperf) to detect slow boot and slow logon issues.

The Before

PreSMSS: 8.165 seconds

SMSSInit: 2.454 seconds

WinlogonInit: 63.305 seconds

ExplorerInit: 1.950 seconds

Post Boot: 11.400 seconds

Boot to Post Boot Activity ended: 87 seconds and 273 milliseconds = 1 minute and 27 seconds

If we expand this out for their main office of 3,000 computers we get:

3,000 computers x 1 minute and 30 seconds = about 4,500 minutes every working day

Now you might be saying to yourself, 1 minute and 27 seconds is not too bad. What if I told you it was an SSD (solid state drive)? Would you consider this to be an optimal value? I’ve discussed optimal times in a previous post, “Becoming an Xperf Xpert Part 7: Slow Profile Load and Our Very First Stack Walk”:

http://blogs.technet.com/b/askpfeplat/archive/2014/02/03/becoming-an-xperf-xpert-part-7-slow-profile-load-and-our-very-first-stack-walk.aspx

Those values don’t line up with what we expect to be seeing, so it’s time to get to the bottom of this.
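As an aside, if you want to capture a similar boot trace yourself, a commonly used Windows Performance Toolkit command line looks something like the following (a sketch only – these are typical flags, not necessarily the exact ones used in this engagement, and the machine reboots as soon as you run it):

# Capture a boot trace with stack walking enabled, then keep recording for 120 seconds after boot
xbootmgr -trace boot -traceflags base+latency+dispatcher -stackwalk profile+cswitch+readythread -postbootdelay 120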

After adding the graph for Boot Phases and also Generic Events, we can see that most of the WinlogonInit phase time is being consumed by the Subscriber Profiles under the Microsoft-Windows-Winlogon provider. In the picture below we can see that the Profiles subscriber started at 34.167 and ended at 68.002 seconds into the trace.

The first question you need to ask yourself: why is the profile taking that long to load?

Are we using Roaming Profiles or do we have a local profile in this Windows 7 System?

What’s going on with my Profiles?

image

To be able to understand what’s going on we need to turn to “stack walking”.

We need to make note of the Process “Winlogon (732)” and Thread (760) so we can do wait analysis against this thread to see what is taking almost 33 seconds.

Also make sure to click Trace | Load Symbols. If you don’t load the symbols you will still see the call stack recorded, but the details of what the code was executing will be hidden (call stacks will show DLL names followed by question marks).

image

Taking the process and thread information, we’ll use it in the “CPU Usage Precise” graph (based on Utilization by Process, Thread).

I recommend the column order NewProcess, NewThreadID, NewThreadStack, Wait (us), ReadyingProcess, ReadyingThreadID, SwitchInTime(s). These can be added or removed from the Line # menu.

After adding the “CPU Usage (Precise)” graph (based on Utilization by Process, Thread) we expand the Winlogon (732) process, and inside this process we follow thread (760).

In the Graph below we can see the CallNotificationSubscriber with a single call that took 33 seconds (Profiles)

image

Here we can see that Winlogon is waiting on svchost.exe operations.

We can observe that the switch-in time in svchost.exe (696) was 68.002 seconds (exactly the time “Profiles” ended), so we are on the right track.

Using the same “CPU Usage (Precise)” graph (based on Utilization by Process, Thread) we expand the svchost.exe (696) process, and inside this process we follow thread (2812).

Remember, we are looking for a delay of about 33 seconds.

image

Looks like we are waiting on some kind of network communication operation (rpcrt4.dll is the Remote Procedure Call (RPC) API, used by Windows applications for network and Internet communication).

Now we need to follow thread 1596, inside svchost.exe (696).

image

Once we arrive at “Idle (0)” we are at the end of the stack.

It looks like profsvc.dll (which provides the User Profile Service) waited 30 seconds for the network to be ready.

Now the big question is: why is profsvc.dll waiting 30 seconds for the network?

It could be that we were not able to contact a domain controller – since this is a domain-joined machine, we should be able to.

When a machine is domain joined, we depend on NLA to tell us whether it has domain connectivity or not, so that we do not miss the opportunity to apply policy at boot or user logon even when the machine is connecting from home. When NLA cannot determine whether it has domain connectivity, we wait ~30 seconds to allow the network to come up.
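A quick way to check whether the machine can actually reach a domain controller at that point is nltest (a sketch; contoso.com stands in for your own domain name):

# Ask the DC locator for a domain controller; failures here line up with the ~30 second profile delay
nltest /dsgetdc:contoso.com

# Check the state of the machine's secure channel to the domain
nltest /sc_query:contoso.com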

The Fix

Step 1: for testing purposes we connected the system to the LAN, and the system booted within the expected values (30 seconds less).

The After

Boot to Post Boot Activity ended: 46 seconds and 643 milliseconds

image

(Note: in this particular engagement there are still other areas that will improve boot time on this particular system, but those will be covered in another blog.)

Before

3,000 computers x 1 minute and 30 seconds = about 4,500 minutes every working day

After

3,000 computers x 46 seconds = about 2,300 minutes every working day.

Recommended Articles

Here are a few more posts from good friends and fellow bloggers Yong Rhee, Mark Morowczynski and Charity Shelbourne:

Troubleshooting Windows Performance Issues Using the Windows Performance Recorder

http://blogs.technet.com/b/askpfeplat/archive/2013/03/22/troubleshooting-windows-performance-issues-using-the-windows-performance-recorder.aspx

WPT: Using WPT (Xperf) to enable Storport ETW Performance Logging

http://blogs.technet.com/b/yongrhee/archive/2012/11/24/using-wpt-xperf-to-enable-storport-etw-performance-logging.aspx

Becoming an Xperf Xpert Part 2: Long Running Logon Scripts, Inconceivable!

http://blogs.technet.com/b/askpfeplat/archive/2012/07/02/becoming-an-xperf-xpert-part-2-long-running-logon-scripts-inconceivable.aspx

Randy “Dude, where’s my network?” Reyes

How to Build Your ADFS Lab Part4: Upgrading to Server 2012 R2


Hey y’all Mark and Tom back again with part 4 in our most likely never ending ADFS series. It’s going to end up being as long as Game of Thrones when it’s all said and done, I can feel it. At this point you should have an ADFS server, an ADFS Proxy and a Web App all in your test environment. Now, you will notice that all of this stuff was built on Windows Server 2012. You might even have wanted to ask us, “Hey man, you guys picked Server 2012 instead of building out your lab on that sweet new ADFS 3.0 on Server 2012 R2. I followed along but built mine on Server 2012 R2. You guys must be crazy or something.” Yea we are crazy alright, crazy like a fox. Sometimes we have a plan, other times it looks like we have a plan. This one we actually did have a plan. We knew there are some folks out there already running ADFS 2.0 on Server 2008 R2. We thought maybe they want to upgrade to the new hotness that is ADFS 3.0, so we planned an upgrade post for this series. You’re welcome. If you are on ADFS 2.0 on Server 2008 R2 or ADFS 2.1 on Server 2012 you can follow along to get that lab up to the latest greatest. Let’s dig in.

Pre-Reqs and High Level Process

You’ll want two machines set up both Server 2012 R2, one that is domain joined which will be the ADFS server and one that is not. This will be the Web Application Proxy (WAP) or what the ADFS Proxy has now become. We’ll get to this in a bit.

From the process standpoint we can break this down into a few key steps.

  • Export certificates from the current ADFS servers.
  • Export the configuration from the current ADFS servers.
  • Install ADFS 3.0 on the new server
  • Import the configuration on the ADFS 3.0 server
  • Install the Web Application Proxy (WAP) server
  • Test and make DNS changes to point to the new ADFS 3.0 infrastructure

Certificates on the ADFS Server

We first need to take a look at the different certificates we have. As we recall we have 3 certificates, the Service communication, the token-decrypting and the token-signing cert.

image

If you’ve been using our previous guides you’ll most likely have something as above. A service communication certificate from your PKI and 2 self-signed certs. We will now want to export out our Service communications cert. If you used a 3rd party cert on any of the other certs, you’ll want to export those as well. Our lab is using self-signed.

To export the certificates open an MMC Console, Add Certificates, select Computer Account, Local Computer and Click Ok. It should look similar to this.

image

Next expand Personal, Certificates, then right click the certificate you want to export, pick All Tasks and select Export. Click Next and you should be at this screen.

image

You’ll want to confirm the “Yes, export the private key” check box is selected. If it’s greyed out and you are using an internal PKI it’s possible you requested the certificate without the option to allow the key to be exportable. This is why we have a test lab, people. Simply request a new cert, select the key to be exportable and set this new certificate as the SSL binding on the site as well as the Server Communication certificate. Don’t feel bad, the dates on my certs are different for the exact same reason.

Now click Next, set a password on the certificate to export and click Next.

image

Select an output directory and click Next.

image

Then click Finish. You should get this message.

image

(Hooray!)

Then go ahead and copy this over to your new ADFS 3.0 server. We’ll need this certificate in just a bit.
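If you’d rather script the certificate move than click through the MMC, something along these lines works on Server 2012 and later (a sketch; the thumbprint, password and paths are placeholders for your own values):

# On the old server: export the service communication cert with its private key
$pfxPassword = ConvertTo-SecureString "P@ssw0rd!" -AsPlainText -Force
Get-Item Cert:\LocalMachine\My\<thumbprint> | Export-PfxCertificate -FilePath C:\Temp\sts.pfx -Password $pfxPassword

# On the new ADFS 3.0 server: import it into the computer's Personal store
Import-PfxCertificate -FilePath C:\Temp\sts.pfx -CertStoreLocation Cert:\LocalMachine\My -Password $pfxPassword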

Exporting the ADFS Server Configuration

It’s time to back up our current ADFS server. You’ve probably wondered how to do that. Well, there is a built-in PowerShell script on the 2012 R2 media. Simply go to \Support\ADFS and run Export-FederationConfiguration.ps1 -Path “Path to where you want it to go” and the script will take care of the rest. It also gives you some really important info you’ll need when you are setting up your new ADFS server: the farm name and the service account it’s using.

image
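Concretely, the commands look something like this (a sketch; D: is whatever drive holds the 2012 R2 media and C:\ADFSExport is just my chosen output folder):

# On the existing ADFS 2.x server, from the 2012 R2 media
cd D:\Support\ADFS
.\Export-FederationConfiguration.ps1 -Path C:\ADFSExport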

ADFS 3.0 Install

This shouldn’t be too much of a surprise but you’ll go to Server Manager and Add Roles and Features wizard. You’ll want to select Active Directory Federation Services as shown below.

image

Then click Next on the features install.

image

Click Next on the ADFS screen.

image

image

Once it’s done you’ll need to configure the service. Server manager should indicate you still need to do this as seen below.

image

At this point you would be inclined to say oh I have an existing farm, I want to join this to the existing one. Sorry, it doesn’t work that way. What you will actually be doing is standing up a new ADFS 3.0 farm. Select “Create the first federation server in a federation server farm” and hit next.

image

If you aren’t logged in with an account that has Domain Admin privileges you will need to enter an account and then hit next for it to connect.

image

Now it’s time to get that certificate ready. Click the Import button, browse to the certificate, put in the password you set on it and you should be good to go. A few things to also notice. Here is where you set the federation service name, and it should match what you exported from the old farm. Also note that we are setting up an SSL cert without any IIS. Let that sink in. Click Next.

image

Now it’s time to put in the service account you are running ADFS under. This was also told to you during that PowerShell export. Once you’ve got that in, hit Next.

image

Time to enter in what type of database we want to use. In the previous version our only choice was WID unless we wanted to do this whole install via PowerShell, where you were able to configure SQL Server. Now you can do either through the GUI. Since this is a test lab we are going to stick with WID. Click Next.

image

One last stop if we need to make any changes. We can also see how this would be configured via PowerShell by hitting the View script button. All looks good, let’s hit Next.

image
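For reference, the script behind that View script button is essentially an Install-AdfsFarm call. Here is a minimal sketch for a WID-based first farm node (the thumbprint, domain and service account name are placeholders for your own values):

# A minimal first-node, WID-backed ADFS 3.0 farm
Install-AdfsFarm `
    -CertificateThumbprint "<thumbprint of the imported service communication cert>" `
    -FederationServiceName "sts.markmorow.com" `
    -ServiceAccountCredential (Get-Credential "MARKMOROW\svc-adfs")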

It will now do a pre-requisite check. If you get a pass, click Configure. If you didn’t, it’s time to troubleshoot. If you’ve been following along you should be ok.

image

This is what you should get when it’s completed. Looks good.

image

Ok so let’s take a second and make sure ADFS 3.0 is working as expected before we move on. To do that on the ADFS 3.0 server, open IE and go to https://localhost/adfs/ls/IdpInitiatedSignon.aspx.

If everything is working you should get a nice page like the following. You can ignore the cert error for now as well.

image

Import the ADFS Configuration

At this point we have a nice, brand new, clean ADFS 3.0 instance. That’s great, but we don’t really want to set up all those relying party trusts again. Time to import that previous export. Copy the export folder over to the ADFS 3.0 server. Much like a great magician, we show you that there is nothing in the Relying Party Trusts other than the defaults.

image

Now let’s run the sister script, Import-FederationConfiguration.ps1 -Path “Where you put the export folder”. It’s also located on the 2012 R2 media under \Support\ADFS\. You should get something like the screen below.

image
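Again, concretely (a sketch; C:\ADFSExport is wherever you copied the export folder to):

# On the new ADFS 3.0 server, again from the 2012 R2 media
cd D:\Support\ADFS
.\Import-FederationConfiguration.ps1 -Path C:\ADFSExport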

And now your Relying Party trusts should look something like….

image

Bam! All that came over magically. Our lab is pretty simple but to see more of what you can export and import check out this TechNet article. At this point let’s take a step back and think about what we have going on. Currently all our DNS is pointing to the ADFS 2.1 infrastructure. We can update our host file on our test client to point at the new ADFS 3.0 infrastructure. Once we are convinced it’s all working we can move on but note we haven’t affected our ADFS 2.1 at all. It’s still exactly the way it was before we started on this.

Installing the Web Application Proxy (WAP)

The WAP has lots of great new features which are well beyond the scope of this post (told you the series would be long). Much like the ADFS Proxy, we can domain join this or leave it in a workgroup. For this test lab, I’ll have mine domain joined. At this point we want to export the certificate from the ADFS Proxy and import it onto this WAP server before we get started. You would export it the same way described above, then just double-click the certificate on the WAP and follow the quick prompts to import it. Also much like the ADFS Proxy, we will make sure DNS (or our hosts file) points to the back-end ADFS 3.0 server. Be careful if you are using DNS and are pointing to the ADFS 2.1 server; you want the WAP to point to the ADFS 3.0 server, not the 2.1 ADFS server. Make sense? Don’t worry, you got this. Start Server Manager, click Add Roles and Features, and select “Remote Access”.

image

Hit Next at the features page.

image

At the Remote Access screen hit Next.

image

Add any features it needs for the install.

image

Now select Web Application Proxy and hit Next and finish the confirmation dialog box.

image

Once it’s all done you’ll need to do some quick configuration on the WAP to hook it up with ADFS. This should look familiar.

image

Now, just like the ADFS Proxy (notice how I keep stressing that), you’ll need to enter in the ADFS service name. This should be the name that is on the certificate you exported. We will want to make sure this name resolves to the ADFS 3.0 server as well. You’ll also need to enter some credentials for the ADFS server so it can complete the configuration. Once you’re done hit Next.

image

We are almost done; it’s a pretty short configuration process, similar to the ADFS Proxy. Select that certificate you exported and hit Next.

image

Take a last look and hit Configure.

image
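For the record, the same WAP configuration can be scripted (a minimal sketch; the thumbprint is a placeholder and the credential is an account with rights on the ADFS server):

# Hook the WAP up to the ADFS 3.0 farm
Install-WebApplicationProxy `
    -FederationServiceName "sts.markmorow.com" `
    -CertificateThumbprint "<thumbprint of the imported SSL cert>" `
    -FederationServiceTrustCredential (Get-Credential)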

At this point you are all set up. You can once again update your host file to point to this new ADFS WAP. You should get a screen much like this.

image

Notice DNS really controls who is hitting what. So we are able to completely test our ADFS 3.0 using hosts files to confirm it is working as expected before we make the big DNS changes to cut it over – big for a test lab, of course. Update your DNS records to point to the correct ADFS servers and you should be all set. You just completed your very first ADFS upgrade.

Let us know in the comments if these posts are helpful and what types of ADFS stuff you want to learn more about. Tom and a few of us have some ideas, but we’d love to hear from you. Until next time.

Mark “was late to the Red Wedding” Morowczynski and Tom “knows nothing like Jon Snow” Moser


Exploring Windows 8.1 Update – Start Screen, Desktop and Other Enhancements


Welcome to Ask PFE Plat's coverage of the next chapter in Microsoft's on-going refinement of Windows. As with 8.1, we continue to evolve and improve the OS and last week, at the Build 2014 conference, we released "Windows 8.1 Update" to MSDN. On Tuesday, April 8th, the Update will be released to the Windows Update Catalog, Windows Update and WSUS channels.

Our prior post on the "dot 1" update to Windows 8 RTM from October of last year sparked great conversation – in fact, it was our most-commented post; we (PFEs and Microsoft as a whole) appreciate the feedback and discussion. Several of us have been chomping at the bit to bring you details of the changes coming with this Update. This is a "roundtable post" with discussion from myself (Michael Hildebrand), Jeff "The Dude" Stokes, Kyle Blagg, Mark Morowczynski and who can talk about Windows 8 without talking to Joao Botto?

Let's roll…

The Update, as we'll call it here, is actually a series of packages that install collectively and provide UI and functionality improvements (many geared towards keyboard/mouse users), a big IE feature-add as well as some heavy-lifting internal changes to Windows boot structures and memory/resource awareness and management. For additional information, check out the following:

This post will focus mostly on the UI changes - there may be future AskPFE Platforms posts that dive into some of the other aspects of the Update.

Before we get ahead of ourselves, let's cover a few heads-up/FAQ comments:

  • It will likely change your system's current behavior:
    • For starters, unless the device is a tablet, a system with this update will boot to the desktop by default
      • You can still choose one way or another but I was surprised when I rebooted and was taken to my desktop instead of my Start screen
    • See the chart further down in the post for a clear list of what default settings are changed and how
  • It does NOT include the Start menu that you may have seen/heard about at the recent Build conference. That is some exciting near-future stuff, which demonstrates our on-going commitment to deliver on customer feedback (such as your comments on this very blog)
  • It is defined as an "Important – Security update" in our Windows Updates framework
  • It is a cumulative update to Windows 8.1 that includes all previously released security and non-security updates
  • It is a required update to keep your Windows 8.1 device current
  • Windows 8.1 is a prerequisite (vs Windows 8 RTM)
    • Windows 8.1 media/WIMs/TechNet ISOs/Store bits/etc will be slipstreamed with this Update in the near term
  • KB2919442 is a pre-requisite update - released in March 2014 – you'll need this before 2919355 will be recognized
  • Additional info can be found in the KB – which obviously, you should read
  • The Assessment and Deployment Kit (ADK) has been updated to accommodate the changes to Windows with this Update

To get the Update, make sure you are running Windows 8.1 and then hit Windows Update or the Windows Download Center.
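A quick way to verify that both the prerequisite and the Update itself landed on a given machine is Get-HotFix (a sketch; run from an elevated PowerShell prompt):

# KB2919442 is the prerequisite; KB2919355 is the Windows 8.1 Update itself
Get-HotFix -Id KB2919442, KB2919355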

Let's start with Start

Now that you have everything installed and your reboots are done, sign in and make your way to the Start screen. You'll notice a few things that folks have asked for:

  • Power
    • Many of you have adapted with a variety of ways to turn off or reboot your PC, but now, there is a simple power UI on Start. Handy-dandy!
      • A quick note on 'Sleep' - Have you tried the 'Sleep' functionality recently? On prior generation devices, I never had much luck with Sleep. It seemed like it took longer to go to sleep than to shut my old laptop down, and resuming seemed to take longer than a full boot up. However, with the newer devices, SSDs and Windows 8, I find sleep a much-improved experience and very close to an 'instant on' solution that I prefer most of the time to a typical shutdown.
    • However, on some devices you may not see the Power button. On devices that are not tablets like laptops, desktop PCs and All-in-Ones, the Power button should appear on the Start screen after installing the update. On most tablets you will not see the Power button on the Start screen as they have connected standby and a physical power button that lets you quickly shut down or put the device to sleep. However, on tablets larger than 8.5-inches without connected standby, you will see the Power button on the Start screen.
    • For more information please see- http://blogs.windows.com/windows/b/windowsexperience/archive/2014/04/10/some-tips-and-tricks-for-using-the-windows-8-1-update.aspx
  • Search
    • Again, many of you have likely learned to "just type" on the Start screen for searching but we've added a clickable Search icon that will 'fly-out' the Search box – this helps visual people like me. I often wondered where my typing was going in the early days of Win8.

     

  • If you're using a mouse, you can now right-click Tiles and get a familiar "context" style menu to manipulate them.

 

Some of the early Windows 8 feedback was that folks didn't like booting to the Start screen. With Windows 8.1, you could choose - boot to Desktop or boot to Start.

That flexibility remains unchanged in this Update but going forward, there are new defaults – as mentioned before, Windows 8.1 Update now boots to the desktop as the default unless the device is a tablet form factor.

There is a new group of Tiles that are added to a new user's Start screen for some of the more commonly-used settings/locations:

  • This PC (a.k.a. "My Computer")
  • PC Settings
  • Documents
  • Pictures

NOTE:

  • You will *NOT* see these added to your established Start screen – only new profiles get these
  • Windows RT users only get the "PC Settings" Tile

 

The Apps View

Let's review some of the changes that you'll find in relation to the Apps view.

First off, when you install a new application(s), now there is an indicator and count at the bottom of the Start screen.

 

Click the arrow or swipe up into the Apps view. Those new Apps are highlighted and have a bright blue NEW next to the title (this will go away once you click/open the new App).

 

The columns have been made wider and spacing increased to account for applications with longer names.

 

You can sort your Apps your way and now, clicking on the headings will zoom-out so you can jump through your installed applications quicker.

 

You can subtly shrink the icons in the Apps view to fit even more Apps into the screen, if you so desire:

  • From within the Apps view, bring up Settings > Tiles
  • Slide the "Show more apps in the Apps view" slider to Yes

 

This can be VERY helpful if you have a lot of Apps installed:

 

Desktop Changes and Integration with Modern Apps

Now that we've covered the Start screen and App view changes, let's flip to the Desktop and talk about some of the big changes there.

One recurring request for Windows 8 has been to facilitate a better "connection" between the Modern UI and Windows Store Apps with the traditional Desktop. In 8.1, the ability to show your Desktop background on the Start screen helped.  This Update furthers the integration between the Desktop and Modern UI/Apps.

For starters, you likely already noticed the bright green icon in your Taskbar for the Store after installing the Update….

Yes, friends, you can now pin Store Apps to your Taskbar.

 

But wait - there's more! Not only can you pin Store Apps to the Taskbar; now, running Store Apps can show up on your Taskbar, just like a traditional Desktop App would.

You can see the App's thumbnail and operate any controls, such as skipping/pausing songs in the updated Xbox Music App, and close the App, too.

Cool?

 

If you'd rather not have Modern Apps taking up that precious real estate on your Taskbar, once again, we provide you the choice of configuration.

  • Right click the Taskbar and select Properties – you'll see the following new option (highlighted below) which you are free to select/deselect:

After the Update                                                                    Before the Update (for comparison)

     

    

 

Another option to consider – drag your Taskbar up and make it 2x tall – you'll have more space for a bevy of Apps - Modern and/or traditional

 

Modern Apps get "minimize" and "close" buttons as part of this Update. I am still a frequent keyboard/mouse user and these two controls make multi-tasking among my open apps more intuitive with how I'm used to working.

In order to see these, hover the mouse pointer at the top of the App.

Also, there is a right-click context menu for splitting the running App across the right or left half of your Desktop (along the lines of the "snap" feature)

DUDE!? – where's my MINIMIZE?!

Depending on your settings, you may not see the "Minimize" option for Modern Apps.

Above, I showed how to configure Taskbar settings so Modern Apps are displayed on the Taskbar:

That checkbox also determines if you'll see the Minimize "bar" (checked) or not (unchecked) for a Modern App.

NOTE:

  • You'll get the close "X" button either way
  • You can pin Modern Apps to the Taskbar regardless, too

One more aspect of the tighter integration between the traditional Desktop and the Modern UI/Apps - the Taskbar can be brought up while using a Store App or on the Start screen.

When the "Show Windows Store apps on the taskbar" option is checked, drag your mouse downward along the very bottom of the screen. The Taskbar will slide up and you can use it to switch between Apps, access the Start button, etc.

  • You can see the Taskbar on the Start screen regardless of the "Show Store apps on the taskbar" setting

 There are some additional changes to OS behavior that may catch you by surprise - here's a chart to help clarify:

 

Default Behavior and Settings

Tablet

Before Windows 8.1 Update:

  • Boots to Start Screen
  • Closing App takes user back to Start Screen
  • Pictures, Music and Video files open with Modern App

After Windows 8.1 Update:

  • Boots to Start Screen
  • Closing App takes user to the previously used App
  • Pictures, Music and Video files open with Modern App

Non-tablet

Before Windows 8.1 Update:

  • Boots to Start Screen
  • Closing App takes user back to Start Screen
  • Pictures, Music and Video files open with Modern App

After Windows 8.1 Update:

  • Boots to Desktop
  • Closing App takes user to the previously used App
  • After closing all Apps the user ends on the Desktop
  • Pictures, Music and Video files open with Desktop applications

 

App-specific Changes

The Update introduces changes to some of the in-box Apps such as:

  • Internet Explorer 11
    • There are changes to both the Modern and Desktop versions
    • A future post here will dive into the details of these changes
  • SkyDrive
    • Updated throughout the OS to reflect the new name, OneDrive
    • OneDrive has recently-enhanced features, such as manual Sync and Taskbar icons that provide the status of synchronization

Modern UI

We've added helpful additions to some of the Modern UI screens

Disk Space

  • Easily keep tabs on the amount of space that your Apps take up and uninstall them right from here (click "See my app sizes")

 

Rename your PC and/or change domain membership with a tap or a click:

 

The WiFi 'fly-out' returns

  • A context menu provides convenient controls for managing WiFi network connections
  • This was removed in Windows 8.1

 

Touch and touch-keyboard:

  • "Tap and a half" is a new, more intuitive touch gesture for touch-pads - allowing you to tap twice but hold second-tap to highlight text or an object, and then drag and drop it. 
  • Hide or bring up the touch keyboard:

     

Well folks, there you have it. Say hello to the Windows 8.1 Update.

Give it a go!

 

-Blog post updated with new information at 3:00 PM 4/10/14

 

– Five Merry Men (Mark Morowczynski, Kyle Blagg, Joao Botto, Jeff Stokes and Michael Hildebrand)

How LastLogonTimeStamp is Updated with Kerberos S4u2Self


Introduction

Hi! My name is Richard Sasser, or Rick, as I prefer, and I’m a Microsoft Certified Master for Active Directory and I work on the Platforms DSE team. I do a lot of security related work, and consult frequently on Public Key Infrastructures and Authentication issues. I don’t blog as often as I should, but I’m trying to correct that – Shameless plugs here:

http://blogs.technet.com/b/rsasser/archive/2012/11/21/create-virtual-machines-quickly-using-mdghost-part-1.aspx

http://blogs.technet.com/b/askpfeplat/archive/2013/04/22/choosing-a-hash-and-encryption-algorithm-for-a-new-pki.aspx

Before you read this blog, you should be familiar with the contents of Ned Pyle’s blog here:

http://blogs.technet.com/b/askds/archive/2009/04/15/the-lastlogontimestamp-attribute-what-it-was-designed-for-and-how-it-works.aspx

Issue

I ran across an interesting scenario with LastLogonTimeStamp, and in my research I turned up many cases in our database, so I thought I would dig in.

Essentially, there is a situation where LastLogonTimeStamp can be updated even if the user has not logged on.

This is an artifact of a Kerberos operation known as Service-for-User-to-Self, or “S4u2Self,” in which a client/service can request a ticket for a user that is only useful for things like performing access checks or determining group membership. This can cause confusion about the relative staleness of an account, in addition to other security concerns. Wouldn’t it be nice if you could discover the source of the request so you could address your security concerns and possibly remediate the updating of stale accounts?

Technical Details

S4u2Self is documented here:

http://msdn.microsoft.com/en-us/magazine/cc188757.aspx#S2

S4U2Self

The S4U solution in this case is for the server to go through the motions of Kerberos authentication and obtain a logon for the client, but without providing the client's credentials. Thus, you're not really authenticating the client in this case, only making the rounds to collect the group security identifiers (SIDs) for the client. To allow this to occur, Windows Server 2003 domain controllers accept a new type of Kerberos request, where the service requests a ticket from the client to itself, presenting its own credentials instead of the client's. This extension is called Service-for-User-to-Self (S4U2Self).

If the client and the service are in separate domains, this requires a bidirectional trust path between them because the service, acting on the client's behalf, must request tickets from the client's domain.

While the wire-level details are all rather complicated, the service developer need only call one function to start the ball rolling: LsaLogonUser. In spirit, this is similar to calling LogonUser as I have shown earlier, but without needing to provide the client's password. The result is a token that the service can use with functions like AccessCheck and CheckTokenMembership, as well as the new AuthZ family of authorization functions. This allows the service to perform access checks against security descriptors for objects that it manages.

To protect the client, LsaLogonUser normally returns a token with a special restriction. The token will have an impersonation level of Identify, which means that the service will not be able to open kernel objects while impersonating the client using this token. However, for services that are part of the trusted computing base (TCB)—for example, a service running as SYSTEM—LsaLogonUser will return a token with an impersonation level of Impersonate, allowing access to local kernel objects using the client's identity. This prevents an untrusted service from using an S4U2Self ticket to elevate its own local privileges.

You can test this by logging into a file server and performing an effective access check on a file. For example, right click a folder, select properties, select security, select Advanced and go to the effective access tab.

I’m going to focus on the “AccessCheck” function mentioned above - more information about it can be found here:

http://technet.microsoft.com/en-us/library/cc772184.aspx

This generates the appropriate S4U2Self conversation. This exchange is documented here:

http://msdn.microsoft.com/en-us/library/hh554517.aspx

1.     Perform a Kerberos S4U2Self service ticket request using the S4U2self KRB_TGS_REQ/KRB_TGS_REP protocol extension as specified in [MS-SFU] section 3.1.5.1.1.1.

The userName MUST be set to the user name obtained in step 2.

The userRealm MUST be set to the domain name of the user obtained in step 2.

The chksum MUST be set as specified in [MS-SFU] section 2.2.2.

The auth-package MUST be set to "Kerberos".

http://msdn.microsoft.com/en-us/library/cc246102.aspx

Using the TGT to the TGS in the user's realm, Service 1 requests a service ticket to itself. The S4U2self information in the KRB_TGS_REQ consists of: padata-type = PA-FOR-USER (ID 129), which consists of four fields: userName, userRealm, cksum, and auth-package. Service 1 sets these fields as follows: The userName is a structure consisting of a name type and a sequence of a name string (as specified in [RFC4120] section 6.2). The name type and name string fields are set to indicate the name of the user. The default name-type is NT_UNKNOWN. The userRealm is the realm of the user account. If the user's realm name is unknown, Service 1 SHOULD use its own realm name. The auth-package field MUST be set to the string, "Kerberos". The auth-package field is not case-sensitive.

Let’s look at an illustration of this in action.

SCENARIO - I logged in to a machine as myself and then used the “Effective permissions” tab in Windows Explorer to generate an effective access token for a user named “Vic” for a file on the server.

While I’m doing that, I have a network trace running on the client.

· The pertinent details are visible via the following NetMon filter

“Kerberosv5.TgsReq.KdcReq.PaData.PaData.PaData.PaDataType.AsnInt==129”

· UserName – vic…

· Realm = RS…

image


Tracking down the source of the S4u2Self Request:

Now that I am armed with the appropriate knowledge, it is time to start tracking this stuff down:

The first place to start is with the account’s metadata, so we can understand where last logon timestamp was updated. So let’s take my lab for an example where I’m going to pick on Vic’s account because he has not logged into my server in a while:

Repadmin /showobjmeta <DCNAME> <ObjectDN>

repadmin /showobjmeta localhost "CN=Vic,OU=Administrator Accounts,DC=ad,DC=r,DC=net"

This is Vic’s user account. One of the many ways we can see that Vic hasn’t logged in to my server in a long, long time is that the DSA for this AD environment has been deleted. Also, attributes are showing a last update of 1/2012 (I have omitted many of the attributes for brevity’s sake). Note that lastLogonTimestamp for Vic is old as well.

Loc.USN Originating DSA Org.USN Org.Time/Date Ver Attribute

======= =============== ========= ============= === =========

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746445 2012-01-18 17:20:34 3 unicodePwd

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746445 2012-01-18 17:20:34 3 ntPwdHistory

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746445 2012-01-18 17:20:34 4 pwdLastSet

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746362 2012-01-18 16:57:43 1 primaryGroupID

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746361 2012-01-18 16:57:43 1 objectSid

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746455 2012-01-18 17:32:49 1 adminCount

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746362 2012-01-18 16:57:43 1 accountExpires

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746445 2012-01-18 17:20:34 3 lmPwdHistory

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746361 2012-01-18 16:57:43 1 sAMAccountName

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746361 2012-01-18 16:57:43 1 sAMAccountType

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746361 2012-01-18 16:57:43 1 userPrincipalName

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746361 2012-01-18 16:57:43 1 objectCategory

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746444 2012-01-18 17:20:34 1 lastLogonTimestamp

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746450 2012-01-18 17:20:36 5 msDS-LastSuccessfulInteractiveLogonTime

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746444 2012-01-18 17:20:34 1 msDS-FailedInteractiveLogonCountAtLastSuccessfulLogon

I subsequently logged in to a machine as myself (R*) and then used Explorer to generate an effective access token for Vic for a file on that server.

http://technet.microsoft.com/en-us/library/cc772184.aspx

Now Note Vic’s last logon timestamp in the metadata:

Loc.USN Originating DSA Org.USN Org.Time/Date Ver Attribute

======= =============== ========= ============= === =========

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746445 2012-01-18 17:20:34 3 unicodePwd

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746445 2012-01-18 17:20:34 3 ntPwdHistory

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746445 2012-01-18 17:20:34 4 pwdLastSet

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746362 2012-01-18 16:57:43 1 primaryGroupID

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746446 2012-01-18 17:20:34 2 supplementalCredentials

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746361 2012-01-18 16:57:43 1 objectSid

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746455 2012-01-18 17:32:49 1 adminCount

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746362 2012-01-18 16:57:43 1 accountExpires

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746445 2012-01-18 17:20:34 3 lmPwdHistory

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746361 2012-01-18 16:57:43 1 sAMAccountName

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746361 2012-01-18 16:57:43 1 sAMAccountType

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746361 2012-01-18 16:57:43 1 userPrincipalName

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746361 2012-01-18 16:57:43 1 objectCategory

82572 Default-First-Site-Name\TITAN 82572 2014-01-17 16:24:57 2 lastLogonTimestamp

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746450 2012-01-18 17:20:36 5 msDS-LastSuccessfulInteractiveLogonTime

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746444 2012-01-18 17:20:34 1 msDS-FailedInteractiveLogonCountAtLastSuccessfulLogon

· If you’re curious, here’s Vic’s lastLogonTimestamp prior to this access check:

12463 Default-First-Site-Name\SUPERMASSIVE\0ADEL:6439a528-0957-48e0-b37d-426bf9ee446b (deleted DSA) 13746444 2012-01-18 17:20:34 1 lastLogonTimestamp

From the metadata dump above, I can see on which DC the lastLogonTimestamp attribute change was made (“Titan”), so I can surf the security event log of Titan. Obviously, to catch these events I need the advanced audit policy enabled PRIOR to the events. You do have proper auditing enabled on your DCs, don’t you?

Account Logon:  Kerberos Service Ticket Operations      Success and Failure

http://technet.microsoft.com/en-us/library/dd772667(v=WS.10).aspx

Now there’s a bit of a rub here which I had to know – the audit event WILL NOT show the request for the user “Vic.” It will contain the account that made the request for the information (usually a service account), which is why I needed the trace. However, the event data can tell you if a particular application or user requested a ticket, and I can correlate the metadata update time with the time of the event ID I’m looking for.

This event can then be correlated with Windows logon events by comparing the Logon GUID fields in each event. Also important: the logon event occurs on the machine that was accessed, which is often a different machine than the domain controller that issued the service ticket.
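If you’d rather not scroll the Security log by hand, Get-WinEvent can pull the 4769 events around the time the metadata shows the attribute changing (a sketch; Titan and the time window come from this example, so adjust both to your own metadata timestamp):

# Kerberos Service Ticket Operations (4769) on the DC that wrote lastLogonTimestamp
Get-WinEvent -ComputerName Titan -FilterHashtable @{
    LogName   = 'Security'
    Id        = 4769
    StartTime = (Get-Date '2014-01-17 16:20')
    EndTime   = (Get-Date '2014-01-17 16:30')
} | Format-List TimeCreated, Message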

Log Name: Security

Source: Microsoft-Windows-Security-Auditing

Date: 1/17/2014 4:24:57 PM

Event ID: 4769

Task Category: Kerberos Service Ticket Operations

Level: Information

Keywords: Audit Success

User: N/A

Computer: Titan.ad.r.net

Description:

A Kerberos service ticket was requested.

Account Information:

Account Name: r@AD.R.NET

Account Domain: AD.R.NET

Logon GUID: {f0886d68-26f2-d3c2-9d1f-1e790f212956}

Service Information:

Service Name: r

Service ID: R\r

Network Information:

Client Address: ::ffff:192.168.1.200

Client Port: 49502

Additional Information:

Ticket Options: 0x40810008

Ticket Encryption Type: 0x12

Failure Code: 0x0

Transited Services: -

This event is generated every time access is requested to a resource such as a computer or a Windows service. The service name indicates the resource to which access was requested.

The logon event occurs on the machine that was accessed, which is often a different machine than the domain controller which issued the service ticket.

Ticket options, encryption types, and failure codes are defined in RFC 4120.

The logon event referenced is logged on the machine used to generate the request and DOES contain the account for which the S4U2Self extension was requested:

Log Name: Security
Source: Microsoft-Windows-Security-Auditing
Date: 1/21/2014 11:47:19 AM
Event ID: 4624
Task Category: Logon
Level: Information
Keywords: Audit Success
User: N/A
Computer: 2K8R2NPS.ad.rsasser.net
Description:
An account was successfully logged on.

Subject:
Security ID: S-1-5-21-3250969430-3741033745-72471029-1108
Account Name: r
Account Domain: R
Logon ID: 0x25B9B

Logon Type: 3

New Logon:
Security ID: S-1-5-21-3250969430-3741033745-72471029-1643
Account Name: vic
Account Domain: R
Logon ID: 0x6C395E
Logon GUID: {30ca5978-a4f4-d0c3-de5a-8a5e6b1f7bad}

Process Information:
Process ID: 0x654
Process Name: C:\Windows\explorer.exe

Network Information:
Workstation Name: 2K8R2NPS
Source Network Address: -
Source Port: -

Detailed Authentication Information:
Logon Process: Authz
Authentication Package: Kerberos
Transited Services: -
Package Name (NTLM only): -
Key Length: 0

This event is generated when a logon session is created. It is generated on the computer that was accessed.

The subject fields indicate the account on the local system which requested the logon. This is most commonly a service such as the Server service, or a local process such as Winlogon.exe or Services.exe.

The logon type field indicates the kind of logon that occurred. The most common types are 2 (interactive) and 3 (network).

The New Logon fields indicate the account for whom the new logon was created, i.e. the account that was logged on.

The network fields indicate where a remote logon request originated. Workstation name is not always available and may be left blank in some cases.

The impersonation level field indicates the extent to which a process in the logon session can impersonate.

The authentication information fields provide detailed information about this specific logon request.
- Logon GUID is a unique identifier that can be used to correlate this event with a KDC event.
- Transited services indicate which intermediate services have participated in this logon request.
- Package name indicates which sub-protocol was used among the NTLM protocols.
- Key length indicates the length of the generated session key. This will be 0 if no session key was requested.

Summary

LastLogonTimeStamp might not always be updated by an actual Logon. S4u2Self requests for access checks can update the attribute. In order to track down the requests that are updating the account, you need to dump the metadata for the account, locate the DC that updated the attribute and parse the logs for the 4769 Kerberos Service Ticket Operation made at the same time. The machine making the request will log a 4624 Logon Event.

Introduction to Active Directory Federation Services (AD FS) AlternateLoginID Feature


With all of this great talk about the Windows 8.1 Update, we here at ASKPFEPLAT didn’t want to leave out one important Active Directory Federation Services (AD FS) feature released with the Windows Server 2012 R2 Update.

My name is Keith Brewer and I am here to introduce this AD FS login enhancement feature called AlternateLoginID.

This blog is designed to discuss this new feature so you can test in your own AD FS lab. Don’t have a Lab? Head here to get started

AlternateLoginID is a feature introduced with Windows Server 2012 R2 Update that facilitates login to AD FS with an administratively defined user attribute.

Many customers would like their end users to only have to know their email address for ease of use when signing in. However, within a customer's AD DS environment, administrators may be unable to ensure that users' UserPrincipalName (UPN) and email addresses match. In addition, in some AD DS environments UPNs are not publicly routable, and that may pose challenges for some SaaS providers.

In Windows Server 2012 R2 Update, AD FS provides the capability for an administrator to enable user sign in via an alternate login ID that is an attribute of the user object in AD DS.

Once configured AD FS will prefer to locate the user account within AD DS by the defined alternate attribute first instead of UPN.

End users can still sign in to applications that rely on AD FS using any form of user identifier that is accepted by Active Directory Domain Services (AD DS). These include UPN (johndoe@contoso.com) or domain qualified SamAccountName (Contoso\johndoe or contoso.com\johndoe).

To support this feature, when a user authenticates successfully via the value of AlternateLoginID, a new claim will be entered into the claims pipeline. There is a new claim type for alternate login ID, described as follows:

http://schemas.microsoft.com/ws/2013/11/alternateloginid

· Display Name: Alternate Login ID

· Description: Alternate login of the user

Prerequisites

· The AD FS Service Account must have read permission to the Canonical Name attribute for all users in the directory.

o By default, Authenticated Users have this permission and should be sufficient for the AD FS Service Account.

o If multiple forests are identified in “LookupForests” these permissions are required for all users in each forest.

· Only supported on 2012 R2 with the Windows Server 2012 R2 Update applied

o This update must be applied to all federation servers

· The alternate login ID attribute must be indexed

o This is required for PowerShell to complete the configuration

· The alternate login ID attribute must be in the global catalog

o This is required for PowerShell to complete the configuration

· The alternate login ID attribute must be UPN format compatible

o Schema Details for UPN:

http://msdn.microsoft.com/en-us/library/windows/desktop/ms680857(v=vs.85).aspx

o The attribute value must be in UPN format (prefix@suffix) or a domain-qualified SamAccountName (NETBIOS Domain\SamAccountName or Domain FQDN\SamAccountName), because that is the format expected on the AD FS Sign-On Page.

· The value of the alternate login ID attribute must be unique across all the users

o If multiple forests are identified in “LookupForests”, the value of the alternate login ID must be unique across all forests.

Enabling AlternateLoginID

Install Windows Server 2012 R2 Update

Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2 Update April 2014

http://support.microsoft.com/kb/2919355

Enabling AlternateLoginID via PowerShell

The AlternateLoginID feature is enabled via PowerShell, as follows:

More information on this PowerShell command is here.

Set-ADFSClaimsProviderTrust -TargetIdentifier "AD AUTHORITY" -AlternateLoginID <attribute> -LookupForests <forest domain>

image
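For example, to use the mail attribute against a single lookup forest (the attribute and forest name here are illustrative assumptions, not requirements):

Set-ADFSClaimsProviderTrust -TargetIdentifier "AD AUTHORITY" -AlternateLoginID mail -LookupForests contoso.com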

Verifying that AlternateLoginID is enabled

PowerShell is also used to verify the AlternateLoginID feature, as follows:

Get-ADFSClaimsProviderTrust -Identifier "AD AUTHORITY" | ft AlternateLoginID,LookupForests

image

Disabling AlternateLoginID

The AlternateLoginID Feature is disabled via PowerShell, as follows:

Set-ADFSClaimsProviderTrust -TargetIdentifier "AD AUTHORITY" -AlternateLoginID $NULL -LookupForests $NULL

image

How AlternateLoginID Works

Conversation Flow:

ADFS

1.) Credentials are provided to the AD FS Server

The AlternateLoginID lookup only occurs for the following scenarios

· User signs in using AD FS form-based page (FBA)

· User on rich client application signs in against username & password endpoint

· User performs pre-authentication through Web Application Proxy (WAP)

· User updates the corporate account password with AD FS update password page

2.) Is AlternateLoginID enabled?

If enabled, the AD FS server issues an LDAP search against all defined lookup forests, targeting enabled user objects whose defined AlternateLoginID attribute matches the value the user supplied.

The query returns the user object’s SamAccountName, CanonicalName and DistinguishedName attributes.

3.) Was a value found?

If not, AD FS attempts to log the user on with the credentials as supplied.

4.) Was the value found for a unique user object across all lookup forests?

If not, the value is ambiguous; AD FS throws an error and the logon fails.

5.) Are the values for CanonicalName and SamAccountName properly formatted?

If not, AD FS throws an error and the logon fails.

If so, AD FS builds logon credentials from the results of the LDAP query and logs the user on.

Best Practices

When selecting the attribute that will serve as the AlternateLoginID, administrators should consider the following:

· The value of the AlternateLoginID attribute must be unique across all users in all defined lookup forests

· The value of the AlternateLoginID attribute should also be unique with respect to all users’ UserPrincipalName values.

· Users should not be able to update or edit their own (or other users’) value for the alternate login ID attribute.

When considering AlternateLoginID, it is important to carefully evaluate each relying party trust and how it may or may not be affected by this change. If a relying party’s claim rules depend on the UserPrincipalName value, those rules may need to be updated to use the value of the defined AlternateLoginID attribute.

If multiple lookup forests are defined, administrators should consider placing a global catalog for each forest in close proximity to the AD FS Servers.
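Before enabling the feature, it can be worth checking the chosen attribute for duplicates. A minimal sketch, assuming mail is the chosen attribute and the ActiveDirectory module is available (run it once per lookup forest):

# Hypothetical duplicate check; adjust the attribute name to your chosen AlternateLoginID
Get-ADUser -Filter 'mail -like "*"' -Properties mail |
    Group-Object -Property mail |
    Where-Object { $_.Count -gt 1 } |
    Select-Object Name, Count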

Test lab overview

In this test lab, AD FS is deployed with:

· One Server that is configured as a Domain Controller (any supported server operating system is fine for the domain controller for the purposes of this lab)

· One Member Server running Windows Server 2012 R2 with KB2919355 that is configured as an Active Directory Federation Server

· One standalone server running Windows Server 2012 R2 with KB2919355 that is configured as a Web Application Proxy (WAP) server

· One Member Server that is configured as a Web Server and hosts a relying party claims-aware application (any supported server operating system is fine for the Web Server for the purposes of this lab)

· One standalone client computer (any supported client operating system is fine for the internet client for the purposes of this lab.)

This lab configuration is originally based on the following setup:

How to Build Your AD FS Lab on Server 2012 Part 1 http://blogs.technet.com/b/askpfeplat/archive/2013/12/09/how-to-build-your-adfs-lab-on-server-2012-part-1.aspx

How to Build Your AD FS Lab on Server 2012, Part 2: Web SSO http://blogs.technet.com/b/askpfeplat/archive/2013/12/23/how-to-build-your-adfs-lab-on-server-2012-part2-web-sso.aspx

How to Build Your ADFS Lab on Server 2012 Part 3: ADFS Proxy http://blogs.technet.com/b/askpfeplat/archive/2014/03/17/how-to-build-your-adfs-lab-on-server-2012-part-3-adfs-proxy.aspx

AD FS Servers were then migrated to Windows Server 2012 R2

No worries there, ASKPFEPLAT has you covered as well:

http://blogs.technet.com/b/askpfeplat/archive/2014/03/31/how-to-build-your-adfs-lab-part4-upgrading-to-server-2012-r2.aspx

Migrating Active Directory Federation Services Role Service to Windows Server 2012 R2 http://technet.microsoft.com/en-us/library/dn486815.aspx

Lastly the Windows Server 2012 R2 April 2014 Update was applied

Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2 Update April, 2014 http://support.microsoft.com/kb/2919355

Important

The following instructions are for configuring an AD FS test lab using the minimum number of computers. Individual computers are needed to separate the services provided on the network and to clearly show the desired functionality. This configuration is neither designed to reflect best practices nor does it reflect a desired or recommended configuration for a production network. The configuration, and all other configuration parameters, is designed only to work on a separate test lab network.

Attempting to adapt this AD FS test lab configuration to a production deployment can result in availability, configuration or functionality issues.

This AD FS lab consists of three subnets that simulate the following:

· The Internet

· A DMZ network

· An internal network

ADFSLab

Software requirements

The following are required components of the test lab:

All AD FS and WAP servers must be running Windows Server 2012 R2 with KB2919355

Step 1: Enable the Alternate Login ID Feature in AD FS

The default configuration is for this feature to be disabled.

Below are the steps required to enable this feature:

To enable alternate login ID

1. On one of the federation servers, open PowerShell.

Set-ADFSClaimsProviderTrust -TargetIdentifier "AD AUTHORITY" -AlternateLoginID <attribute> -LookupForests <forest domain>

image

Step 2: Configure Claims Rules for Demonstration

Add Pass through rule to Relying party for UPN

1. Log onto AD FS Server as Administrator

2. In Server Manager, Select the Tools Menu

3. In the Tools Menu, Select AD FS Management

4. In AD FS Management, On the console tree expand Trust Relationships

5. Under Trust Relationships, Expand Relying Party Trusts

6. Right Click the Relying Party Trust for your Claims Aware Application

7. In the context menu, Select Edit Claims Rules…

8. In Edit Claims Rules, Select Add Rule…

9. In Add Transform Claim Rule, in the Claim Rule Template dropdown

a. Select “Pass Through or Filter an Incoming Claim”

10. In the Add “Transform Claim Rule Wizard”

a. Choose any appropriate name

b. Incoming Claim type, Select UPN

c. Leave default selection “Pass Through all Claim Values”

11. Select Finish

Add Pass through rule to Relying party for Alternate Login ID

1. Log onto AD FS Server as Administrator

2. In Server Manager, Select the Tools Menu

3. In the Tools Menu, Select AD FS Management

4. In AD FS Management, On the console tree expand Trust Relationships

5. Under Trust Relationships, Expand Relying Party Trusts

6. Right Click the Relying Party Trust for your Claims Aware Application

7. In the context menu, Select Edit Claims Rules…

8. In Edit Claims Rules, Select Add Rule…

9. In Add Transform Claim Rule, in the Claim Rule Template dropdown

a. Select “Pass Through or Filter an Incoming Claim”

10. In the Add “Transform Claim Rule Wizard”

a. Choose any appropriate name

b. Incoming Claim type, Select Alternate Login ID

c. Leave default selection “Pass Through all Claim Values”

11. Select Finish

image
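If you prefer scripting over the wizard, both pass-through rules can also be set in one shot. A hedged sketch, assuming the relying party trust is named "ClaimsWeb" (replace it with your own trust name); note that -IssuanceTransformRules replaces the existing rule set, so include any rules you already rely on:

$rules = @'
@RuleName = "Pass through UPN"
c:[Type == "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/upn"] => issue(claim = c);

@RuleName = "Pass through Alternate Login ID"
c:[Type == "http://schemas.microsoft.com/ws/2013/11/alternateloginid"] => issue(claim = c);
'@

Set-AdfsRelyingPartyTrust -TargetName "ClaimsWeb" -IssuanceTransformRules $rules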

Step 3: Take Note of Test User Attribute Values

1. Log on to Domain Controller as Administrator

2. Open PowerShell

3. Run the following command

Get-ADUser -Filter 'SamAccountName -eq "TESTUSERNAME"' -Properties * | fl SamAccountName,UserPrincipalName,Mail

If you receive the following Error:

clip_image002

Import the Active Directory PowerShell module with:

Import-Module ActiveDirectory

image

Step 4: Navigate to ClaimsWeb Application from External Client

1. Log onto External Client

2. Open Internet Explorer, Navigate to ClaimsWeb Application

3. On the forms-based logon page (presented by the WAP), enter the user's email address and password

image

4. Notice that since we signed in via an alternate login ID, that claim type is included (due to the claims rules above)

image

Additional Information

For additional information, including events and performance counters specifically added for this feature, see:

Configuring Alternate Login ID

http://technet.microsoft.com/en-us/library/dn659436.aspx

Keith “What’s your ALTID” Brewer

How to use the File Server Capacity Tool (FSCT) on Server 2012 R2


Hi, my name is Tom Ausburne and I recently wrote an article simplifying the instructions to use the ADTest tool and verifying that it will work on newer operating systems. The logical follow-up to that article deals with another testing tool from Microsoft, the File Server Capacity Tool (FSCT for short). This tool helps simulate CIFS/SMB/SMB2 client requests while putting a load on your file servers. It is a bit newer than the ADTest tool, and the supported operating systems are Windows Server 2008 and Windows Server 2008 R2. But those operating systems, while still in wide use today, aren’t the latest and aren’t the ones customers are asking about. So let’s dive in and see if this tool still works on Windows Server 2012 R2 and Windows 8.1. If you want to follow along you can download the tool here.

File Server Capacity Tool v1.2- (64 bit)
http://www.microsoft.com/en-us/download/details.aspx?id=27284

Once you download the tool you can extract it and you will get a folder named 64bit. Inside the folder are all the files you will need including an instruction document named FSCTStepbyStepGuide.doc. All of the steps in this blog were taken from that document and are based on running the tool with Active Directory. There are also instructions on running it in Windows Workgroups, non-Windows servers and against a singleton Windows cluster in the document if you want to test in those environments.

Here is my little Disclaimer: The included Step by Step Guide does a wonderful and thorough job of describing each scenario and uses standard Microsoft naming conventions. I set all of this up in my lab using my existing domains so the names will be different. I’ll point out where things need to be changed to work in your scenario so this should all be fairly quick and painless. I should also mention that this should be used in a testing or lab environment and not in production. You wouldn't want to bring down your production servers.

This exercise uses the following configuration:

image

DC3 – Domain Controller

FSCTServer – File server that shares the files; it is this server’s performance that is tested

FSCTController – Used to synchronize test activity and collect test data

81Client – Client computer that generates the workload to stress the file server

Obviously this is a very simple setup and will not put a huge load on the server. The Step by Step Guide shows that you can set up numerous clients over multiple networks, which is preferable. But the purpose of this article is to get you set up and running quickly and with very little pain. Once you have this working you can add and make things as complex as you like.

As in the previous article I am going to assume you know how to set up a basic Active Directory domain with servers and clients and won’t be covering that. So let’s get started.

Hardware Specifications

· Active Directory Infrastructure (at least 1 domain controller)

· 1 server computer running at least Windows Server 2008 R2 with sufficient disk space on one or more volumes designated for use in the test.
Note: These volumes will be formatted.

· 1 controller computer running at least Windows Server 2008 R2 with 1GB of available disk space

· 1 or more client computers running at least Windows 7 client with 1GB of available disk space

Machine Setup

The first thing to do is to copy the files contained in the 64bit folder (mentioned above) to a directory on each machine involved in the testing. To keep things simple I created a directory in the root of C: called FSCT.

clip_image001
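If you would rather not copy the folder by hand, a quick sketch using the administrative shares (the machine names are the ones from this lab) is:

# Push the extracted FSCT files to C:\FSCT on each machine via the admin share
'FSCTServer','FSCTController','81Client' | ForEach-Object {
    Copy-Item -Path C:\FSCT -Destination "\\$_\c$\" -Recurse -Force
}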

Now comes the part where this article “should” make things easier. For me, it’s always the syntax and the lack of understandable examples that make things difficult. I hope I have resolved this for you. In the hopes of answering some what-if questions I have listed additional examples at the end of the article.

You need to prepare the domain controller, server, FSCT controller and the client(s) to work with FSCT. This makes sure that all of the users, files and directories have been created.

Prepare the Domain Controller

Open an elevated command prompt, change to the FSCT directory and use this command:

fsct prepare dc /users 10 /clients 81Client /password Password1

Where:

/users – Specify the number of users per client computer

/clients – Specify the names of the client computers (in my case it was just 81Client)
If you have several you would list them all separated by a comma.
Ex: 81Client,82Client,83Client,84Client

/password – Specify the password for all users (They all use the same password)

image

You should see users created in Active Directory.

image

Prepare the File Server

Open an elevated command prompt, change to the FSCT directory and use this command:

fsct prepare server /clients 81Client /password Password1 /users 10 /domain domain2.com /volumes D: /workload HomeFolders

Where:

/clients – Specify the names of the client computers (in my case it was just 81Client)
If you have several you would list them all separated by a comma.
Ex: 81Client,82Client,83Client,84Client

/password – Specify the password you used in the previous step

/users – Specify the number of users you used in the previous step

/domain – Specify the name of your domain (my domain was called domain2.com)

/volumes – Specify the list of volumes to be formatted during the prepare and cleanup phases. (For me there was just one volume, D: ) If you have multiple volumes just separate them with a comma. Ex: D:,E:,F:

/workload – Specify the workload name. FSCT includes one workload, HomeFolders, which simulates user activity when the server’s primary function is to store the user’s home directory.

image

This will format the volume specified and create an extensive folder structure that will be used for testing.

image

Prepare the Controller

Open an elevated command prompt, change to the FSCT directory and use this command:

fsct prepare controller

There are no required parameters for this command although the Step by Step guide shows them. You can specify a config file and logging options if you want. The parameters are detailed on page 39 of the Step by Step guide.

image

Prepare the Client(s)

Open an elevated command prompt, change to the FSCT directory and use this command:

fsct prepare client /server fsctserver /password Password1 /users 10 /domain domain2.com /server_ip 192.168.0.203

Where:

/server – Specify the name of the server you will be testing

/password – Specify the password you used in the previous steps

/users – Specify the number of users you used in the previous steps

/domain – Specify the name of your domain (my domain was called domain2.com)

/server_ip - Specify the IP address of the file server to be used by the client computer.

image

That’s it. That is how simple it is to set all this up. Of course like I said, this is only using 1 client and there is only one network and one volume, but you should be able to get the idea on how you can easily expand this setup. Now let’s do some testing.

Running the Test

On the Client(s)

Open an elevated command prompt, change to the FSCT directory and use this command:

fsct run client /controller fsctcontroller /server fsctserver /password Password1 /domain domain2.com

Where:

/controller – Specify the name of the controller

/server – Specify the name of the server you will be testing

/password – Specify the password you used in the previous steps

/domain – Specify the name of your domain (my domain was called domain2.com)

image

On the Controller

Open an elevated command prompt, change to the FSCT directory and use this command:

fsct run controller /server fsctserver /password Password1 /volumes D: /clients 81Client /min_users 2 /max_users 10 /step 2 /duration 360 /workload HomeFolders

Where:

/server – Specify the name of the server you will be testing

/password – Specify the password you used in the previous steps

/volumes – Specify the list of volumes to be formatted during the prepare and cleanup phases. (For me there was just one volume, D: ) If you have multiple volumes just separate them with a comma. Ex: D:,E:,F:

/clients – Specify the names of the client computers (in my case it was just 81Client)
If you have several you would list them all separated by a comma.
Ex: 81Client,82Client,83Client,84Client

/min_users - Specify the minimum number of users to be used during the test.

/max_users - Specify the maximum number of users to be used during the test.
(This number should be less than or equal to the number of users you created)

/step - This value is how many users the test will increase by with each iteration between the values for min_users and max_users.

/duration - Specify the duration, in seconds, of a single iteration of the test. If this value is set to 0 (zero), the test will finish when all high priority scenarios are complete. Also, if this value is set to 0 (zero), the profile configuration file must contain a maximum number of iterations to run, or the test will run indefinitely.

/workload – Specify the workload name. FSCT includes one workload, HomeFolders, which simulates user activity when the server’s primary function is to store the user’s home directory.

image

While the test is running you can look at the Resource Monitor (or Perfmon Counters) to see if everything is working as expected.

image

Cleaning Things Up

After testing, the cleanup process deletes users, files, and directories from the server, controller, and client computers.

Cleanup is required when:

· All your testing is finished

· You have made configuration changes to the server, controller, clients, volumes, or users; or the test failed or aborted during a test run.

Cleaning the File Server

Open an elevated command prompt, change to the FSCT directory and use this command:

fsct cleanup server /clients 81Client /users 10 /volumes D: /domain domain2.com

Where:

/clients – Specify the names of the client computers (in my case it was just 81Client)
If you have several you would list them all separated by a comma.
Ex: 81Client,82Client,83Client,84Client

/users – Specify the number of users you used in your testing

/volumes – Specify the list of volumes to be formatted during the prepare and cleanup phases. (For me there was just one volume, D: ) If you have multiple volumes just separate them with a comma. Ex: D:,E:,F:

/domain – Specify the name of your domain (my domain was called domain2.com)

image

Cleaning the Controller

Open an elevated command prompt, change to the FSCT directory and use this command:

fsct cleanup controller /backup c:\fsct_backup

Where:

/backup – The folder you want to back up the collected data to

image

Cleaning the Client

Open an elevated command prompt, change to the FSCT directory and use this command:

fsct cleanup client /users 10 /domain domain2.com

Where:

/users – Specify the number of users you used in your testing

/domain – Specify the name of your domain (my domain was called domain2.com)

image

Cleaning the Domain Controller

This is a manual but simple process. Just delete all the users you created for this test. That’s all there is to clean up. You can start over and redo all the steps again with different settings.
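If the test created a large number of accounts, a rough sketch for bulk removal follows; check Active Directory Users and Computers first, because the filter must match the naming pattern FSCT actually used in your lab:

# Hypothetical bulk cleanup; the "fsct" prefix is an assumption - adjust the filter to the accounts you see in AD
Get-ADUser -Filter 'SamAccountName -like "fsct*"' | Remove-ADUser -Confirm:$false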

Reviewing the FSCT Test Results

After running the test, the performance results will be stored on the Controller computer. A sample results file can be seen in the FSCT Step by Step Guide.

Final Thoughts

As I mentioned earlier, this post doesn’t walk you through all the different scenarios that are possible or even set things up to put a load on the server. The main purpose was to show how easy this is to set up and get running. A few extra clients and a few different settings and you could be on your way to stressing a server to its limits. And isn’t that the whole purpose of this tool?

Oh, I mentioned earlier that for those of you who want to see what some expanded syntax looks like here are a few examples:

If you want to create 1000 users and use 10 client computers it would look like this:

On the DC:

fsct prepare dc /users 1000 /clients Client1,Client2,Client3,Client4,Client5,Client6,Client7,Client8,Client9,Client10 /password Password1

image

If you want to create a folder structure for 1000 users on two different volumes for 10 clients it would look like this:

On the Server:

fsct prepare server /clients Client1,Client2,Client3,Client4,Client5,Client6,Client7,Client8,Client9,Client10 /password Password1 /users 1000 /domain domain2.com /volumes D:,F: /workload HomeFolders

image

If you want the client to prepare to run the test with 1000 users against the server it would look like this:

On the Client:

fsct prepare client /server fsctserver /password Password1 /users 1000 /domain domain2.com /server_ip 192.168.0.203

Note: A separate TCP connection is required for each user (the redirector checks the server name and will collapse multiple connections into one if it can). FSCT edits the hosts file (%windir%\system32\drivers\etc\hosts) and adds a separate entry for each user. You must run “fsct cleanup” before you run “fsct prepare client” again. If not, the hosts file may not contain all of the correct information, and subsequent runs are likely to fail with server access errors.

I hope this post will make using this tool much easier. I’m a big fan of the “do these things in this order and by the way here are the screenshots of what it looks like” method.

Tom Ausburne

Secure Extranet Publication of Exchange 2010 OWA via Server 2012 R2 Web Application Proxy


Hello all, Jesse Esquivel here again with a post about publishing Exchange 2010 via Server 2012 R2 Web Application Proxy (WAP). Before we get started on this post I wanted to take a minute to talk about reverse proxy functionality and where Microsoft is headed with this technology. As you know, Threat Management Gateway (TMG) and Unified Access Gateway (UAG) have a definitive end of life. Some folks have looked into Internet Information Services Application Request Routing, or ARR. ARR is a web farm extension meant for publishing web sites; however, ARR does not do pre-authentication, there are no PowerShell cmdlets, there is no high availability, and there is no ongoing investment in ARR. We now have a new product that is included as a server role in Windows Server 2012 R2 – the Web Application Proxy, or WAP. WAP is a reverse proxy solution that relies on ADFS for publication of both claims-aware and non-claims-aware web applications. WAP is built for current and future web protocols; it understands ADFS, claims, and OAuth, and it can also do protocol transition and Kerberos constrained delegation. Specifically (and applicable to this post), protocol transition and KCD are required for smart card only authentication (authN) into extranet-published Kerberos-enabled web applications – one of the functionality sets that TMG and UAG provided. This means that WAP can publish claims-aware AND non-claims-aware web applications using smart card only authN.

Though this article is specific to publishing Exchange 2010 as an illustration/example, any Kerberos-aware web application can be published in this fashion using WAP.

Many customers are currently evaluating reverse proxy products for publishing things like Exchange, and other web applications for various reasons. Some customers are moving off of Internet Security and Acceleration (ISA) server for the first time. Some customers are publishing a new web application for the first time and want to publish with something that does not have a definitive end of life like TMG or UAG. Some customers may want to move off of TMG or UAG in preparation for the sunset of these products. Whatever the reasoning, WAP is a new Microsoft solution that is capable of doing reverse proxy for claims aware and non-claims aware web applications and offers forms based and certificate based pre-authentication as well as pass through authentication. Microsoft is committed to WAP and we plan to continue its evolution and add functionality over time - WAP is the strategic choice for application publishing from Microsoft. So if you’ve heard of it or been considering it, now is the time to get it in your labs and start evaluating it!

This post about publication of Outlook Web App (OWA) is targeted to folks that must use smart card only authentication for external access into OWA, or any other Kerberos aware web application for that matter. For lab use and discussion - today I’m writing to you about a high level overview of securely publishing Exchange 2010 OWA with smart card authN using Kerberos constrained delegation (KCD) via this new server role that is available in Server 2012 R2. In order to securely publish OWA we will use Server 2012 R2 Web Application Proxy, and Server 2012 R2 Active Directory Federation Services (ADFS 3.0). ADFS is a hard requirement for publishing via WAP. WAP and Server 2012 R2 ADFS provide a seamless and secure extranet publishing solution that can use strong two factor smart card authentication for OWA.

Overview

Here you’ll find an overview of a basic publication setup. We have the WAP box that sits in a DMZ construct, ADFS 3.0, a domain controller, and an exchange 2010 Client Access Server (CAS) array. All servers are joined to the same Active Directory domain.

image

Pre-Requisites

Couple of quick ones here (this is not an exhaustive list). Server 2012 R2 ADFS (3.0) is required.

DNS

· An external DNS record for the security token service running as an ADFS proxy service on the web application proxy will need to be created so internet clients can resolve the federation proxy service. The ADFS proxy service is integrated into the WAP server role.

· The external DNS record should resolve to the external firewall and traffic should be forwarded to the WAP server, which also acts as an ADFS proxy.

· An internal DNS record for the security token service running on the ADFS server will need to be created so that internal clients can resolve the federation service.

· If the internal domain DNS name and the external DNS A record of the federation service do not match, you will need to use a split-brain DNS configuration, which is out of scope for this blog post.

· Optionally if internal ADFS use is not required you can use a host file entry on the Web Application Proxy server that has the federation service name and the IP of the internal ADFS server.

WAP

· The WAP server must be joined to a domain that is trusted by the Exchange CAS boxes for KCD to function; the domain may even belong to a different forest as long as cross-forest KCD is enabled (2012 DCs required).

· The WAP server must be able to get client certificate revocation information via OCSP or CRL fetch; both use port 80, so modify your firewall rules accordingly.

· Internal communications – the WAP server must be able to talk to a DC in the domain it is joined to, the CAS array, and the ADFS box, so allow communication to these boxes or restrict ports accordingly.

· WAP listens on port 443 (SSL) for incoming client connections for published web applications.

· If requiring smart card authentication into Exchange 2010 (or any published web app) WAP also listens on port 49443, which is TLS client certificate authentication. Both of these ports must be opened/forwarded on your external firewall.

Certificates

· You’ll need the SSL cert for your OWA URL. Since users will be hitting the WAP box for the web site to be served the OWA SSL cert will need to be imported on the WAP box.

· You will also need an SSL cert for the federation service. This certificate should have its private key marked as exportable, as it will need to be imported onto the WAP box for the ADFS proxy component which is integrated with the WAP role.

· The OWA and the federation service certificates must be trusted by internet clients so typically you will want an SSL cert for each of these from public certificate authorities.

· Note: Server 2012 R2 ADFS 3.0 does not support the use of a CNG key, so you must create the certificate signing request with a legacy key.

Installation and Configuration

As it is a hard requirement for WAP, ADFS 3.0 is up first! ADFS 3.0 installation was covered here, so you can use that to get started. The federation service name for this solution is sts.treyresearch.com

Note: When configuring ADFS 3.0 for the first time the federation Service Name field CANNOT be the same as the DNS name of the ADFS server. A good example, and used here is:

· Federation Service Name: sts.treyresearch.com

· ADFS Server Name: adfs1.treyresearch.com

ADFS should now be installed and configured. Since Exchange 2010 is a non-claims-aware app and we are using smart card authN/KCD, we need to create an ADFS 3.0 non-claims-aware relying party trust. Open the ADFS management MMC snap-in. In the left pane, right-click Relying Party Trusts and click “Add Non-claims-Aware Relying Party Trust…”

image

At the welcome screen click Start, then at the specify display name screen type a display name for the trust. At the configure identifiers screen type at least one trust identifier; use something that you can easily identify and that is related to what you are doing. Here I used the example URL and swapped the DNS name to match my mail URL. This way when I look at the trust identifier (now or when troubleshooting later) I know that it identifies the non-claims-aware relying party trust for OWA. Click Add, then click Next.

image

At the “Configure Multifactor Authentication Now?” screen, click “I do not want to configure multi-factor authentication settings…” and click Next.

image

At the ready to add trust screen click next. At the completion screen check the following box and click close.

image

At the Issuance Authorization Rules screen click Add Rule at the bottom:

image

At the rule type screen select permit all users, click next, and then click Finish:

image
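The same trust can also be created from PowerShell. A hedged sketch using this post's example identifier (the trust name is arbitrary, and the authorization rule shown is the standard permit-all rule):

Add-AdfsNonClaimsAwareRelyingPartyTrust -Name "OWA Non-Claims Trust" `
    -Identifier "https://mail.treyresearch.com/owa/" `
    -IssuanceAuthorizationRules '=> issue(Type = "http://schemas.microsoft.com/authorization/claims/permit", Value = "true");'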

That wraps it up for the ADFS 3.0 part. Next we will go into configuring the WAP role.

Server 2012 R2 Web Application Proxy

Since WAP installation and configuration is covered here, I won’t go into it. Now that WAP is installed and configured with our federation service name of sts.treyresearch.com, and has the sts.treyresearch.com federation service certificate imported and configured for the ADFS proxy component we’re ready to move on, so let’s take a minute to review. We have the ADFS 3.0 box up, the federation service configured, and the non-claims aware relying party trust is configured. The WAP server is also installed and configured.

It’s time to publish Exchange 2010 OWA via WAP. But first you must be aware of the following…In order for Kerberos constrained delegation to function against an array of Exchange 2010 CAS you must configure all of the array members to use an alternate service account (ASA) credential for Kerberos and set a service principal name on the ASA. Please see the following articles to set the ASA on the CAS array members:

http://blogs.technet.com/b/kpapadak/archive/2011/03/13/setting-up-kerberos-with-a-client-access-server-array.aspx

http://technet.microsoft.com/en-us/library/ff808312(v=exchg.141).aspx

You must do this before publishing OWA via WAP as you will need the SPN that is used on the ASA credential. Once the ASA is configured on all CAS array members (as per the above link) proceed with publishing OWA via the WAP server. When configuring the ASA credential above make note of the SPN configured for the CAS ASA account as you will need it. For example here are the SPNs added in this setup:

image

On the WAP server open the remote access configuration manager by navigating to the start screen and typing “remote access.” In the right pane click Publish:

image

At the welcome screen click next. At the Preauthentication screen select ADFS and click next:

image

At the relying party trust select the Non Claims Aware RP trust that you created on the ADFS 3.0 server and click next:

image

Here’s where the magic happens, AKA where the rubber meets the road. At the publishing settings screen, select the SSL certificate for your published web application and configure the options shown. If you haven’t already, go ahead and import the web application’s (OWA’s) certificate with its private key into the WAP server’s personal or ‘My’ store.

Note: It’s best to import the cert and private key of your OWA cert beforehand; this way you can simply select it from the drop-down when prompted. The web site SSL certificate is required on the WAP server (since it is a reverse proxy) as it will be serving the web page to the internet client (the CAS server does not serve it directly).

image

If you are publishing an Exchange CAS Array enter the HTTP SPN configured when the ASA was set on all CAS array members *you remembered this from above, right* ; )

image

In order to take advantage of the CAS Array load balancing (NLB or HLB flavors) you’ll want to use the internal DNS FQDN of the CAS array VIP for the backend server URL field. Click next and then click Publish. You will then see the published application in the remote access console.

Note: If publishing Exchange OWA you will also need to publish the ecp directory following the same guidelines above. It’s best to avoid publishing the entire server and instead publish the URLs granularly.

image
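For reference, the equivalent publication from PowerShell looks roughly like this (the URLs, trust name, SPN, and certificate thumbprint below are placeholders from this lab, not required values):

Add-WebApplicationProxyApplication -Name "Exchange 2010 OWA" `
    -ExternalPreauthentication ADFS `
    -ADFSRelyingPartyName "OWA Non-Claims Trust" `
    -ExternalUrl "https://mail.treyresearch.com/owa/" `
    -ExternalCertificateThumbprint "<OWA certificate thumbprint>" `
    -BackendServerUrl "https://casarray.treyresearch.com/owa/" `
    -BackendServerAuthenticationSPN "http/casarray.treyresearch.com"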

Ok, now that we have OWA published, let’s configure KCD so that the WAP server can authenticate to the CAS array on behalf of the user. KCD is necessary to protocol transition a smart card credential to a Kerberos token. This step can be done before or after publishing the application via WAP, so long as the SPN has already been set on the CAS array.

Login to one of your domain controllers (or via RSAT) and locate the WAP computer object in Active Directory Users and Computers.

· Right click the object and click properties.

· Select the delegation tab, click add and select the computer or user object that has the registered SPN for your web application.

· In this case we are publishing Exchange 2010 OWA so we select the user or computer object that was used as the alternate service account (ASA) where the HTTP SPN was registered.

· Select “Trust this computer for delegation to specified services only,” and “use any authentication protocol”

image
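Before committing the change in the GUI, note that the same delegation settings can also be scripted with the Active Directory module; a rough sketch (WAP01 and the SPN are placeholders for this lab's WAP server and the CAS array ASA) follows:

# Allow the WAP computer account to delegate to the CAS array's HTTP SPN and to use protocol transition
Set-ADComputer WAP01 -Add @{'msDS-AllowedToDelegateTo' = 'http/casarray.treyresearch.com'}
Set-ADAccountControl -Identity 'WAP01$' -TrustedToAuthForDelegation $true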

Click OK to commit the changes to the WAP server computer object. Yes, that’s it. That’s all you have to do for KCD. So, we’ve installed and configured ADFS and WAP. Published Exchange 2010 OWA/ECP via WAP, and configured KCD for the WAP box. All this with smart card only authN into OWA. I have this solution installed in Windows Azure and I’ve turned off Forms based authentication in ADFS so that only certificate authentication can be used.

To do that open your ADFS snap-in on the ADFS server and click Authentication Policies in the left pane, then in the right pane under Primary Authentication | Global Settings click Edit.

image

Then under Extranet simply uncheck the Forms Authentication box.

image

Since forms-based authentication is turned off in ADFS, when you hit the URL you are presented with a certificate chooser. So here’s the user experience (you’ll notice in the shots below I’ve since changed the federation service name to trazurepki.cloudapp.net).

1. From an internet client the user enters the externally published URL in IE (https://mail.treyresearch.com/owa/) and hits ADFS proxy page on the WAP server and is prompted with a certificate chooser:

image

2. User selects certificate and enters PIN:

image

3. User is logged into their exchange 2010 mailbox:

image

Note: You can use the IE developer tools, specifically the network utility to view the redirects in IE on your way to the final destination (OWA). This can be somewhat misleading. Since we are publishing OWA via WAP, the WAP server is the only thing that is exposed to the internet and it also runs the ADFS proxy component. So when you see the OWA URL, and the ADFS URL (trazurepki.cloudapp.net in this case) they both actually resolve to the same IP…you guessed it the external interface IP of the WAP box. WAP behaves similarly to ISA/TMG in regards to reverse web proxy, the internet client only talks to the WAP box, and the WAP box proxies everything internally (to exchange in this case) via Kerberos constrained delegation. I have this entire solution published in Windows Azure, there is no real mail routing capability but it’s great to illustrate the use of exchange publication via Server 2012 R2 WAP. This is a new technology that I’m excited to see included as a server role in the Windows Server platform, add ADFS 3.0 as a hard requirement for WAP and you have a flexible and secure way to publish both claims aware and non-claims aware applications via WAP.

Server 2012 R2 adds great installation and configuration capabilities via Windows PowerShell cmdlets, WAP and ADFS 3.0 are no exception to this. In fact some WAP and ADFS 3.0 configurations can only be done via PowerShell. Additionally you can certainly go through an installation and configuration of this solution solely based on PowerShell.

Well folks once again I hope you found this post useful when developing solutions and discussing security (smart card authN), extranet access, and Exchange publication with your customers and partners. Up next I’ll cover publishing Exchange 2010 ActiveSync via Server 2012 R2 Web Application Proxy using pass-through authentication. See you next time!

Jesse “Hit-Man” Esquivel

Windows Unplugged Event Dates, Don’t Miss Out


Hey y’all, Mark back with a quick thing to make you aware of. Friend of the blog and former Springboard Series editor Stephen Rose is going back on the road like a famous rock band to talk Windows, and he might be coming to a city near you. All the info can be found here. Some things to note: seats are limited, so jump on this. This is also geared more toward the IT Pro, but developer folks can also attend. There will also be some food and drinks with a side of swag if you are lucky enough to win. So don’t miss out, stop what you are doing and plug in to the Windows Unplugged event (see what I did there). Until next time.

Mark “was a roadie for Stillwater in ‘73” Morowczynski

A Look Back at KB2953095 Mitigation Option: Using Group Policy to Open RTFs in Protected View


Hi Folks,

Dan Cuomo, Microsoft Platforms PFE, here to discuss one of the mitigation options available for the recent KB2953095: “Vulnerability in Microsoft Word could allow remote code execution.”

If you subscribe to the technical security notifications you may have been relieved to read the advance notification that came out on April 3rd announcing the patch. Prior to the patch release (which was April 8th), many fellow IT Pros were scrambling, as my phone bill will attest. I’ve been on calls with customers, calls with the customer reviewing that call, calls from other customers who asked what the aforementioned customers decided on their calls. But all of this rigmarole was for a very good reason; this was a clear and present danger (sure I forced that…I’m no Jeff Stokes).

Now if you’re saying to yourself, “What’s the big deal, Dan? We approve all the security patches that come through WSUS so we should be good-to-go, right…? …right…?”

…wait for it….

During the long literary silence, you may have considered your timeline. After receiving the patch on the 8th, testing your lab, and finally applying it to the end user systems, Joe and Jane User (it’s a family business) may have been stuck with plain-text email for over a month.

Now take a stroll towards your help desk and ask if the call volume has been unusually high recently…

While plain-text email is certainly an inconvenience, it wasn’t considered a show-stopper for customers. It’s also only one of the mitigation methods. This specific vulnerability was identified as affecting RTFs in Word. This led many customers to consider using the “fix-it” (in the KB above) to disable the ability to open RTFs altogether. The big concern is that for many customers, disabling RTFs in this manner is out of the question, at least for some users in the enterprise. It would help protect them, but ultimately break their LOB applications.

If this was your situation, this article is for you. I won’t go into all of the mitigation techniques identified in the Security Advisory, or even discuss the intricacies of the vulnerability (for that, head over to Recommendation to Stay Protected and for Detection). What I will show you is how you can use our great friend ** insert trumpets here ** group policy to turn a difficult crisis into a simple, minimally invasive, and temporary solution. As always, remember test in your lab first!

** To be clear, you should always investigate and implement as many mitigation options as possible while balancing the functionality of your mission critical apps.

Force RTFs into Protected View Using Group Policy

First, download the Office ADMX Templates for the installed version of Office in your enterprise (2010 or 2013). Double-click on the download (admintemplates_64bit.exe in my case) and point the installer to a folder. The installer creates two folders and an Excel spreadsheet.

image

Navigate into the ADMX folder. As you can see, the full list of Office ADMX files for each Office application is listed. There are also language specific folders containing ADML files. We’ll need the corresponding files for Word.

image

Note: You could easily copy all the files into the central store, or the ADMX files and a single language-specific folder. In this example, I’ll only include the files necessary for this mitigation. If you haven’t already created your central store and want a bit more information on taking the plunge, check out Tom Moser’s and Mark Morowczynski’s article.

Copy the files over to your central store

image

image

Open Group Policy Management and create a new policy, linking it to the organizational unit containing the users that require the policy. Remember to name this appropriately and provide an accurate comment. This is not a long-term solution, so we’ll want to make sure that we can easily identify and remove the policy later.

image

Right-Click and edit the policy. As you can see, the administrative templates are now being pulled from the central store. The available policies are all located under User Configuration.

image

Navigate to Administrative Templates > Microsoft Word 2013 > Word Options > Security > Trust Center > File Block Settings

image

Enable the “RTF Files” Policy and Select “Open in Protected View.” As the description indicates, this setting will only allow the opening of the file in Protected View and will disable editing and saving of RTFs.

image

Log into a client machine with a user to which the policy applies and open any RTF file. As you can see, the document opens in Protected View.

image

When I attempt to edit the RTF, I receive the following message in the Status bar.

image

Clicking on the bar at the top (“Protected View: Editing this file type is not allowed due to your policy settings. Click for more details.”) brings me to the following:

image

Clicking on “File Block Settings” identifies the locked setting, denying access to open or save RTFs

image

Lastly, when we look at the registry we see the rtffiles dword added to the registry and set to 4.

image
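To confirm the value on a client from PowerShell, something like the following works; the path shown is the Office 2013 (15.0) policy hive, so verify it against what your own screenshot shows:

# Read the File Block policy value applied to the user (4 = Open in Protected View, block edit/save)
Get-ItemProperty -Path 'HKCU:\Software\Policies\Microsoft\Office\15.0\Word\Security\FileBlock' -Name rtffiles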

Removal of the GPO

As we previously mentioned, this was a quick and temporary workaround until the patch arrived. Once that occurred and you verified that your systems have been updated, it’s time to remove the mitigation and enable RTF functionality for your users again.

You may have noticed that the registry key associated with this policy lives in one of the “preferred” group policy user settings locations; it is a “true policy.” This means that the key is fully managed and the system will return to its default behavior when the policy falls out of scope.

As a test, we can simply unlink the policy.

image

Log back into the client system and try opening your RTF again. I no longer see the “Protected View” message. I can edit and save the document.

image

The trust center confirms

image

And the registry keys are removed

image

Mixed Office Version Environments

The above will work, but unfortunately only for Office 2013. Do all of your systems have the same version of Office installed? Perhaps you’re in the middle of an upgrade or maybe you have another business justification. Whatever the reason, the simplest fix is to add the Office 2010 ADMX files to your central store.

image

image

You’ll notice that there are additional Policies for Office 2010

image

You can configure this in the exact same manner as with Office 2013.

image

I would recommend using separate policies for simplicity. I’m not going to venture into the arena of how you should scope the policies because ultimately, it depends. However, if you’re considering a WMI filter, make sure you’re aware of Ned’s warning on How to NOT Use Win32_Product in Group Policy Filtering.

image

image

After logging back onto the client, Office 2010 also opens the RTF in Protected View

image

The “downside” of not filtering in some manner is that you’ll have processed the extra setting in that GPO for an application version you may not have installed.

image

Keep in mind that there are other mitigations to this vulnerability as identified in the advisory however the implementation of any of these must ultimately accommodate your business requirements.

On a separate note, testing has shown that Microsoft’s free tool EMET thwarts this exploit using its default configuration.

However, if you do not currently have EMET in your environment, the middle of a crisis is not the time to start rolling it out. You need to vet your business applications for compatibility, yet protect against this threat immediately. Until then, using group policy and the Office ADMX templates in the central store allowed us to easily and quickly implement a minimally invasive mitigation technique.

Until next time,

Dan “I don’t always open RTFs but when I do, I open them in Protected View” Cuomo

List of References:

KB2953095 - Microsoft Security Advisory: Vulnerability in Microsoft Word could allow remote code execution

Microsoft Security Advisory (2953095) - Vulnerability in Microsoft Word Could Allow Remote Code Execution

Security Advisory 2953095: recommendation to stay protected and for detections

Microsoft Security Bulletin Advance Notification for April 2014

Advance Notification Service for the April 2014 Security Bulletin Release


How to Clean up the WinSxS Directory and Free Up Disk Space on Windows Server 2008 R2 with New Update


It’s finally here! After pages and pages of comments from you requesting the ability to clean up the WinSxS directory and component store on Windows Server 2008 R2, an update is available.

http://support.microsoft.com/kb/2852386

As a refresher, the Windows Server 2008 R2 update is directly related to my previous blog post announcing a similar fix for Windows 7 client

The Windows 7 version of this fix introduced an additional option to the Disk Cleanup wizard that would cleanup previous versions of Windows Update files. KB2852386 adds a Disk Cleanup option on Windows Server 2008 R2, similar to the Windows 7 update. 

What does this mean for Windows Server 2008 R2? After installing this update and prior to being able to perform the cleanup, the Desktop Experience feature must be installed. Why you ask? Disk Cleanup is not installed by default on Windows Server 2008 R2. It is instead a component installed with the Desktop Experience feature. 

Why was the update not included as a DISM switch like Windows Server 2012 R2? 

This was evaluated; however, due to the amount of changes required and the rigorous change approval process, it was not feasible to backport the functionality this way. Knowing that it would be some time before everyone could upgrade to Windows Server 2012 R2, and based on feedback from an internal survey taken of a subset of enterprise customers, it was determined that this update would still be useful in its Disk Cleanup form, even with the Desktop Experience prerequisite. We hope you agree. However, we are aware that for some of you the Desktop Experience requirement will be a deal breaker, but we decided to release it anyway hoping it will help in some instances. 

How can I get the update?

The update is available on Windows Update. It can also be manually downloaded from the Microsoft Update Catalog. The KB article listed above will also direct you to a download link in the Microsoft Download Center.

Let’s Cleanup those Old Windows Update Files!

First, let’s take a look at our starting point. Looking at my Windows 2008 R2 Server with SP1 installed, according to Windows Explorer, the size of my Windows/WinSxS directory is as follows: 

The size of the WinSxS directory will vary by server. Some of you will have smaller WinSxS directories, some larger.  

Installing the update is just like installing any other update. Just download and double-click on the .msu file: 

Installing the update does not require Desktop Experience to be installed beforehand, but if you check your WinSxS directory again, you’ll see there has been no change to the size. This is expected as we need to run Disk Cleanup in order for this to take effect. It also does not require a reboot to install the hotfix. 

But…we can’t do anything with what we just installed until we get Disk Cleanup which is installed with the Desktop Experience feature. 

When installing Desktop Experience, it does require additional features. Select the button to Add Required Features and click Next and then Install: 

A reboot is required to finalize the install. 

Click Close and Reboot when prompted. 

After we reboot, a Disk Cleanup option can be found under Start --> All Programs --> Accessories --> System Tools:

On launch, Disk Cleanup prompts for the drive you want to clean up: 

After clicking Ok, a scan is performed: 

Several options are provided for cleanup, including a new option for Windows Update Cleanup:

Just like the Windows 7 cleanup, mileage will vary. Also like Windows 7, the actual cleanup occurs during the next reboot. After the reboot, taking a look at the WinSxS directory, it has shrunk to the following: 

Automation

My super knowledgeable scripting cohort Tom Moser wrote a PowerShell script that automates THE ENTIRE PROCESS. Can I get a cheer? Ok. So maybe it is a bit much to expect IT admins to cheer, but can I get an appreciative grunt?  The script certainly beats the alternative of doing this all manually. 

You can find the script on the TechNet Script Center here: 

http://gallery.technet.microsoft.com/scriptcenter/CleanMgrexeKB2852386-83d7a1ae

What does the script do? 

In short, the script does the following: 

1) Installs Desktop Experience, if not previously installed, and performs a reboot. 

2) Sets the appropriate registry keys to automate the cleanup. The script will clean up not only previous Windows Update files but also Service Pack files. 

3) The script then initiates the cleanup. 

4) If Desktop Experience was not previously installed, the script uninstalls it.

5) Performs final reboot. 

For more details, read below.  

The script can be run from any directory on the server. It has two parameters: LogPath and a switch called NoReboot. LogPath will allow the user to specify a log location or if none is specified, by default, the script will create a log in the same directory from which the script was executed. NoReboot allows the user to suppress reboots, but will require manual reboots by an administrator. 

Note: Make sure to check the log file to verify the process completed successfully and to verify there is no manual interaction required. If the script has completed successfully, the log will end with CleanMgr complete.

The script has several phases, using a registry key to keep track of progress. After initial run, it inserts itself as a scheduled task, which runs as local system. The final phase removes the task.

Depending on pending reboots, etc, we have found that this phase may generate a few reboots. Do not be concerned if the server reboots a few times. 
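A typical invocation from an elevated PowerShell prompt might look like the following; the script filename here is a placeholder for whatever you saved the download as, and the log path is just an example:

# Run with a custom log location (placeholder filename); add -NoReboot to suppress automatic reboots
.\CleanMgr-KB2852386.ps1 -LogPath C:\Temp\WinSxSCleanup.log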

Other Options

Aside from the cleanup mechanism included with this fix, if you have applied SP1 and have not cleaned up afterwards, I’d highly recommend doing so by running the following command from an administrative command prompt:

dism /online /cleanup-image /spsuperseded

or 

If you have installed the Desktop Experience feature and thus have the Disk Cleanup utility, you can select the following option to do the same thing: 

Specifying the /spsuperseded switch or choosing to remove service pack backup files will remove the ability to uninstall the service pack. If you haven't done it before, it is certain to free up some space. 

The Origins of this Update (Hint: Windows Server 2012 R2)

I’ve mentioned a couple of times that this is a back port. What does that mean? Well, it means that this functionality is already built into a later operating system. In this case, that operating system is Windows Server 2012 R2. Not only do we have several mechanisms to automatically cleanup previous versions of Windows Update files like this update does, we even have the ability to more accurately determine the size of the component store (aka the WinSxS directory). 

The command to accurately determine the size of the component store on Windows Server 2012 R2 is as follows: 

Dism.exe /Online /Cleanup-Image /AnalyzeComponentStore

Running this command analyzes the component store to determine the size and whether cleanup is recommended. Notice in the screen shot that it provides you with the Windows Explorer reported size and the actual size: 

Notice that the component store is much smaller than Windows Server 2008 R2 right out of the gate? This isn’t because I’ve used Features on Demand to remove roles and features. It’s because by default in Windows Server 2012 R2, we compress all unused binaries. Another win for Windows Server 2012 R2!

Looking at the breakdown of the 5.12GB, we see that Shared with Windows accounts for 3.83GB of the 5.12GB. Shared with Windows refers to the size of the files that are hardlinked between the WinSxS directory and the Windows location of the file. Because these hardlinks appear to take up space but don't really, we can subtract them from our component store size. Therefore, the actual size of the component store is the total of Backups and Disabled Features plus Cache and Temporary Data, or 1.28GB. 

But back to our cleanup. 

In the above screen shot, it’s stated that component store cleanup is recommended. We can manually cleanup the component store on Windows Server 2012 R2 by running the following command:  

Dism.exe /online /Cleanup-Image /StartComponentCleanup 

What does this do? When this runs, Windows cleans up the previous versions of the component that was updated. In other words, it is doing exactly what our update does for Windows Server 2008 R2 SP1. It removes previous versions of the files updated by Windows Updates. 

After running /StartComponentCleanup, upon analyzing the size again, we see it is as follows: 

So no notable difference really. Largely because we’ve been running this cleanup all along. This same command is run every 30 days as a scheduled task with a time limit of 1 hour. 

With the scheduled task however, the task will wait at least 30 days after an updated component has been installed before uninstalling the previous versions of the component. This scheduled task can be found in Task Scheduler under the Task Scheduler Library\Microsoft\Windows\Servicing\StartComponentCleanup directory: 
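If you don't want to wait for the next scheduled run, you can kick the task off on demand from an elevated command prompt:

schtasks.exe /Run /TN "\Microsoft\Windows\Servicing\StartComponentCleanup"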

More information on this can be found here:  http://technet.microsoft.com/en-us/library/dn251565.aspx 

If you’re in all out spring cleaning mode and want to perform super deep cleanup, you can use the /resetbase command with the /startcomponentcleanup to remove all superseded versions of every component in the component store: 

Dism.exe /online /Cleanup-Image /StartComponentCleanup /ResetBase 

This removes the ability to uninstall any updates applied until this point in time. 

And don’t forget the ability to completely remove any role or feature which also reduces the size. Take a look at one of my earlier blogs for more details on Features on Demand:  http://blogs.technet.com/b/askpfeplat/archive/2013/02/24/how-to-reduce-the-size-of-the-winsxs-directory-and-free-up-disk-space-on-windows-server-2012-using-features-on-demand.aspx 

Here’s a handy table showing when we introduced the various different cleanup and WinSxS size reductions by operating system: 

Operating System | Compress Unused WinSxS Binaries | Cleanup Previous Windows Update Files | Automatically Clean Up Previous Windows Update Files | Cleanup All Components | Features on Demand
Windows Server 2008 R2 | | With KB2852386 | | |
Windows Server 2012 | With KB2821895 | x | x | | x
Windows Server 2012 R2 | x | x | x | x | x

Want more information on how all this works under the covers? 

Check out the following series on the AskCore team blog for an in-depth look at servicing improvements on Windows Server 2012 R2: 

What’s New in Windows Servicing: Part 1

What’s New in Windows Servicing: Reduction of Windows Footprint : Part 2

What’s New in Windows Servicing: Service Stack Improvements: Part 3 

More on the Desktop Experience Feature

The Desktop Experience feature includes the following components and features:

* Windows Media Player

* Desktop themes

* Video for Windows (AVI support)

* Windows SideShow

* Windows Defender

* Disk Cleanup

* Sync Center

* Sound Recorder

* Character Map

* Snipping Tool

* Ink Support 

Most of these are not automatically turned on, with the exception of Windows Defender, whose service is started after reboot. You’ll likely want to stop the service and disable it after reboot. Not all third-party antivirus products conflict with Windows Defender, but there have been reports that some do. 
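If you do decide to turn it off, here is a minimal sketch (this assumes the Windows Defender service name is WinDefend, which it is on most builds; confirm with Get-Service before relying on it):

# Assumes the service name is WinDefend - verify with Get-Service first
Stop-Service -Name WinDefend -ErrorAction SilentlyContinue
Set-Service -Name WinDefend -StartupType Disabled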

~ Charity Shelbourne and Tom Moser, Spring cleaning servers since 1998

Update May 15th, 2014

We are aware of a method of copying in the appropriate Disk Cleanup/CleanMgr files into the appropriate location to avoid installing the Desktop Experience. If this were a tested and supported option, we certainly would have included these details in this post and definitely would have used this method to automate the cleanup. However, it was determined early on that this method would not be supported. If you decide to do this, do so at your own risk.

How To Automate Changing The Local Administrator Password


Hello again, Tom Ausburne with another post we think you’ll enjoy. With all of the security breaches we keep reading about on the Internet these days, I keep getting asked how to make servers and workstations less vulnerable to attack. The list of things you can do is extensive and more than I could ever cover in a single blog post. Today I decided to focus on one that is simple to implement: changing the local administrator’s password on a regular basis. If you only have one computer then this is an easy thing to do. But what about when you have thousands of servers and tens of thousands of workstations? Now we need a little bit of automation to keep things moving along smoothly.

The first thing that might come to mind is to use Group Policy Preferences. Because of the security concerns with storing passwords in Group Policy Preferences, Microsoft just released a security patch, MS14-025: Vulnerability in Group Policy Preferences could allow elevation of privilege, that removes this functionality. Once you apply this patch, this is no longer a usable option. Since some of you may be doing this already, I'll talk about this in a little detail and explain why we removed this. 

1. Start the Group Policy snap-in.

2. Expand Computer Configuration

3. Expand Preferences

4. Click Control Panel

5. Right-click Local Users and Groups

6. From the menu select New - Local User.

7. Select Update as the action

8. Type Administrator into the User name text box

9. Type the new password into the Password text box, confirming the password in Confirm Password text box.

10. Press OK.

image

That was pretty simple, right? It is, but it is not all that secure. You see, the stored password is obfuscated. That’s a fancy way of saying that the password is converted to an unreadable format. Unreadable doesn’t mean secure, however. And all machines that get this policy will have the same password. The other problem with this solution is that although the password is encrypted using AES 256, the private key for the encryption is published on MSDN.

clip_image001

The passwords are stored in the XML file that Group Policy Preferences uses to store its data. When you set this up you are even warned about this:

image

Let’s take a look at that file. You can see that while the information shown below in yellow isn’t the password I used (which was Password1), it is still readable by any authenticated user as they all have read access to Sysvol.

image

Let’s take this a step further. Using a simple PowerShell script (more info found at the link)  that utilizes the encryption key that was easily available on MSDN, I was able to get the password with a few keystrokes.

image

I’m not sure I could ever in good conscience recommend this to anyone who is serious about security. Once this password is discovered, I am opening myself up to a Pass-the-Hash (PtH) attack. So what else can you do?

If you are a Microsoft Premier customer you have the option to get a Remediation Side by Side Securing Lateral Account Movement delivery (we typically call it SLAM). This is an advanced course for implementing the lateral movement mitigations from Microsoft’s Pass-the-Hash whitepaper. It covers an overview of Pass-the-Hash (PtH) and ways to mitigate it, understanding the breadth of related credential theft risks, the risks of using shared passwords, the enforcement of local account restrictions, randomized local Administrator account passwords, recovery procedures for privileged account passwords and quite a bit more.

Note: If you are a Microsoft Premier customer and would like more information on this delivery, please contact your Technical Account manager. If you aren’t a Microsoft Premier customer, contact us and we can get you talking to the right people.

I know what many of you are saying: I’m too small to be a Microsoft Premier customer, but I still want to do something to protect myself. What can I do? Glad you asked. One of our Microsoft Consulting Services (MCS) engineers wrote a solution to this problem that anyone can download and implement with ease. It works with Windows 2003 SP1/XP and above, so as long as you have an Active Directory environment you should be good to go. You do have to be running domain controllers that are running Windows Server 2003 SP1 or later, because this solution uses confidential attributes, which were introduced in 2003 SP1. This feature does not depend on the domain or forest functional level.

So if you want to follow along with my installation instructions, you might want to go ahead and download the files. You will need the Installers and Documentation files.

Solution for management of built-in Administrator account's password via GPO
http://code.msdn.microsoft.com/Solution-for-management-of-ae44e789

While you are downloading these let’s talk about what this solution is designed to do:

• Password must be unique and random on each computer

• Password must not be guessable from name of workstation, MAC addresses, etc.

• There must be a way for eligible people (IT support staff) to easily know the password when necessary

• Password management solution must scale to support thousands of computers

• Password management solution must be easily deployable and manageable

• Password management mechanism must be resistant to tampering

• Password management solution must support renaming of built-in Administrator account

• Password management solution must offer the mechanism for bulk password change when necessary

• Solution must be able to correctly handle the situation when computer is disconnected from corporate network, i.e. not to change the password when it is not possible to report it to the password repository

• Solution must support OS Windows XP/2003 and above

• Solution must support x86 and amd64 hardware platforms

There is a lot more that this solution can do but those are the highlights. The documentation provided with the solution is very detailed and can answer most any question you might have. The developer of the solution is also very responsive if your questions aren’t answered either here or on the Q/A section of the solutions page. What I wanted to do in this blog is to show you just how easy this is to install and configure.

Here is my little disclaimer: You should test all of this out in a lab first, as the first step requires that you modify the Active Directory schema, which as you probably know is non-reversible. As you are also aware, Microsoft no longer supports XP. If you continue to use Windows XP now that support has ended, your computer will still work but it might become more vulnerable to security risks and viruses. If you are reading this article on how to help increase security, I'm assuming you are either already off of XP or well on your way. If you aren't, what are you waiting for?

As in the previous articles I am going to assume you know how to set up a basic Active Directory domain with servers and clients and won’t be covering that. So let’s get started.

Installation Procedures

Much of this work involves modifying the Schema, Group Policies and Organization Units (OU) so I am doing all of this work on my Domain Controller. I’m using Windows Server 2012 R2 and Windows 8.1 on the client so that’s where the screenshots are taken from.

The first thing to do is to extract the files from the Installers.zip to a folder. There will be two files, AdmPwd.Setup.x64.msi and AdmPwd.Setup.x86.msi. Copy these files to a working directory. Double click on the appropriate file to get started.

image

Click Next.

For this first machine you should enable all the installation choices. You will use these same files later to install the Client Side Extensions (CSE) on all the computers and the PowerShell module and/or Fat client on the management computers.

image

Click Next.

image

Click Install.

image

Modifying the Schema

The Schema needs to be updated to add two new attributes:

ms-MCS-AdmPwd – Stores the password in clear text

ms-MCS-AdmPwdExpirationTime – Stores the time to reset the password

I can already hear the wheels turning on the clear text part. When the password is changed, the communication is encrypted with Kerberos encryption. We will cover restricting who can view it in Active Directory in just a minute.

** Just to make sure you didn’t just skim over the first part to get to the “meat” of the article, let me just say one more time that you should be testing this in a LAB environment first**

Okay, back to the good stuff. To update the Schema you first need to import the PowerShell module. Open up an Administrative PowerShell window and use this command:

Import-module AdmPwd.PS

image

Now you can update the Schema with this command:

Update-AdmPwdADSchema

image

Delegation of permissions on computer accounts

The next step is to remove all extended rights/permissions from users and groups that are not allowed to read the value of attribute ms-MCS-AdmPwd. This is required because the All Extended rights/permissions permission also gives permission to read confidential attributes. If you want to do this for all computers you will need to repeat the next steps on each OU that contains those computers. You do not need to do this on subcontainers of already processed OUs.

1. Open ADSIEdit

2. Right Click on the OU that contains the computer accounts that you are installing this solution on and select Properties.

3. Click the Security tab

4. Click Advanced

5. Select the Group(s) or User(s) that you don’t want to be able to read the password and then click Edit.

6. Uncheck All extended rights

image

Important: This will remove ALL extended rights, not only CONTROL_ACCESS right, so be sure that all roles will retain all necessary permissions required for their regular work.
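A quick way to review what the resulting ACL looks like (and to confirm the extended rights really are gone for the principals you edited) is to dump it with dsacls; the OU distinguished name below is just an example, so substitute your own:

dsacls "OU=Servers,DC=domain2,DC=com"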

Next we need to add the Write permission on the ms-MCS-AdmPwdExpirationTime and ms-MCS-AdmPwd attributes of all computer accounts to the SELF built-in account. This is required so each machine can update the password and expiration timestamp for its own built-in Administrator account. We can use PowerShell to do this.

Set-AdmPwdComputerSelfPermission -OrgUnit <name of the OU to delegate permissions>

In my case the OU name was Servers. Repeat this for any additional OUs that contain computer accounts you want this solution to apply to.

image

Now add the CONTROL_ACCESS permission on ms-MCS-AdmPwd attribute of the computer accounts to group(s) or user(s) that will be allowed to read the stored password of the built-in Administrator account on managed computers.

Set-AdmPwdReadPasswordPermission -OrgUnit <name of the OU to delegate permissions> -AllowedPrincipals <users or groups>

Use the same –OrgUnit name(s) as in the previous command.

image

Note: You can use multiple groups and users in the same command, separated by commas.

Ex: Set-AdmPwdReadPasswordPermission -OrgUnit Servers -AllowedPrincipals domain2\Administrator,domain2\HelpDesk,domain2\Group3

Lastly we need to add the Write permission on ms-MCS-AdmPwdExpirationTime attribute of computer accounts to group(s) or user(s) that will be allowed to force password resets for the built-in Administrator account on managed computers.

Set-AdmPwdResetPasswordPermission -OrgUnit <name of the OU to delegate permissions> -AllowedPrincipals <users or groups>

Again, use the same -OrgUnit name(s) as in the previous commands. You can still use multiple groups and users in the same command, separated by commas.

image

Registration of CSE with chosen Group Policy Object

What makes all of this work is a Group Policy client side extension that runs on each server whenever Group Policy is refreshed. We need to register it with a PowerShell command.

Register-AdmPwdWithGPO -GpoIdentity: "General Server GPO"

The –GPOIdentity can be either the displayName, GUID or distinguishedName. My example uses the displayName.

image

If you are wondering what this command does, it adds its client side extension value to the gPCMachineExtensionNames attribute for the Group Policy you selected. In my policy it looked like this:

Before:

[{C6DC5466-785A-11D2-84D0-00C04FB169F7}{942A8E4F-A261-11D1-A760-00C04FB9603F}]

After:

[{C6DC5466-785A-11D2-84D0-00C04FB169F7}{942A8E4F-A261-11D1-A760-00C04FB9603F}][{D76B9641-3288-4f75-942D-087DE603E3EA}{D76B9641-3288-4f75-942D-087DE603E3EA}]

image
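If you'd rather check the result from PowerShell than ADSIEdit, the ActiveDirectory module (assuming RSAT is installed) can pull the attribute directly; "General Server GPO" is just the policy name from my example:

Get-ADObject -Filter 'displayName -eq "General Server GPO"' -Properties gPCMachineExtensionNames |
    Select-Object -ExpandProperty gPCMachineExtensionNames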

Run the client install on the local servers

The final step in the setup process is to run the client install on the local computers. These are the same install files, AdmPwd.Setup.x64.msi and AdmPwd.Setup.x86.msi, that we used earlier. I chose to do mine using the Software Installation feature of Group Policy but you can do this any way you like. If you just want to test this quickly a manual install like we did in the beginning works just fine. On the client you don’t need to choose any additional options.

image

Once this is installed you can see it in Programs and Features.

image

If you want to script this you can use this command line to do a silent install:

msiexec /i <file location>\AdmPwd.Setup.x64.msi /quiet or msiexec /i <file location>\AdmPwd.Setup.x86.msi /quiet

Just change the <file location> to a local or network path. Example: msiexec /i \\dc3\share\AdmPwd.Setup.x64.msi /quiet

I’m sure there are other ways but this should get you started.
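If you want a single startup or deployment script that handles both architectures, a minimal sketch (reusing the \\dc3\share path from the example above; adjust for your environment) could look like this:

# Pick the MSI that matches the OS architecture, then install silently.
# Assumes a 64-bit PowerShell host on x64 machines; \\dc3\share is only an example path.
$arch = if ($env:PROCESSOR_ARCHITECTURE -eq 'AMD64') { 'x64' } else { 'x86' }
Start-Process -FilePath msiexec.exe -ArgumentList "/i \\dc3\share\AdmPwd.Setup.$arch.msi /quiet" -Wait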

Changing the Basic Configuration

By default this solution uses a password with password complexity, 12 characters and changes the password every 30 days. If you decide you want to change that just modify the settings as seen below.

image

Choose the settings that fit your needs.

image

So what does this look like?

Once everything is configured, and Group Policy has refreshed on the clients, you can look at the properties of the computer object and see the new settings.

image

The password is stored in plain text. The Expiration date is a little more complex. Active Directory stores date/time values as the number of 100-nanosecond intervals that have elapsed since the 0 hour on January 1, 1601 till the date/time that is being stored. The time is always stored in Greenwich Mean Time (GMT) in the Active Directory. If you want to convert it yourself try this command:

w32tm /ntte <number you want to convert>

image
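If you'd rather stay in PowerShell, .NET can do the same conversion, since the stored value is a FILETIME (100-nanosecond intervals since January 1, 1601, interpreted as UTC). The number below is just an example value:

# Example value only - paste in the ms-MCS-AdmPwdExpirationTime value from your own computer object
[DateTime]::FromFileTimeUtc(130451650079559529)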

I know you don’t want everybody poking around in Active Directory, so wouldn’t it be nice if there was a graphical interface available? Well, there is. When you install the program on a computer where you want the ability to easily retrieve the password, just select the Fat client UI option.

image

The program you want to run is C:\Program Files\AdmPwd\AdmPwd.UI.exe. It will be in the menu and looks like this:

image

When you launch the interface all you have to do is enter the client name and click Search.

image

If you want to manually reset the password just click the Set button. When a Group Policy refresh runs it will be updated.

If you don’t want to deal with a GUI you can also get the password using PowerShell. All the same security you set still applies. You still need the rights to read this attribute!

Get-AdmPwdPassword -ComputerName <computername>

image
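If you need to pull the password for a whole OU at once, a small loop with the ActiveDirectory module does the trick (a sketch; it assumes RSAT is installed, and the OU distinguished name is only a sample):

# Sample OU distinguished name - replace with your own
Get-ADComputer -Filter * -SearchBase "OU=Servers,DC=domain2,DC=com" |
    ForEach-Object { Get-AdmPwdPassword -ComputerName $_.Name }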

You can also reset the password using PowerShell.

Reset-AdmPwdPassword -ComputerName <computername> -WhenEffective <date time>

image

Just in case you are wondering, let’s see what happens when a user who hasn’t been granted rights to see the local Administrator’s password tries to access it. If they were to gain access to the GUI interface, the password won’t be displayed.

image

Likewise if they run Active Directory Users and Computers (ADUC) and try to view the password it will show as Not set.

image

Compare that screenshot with someone who has been granted rights to view the password.

image

Now you can see why earlier we removed the extended rights and granted only certain individuals and groups the rights to see this. It doesn’t take a lot of effort for someone to browse through Active Directory and see most any information. If you don’t restrict who can see this then why are you bothering to implement this in the first place?

Final Thoughts

As I mentioned earlier, security is at the top of many people’s list these days and sometimes the implementation of solutions can be costly and/or time consuming. I wanted to show that there are still things that can be done quickly and easily to lessen the impact of security threats. Speaking of that, don’t forget about the Premier offering I talked about towards the beginning. It covers a wider range of security issues. If you want someone onsite to help you with this and a whole lot more, tell your TAM you want SLAM. You can send them a link to this post if you want. If you aren’t a Premier customer and want to find out all the benefits of being one just let us know.

So your mission, should you decide to accept it, is to quickly and easily increase security in your environment by changing your local administrator passwords and make them unique on every machine. This password will self-destruct in 3…2…1….

Tom Ausburne

How DCs are Located Across Forest-Trusts: Part Two


Tom Moser here again. Sometime last year, I wrote a post around cross-forest DCLocator and how it works. I promised a sequel and then I got busy learning 2012 R2 stuff, VMM, ADFS, Hyper-V Network Virtualization, Forza 5, Ryse, Dead Rising, etc. After some emails and comments in the last few weeks demanding a sequel, you can put away the torches and pitchforks; this will be better than Episode I.

Review

Make sure to check out the first post in this two-part series if you haven't. I'm not going to spend a lot of time on review as it covers the bases pretty well.

It's somewhat common, particularly in enterprise environments, to have multiple forests within an environment due to architecture decisions or mergers and acquisitions. Often times these newly-acquired forests will have an IPv4 address space that may have overlap. Here I'll show what happens in the event that you have an overlapping subnet between two forests, from a DCLocator perspective.

To be clear, I'm not going to cover anything involving NAT or IPv6 as we generally don't recommend using NAT for AD. For the sake of discussion, we'll just assume that the networks are fully routable, but an admin added the wrong subnet in the other forest. So let's get to it.

Setup

If you read the last post, you'll remember that we're working with two different forests here, from my lab: DMZ.milt0r.com and Corp.milt0r.com. I've established a forest trust between the two and have created a somewhat similar site topology (Figure 1).

 

 

Figure 1 - Site Topology

Above you can see we've got the two forests, each with a few sites. CORPHQ exists in both forests, with the same 10.1.1.0/24 subnet. CORPBR also exists in both, but has conflicting subnet definitions in each forest. It appears that an administrator has set 10.50.1.0/24 on CORPBR in CORP, but hasn't set a subnet on CORPBR in the DMZ forest. Further, there's a site in the DMZ forest called "Siberia" that has the 10.50.1.0/24 subnet set.

The Client

Let's say we've got a user in the CORP forest. That user needs to access resources in the DMZ forest. For the sake of discussion, we'll assume this resource is a file server, but it applies to anything where we're doing Kerberos auth or need to bind directly to a DC, LDAP for example. Having read Part 1, you already know that when I attempt to authenticate to that DC on the other side of the trust, I'll use my own site name to construct a DNS query to find a DC.

First, this shows our client site (Figure 2):

Figure 2 - Where...am...I?

There you can see that the client belongs to the CORPBR site in CORP.

Next, we try to connect to the resource in the DMZ. Since we're going to use Kerberos, netlogon looks for a domain controller in DMZ.milt0r.com in the site from our local forest. Looking at the DNS traffic you'll notice that DNS query contains CORPBR (Figure 3):

Figure 3 - DNS Query for CORPBR

And immediately following, there's a DNS query for DMZDC2 (Figure 4):

Figure 4 - DNS Query for DMZDC2

Next, we send a UDP LDAP ping to the DC and get a reply (Figure 5):

Figure 5 - DMZDC2 Ping

That LDAPMessage is a UDP "ping" to the DC, trying to find a domain controller. As is expected, the domain controller replies in the next frame. If we examine the contents of the NetLogon response message, we see (Figure 6):

Figure 6 - SamLogonResponse from CORPBR DC

Up until this point, everything has been exactly the same as what I described in part one. However, while we did locate a domain controller in the CORPBR site, that NetLogon reply tells the client that, since we're on a 10.50.1.0/24 subnet, we should be talking to a domain controller in the Siberia site. Based on what I've demonstrated so far, you probably know what's next: A DNS query for a DC in Siberia… and that's exactly what we get (Figure 7).

Figure 7 - Let's find that Siberia DC

The client sends a DNS query for a DC in the Siberia site, receives a reply that DMZDC1 is the target, and then resolves the host record to an IP. Just as before, we'll send another LDAP ping (Figure 8):

Figure 8 - Pinging...

In the first frame, the ping goes out, the DC replies and we see (Figure 9):

Figure 9 - SamLogonResponseEx from Siberia

We've successfully bound to a DC in Siberia. Running nltest /dsgetdc, we can verify this (Figure 10).

Figure 10 - Proof.
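If you want to reproduce this check from any client in the CORP forest, nltest can force a fresh locator run against the other forest (the /force switch bypasses the cached DC):

nltest /dsgetdc:dmz.milt0r.com /force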

How does this impact me?

In part one, I demonstrated how and why having similar site names across forest trusts is important. Ensuring that there are matching site names will provide your clients with a more efficient, consistent experience when crossing forest boundaries. That said, it's also important to ensure that IP subnet ranges on your AD sites align across forest boundaries as well.

In the example above, administrators may have expected that clients would authenticate from CORPBR in CORP to CORPBR in DMZ. Instead, what we see due to the subnet conflict, is that clients end up heading off to Siberia to authenticate. If the WAN links to Siberia were slow or there happened to be insufficient domain controller capacity in Siberia, clients will have a slow, inconsistent experience. Even worse, if there are firewalls out there prohibiting access across sites, clients may end up stalled for several minutes while trying to locate a DC.

It's also important to keep in mind that this applies to networks that are unroutable or might have conflicting address space. Commonly during mergers or acquisitions, both companies will have the same IP scheme, using 10.0.0.0/8 networks. There's no way to make this work without NAT as overlapping address space won't be routable (disclaimer: I'm not a networking expert). Since we don't support NAT your best option is probably to merge the networks or come up with some equally not-NAT-but-still-routable plan. Ultimately, the best plan will probably not be the easiest plan, and that's merging networks and re-IP'ing devices on one side or the other. If you have a solution of your own, leave a comment!
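There's no built-in report for conflicting subnet definitions across forests, but a rough sketch like the one below can flag subnets that exist in both forests yet map to different site names. It assumes the RSAT ActiveDirectory module (Windows Server 2012 or later) and a reachable DC in each forest; the server names are placeholders.

# Pull subnet -> site mappings from a DC in each forest (server names are examples)
$corp = Get-ADReplicationSubnet -Filter * -Server corpdc1.corp.milt0r.com -Properties Site
$dmz  = Get-ADReplicationSubnet -Filter * -Server dmzdc1.dmz.milt0r.com -Properties Site

foreach ($subnet in $corp) {
    $match = $dmz | Where-Object { $_.Name -eq $subnet.Name }
    if ($match -and $match.Site -and $subnet.Site) {
        # Compare only the site CN; the full DNs always differ between forests
        $corpSite = ($subnet.Site -split ',')[0] -replace '^CN=', ''
        $dmzSite  = ($match.Site -split ',')[0] -replace '^CN=', ''
        if ($corpSite -ne $dmzSite) {
            "{0}: site {1} in CORP, site {2} in DMZ" -f $subnet.Name, $corpSite, $dmzSite
        }
    }
}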

Wrap Up

I hope that between the two posts in this series, you've got a solid understanding of cross-forest DCLocator. In most cases simply matching site names across forests will solve your problems… just be on the lookout for conflicting subnet definitions and overlapping address spaces, as they may cause things to behave in ways you weren't anticipating.

Until next time!

Tom "Just get a packet capture" Moser

 

 

 

Group Policy Debug Troubleshooting: A Real World Example


Hi, my name is Lakshman Hariharan and I work for Microsoft as a Premier Field Engineer supporting Active Directory.

One of the things I love about what we do on a daily basis is the fact that we get to work on some of the strangest issues. This is also one of the things that can be the most frustrating.

I recently worked an interesting issue. The issue, as described to me, was that on all Windows Server 2008 R2 domain controllers, Group Policy updates were happening every two minutes, on the dot. The default Group Policy update interval on domain controllers is five minutes. We found that the Group Policy operational log on all domain controllers contained the event pictured in the screenshot below.

A Group Policy Object (GPO) was created as a stopgap mitigation until we could find the root cause of the Group Policy update spam. We configured the GPO to only update policy every 90 minutes and applied it to the Domain Controllers Organizational Unit (OU). But the Group Policy updates still continued to happen every two minutes.

While this may sound like a corner case… which admittedly it is, the goal is to use this issue as an example to demonstrate how to start tracing and troubleshooting things such as seemingly random Group Policy updates being initiated out of nowhere. Additionally, we can use this case study as an introduction to deciphering Group Policy Service (gpsvc) debug logging.

Here is a screenshot of the event:

image

We ran the Resultant Set of Policy (RSOP) tools using a couple of different methods, such as the Group Policy Management Console (GPMC) and RSOP.msc in the Microsoft Management Console (MMC), to see what policies were being applied to the domain controllers and to ensure there were no other policies or settings superseding or otherwise trumping the default policy refresh settings for domain controllers.

Having ruled out any other conflicting settings, to further troubleshoot the issue we first needed to find out what the PID of the Group Policy service (gpsvc) was.

One interesting item to note here is that the gpsvc service is hosted under a particular instance of svchost. There are several ways to find out what specific services are hosted under a svchost instance (Resource Monitor being another way) but the easiest way is to run the following command from a command line

tasklist /svc

The output will look similar to this, and we can see that the svchost instance with a Process ID (PID) of 860 is the one that hosts the gpsvc service.

image
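If you prefer PowerShell over tasklist, the same answer is one WMI query away (Get-WmiObject shown here since these were Windows Server 2008 R2 machines; Get-CimInstance works the same way on newer builds):

Get-WmiObject -Class Win32_Service -Filter "Name='gpsvc'" | Select-Object Name, ProcessId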

Armed with the gpsvc PID, we did what we know best when trying to find clues about what may be initiating a certain process or behavior on a machine: we ran Process Monitor (ProcMon for short) and monitored PID 860 for a few minutes.

We could clearly see that a Group Policy update was being initiated every two minutes, and a detailed view in another tool, Process Explorer, showed that it was indeed processing Group Policy (initiating an LDAP connection to itself, enumerating the policies, pulling down settings, etc.). Even so, we couldn’t find the caller process name or PID calling in to PID 860.

We also ran a network trace on the domain controller, but since the connection was from the domain controller to itself, we couldn’t necessarily see anything in the trace. Even if the network trace had contained useful data, it would have been akin to finding the proverbial needle in a haystack.

The solution to our problem lay in finding the caller PID. And none of these tools were going to help us do that. After posting a question to an internal Microsoft distribution list that discusses Group Policy issues I was given the suggestion of turning on gpsvc debug logging, the steps for which are documented here.

For folks that are familiar with having to troubleshoot Group Policy issues from the Windows XP/Windows Server 2003 days, the log file generated by this logging technique should be very familiar to the User Environment Debug (UserEnv) log. Several excellent blogs have been written on deciphering UserEnv logs which I will refrain from referencing here, in the interest of brevity, but one specific point of finding what process is doing what at a particular instance is very useful.

We enabled gpsvc debug logging and waited the requisite two minutes at which point, sure enough, we saw the event ID 8004 being logged in the Group Policy operational log.

Below is the snippet from the Group Policy debug log. Notice the timestamps 10:17 a.m. and 10:19 a.m. in the log, highlighted in green. This is the two minute gap after which the Group Policy update happens.

Also notice the numbers right after “GPSVC”, highlighted in red. These are the process IDs, in hexadecimal format of the processes active during the timeframe, and as geek trivia, the numbers in hex after the period (2b98 and ca8 in this case) are the thread IDs, or the worker bees if you will, doing the actual work within those processes. Again, this should be quite familiar to anyone that has read UserEnv logs in the past.

GPSVC(35c.2b98) 10:17:44:850 ConnectToNameSpace: ConnectServer returned 0x0
GPSVC(35c.2b98) 10:17:44:850 ProcessGroupPolicyCompletedExInternal:
GPSVC(35c.2b98) 10:17:44:850 ProcessGroupPolicyCompletedExInternal: Finished processing extension <Audit Policy Configuration> at 634181894 ticks (ms)
GPSVC(35c.2b98) 10:17:44:850 ProcessGroupPolicyCompletedExInternal: Leaving
GPSVC(1694.ca8) 10:19:43:311 GetGPOList: Entering.
GPSVC(1694.ca8) 10:19:43:311 GetGPOList: hToken = 0x310
GPSVC(1694.ca8) 10:19:43:311 GetGPOList: lpName = <NULL>
GPSVC(1694.ca8) 10:19:43:311 GetGPOList: lpHostName = <NULL>
GPSVC(1694.ca8) 10:19:43:311 GetGPOList: dwFlags = 0x1
GPSVC(35c.b10) 10:19:43:311 GetGroupPolicyObjectListInternal: Queried lpDNName = <CN=CONTOSODC1,OU=Domain Controllers,DC=contoso,DC=com>
GPSVC(1f8.b10) 10:19:43:311 GetLdapHandle: Getting ldap handle for host: contoso.com in domain: <Unspecified>.
GPSVC(35c.b10) 10:19:43:311 GetLdapHandle: Server connection established.
GPSVC(35c.b10) 10:19:43:311 GetLdapHandle: Binding using only kerberos.
GPSVC(35c.b10) 10:19:43:311 GetLdapHandle: Bound successfully.

From our previous tasklist /svc output we know that process 35c (860 in decimal) is the svchost instance running gpsvc.

From the log it is clear that right before the process with PID of 860 is called, a process with PID of 5780 (1694 in hexadecimal) is called.
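A quick way to convert the hex PIDs from the debug log back to the decimal values that tasklist shows:

[Convert]::ToInt32('35c', 16)    # 860  - the svchost instance hosting gpsvc
[Convert]::ToInt32('1694', 16)   # 5780 - the caller we are hunting for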

Running tasklist /svc again we see that the process with PID of 5780 is PolicyAgent (screenshot below).

image

For those unfamiliar, PolicyAgent is the IPSec Policy Agent service. This was our caller process and corresponding PID.

Armed with this knowledge we ran another RSOP report against one of the domain controllers, and lo and behold we found out that there was a legacy* IPSec policy that was assigned to the domain controllers. Screenshot below of the legacy IPSec policy.

image

Investigating further, I was informed that this issue of Group Policy updating every two minutes cropped up right around the time some changes were made to an old IPSec policy that had been put in place years ago per a directive from another business unit.

When the changes were made, the intent was to delete all the settings within the policy but just one of the policies ended up inadvertently being active. Upon further investigation we found out that because of invalid settings the policy was constantly failing and was triggering Group Policy update every two minutes.

Once we disabled the offending policy, the Group Policy updates every two minutes stopped and life was back to normal.

*As an aside, the settings under Windows Settings -> IP Security Policies are considered legacy policies. They are deprecated and should not be used anymore to configure IPsec settings. The recommended and supported way to configure IPsec settings on up-level systems (read 2008/Vista and above) is to use Windows Firewall with Advanced Security (WFAS).

-Lakshman Hariharan

Finding The Latest Binary Version With SilverSeekKB


Hey, IT Pros!  Chris Harrod here. I'm a Senior Premier Field Engineer here at Microsoft and would like to introduce you to a pretty slick tool written by Julien Clauzel in his spare time called SilverSeekKB. It was previously an internal tool exclusive to Microsoft, and has recently been made available to the public, which I think is pretty awesome. I've been taking advantage of this tool for quite some time to quickly assist in performing root cause analysis on a wide range of problems I encounter in my customers' environments. 
SilverSeekKB helps in determining the latest version of nearly any Microsoft binary, including SQL, Exchange and a myriad of other products we’ve released. You’re probably wondering “That’s great, why is that so important?” As most IT administrators can attest, it takes a lot of effort to keep up with all the changes our products undergo with hotfixes and updates. SilverSeekKB allows you to identify possible hotfix solutions attributed to files or executables found in your troubleshooting process. Quite frequently I found I wasn’t aware there was a hotfix for the issue. I’m hoping you’ll add this to your troubleshooting toolbox before jumping into your favorite search engine and typing in a bunch of symptoms. It’s important to note that this tool only provides openly available information and it is ultimately up to the end user to determine which hotfixes are applied. Please review the EULA for more information.

First, let's take a look at the SilverSeekKB UI and then we’ll go over a case study to illustrate its benefits.  Generally, you'll start in the Main tab and you can place all of the binaries you want to search for in a space delimited format.  From the pull-down menu, select the product you're looking for.

image

Clicking on "Search all latest builds" will kick you over to the summary tab where you can find the latest version of each file

image

To find granular detail on the files of all previous versions, and to list hotfix information for each release, move over to the Details tab. Here you'll see every General Distribution Release and Limited Distribution Release version of the files. Notice the different file versions in the image below. If you're unfamiliar with the difference between GDR and LDR, you can brush up at this blog. We highly suggest understanding the difference between the release branches and the implications of deviating from the GDR branch before applying LDR hotfixes to your baseline.

image

Scenario: A customer brings you a laptop that has been continuously experiencing a bug check or, as you may frequently hear it described, a BSOD. In this case we’re lucky enough to have a minidump of the crash, so let’s dig in and figure out what happened.
We won't go over debugging in this blog, but we'll skim the surface so you can get a good start.

Using WinDBG, open the minidump and run the command vertarget. You’re going to want this information so you’re looking for the right updates in SilverSeekKB rather than searching every OS.

image


Conducting a !analyze -v will tell us what the debugger thinks it may know about what was going on at the time. The debugger’s conclusion about why we had a bug check can be seen at the top. This is important to remember, as I’ll demonstrate shortly. Before we go any further, note that there is a LOT more that goes into true debugging beyond opening a crash dump and firing off !analyze -v.

image

image

It looks like we may have had a problem with a driver called usbvideo.sys, so let’s start there. Let's find out what version this machine was using with the lmvm command.

image
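For reference, here is a recap of the WinDbg commands used so far in this scenario (the $$ lines are just WinDbg comments):

$$ OS version and build of the machine that produced the dump
vertarget
$$ Automated crash analysis
!analyze -v
$$ Module details (path, timestamp, file version) for the suspect driver
lmvm usbvideo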

At this point, we're not sure if there are any updates to the driver. Let's consult SilverSeekKB to see if there are any hotfixes that may relate to our problem. We can see that there are newer versions, but the latest version is a security update that was available on Windows Update.

image

We're probably going to want to apply that update, but let's dig down a little deeper to see if there was a hotfix for this specific issue.  On the Details tab, we can see all of the updates.

Zoomed

Looks like there were some updates for this specific problem (SYSTEM_THREAD_EXCEPTION_NOT_HANDLED). Because there are security updates for this video driver, my suggestion would be to apply the latest GDR version, knowing that the modifications for the earlier hotfix are in the latest version.

Usually, when I encounter an odd problem and I don’t have a minidump or process dump, I’ll go ahead and look for hotfixes by researching the components of whatever particular technology is involved.

Hopefully everyone will find this tool as useful as I do! Feel free to reach out in the comments section below if you have additional questions. Good luck!

-Chris Harrod
