Ask Premier Field Engineering (PFE) Platforms

10 Tips and Tricks from the Field


Hello All. The AskPFEPlat team is here today with you in force. Recently we put together 10 Tips and Tricks from the Field – a collection of tips and tricks in our tool belt that we use on occasion. We wanted to share these with all our readers in an effort to make your day a little easier. Certainly, this list of 10 will not cover everything. So, feel free to comment below if you have a great little trick to share with the community. Here is a list of everything in the article:

  1. Refreshing Computer Group Memberships without Reboots
  2. Why am I still seeing Kerberos PAC validation/verification when it's off!?
  3. Recent GPO Changes
  4. Network Captures from Command Line
  5. Steps Recorder
  6. Command Shell Tricks
  7. Active Directory Administrative Center
  8. RDCMan
  9. Policy Analyzer
  10. GPO Merge

In addition to this article, you should really read a recently published article by David Das Neves:

https://blogs.msdn.microsoft.com/daviddasneves/2017/10/15/some-tools-of-a-pfe/

So, let’s get to all of it.

 

  • Refreshing Computer Group Memberships without Reboots Using KLIST
    Submitted by Jacob Lavender & Graeme Bray

This is one of my favorite little items that can save a significant amount of time. Let's say that I just added a computer object in Active Directory to a new group. Now, before diving in, the account used must be able to act as part of the operating system. If you have a GPO that prevents this, it could cause a problem with this item.

Normally, how would you get the machine to update its group memberships and get the associated permissions? Reboot, right? Sometimes that just isn't going to work. Well, all we actually need to do is update the machine's Kerberos tickets. So, let's purge them and get new ones. Enter klist.

https://technet.microsoft.com/en-us/library/hh134826(v=ws.11).aspx

Here is a great little PowerShell sample script that Graeme wrote that can help you make short work of this as well – for local and remote machines:

https://gallery.technet.microsoft.com/Clear-Kerberos-Ticket-on-18764b63

Requirement: You must perform these tasks as an administrator.

Let’s begin by first identifying the accounts with sessions on the computer we are working with. The command necessary is:

Command:    Klist sessions

Each LogonId is divided into two sections, separated by a “:”. These two parts are referred to as:

  • High Part
  • Low Part

Example:    HighPart:LowPart

LAB5\LAB5WIN10$    0:0x3e7

So, for this task, we are going to utilize the Low Part of the LogonId to target the account that we plan to purge and renew tickets for.

Just for reference, domain joined machines obtain Kerberos tickets under two sessions, identified below along with the Low Part of the LogonId. These two accounts will always use the same Low Part LogonId. They should never change.

  • Local System (0x3e7)
  • Network Service (0x3e4)

We can use the following commands to view the cached tickets:

Local System Tickets:    Klist -li 0x3e7

Network Service Tickets:    Klist -li 0x3e4

Let’s purge the computer account tickets. As an example of when this might be necessary, I’ve seen this several times with Exchange Servers where the computer objects need to be added to a domain security group but we are not allowed to reboot the server during operational hours. I’ve also seen this several times when a server needs to request a certificate, however the certificate template is restricted to specific security groups.

To view the cached tickets of the computer account, we’ll use the following command. Take note of the time stamp:

Command:    Klist -li 0x3e7

Now, let's purge the machine tickets using the following command:

Command:    Klist purge -li 0x3e7

Let’s validate that the tickets have been purged using the first command:

Command:    Klist -li 0x3e7

Finally, let’s get a new ticket:

Command:    Gpupdate /force

Let’s now look at the machine tickets again using the first command:

Command:    Klist -li 0x3e7

What should stand out is that all the tickets prior to our purge were time stamped at 7:40:19. After purging the tickets and getting a new set, all the timestamps are now 7:46:09. Since the machine Kerberos tickets are how the domain joined resources determine which security groups the machine is a member of, it now has a ticket that will identify any updates. No reboot required.
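
Since Graeme's script linked above also handles remote machines, here is a minimal sketch of the same purge-and-refresh sequence run against a remote server with PowerShell (run as an administrator; the computer name Server01 is a placeholder):

Invoke-Command -ComputerName Server01 -ScriptBlock {
    klist purge -li 0x3e7    # purge the Local System tickets
    gpupdate /force          # trigger a new ticket request
    klist -li 0x3e7          # verify the new time stamps
}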

Note: Within the Platforms community, there are reported occasions where this may not work successfully. Those scenarios appear to be specific and limited. However, it's important to understand that this trick is not 100% reliable.

 

  • Why am I still seeing Kerberos PAC validation/verification when it's off!?
    Submitted by Brandon Wilson

Kerberos PAC verification is one of those items that is a blessing in that it adds additional security, but at the same time, it also adds additional overhead and can cause problems in some environments (namely, MaxConcurrentApi issues).

So, let’s cover one of the most basic items about PAC validation/verification, which is how to toggle it on or off (default is disabled/off on Windows Server 2008 and above). You can do that by going into regedit, browsing to:

HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters

Then we are going to set the value for ValidateKdcPacSignature to 0 (to disable) or 1 (to enable).
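
If you would rather script the change than click through regedit, the same value can be set from an elevated command prompt (0 disables, 1 enables, as above):

reg add "HKLM\SYSTEM\CurrentControlSet\Control\Lsa\Kerberos\Parameters" /v ValidateKdcPacSignature /t REG_DWORD /d 0 /f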

Pretty simple…

Now, where it tends to throw people off is understanding *when* this setting actually affects Kerberos PAC validation. It applies whenever anything is using an account with the "Act as part of the operating system" user right; in other words, a service/system account logon (think network service, local service, etc.). This right can also be stripped at launch time to limit the attack surface (Exchange 2013 and above does this, as an example), at which point you are effectively doing a batch logon, and we will still see PAC validations for batch logons, regardless of how the registry entry is configured.

A common area this is seen is on web servers, or more specifically, web servers that are clustered or load balanced. Due to the configuration necessary, IIS is using batch logons, and therefore we continue to get PAC validations.

This becomes important to know if you are troubleshooting slow or failed authentication issues that are related to IIS (or Exchange 2013 and above, as I referenced earlier), as it can be a contributor to authentication bottlenecks (MaxConcurrentApi) that lead to slow or failed authentication. 

For reference, take a look at these oldies but goodies:

Why! Won’t! PAC! Validation! Turn! Off! 

https://cloudblogs.microsoft.com/enterprisemobility/2008/09/29/why-wont-pac-validation-turn-off/ 

Understanding Microsoft Kerberos PAC Validation 

https://blogs.msdn.microsoft.com/openspecification/2009/04/24/understanding-microsoft-kerberos-pac-validation/ 

 

  • List Recently Modified GPOs
    Submitted by Tim Muessig

A common scenario that any system administrator might encounter is "it's broken, but nothing has changed." We've all been there, right? Well, a common trick that Tim suggested we include is a simple method to view the 10 most recently updated GPOs.

Get-GPO -all | Sort ModificationTime -Descending | Select -First 10 | FT DisplayName, ModificationTime

So, let's briefly walk through what this command does:

  • It will obtain all GPOs within the domain.
  • It will then sort those GPOs based on their modification time stamp and arrange them in descending order, effectively placing the newest at the top.
  • It will then select the first 10 of those GPOs.
  • Finally, it takes those 10 GPOs and places them in a table for your review, with their display name and modification time.

One of the greatest benefits of this simple little trick is that it is very flexible to meet your needs.
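
For example, a small variation returns every GPO modified within the last seven days instead of a fixed count:

Get-GPO -All | Where-Object { $_.ModificationTime -gt (Get-Date).AddDays(-7) } | Sort-Object ModificationTime -Descending | Format-Table DisplayName, ModificationTime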

 

  • Network Captures from Command Line
    Submitted by Elizabeth Greene

Two great options for conducting network captures from the command line include:

  • Command Line: NETSH TRACE
    • Windows 7+
  • PowerShell: NetEventSession
    • Windows 8+

Netsh trace start capture=yes tracefile=c:\temp\capturefile.etl report=no maxsize=500

Netsh trace stop

One great little addition is the persistent argument. This configures the capture to survive a reboot and capture network traffic while Windows is starting. Example:

Netsh trace start persistent=yes capture=yes tracefile=c:\temp\capturefile.etl report=no maxsize=500

Imagine that you're attempting to troubleshoot a slow logon. That might just be a great little command for capturing the network traffic to the domain in that case.

The trace files can be opened with Microsoft Message Analyzer. Message Analyzer can then convert the files to .cap files if you prefer to view them in Wireshark.
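
For the PowerShell NetEventSession option listed above, a minimal sketch looks something like this (run elevated; the session name and file path are placeholders):

New-NetEventSession -Name "Capture" -LocalFilePath "C:\temp\capture.etl" -MaxFileSize 500
Add-NetEventPacketCaptureProvider -SessionName "Capture"
Start-NetEventSession -Name "Capture"
# ...reproduce the issue...
Stop-NetEventSession -Name "Capture"
Remove-NetEventSession -Name "Capture"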

I’ve also recently published a tool that you are welcome to look at, along with some REALLY great reference material for further review on this topic.

Simple PowerShell Network Capture Tool (by Jacob Lavender):

https://blogs.technet.microsoft.com/askpfeplat/2017/12/04/simple-powershell-network-capture-tool/

Note: The update for a multi-computer network capture tool is well on the way. Some nice updates have already been made; with a few bugs left to work out, it'll be ready. Stay tuned on this one.

Using Wireshark to read the NETSH TRACE output ETL:

https://blogs.technet.microsoft.com/yongrhee/2013/08/16/so-you-want-to-use-wireshark-to-read-the-netsh-trace-output-etl/

Capture a Network Trace Without Installing Anything:

https://blogs.msdn.microsoft.com/canberrapfe/2012/03/30/capture-a-network-trace-without-installing-anything-capture-a-network-trace-of-a-reboot/

 

  • Steps Recorder

 

Hello everyone! Tim Beasley (PFE-Platforms) here to briefly discuss a handy dandy little tool known as Steps Recorder.

Officially, Microsoft says: Steps Recorder (called Problems Steps Recorder in Windows 7) is a program that helps you troubleshoot a problem on your device by recording the exact steps you took when the problem occurred. You can then send this record to a support professional to help them diagnose the problem.

Ahh…but that’s just the beginning! This nifty piece of software not only can help during troubleshooting and diagnostics…but it can help you build desperately needed documentation! Let me tell you, I bring this up at every customer site I visit, and you wouldn’t believe how well it’s received. And, most of the time people haven’t even heard of it! Hence the reason for adding it to the Top 10 Tricks and Tips post. 😊

So, let’s get to the meat of this shall we? Naturally, there’s a few ways to launch it. (It wouldn’t be a Microsoft product if there weren’t!)

  1. You can search for “Steps Recorder” in Windows.
  2. Start, Windows Accessories, Steps Recorder
  3. Run psr.exe

Each will launch this little nugget:


For diagnostics and troubleshooting, simply launch Steps Recorder on the machine in question, click "Start Record," and then reproduce the error. AKA, go through and click around to repeat the problem. Once done, hit "Stop Record" and it'll immediately bring up all the steps you took, including screenshots and descriptions of what you clicked on and how you clicked it, and give you the option to save it. What's also cool is that it includes a text version of everything you did at the bottom of the output. Simply save it (it'll be a zip file) and send it to whomever is running the diagnostics, and they'll have a comprehensive step-by-step guide on how to reproduce the problem along with screenshots!

Now, let’s take a step further. Your boss asks you to build out a new PKI environment (or any other IT project, but since I’m a PKI guy I had to throw that in here…heh). The project manager wants complete documentation of how everything was built. But, you hate writing technical docs, it’s so time consuming, pain in the rear to gather all the screenshots, (insert excuse here) … But enter Steps Recorder! On the server, simply start a recording session before you begin the deployment and all steps are recorded! At the end you’ll have a nice as-built document for each server you run it on!

***Pro Tip: Steps Recorder will record 25 screens by default. If you need more, you'll need to adjust the settings (the max is 99). Simply click the little down arrow next to the help button and go into settings. There you can choose where the output file is saved, what to capture, and adjust the screen capture count.

That’s it for this little addition to the Top 10 Tricks and Tips. Put it to use! You won’t regret it!

  • Command Shell Tricks

    Submitted by Michele Ferrari

  • Open an explorer window from your current location in a command window
    • “start .”
  • Open a command window from your current location in explorer
    • Type “cmd” or “powershell” in the address bar
  • Copy the output of a command (or any text) to the clipboard
    • “dir c:\windows\drivers | clip” for cmd.exe
    • "Get-ChildItem c:\windows\drivers | Set-Clipboard" in PowerShell
  • What is the path for a utility? – use the where command
    • “where notepad.exe”
  • What binary version is a specific file?
    • Get-ChildItem c:\windows\system32\ntdll.dll | Format-List VersionInfo
    • Extra trick, use “Format-List *” for lots of other interesting info (like LinkType)

 

  • Active Directory Administrative Center

    Submitted by Graeme Bray

Hi everyone! Graeme Bray here with a quick tip on a piece of software initially shipped with Windows Server 2008 R2 RSAT that no one uses. Active Directory Administrative Center (ADAC) looks and feels different from Active Directory Users and Computers, but it provides more functionality and allows us to manage newer technologies introduced in later operating systems (like Fine Grained Passwords), without having to do all the work via PowerShell.

Fine Grained Passwords you say? One of my favorite pieces of technology within Active Directory Functional Level 2008 was the addition of Fine Grained Passwords (FGPP). The problem is that there was no easy way to create these before Windows Server 2012 (or Windows 8). With the on-going updates to ADAC, we have now been provided the ability to modify and work with FGPP in a much easier way.

To open, type Active Directory Administrative Center (or dsac.exe for short).

On the left side of ADAC, click the "Tree" icon.

Expand your domain and then go to the System container.

Inside, you'll see the Password Settings Container.

If you click this, you can create as many password policies as desired.

Typically, I would recommend having a policy for highly privileged accounts (Domain Admins and equivalent) and one for service accounts; if you need a policy with fewer restrictions, you can also target specific accounts.

For more details on how to create a Fine Grained Password Policy, go here.
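
And if you do end up scripting it after all, a minimal sketch using the ActiveDirectory module might look like this; the policy name and settings are purely illustrative:

New-ADFineGrainedPasswordPolicy -Name "HighlyPrivileged" -Precedence 10 -MinPasswordLength 15 -ComplexityEnabled $true -MaxPasswordAge "90.00:00:00"
Add-ADFineGrainedPasswordPolicySubject -Identity "HighlyPrivileged" -Subjects "Domain Admins"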

But wait! That’s not all! What else can ADAC do? The other example that I use ADAC for is to demo how to create PowerShell without having to use your favorite search engine.

At the bottom of the ADAC window, there is a section called Windows PowerShell History. Create a user account, group, etc. Afterwards, "steal" the code and use it over and over again. No need to write it on your own.

Click the (^) button to expand the history, then see your results like below:

You can copy the cmdlets, customize them, and they should magically work.

There are other nifty features being added only to the Active Directory Administrative Center. Poke around and see what else you can find!

 

  • RDCMan
    Submitted by Nathan Penn

Hello all! Nathan Penn back again to share with you a few of my go-to tools. On a daily basis I need to Remote Desktop into multiple systems (domain controllers, member servers, clients). While the built-in remote desktop client (mstsc.exe) works, I can sometimes get lost on which system I am currently in a session with, especially when working in multiple full-screen sessions. Enter RDCMan! RDCMan is a wrapper for the remote desktop client and allows for a manageable tabular view from the side. It enables me to define multiple servers in a single console, separate them into groups, save logon credentials (at least the username), specify an RDS gateway if needed, and much more. When a session is active it turns blue, and the checkmark indicates the session you are currently interacting with. What a time saver!

RDCMan is available for download here – https://blogs.technet.microsoft.com/rmilne/2014/11/19/remote-desktop-connection-manager-download-rdcman-2-7/

Security Note: Not all organizations will allow the use of all features of this tool, specifically saving credentials in the tool. Make sure you check your organization’s security policy prior to doing anything like that.

 

  • Policy Analyzer
    Submitted by Nathan Penn

The next tool that I want to share with you is Policy Analyzer.

This tool is for those of us that work in group policy, and it provides a capability that we have sought for years: Policy Analyzer can compare multiple group policies for duplicate settings, differences, and even conflicts. Just to clarify the terminology: a difference is a setting that is configured in one policy and not the other(s), while a conflict is a setting configured in the compared policies that is set to differing values. With Policy Analyzer, you can quickly review a pending revision of a GPO to identify all the changes that will occur by updating the policy.

In addition to the interaction you have within the Policy Analyzer GUI, it also provides the capability to export the analysis to Excel. Many thanks to Aaron Margosis for creating this for us.

Policy Analyzer is available for download here: https://blogs.technet.microsoft.com/secguide/2016/01/22/new-tool-policy-analyzer/

 

  • GPO Merge

    Submitted by Nathan Penn

The final tool/trick is also for those that manage group policy. This one is a PowerShell script from fellow Microsoft PFE Ashley McGlone. Oftentimes, I want to consolidate two or more group policies into a single policy. As many of you know from experience, this can be a tedious, time-intensive effort that can sometimes be error prone. Usually it involves running a Gpresult, maybe printing it out, and a good bit of duplication of the original effort(s). Not anymore, thanks to the GPO-Merge script.

GPO-Merge allows me to create an OU and link the group policies I want consolidated into one. Make sure to establish the correct link order, because the script respects it and only carries forward the winning settings. Run the script pointing it to your target OU and… VOILA!!!! What would have taken countless hours before is now done in a couple of minutes.

A couple of notes just for awareness. GPO-Merge currently "can only migrate registry-based settings." Look at the warning details to see what other types of settings are included in the policy; those settings require manual copying.

It also does not migrate GPO Preferences.

With that said, this 95% solution is awesome, and when combined with the aforementioned Policy Analyzer, group policy administration just became much easier.

GPO-Merge is available for download here: https://blogs.technet.microsoft.com/ashleymcglone/2015/06/11/updated-copy-and-merge-group-policies-gpos-with-powershell/

 

 

  • Honorable Mentions
  1. Tools for Troubleshooting Slow Boots and Slow Logons:
    1. https://social.technet.microsoft.com/wiki/contents/articles/10128.tools-for-troubleshooting-slow-boots-and-slow-logons-sbsl.aspx
  2. Use PowerShell to Find the Location of a Locked-Out User:
    1. https://blogs.technet.microsoft.com/heyscriptingguy/2012/12/27/use-powershell-to-find-the-location-of-a-locked-out-user/
  3. PowerShell Tip: Run Local Functions Remotely in PowerShell: http://duffney.io/RunLocalFunctionsRemotely
  4. Clipboard Copy and Paste in vSphere Client: https://kb.vmware.com/s/article/1026437
  5. Setting the Failover Cluster Management Account in System Center Virtual Machine Manager:

    Contributed by Chuck Timon

Scenario: On-premises, System Center Virtual Machine Manager (SCVMM) is used to manage a private cloud environment. This environment includes managing one or more Hyper-V Failover Clusters hosting virtual machine workloads. When managing a Failover Cluster in SCVMM, it is best practice to use a single RunAsAccount that has local administrator privileges on each node in the cluster. This ensures reliable communications to all nodes in the cluster so that, for example, jobs executed in SCVMM against the cluster will complete successfully.

There are times when SCVMM administrators choose not to use a configured RunAsAccount to create the cluster, or to add new nodes to a cluster as part of a scale-out initiative. The result is that the RunAsAccount for Host Access is blank on one or more nodes in the cluster.

A properly configured cluster will reflect a single RunAsAccount being used throughout the cluster.

There are two ways to remedy the situation. You can use the GUI (SCVMM Console) or the SCVMM PowerShell module. The GUI method is not immediately obvious because you will note in the above screenshot (taken from an active node in a cluster), the ‘Browse’ button is greyed-out and cannot be used. However, if you access the ‘Properties’ of the cluster in the SCVMM console, you will see a selection called ‘File Share Storage.’

Clicking on that selection brings up the ‘File Share Storage’ information for the cluster. At the bottom of that information page is an area that can be used to add or modify the RunAsAccount for each node in the cluster (Browse button is ‘live’ in this context).

As shown in the above screenshot, I am using a RunAsAccount that I configured in SCVMM –

Note: The domain user account corresponding to the RunAsAccount configured in SCVMM must be a member of the local administrators group on each node of the cluster, and it should not be the SCVMM service account.

The management credential can also be changed across the cluster using the SCVMM PowerShell module. Here is an example –
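
A rough sketch of what that example might look like; note that Get-SCRunAsAccount is the VMM cmdlet for retrieving Run As accounts, while Set-SCVMHostCluster with a -VMHostManagementCredential parameter is my assumption of the relevant cmdlet surface, so verify it against your VMM version (the account and cluster names are placeholders):

$runAs = Get-SCRunAsAccount -Name "HostManagement"    # hypothetical Run As account name
Get-SCVMHostCluster -Name "HVCluster01" | Set-SCVMHostCluster -VMHostManagementCredential $runAs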


Troubleshooting Active Directory Based Activation (ADBA) clients that do not activate


Hello everyone! My name is Mike Kammer, and I have been a Platforms PFE with Microsoft for just over two years now. I recently helped a customer with deploying Windows Server 2016 in their environment. We took this opportunity to also migrate their activation methodology from a KMS Server to Active Directory Based Activation.

As proper procedure for making any changes, we started our migration in the customer's test environment. We began our deployment by following the instructions in this excellent blog post by Charity Shelbourne. The domain controllers in our test environment were all running Windows Server 2012 R2, so we did not need to prep our forest. We installed the role on a Windows Server 2012 R2 Domain Controller and chose Active Directory Based Activation as our Volume Activation Method. We installed our KMS key and gave it a name of KMS AD Activation (** LAB). We pretty much followed the blog post step by step.

We started by building four virtual machines, two Windows 2016 Standard and two Windows 2016 Datacenter. At this point everything was great, and everyone was happy. We built a physical server running Windows 2016 Standard, and the machine activated properly. And that’s where our story ends.

Ha Ha! Just kidding! Nothing is ever that easy. Truthfully, the setup and configuration were super easy, so that part was simple and straightforward. I came back into the office on Monday, and all the virtual machines I had built the week prior showed that they weren't activated. Hey! That's not right! I went back to the physical machine and it was fine. I went to the customer to discuss what had happened. Of course, the first question was "What changed over the weekend?" And as usual the answer was "nothing." This time, nothing really had been changed, and we had to figure out what was going on.

I went to one of my problem servers, opened a command prompt, and checked my output from the SLMGR /AO-LIST command. The AO-LIST switch displays all activation objects in Active Directory.



The results show that we have two Activation Objects: one for Server 2012 R2, and our newly created KMS AD Activation (** LAB), which is our Windows Server 2016 license. This confirms our Active Directory is correctly configured to activate Windows KMS clients.

Knowing that the SLMGR command is my friend for license activation, I continued with different options. I tried the /DLV switch, which displays detailed license information. This looked fine to me: I was running the Standard version of Windows Server 2016; there was an Activation ID, an Installation ID, a validation URL, even a partial product key.


Does anyone see what I missed at this point? We'll come back to it after my other troubleshooting steps, but suffice it to say the answer is in this screenshot.

My thinking now is that for some reason the key is borked, so I use the /UPK switch, which uninstalls the current key. While this was effective in removing the key, it is generally not the best way to do it. Should the server get rebooted before getting a new key, it may be left in a bad state. I found that using the /IPK switch (which I do later in my troubleshooting) overwrites the existing key and is a much safer route to take. Learn from my missteps!


I ran the /DLV switch again, to see the detailed license information. Unfortunately for me that didn’t give me any helpful information, just a product key not found error. Because, of course, there’s no key since I just uninstalled it!


I figured it was a longshot, but I tried the /ATO switch, which should activate Windows against the known KMS servers (or Active Directory as the case may be). Again, just a product not found error.


My next thought was that sometimes stopping and starting a service does the trick, so I tried that next. I need to stop and start the SPPSVC service, which is the Microsoft Software Protection Platform Service. From an administrative command prompt, I use the trusty NET STOP and NET START commands. I notice at first that the service isn’t running, so I think this must be it!


But no. After starting the service and attempting to activate Windows again, I still get the product not found error.

I then looked at the Application Event Log on one of the trouble servers. I find an error related to License Activation, Event ID 8198, with a code of 0x8007007B.


While looking up this code, I found this article which says my error code means the file name, directory name, or volume label syntax is incorrect. Reading through the methods described in the article, it didn’t seem that any of them fit my situation. When I ran the NSLOOKUP -type=all _vlmcs._tcp command, I found the existing KMS server (still lots of Windows 7 and Server 2008 machines in the environment, so it was necessary to keep it around), but also the five domain controllers as well. This indicated that it was not a DNS problem and my issues were elsewhere.


So I know DNS is fine. Active Directory is properly configured as a KMS activation source. My physical server has been activated properly. Could this be an issue with just VMs? As an interesting side note at this point, my customer informs me that someone in a different department has decided to build more than a dozen virtual Windows Server 2016 machines as well. So now I assume I’ve got another dozen servers to deal with that won’t be activating. But no! Those servers activated just fine.

Well, I headed back to my SLMGR command to figure out how to get these monsters activated. This time I’m going to use the /IPK switch, which will allow me to install a product key. I went to this site to get the appropriate keys for my Standard version of Windows Server 2016. Some of my servers are Datacenter, but I need to fix this one first.


I used the /IPK switch to install a product key, choosing the Windows Server 2016 Standard key.


From here on out I only captured results from my Datacenter experiences, but they were the same. I used the /ATO switch to force the activation. We get the awesome message that the product has been activated successfully!


Using the /DLV switch again we can see that now we have been activated by Active Directory.
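
To recap, the fix on each affected server boiled down to the following sequence: check the channel with /dlv (RETAIL vs. VOLUME_KMSCLIENT), overwrite the retail key with the generic KMS client setup key for your edition (shown here as a placeholder), activate, and confirm:

slmgr /dlv
slmgr /ipk XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
slmgr /ato
slmgr /dlv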


Now, what had gone wrong? Why did I have to remove the installed key and add those generic keys to get these machines to activate properly? Why did the other dozen or so machines activate with no issues? As I said earlier, I missed something key in the initial stages of looking at the issue. I was thoroughly confused, so I reached out to Charity from the initial blog post to see if she could help me. She saw the problem right away and helped me understand what I had missed early on.

When I ran the first /DLV switch, the answer was right there in the description. The description was Windows® Operating System, RETAIL Channel. I had looked at that and thought that RETAIL Channel meant the key had been purchased and was valid.


When we look at the output of the /DLV switch from a properly activated server, notice the description now states VOLUME_KMSCLIENT channel. This lets us know that it is indeed a volume license.


So what does that RETAIL channel mean then? Well, it means the media that was used to install the operating system was an MSDN ISO. I went back to my customer and asked if, by some chance, there was a second Windows Server 2016 ISO floating around the network. Turns out that yes, there was another ISO on the network, and it had been used to create the other dozen machines. They compared the two ISOs and sure enough the one that was given to me to build the virtual servers was, in fact, an MSDN ISO. They removed that MSDN ISO from their network and now we have all our existing servers activated and no more worries about the activation failing on future builds.

I hope this has been helpful and may save you some time going forward!

Mike

Infrastructure + Security: Noteworthy News (March, 2018)


Hi there! Stanislav Belov is back to bring you the next issue of the Infrastructure + Security: Noteworthy News series!  

As a reminder, the Noteworthy News series covers various areas, to include interesting news, announcements, links, tips and tricks from Windows, Azure, and Security worlds on a monthly basis. Enjoy! 

Microsoft Azure
Just-in-Time VM Access is generally available
Azure Security Center provides several threat prevention mechanisms to help you reduce surface areas susceptible to attack. One of those mechanisms is Just-in-Time (JIT) VM Access. We are excited to announce the general availability of Just-in-Time VM Access, which reduces your exposure to network volumetric attacks by enabling you to deny persistent access while providing controlled access to VMs when needed.
What’s new in IaaS?
With the pace of innovation in the Cloud, it's hard to keep up with what's new across the entire Microsoft Azure platform. Let's pause and take a moment to see what's new—and coming soon—specifically with Azure Infrastructure as a Service (IaaS).
Announcing Storage Service Encryption with customer managed keys general availability
Storage Service Encryption with customer managed keys uses Azure Key Vault that provides highly available and scalable secure storage for RSA cryptographic keys backed by FIPS 140-2 Level 2 validated Hardware Security Modules (HSMs). Key Vault streamlines the key management process and enables customers to maintain full control of keys used to encrypt data, manage, and audit their key usage.
Azure’s layered approach to physical security
Over the next few months, as part of the secure foundation blog series, we’ll discuss the components of physical, infrastructure (logical) and operational security that help make up Azure’s platform. Today, we are focusing on physical security.
Best practices for securely moving workloads to Microsoft Azure
Azure is Microsoft’s cloud computing environment. It offers customers three primary service delivery models including infrastructure as a service (IaaS), platform as a service (PaaS), and software as a service (SaaS). Adopting cloud technologies requires a shared responsibility model for security, with Microsoft responsible for certain controls and the customer others, depending on the service delivery model chosen. To ensure that a customer’s cloud workloads are protected, it is important that they carefully consider and implement the appropriate architecture and enable the right set of configuration settings.
What is Azure Stack?
Microsoft Azure Stack is a hybrid cloud platform that lets you deliver Azure services from your organization’s datacenter. Azure Stack is designed to enable new scenarios for your modern applications in key scenarios, like edge and disconnected environments, or meeting specific security and compliance requirements. Azure Stack is offered in two deployment options to meet your needs.
Windows Server
Introducing SQL Information Protection for Azure SQL Database and on-premises SQL Server!

We are delighted to announce the public preview of SQL Information Protection, introducing advanced capabilities built into Azure SQL Database for discovering, classifying, labeling, and protecting the sensitive data in your databases. Similar capabilities are also being introduced for on-premises SQL Server via SQL Server Management Studio.

PKI Basics: How to Manage the Certificate Store

In this blog post we cover some PKI basics, techniques to effectively manage certificate stores, and a script we developed to deal with a common certificate store issue we have encountered in several enterprise environments (certificate truncation due to too many installed certificate authorities).

Windows Client
Windows 10 in S Mode coming soon to all editions of Windows 10

Last year we introduced Windows 10 S – an effort to provide a Windows experience that delivers predictable performance and quality through Microsoft-verified apps via the Microsoft Store. This configuration was offered initially as part of the Surface Laptop and has been adopted by our customers and partners for its performance and reliability.

Announcing Windows 10 Insider Preview Build 17120
On March 14th we released Windows 10 Insider Preview Build 17120 (RS4) to Windows Insiders in the Fast ring.
Security
Securing privileged access for hybrid and cloud deployments in Azure AD

We recently published new documentation that provides details on securing privileged access for hybrid and cloud deployments in Azure AD. This document outlines recommended account configurations and practices for ensuring privileged accounts, like global admins, are operated securely. It starts with essential recommendations to be applied immediately and goes on to establish a proactive admin model in the following weeks and months.

Invisible resource thieves: The increasing threat of cryptocurrency miners
The surge in Bitcoin prices has driven widescale interest in cryptocurrencies. While the future of digital currencies is uncertain, they are shaking up the cybersecurity landscape as they continue to influence the intent and nature of attacks.
What is Azure Advanced Threat Protection?
Azure Advanced Threat Protection (ATP) is a cloud service that helps protect your enterprise hybrid environments from multiple types of advanced targeted cyber attacks and insider threats. Azure ATP leverages a proprietary network parsing engine to capture and parse network traffic of multiple protocols (such as Kerberos, DNS, RPC, NTLM, and others) for authentication, authorization, and information gathering.
Azure AD and ADFS best practices: Defending against password spray attacks
As long as we’ve had passwords, people have tried to guess them. In this blog, we’re going to talk about a common attack which has become MUCH more frequent recently and some best practices for defending against it. This attack is commonly called password spray. In a password spray attack, the bad guys try the most common passwords across many different accounts and services to gain access to any password protected assets they can find.
Behavior monitoring combined with machine learning spoils a massive Dofoil coin mining campaign
Just before noon on March 6 (PST), Windows Defender Antivirus blocked more than 80,000 instances of several sophisticated trojans that exhibited advanced cross-process injection techniques, persistence mechanisms, and evasion methods. Behavior-based signals coupled with cloud-powered machine learning models uncovered this new wave of infection attempts. The trojans, which are new variants of Dofoil (also known as Smoke Loader), carry a coin miner payload. Within the next 12 hours, more than 400,000 instances were recorded, 73% of which were in Russia. Turkey accounted for 18% and Ukraine 4% of the global encounters.
Using Protected Groups to Secure Privileged User Accounts
The blog post outlines the benefits, requirements, actions, and impact of using the Protected Users group.
Vulnerabilities and Updates
Update on Spectre and Meltdown security updates for Windows devices
Microsoft continues to work diligently with our industry partners to address the Spectre and Meltdown hardware-based vulnerabilities. Our top priority is clear: Help protect the safety and security of our customers’ devices and data. We’d like to provide an update on some of that work, including Windows security update availability for additional devices, our role in helping distribute available Intel firmware (microcode), and progress driving anti-virus compatibility.
Support Lifecycle
Windows 10, version 1607 Semi-Annual Channel end of support
This will occur on April 10, 2018. This means that Windows 10, version 1607 Semi-Annual Channel will no longer receive security updates and customers who contact Microsoft Support after the April update will be directed to update to the latest version of Windows 10 to remain supported.
Microsoft Premier Support News
Check out Microsoft Services public blog for new Proactive Services as well as new features and capabilities of the Services Hub, On-demand Assessments, and On-demand Learning platforms.

Rescued by Procmon: The Case of the Certificate Authority Unable to Issue Certificates due to Revocation Failures


Hello Everyone, my name is Zoheb Shaikh and I'm a Premier Field Engineer with Microsoft India. I'm back again with another blog, and today I'll share something interesting I came across recently that caused a Certificate Authority to go down, and how I was able to isolate the issue by using Process Monitor (Procmon). (https://docs.microsoft.com/en-us/sysinternals/downloads/procmon)

Before I discuss the issue, I would like to briefly share a bit of background on CDP & AIA extensions and their use.

I could try to explain what the AIA and CDP are and how to configure them, but here are a couple of short articles on them and on how revocation works.

https://docs.microsoft.com/en-us/windows-server/networking/core-network-guide/cncg/server-certs/configure-the-cdp-and-aia-extensions-on-ca1

https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/ee619730(v=ws.10)

AIA and CDP extensions are very important for certificate validation. The Authority Information Access (AIA) repository hosts CA certificates. This location is "stamped" in the Authority Information Access extension of issued certificates. A client that is validating a certificate may not have every CA certificate in the chain. The client needs to build the entire chain to verify that the chain terminates in a self-signed certificate that is trusted (Trusted Root).

CDP extensions host the CRLs that the CA publishes. The CRL Distribution Points extension is "stamped" in certificates. Clients use this location to download CRLs that the CA publishes. When a client is validating a certificate, it will build the chain to a Root CA. If the Root CA is trusted, this means the certificate is acceptable for use. However, for applications that require revocation checking, the client must also validate that every certificate in the chain (with the exception of the Root) is not revoked.

Coming back to the customer scenario: they had a two-tier CA hierarchy with an Offline Root CA and an Enterprise Subordinate CA, both running 2012 R2, and an IIS server hosting the CDP/AIA extensions of the Root CA (as shown in the diagram below):


Problem Symptom: When the customer was trying to enroll or issue any certificates, he was getting the following error:

Unable to renew or enroll certificates; the error returned was: The revocation function was unable to check revocation because the revocation server was offline. 0x80092013 (-2146885613 CRYPT_E_REVOCATION_OFFLINE)

The first thing we did was to export a certificate in .cer format and run the command “certutil -verify -urlfetch” against the certificate. As a result, we got the error:

Error retrieving URL: A connection with the server could not be established 0x80072efd (INet: 12029 ERROR_INTERNET_CANNOT_CONNECT)


http://fabricam-ca1.corp.fabrikam.com/vd/Fabricam_Group-CA.crt

We got this error for both CDP and AIA extensions.

When we tried to manually browse to these extensions in Internet Explorer, we were able to access them, but from the command line (i.e., certutil -verify -urlfetch) it always failed.

ROADBLOCK!!

We ran the same command (certutil -verify -urlfetch) against public certificates and observed similar behavior. And again, we could successfully browse to their CDP & AIA extensions from Internet Explorer.

Upon further checking, we found this behavior was occurring for about 20% of the users.

We checked if there were any proxy settings in IE and found none. CAPI2 logging further confirmed that there were issues with certificate revocation checking for both internal and public CAs.

Since we were stuck, we decided to collect a Procmon log with a simultaneous network trace, while again running "certutil -verify -urlfetch."

We saw the following in PROCMON:

11:48:25.9643758 PM certutil.exe 2348 TCP Reconnect Fabricam-ca1.corp.fabricam.com: 51188->210.99.197.47:8080 SUCCESS Length: 0, seqnum: 0, connid: 0

We also saw multiple reconnects

Operation > TCP Reconnect > Path > Fabricam-ca1.corp.fabricam.com:51188 -> 210.99.197.47:8080

Under Process Path > C:\Windows\system32\certutil.exe >> Command Line > certutil -verify -urlfetch subcacert1.cer

In Network Monitor (Netmon), we observed the following:

676 12:31:35 AM 6/17/2015 9.7827898 0 certutil.exe 10.10.60.47 Some Public IP TCP TCP:Flags=CE….S., SrcPort=52443, DstPort=HTTP Alternate(8080), PayloadLen=0, Seq=1424697589, Ack=0, Win=8192 ( Negotiating scale factor 0x8 ) = 8192 {TCP:125, IPv4:3}

815 12:31:38 AM 6/17/2015 12.7970106 0 certutil.exe 10.10.60.47 Some Public IP TCP TCP:[SynReTransmit #676]Flags=CE….S., SrcPort=52443, DstPort=HTTP Alternate(8080), PayloadLen=0, Seq=1424697589, Ack=0, Win=8192 ( Negotiating scale factor 0x8 ) = 8192 {TCP:125, IPv4:3}

So the requests from the Sub CA were not getting a response from an external IP. Further analyzing the Procmon showed us the following:

11:48:22.9158915 PM certutil.exe 2348 RegQueryValue HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\Connections\WinHttpSettings SUCCESS Type: REG_BINARY, Length: 45, Data: 28 00 00 00 00 00 00 00 03 00 00 00 19 00 00 00

11:48:22.9159009 PM certutil.exe 2348 RegQueryValue HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\Connections\WinHttpSettings SUCCESS Type: REG_BINARY, Length: 45, Data: 28 00 00 00 00 00 00 00 03 00 00 00 19 00 00 00

11:48:22.9174322 PM certutil.exe 2348 RegQueryValue HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\Connections\WinHttpSettings SUCCESS Type: REG_BINARY, Length: 45, Data: 28 00 00 00 00 00 00 00 03 00 00 00 19 00 00 00

We got the above for the path C:\Windows\system32\certutil.exe >> Command line certutil -verify -urlfetch subcacert1.cer.

We found that they had previously set proxy settings using Group Policy Preferences, which got tattooed in the registry even though they were no longer reflected in Internet Explorer.

Thus, we deleted the registry value HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Internet Settings\Connections\WinHttpSettings and then confirmed that the revocation check worked fine for external and internal websites.
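
For reference, the machine-wide WinHTTP proxy can also be inspected and reset from an elevated command prompt, which accomplishes the same cleanup without editing the registry directly:

netsh winhttp show proxy
netsh winhttp reset proxy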

Lesson learned from this post? “WHEN IN DOUBT, USE PROCMON”

Hope this helps,

Zoheb

Windows Subsystem for Linux and BASH Shell (2018 Update)


Hello Everyone! Allen Sudbring here again, PFE in the Central Region, with an update to a blog post that I did on the Windows Subsystem for Linux and Bash on Ubuntu, found here: https://blogs.technet.microsoft.com/askpfeplat/2016/05/02/installing-bash-on-ubuntu-on-windows-10-insider-preview/

It's been a while since I posted on this topic, and I wanted to update everyone on the exciting new options for the Windows Subsystem for Linux and the different Linux distributions that are now available in the Windows Store for download.

First, a little history. Back before the Windows 10 Anniversary Update, we introduced the Windows Subsystem for Linux in the Windows Insider Preview. It was a new feature that allowed users to install a full Linux Bash shell in Windows. Introducing this feature made an all-in-one administration/developer workstation a reality. The need to run a Linux VM to access Linux tools, or the other workarounds that have been used throughout the years to port Linux tools to Windows, was no longer there.

The original install did not offer multiple Linux distributions, nor the ability to choose those distributions from the Windows Store.

Instead of re-inventing the wheel, I'll point to docs.microsoft.com, which has a great article on how to install the Windows Subsystem for Linux on Windows 10, as well as the exciting news of the ability to install WSL on Windows Server starting with version 1709.
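
For reference, on Windows 10 the feature itself is enabled with a single elevated PowerShell command (see the installation guides below for the full procedure), followed by a reboot and installing a distribution from the Store:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux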

Windows Subsystem for Linux Documentation

From <https://docs.microsoft.com/en-us/windows/wsl/about>

Windows 10 Installation Guide

From <https://docs.microsoft.com/en-us/windows/wsl/install-win10>

Windows Server Installation Guide

From <https://docs.microsoft.com/en-us/windows/wsl/install-on-server>

I encourage everyone to check out this new feature, especially if you manage Linux and Windows Server or do cross-platform development!!

Nano Server 2018 Update


Hello again! Allen Sudbring here, PFE in the Central Region, bringing you some news on Nano Server and an update to the blog post I did back in the Windows Server Technical Preview time frame on configuring and deploying Nano Server, located here: https://blogs.technet.microsoft.com/askpfeplat/2016/04/26/deploying-and-configuring-windows-nano-server-2016-tp4/. That massive blog post covered everything from management to deployment, both as a virtual machine on Hyper-V and VMware and on bare metal hardware.

A lot has changed since my initial post. Windows Server 2016 went GA, and we announced a new Semi-Annual Channel for Windows Server, whose first release was Windows Server, version 1709. With this new version also comes a major change to Nano Server. Going forward, Nano Server will no longer be a host operating system available for a virtual machine or for physical hardware.

Nano Server will be a container image only, for use with Windows Containers and Docker. Containers are a technology, similar to virtualization, that packages an application for portability and resilience.

We originally announced these changes in a blog post in June, 2017, and have formally announced these changes at the following links:

Changes to Nano Server in Windows Server Semi-Annual Channel

From <https://docs.microsoft.com/en-us/windows-server/get-started/nano-in-semi-annual-channel>

More info on Windows Containers and Docker:

Windows Containers

From <https://docs.microsoft.com/en-us/virtualization/windowscontainers/about/>

As a result of Nano being optimized for containers, the image that is used is much smaller than the version that shipped with Windows Server 2016, as you can see from the screenshot below:

The 2016 version (tagged latest) is over 1 GB; the 1709 version is 312 MB.
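
You can verify this yourself by pulling both tags and comparing sizes; this sketch assumes the image names as published on Docker Hub at the time:

docker pull microsoft/nanoserver:latest
docker pull microsoft/nanoserver:1709
docker images microsoft/nanoserver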

I encourage everyone to check out Windows Containers as well as our new guidance on the Semi-Annual Channel for Windows and how this affects Nano Server:

Windows Server Semi-Annual Channel overview

From <https://docs.microsoft.com/en-us/windows-server/get-started/semi-annual-channel-overview>

Making Sense of Replication Schedules in PowerShell


Hi all! Jan-Hendrik Peters, PFE, here to talk to you today about using PowerShell to view replication schedules. Automation with PowerShell is a part of our daily life. However, there are certain things that are for some reason not achievable with PowerShell out of the box. One of them is getting useful output for replication schedules in Active Directory.

We all know this problem. You are enjoying PowerShell and its great features and want to automate everything. Sooner or later, you are encountering issues with the default output format and resort to the way things used to be: Using a GUI.

If you are interested in finding out about scripting, read on. If you are pressed for time: https://gist.github.com/nyanhp/d9a1b591b5a69e300f640d53a02e0b44

To test what I did, I always make use of AutomatedLab. AutomatedLab is an open-source project I am contributing to that can set up lab environments on Hyper-V, Azure, and VMware for you. You can find the script I used for my lab there as well: https://github.com/AutomatedLab/AutomatedLab/blob/master/LabSources/SampleScripts/Workshops/PowerShell%20Lab%20-%20HyperV.ps1

All sample scripts are part of the module, which you can install from GitHub or from the PowerShell Gallery by using Install-Module AutomatedLab.

One of my colleagues recently came to me with an issue his customer was facing. They simply wanted to get the replication schedule for their sites. While this sounds like a very easy task, the outcome was not what they desired.

This does not look right. What is an ActiveDirectorySchedule, and how can I use it? We wanted something like this:

To avoid navigating to Sites and Services, finding the right schedule, and viewing it in a nice calendar, I will show you step by step how to get from unusable data to nicely formatted data. Along the way we will also learn how to properly create a PowerShell function.

This blog post will show you how to make sense of normally unusable output and teach you PowerShell function design.

What are we dealing with?

The first, crucial point when dealing with these disappointments is finding out what we are up against. So, I would like to elaborate a little on an underrated tool that we all have access to: the cmdlet Get-Member.

We all know that PowerShell is an object-oriented shell that is built on the .NET Framework. Being object-oriented means that we are dealing with classes that define what the objects, or instances of a class, look like. Get-Member harnesses this power and can show you all the .NET parts of the objects (i.e., the output of cmdlets), like properties and methods.

Properties are readable and, in many cases, writeable properties of an object that contain additional information. These additional pieces of information are of course also objects with more properties and methods.

Methods are pieces of code that can be executed to achieve certain results and may use the object’s properties to do so.

What does this look like with our little cmdlet?
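
Using the Toronto site that appears later in this post, the call might look like this:

Get-ADReplicationSite -Identity Toronto -Properties ReplicationSchedule | Get-Member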

As you can see, there are methods and properties on our site object. The property we are most interested in for this example is called ReplicationSchedule.

Hmm. So our ReplicationSchedule is indeed a more complex object. We can use simple data types like datetime, timespan, int, string, bool, array, and hashtable without issues. However, when it comes to more complex data types, we must apply a little more elbow grease.

To make matters worse, there is no method or property to simply get the schedule in a readable format. The output of Get-Member revealed a property called RawSchedule, which sounds promising. Using this, we hit another brick wall:

Our property RawSchedule has the datatype bool[,,] – a three-dimensional array. Wow. This is where we need the online documentation. A quick search on MSDN for “System.DirectoryServices.ActiveDirectory.ActiveDirectorySchedule” reveals the documentation of the underlying .NET class. Luckily RawSchedule is well documented there at least.

Our array is encoded so that the first index refers to the number of the weekday, the second index to the hour (0 – 23), and the third index to the 15-minute interval that the replication is active in. So how does that help?
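
In other words, a single Boolean lookup tells you whether replication is enabled in a given slot. For example, assuming $schedule holds the ReplicationSchedule object from above:

$schedule = (Get-ADReplicationSite -Identity Toronto -Properties ReplicationSchedule).ReplicationSchedule
$schedule.RawSchedule[1, 8, 0]    # day 1 (Monday), hour 8, first 15-minute slot: is replication active 08:00-08:15?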

Adapting

In our script we now must find a way to tie the Boolean entries to a tuple of weekday, hour and interval. The first idea that comes to mind is using loops to get to the desired result.

The first thing we need is our weekday indices. These are quite easy to come by. Remember me raving about Get-Member? Let’s pipe Get-Date to Get-Member. In the output you can see a property called DayOfWeek. Displaying this property returns a string – not what we need, right?

Wrong. While we certainly do not need a string, the object type is called DayOfWeek. DayOfWeek represents something that developers know as enumeration. It is simply a zero-based list of entries.

To see all available values of an enumeration we can use a so-called static method. Why static? Because this method does not need an actual object to perform its task. The .NET class Enum possesses the static method GetValues, that lists all values for an enumeration.

[System.Enum]::GetValues([System.DayOfWeek])

Casting the integers from 0 to 6 to a DayOfWeek quickly shows: 0..6 | Foreach-Object {[System.DayOfWeek]$_}

The hours are far easier, as they range from 0 to 23 in our zero-based three-dimensional array.

From these bits and pieces, we can finally cobble together a couple of loops that iterate over the array and return the Boolean values for each time slot.

Reusability

By now, we have rather unstructured code that is not easily reusable. I cannot tell you how many variations on the theme of ‘Set ACL on a folder’ or ‘Get something from AD’ I have seen up until now. In many companies there still is no centralized code repository that everyone can use, and most people are happily reinventing the wheel day-in, day-out.

In PowerShell, reusability is achieved by defining functions and by placing those functions in modules. Those modules can then be used by anyone and not only a select few.

Placing our code in a function is not that exciting. The keyword function and a script block would be enough. We could use $site as a parameter and be a bit more flexible. As you might recall, however, using Write-Host is evil (http://www.jsnover.com/blog/2013/12/07/write-host-considered-harmful/). We would much rather use proper objects that we can then format at our leisure, export to CSV, and so on.

Since we are building something good here, why not add some proper validation and pipeline support as well? While this sounds like a daunting task it is rather simple.

Pipeline input can bind entire objects passing through the pipeline, or only certain property values. In our case we will start with property values.

[CmdletBinding()]
param
(
    [Parameter(ValueFromPipelineByPropertyName = $true, Mandatory = $true)]
    [System.DirectoryServices.ActiveDirectory.ActiveDirectorySchedule]$ReplicationSchedule,

    [Parameter(ValueFromPipelineByPropertyName = $true, Mandatory = $true)]
    [string]$DistinguishedName
)

By using the ReplicationSchedule as the parameter name and setting this parameter up for pipeline input by property name, we can now simply pipe our culprit from the beginning of this post in its entirety to our new cmdlet, Get-ADReplicationSchedule.

Get-ADReplicationSiteLink -Identity "Munich – Abidjan" -Properties ReplicationSchedule | Get-ADReplicationSchedule

Get-ADReplicationSite -Identity Toronto | Get-ADReplicationSchedule

The full function then looks like this:
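
The original script lives in the gist linked near the top of this post; as a minimal sketch of the shape of the function, assuming the RawSchedule encoding described above, it could look like this:

function Get-ADReplicationSchedule
{
    [CmdletBinding()]
    param
    (
        [Parameter(ValueFromPipelineByPropertyName = $true, Mandatory = $true)]
        [System.DirectoryServices.ActiveDirectory.ActiveDirectorySchedule]$ReplicationSchedule,

        [Parameter(ValueFromPipelineByPropertyName = $true, Mandatory = $true)]
        [string]$DistinguishedName
    )

    process
    {
        # Walk every weekday/hour/15-minute slot of the three-dimensional RawSchedule array
        foreach ($day in [System.Enum]::GetValues([System.DayOfWeek]))
        {
            foreach ($hour in 0..23)
            {
                foreach ($interval in 0..3)
                {
                    # Emit a proper object per slot instead of Write-Host output
                    [PSCustomObject]@{
                        DistinguishedName = $DistinguishedName
                        Day               = $day
                        Hour              = $hour
                        Minute            = $interval * 15
                        ReplicationActive = $ReplicationSchedule.RawSchedule[[int]$day, $hour, $interval]
                    }
                }
            }
        }
    }
}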

You might notice a couple of changes in the final code as well. For instance, I replaced the Write-Host statements entirely and am now simply creating objects for each site or site link that is piped to the cmdlet. In the script, the proper data types are used as well.

The resulting cmdlet is very flexible:

Filter it with Where-Object, format it with Format-List and Format-Table, view it in a GridView, whatever you fancy. Write-Host never generates usable output, which is something you should keep in mind at all times.

This concludes my first post on ASKPFEPLAT. I hope you have learned something new today and even if not: Enjoy the little script and make it your own!

Infrastructure + Security: Noteworthy News (April, 2018)


Hi there! Stanislav Belov is here with the next issue of the Infrastructure + Security: Noteworthy News series!  

As a reminder, the Noteworthy News series covers various areas, to include interesting news, announcements, links, tips and tricks from Windows, Azure, and Security worlds on a monthly basis. Enjoy! 

Microsoft Azure
Application Security Groups now generally available in all Azure regions
ASGs enable you to define fine-grained network security policies based on workloads, centered on applications, instead of explicit IP addresses. They provide the capability to group VMs with monikers and to secure applications by filtering traffic from trusted segments of your network.
Azure Availability Zones in select regions
Availability Zones are physically separate locations within an Azure region. Each Availability Zone consists of one or more datacenters equipped with independent power, cooling, and networking. With the introduction of Availability Zones, we now offer a service-level agreement (SLA) of 99.99% for uptime of virtual machines. Availability Zones are generally available in select regions.
Introducing Microsoft Azure Sphere: Secure and power the intelligent edge
Microsoft Azure Sphere is a new solution for creating highly-secured, Internet-connected microcontroller (MCU) devices. Azure Sphere includes three components that work together to protect and power devices at the intelligent edge.
Azure DDoS Protection for virtual networks generally available
Distributed Denial of Service (DDoS) attacks are intended to disrupt a service by exhausting its resources (e.g., bandwidth, memory). DDoS attacks are one of the top availability and security concerns voiced by customers moving their applications to the cloud. With extortion and hacktivism being the common motivations behind DDoS attacks, they have been consistently increasing in type, scale, and frequency of occurrence as they are relatively easy and cheap to launch.
Windows Server
Use performance counters to diagnose app performance problems on Remote Desktop Session Hosts

One of the most difficult problems to diagnose is poor application performance – the applications are running slowly or don't respond. Traditionally, you start your diagnosis by collecting CPU, memory, disk input/output, and other metrics and then use tools like Windows Performance Analyzer to try to figure out what's causing the problem. Unfortunately, in most situations this data doesn't help you identify the root cause, because resource consumption counters have frequent and large variations. This makes it hard to read the data and correlate it with the reported issue.

Announcing Windows Admin Center: Our reimagined management experience

If you’re an IT administrator managing Windows Server and Windows, you probably open dozens of consoles for day-to-day activities, such as Event Viewer, Device Manager, Disk Management, Task Manager, Server Manager – the list goes on and on. Windows Admin Center brings many of these consoles together in a modernized, simplified, integrated, and secure remote management experience.

Windows Client
Update Windows 10 in enterprise deployments
Windows as a service provides a new way to think about building, deploying, and servicing the Windows operating system. The Windows as a service model is focused on continually providing new capabilities and updates while maintaining a high level of hardware and software compatibility. Deploying new versions of Windows is simpler than ever before: Microsoft releases new features two to three times per year rather than the traditional upgrade cycle where new features are only made available every few years. Ultimately, this model replaces the need for traditional Windows deployment projects, which can be disruptive and costly, and spreads the required effort out into a continuous updating process, reducing the overall effort required to maintain Windows 10 devices in your environment. In addition, with the Windows 10 operating system, organizations have the chance to try out “flighted” builds of Windows as Microsoft develops them, gaining insight into new features and the ability to provide continual feedback about them.
Security
Introducing Windows Defender System Guard runtime attestation
With the next update to Windows 10, we are implementing the first phase of Windows Defender System Guard runtime attestation, laying the groundwork for future innovation in this area. This includes developing new OS features to support efforts to move towards a future where violations of security promises are observable and effectively communicated in the event of a full system compromise, such as through a kernel-level exploit.
Conditional Access | Scenarios for Success (1 of 4)
Conditional Access is quickly becoming one of the most popular features our customers want to implement- it allows you to secure your corporate resources (such as Office 365) with quick and simple policies. We have identified several common scenarios that customers implement using conditional access. These scenarios secure your environment from different angles, enabling more holistic coverage. These are by no means the only policies that you can or should implement, but we have found them to be successful in addressing the most common customer scenarios we see.
New capabilities of Windows Defender ATP further maximizing the effectiveness and robustness of endpoint security
Our mission is to empower every person and every organization on the planet to achieve more. A trusted and secure computing environment is a critical component of our approach. When we introduced Windows Defender Advanced Threat Protection (ATP) more than two years ago, our target was to leverage the power of the cloud, built-in Windows security capabilities and artificial intelligence (AI) to enable our customers’ to stay one step ahead of the cyber-challenges. With the next update to Windows 10, we are further expanding Windows Defender ATP to provide richer capabilities for businesses to improve their security posture and solve security incidents more quickly and efficiently.
Incident Management Implementation Guidance for Azure and Office365
This document helps customers to understand how to implement Incident Management for their deployments of Microsoft Azure and Microsoft Office 365.
Secure Your Office 365 Tenant – By Attacking It
The Office 365 Attack Simulator is LIVE! Probe your environment before attackers do. Part 1, Part 2
Secure your backups, not just your data!
In today’s digital world where data is the new currency, protecting this data has become more important than ever before. In 2017, attackers had a huge impact on businesses as we saw a large outbreak of ransomware attacks like WannaCry, Petya and Locky. According to a report from MalwareBytes, ransomware detections were up 90 and 93 percent for businesses and consumers respectively in 2017. When a machine gets attacked by ransomware, backups are usually the last line of defense that customers resort to.
Why Windows Defender Antivirus is the most deployed in the enterprise
Currently, our antivirus capabilities on Windows 10 are repeatedly earning top scores on independent tests, often outperforming the competition. This performance is the result of a complete redesign of our security solution. What’s more, this same technology is available for our Windows 7 customers as well, so that they can remain secure during their transition to Windows 10.
Microsoft Security Intelligence Report volume 23 is now available
As security incidents and events keep making headlines, Microsoft is committed to helping our customers and the rest of the security community to make sense of the risks and offer recommendations. Old and new malware continues to get propagated through massive botnets, attackers are increasing focus on easier attack methods such as phishing, and ransomware attacks have evolved to be more rapid and destructive. The latest Microsoft Security Intelligence Report, which is now available for download at www.microsoft.com/sir, dives deep into each of these key themes and offers insight into additional threat intelligence.
Vulnerabilities and Updates
April 2018 security update release

On April 10 we released security updates to provide additional protections against malicious attackers. By default, Windows 10 receives these updates automatically, and for customers running previous versions, we recommend they turn on automatic updates as a best practice. More information about this month’s security updates can be found in the Security Update Guide.

Support Lifecycle
Configuration Manager 2007 approaching end of support: What you need to know
Microsoft System Center Configuration Manager 2007 has a support and servicing lifecycle during which we provide new features, software updates, security fixes, etc. This lifecycle lasts for a minimum of 10 years from the date of the product’s initial release. The end of the lifecycle is known as the product’s end of support. Configuration Manager 2007 reaches the end of its support lifecycle on July 9, 2019. We strongly recommend that you migrate your Configuration Manager 2007 infrastructure as soon as possible to the latest version of Configuration Manager (current branch).
Microsoft Premier Support News
Finally Remove Your Security Blockers: Introducing Project VAST
Has your organization’s security journey been hampered by environmental roadblocks in your infrastructure? Does your organization struggle to effectively measure the return on its security investment?
Check out Microsoft Services public blog for new Proactive Services as well as new features and capabilities of the Services Hub, On-demand Assessments, and On-demand Learning platforms.

Delegate WMI Access to Domain Controllers


Hi everyone! Graeme Bray back with you today with a post around delegating WMI access to Domain Controllers. Continuing the tradition of security themed posts that we’ve had recently on AskPFEPlat, I thought I’d throw this one together for you.

This post originally came about after several customers asked how to remove user accounts from Domain Admins and the Administrators group in the domain. These accounts are needed to monitor the systems, so we needed to find a way for them to read the instrumentation of the system without elevated privilege.

At this point, most admins understand the danger of having an excessive number of users/service accounts in Domain Admins (and other privileged groups). If not, I recommend reading the Pass-The-Hash guidance.

What most don’t understand is that the Administrators group provides full control over the Domain Controllers and is just as critical of a group to keep users out of.

Source: https://technet.microsoft.com/library/cc700835.aspx

What’s the appropriate use case for doing something like this? Typically, in the Domain Admins group, you’ll see accounts for monitoring, PowerShell queries, etc. Those typically only need WMI access to pull information to monitor/audit. By following the theory of least privilege, it allows you to still give access needed to watch your infrastructure, without potentially compromising access.

Here are some of the components of what we're doing in the step-by-step process below.

Set-WMINamespaceSecurity

This script will automate the addition of delegation of the group (or user) that you want to the Root/Cimv2 WMI Namespace on the remote machine.

You can do this manually by opening wmimgmt.msc and modifying the security on the Root/cimv2 namespace. The script will automatically ensure that inheritance is turned on for all sub-classes in this namespace.

Special thanks to Steve Lee for the Set-WMINamespaceSecurity script.
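
If you want to test the delegation on a single machine before rolling it out via Group Policy, you can run the script manually from an elevated PowerShell prompt. A sketch; the domain and group name below are examples matching the group created in the steps that follow, and note the permissions are added in two separate calls:

# Grant the delegation group Enable, then RemoteAccess, on the root/cimv2 namespace
.\Set-WMINamespaceSecurity.ps1 -namespace root/cimv2 -account "CONTOSO\AD - Remote WMI Access" -operation Add -permissions Enable
.\Set-WMINamespaceSecurity.ps1 -namespace root/cimv2 -account "CONTOSO\AD - Remote WMI Access" -operation Add -permissions RemoteAccess

# Restart WMI so the change takes effect (optional; see the note later in this post)
Restart-Service winmgmt -Force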

Distributed COM Users

The Distributed COM Users group is a built-in group that allows the start, activation, and use of COM objects. Take care to monitor this group and ensure that accounts are only added when you trust them.

All this being said, the goal is to limit how WMI can be accessed and to limit who in the target groups can log into a DC. This works via a scheduled task and results in a set of users having the ability to query WMI without being able to log into a Domain Controller.

Without further ado, here is a simplified, step-by-step process for delegating access to WMI.

  1. Create a group, such as AD – Remote WMI Access
  2. Add appropriate users to this group
  3. Add the AD – Remote WMI Access group to Builtin\Distributed COM Users
  4. Download Script
  5. Create a new Group Policy object, such as “Domain Controller – Delegate WMI Access”
  6. Create file via Group Policy Preferences
    • Go to Computer Configuration -> Preferences -> Windows Settings
    • Click Files
    • Right Click and select New File
    • Select Source File (Set-WMINamespaceSecurity.ps1) file path
    • Select Destination File, such as C:\scripts\Set-WMINamespaceSecurity.ps1
    • Click <OK> to close.
  7. Create Scheduled Tasks via Group Policy Preferences
    • While the “Domain Controller – Delegate WMI Access” policy is open, navigate to Computer Configuration -> Preferences -> Control Panel Settings -> Scheduled Tasks
    • Right click and select New -> New Scheduled Task (At least Windows 7)
    • Set the name appropriately, such as Set WMI Namespace Security
    • Configure the security options task to run as NT Authority\System.
    • Configure the task to Run whether user is logged on or not and to Run with highest privileges.
    • On the Triggers tab, ensure that Begin the task: is set to At task creation/modification.
    • Feel free to customize this task as desired. Our goal was to run this once on every DC, but not more than once.
    • On the Actions tab, create a new action as follows:
      1. Program/script: PowerShell.exe
      2. Add Arguments: -file C:\Scripts\Set-WMINamespaceSecurity.ps1 -namespace root/cimv2 -account “surface\AD – Remote WMI Access” -operation Add -permissions Enable
    • On the Actions tab, create a second action as follows:
      1. Program/script: PowerShell.exe
      2. Add Arguments: -file C:\Scripts\Set-WMINamespaceSecurity.ps1 -namespace root/cimv2 -account “surface\AD – Remote WMI Access” -operation Add -permissions RemoteAccess
    • On the Action tab, create a third and final action as follows:
      1. Program/Script: PowerShell.exe
      2. Add Arguments: -ExecutionPolicy Bypass -command “Restart-Service winmgmt -force”
    • The remainder of the scheduled task can be left default or customized for your specific environment.
    • Click <OK> to close this scheduled task.

Should you not want to restart the WMI service, simply don't create the third action.

The scheduled task must be created this way because of how multiple values are passed to the Permissions parameter. PowerShell throws an error when they are passed together as "Enable,RemoteAccess".

Wait 5 minutes for group policy to refresh on the Domain Controllers and the script will have been copied, the tasks will run, and WMI security will be updated.

If you try to do a remote shutdown via WMI, you get a "Privilege not held" error. This is because the account doesn't hold the "Shut down the system" user rights assignment.

That’s it! By doing each step (you did each, right?), you’ve delegated access to WMI. These accounts no longer have the ability to log into a DC to reboot the machine, or do other nefarious things.

Until next time!
-Graeme Bray

Pertinent Links:

Distributed COM Users: https://docs.microsoft.com/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc771990(v=ws.11)

Azure Stack Identity: Choosing the right Azure Stack Identity Model


Hello Everyone, my name is Zoheb Shaikh and I’m a Premier Field Engineer with Microsoft India. I am back again with another blog and today I’ll share with you information about Azure Stack Identity models.

Before I explain the concept of this topic: if you are not aware of what Azure Stack is, check this out first: https://azure.microsoft.com/en-us/overview/azure-stack/

Coming back to our subject: Azure Stack requires Azure Active Directory (AAD) or Active Directory Federation Services (ADFS) as its identity provider. Azure Stack uses the OpenID Connect protocol, just like Azure, and both AAD and ADFS are compatible with it.

Your decision to use Azure Active Directory or ADFS depends on your Azure Stack deployment model, i.e. whether you deploy in Connected mode (Azure AD) or Disconnected mode (ADFS).

You must also decide which licensing model you wish to use. The available options depend upon whether or not you need to deploy Azure Stack connected to the internet.

  • For a Connected deployment, you may choose either Pay-as-you-use or Capacity-based licensing models. Pay-as-you-use requires a connection to Azure AD for it to report usage, which is then billed through Azure commerce.
  • Only Capacity-based licensing is supported when you deploy in Disconnected mode, i.e. without a connection to the internet. For more information about the licensing models, see Microsoft Azure Stack packaging and pricing.

To know more about choosing connected or disconnected modes please see Azure Stack Connection Models

Since we now know the two identity models, let's talk about scenarios where you could use them.

1. Enterprises : Dedicated hosting

This is the scenario of an enterprise that uses Azure Stack with a single directory tenant in Azure AD. Authentication for Azure Stack admins and tenants is served by that single directory tenant. Since authentication is served by Azure AD, this must be a Connected deployment, and we can use either capacity-based or consumption-based licensing.

2. Azure Stack Service Provider : Shared hosting

Azure Stack allows users from multiple directories to sign in using Azure AD, but the deployment must be designed so that only one directory tenant has access to the Admin Portal and Admin Azure Resource Manager (ARM). In other words, the Admin Portal and Admin ARM are single-tenanted, while the public/user portal, ARM, and resource providers (RPs) are multi-tenanted. Since authentication is served by Azure AD, this deployment must be connected to the internet, and we can use either capacity-based or consumption-based licensing.

When a user from a different tenant logs on, they are redirected via their own ADFS to authenticate against their on-premises AD and gain access to the Azure Stack public portal.

3. Enterprises : Dedicated hosting (Disconnected)

Since this is a disconnected scenario Azure AD is out of context here.

The Azure Stack ADFS server and the on-premises ADFS server are used to create a federation trust, and authentication happens against the on-premises AD DS.

I hope this helps in understanding the different types of identity scenarios that you can use for Azure Stack.

Zoheb

CredSSP, RDP and Raven


Welcome to another edition of AskPFEPlat. This is Paul Bergson and Graeme Bray, bringing up the topic of CredSSP when used with the Remote Desktop Protocol. This topic became an internal discussion around Premier Field Engineering and customers like you as to how it would impact accessing systems via RDP starting in May. The discussion kind of aligns itself with an experience I recently had with my Miniature Schnauzer, Raven. You might be asking yourself what Raven could possibly have to do with IT maintenance.

Being a Premier Field Engineer, I end up traveling, and my backpack is my carryon of choice when I board a plane, so I always carry some snacks in the event I get hungry. A couple of months back I returned from a trip presenting "Protecting Against Ransomware" to a customer, and upon my return I left a half-eaten bag of candy in the side pocket of my backpack. It was just a regular size bag, but sugar isn't good for dogs. I kept telling myself I should remove the bag, but I wanted to ensure I had something in the event of a candy emergency. So, my urge for sweets beat out my common sense, which told me Raven would eventually find the half-eaten bag in my backpack.

So, I got home late with my wife a couple of nights ago, and Raven raced to the door to greet us, then quickly decided to race around the house just to run. All I could think was: what got into her? As I entered the living room (she went zooming by), I saw the candy wrapper from my backpack strewn all over the carpet. All I could do was think that I knew better, and I wasn't happy with myself.

Raven didn’t get sick, but it was a lesson to me to follow my instincts and not put Raven in this situation. This could have easily been prevented but I just convinced myself, “Don’t worry, things will be fine” when in fact I was aware of the risk and ignored it anyways!

So, with that in mind, I wanted to call to your attention a tentative Microsoft update coming in May 2018 that could impact the ability to establish remote host RDP session connections within an organization. This issue can occur if the local client and the remote host have differing "Encryption Oracle Remediation" settings in the registry, which define how to build an RDP session with CredSSP. The "Encryption Oracle Remediation" setting options are defined below; if the server and client have different expectations on the establishment of a secure RDP session, the connection could be blocked. There is the possibility that the tentative update will change the current default setting and therefore impact the expected secure session requirement.

With the release of the March 2018 Security bulletin, there was a fix that specifically addressed a CredSSP "Remote Code Execution" vulnerability (CVE-2018-0886) which could impact RDP connections.

“An attacker who successfully exploited this vulnerability could relay user credentials and use them to execute code on the target system.”
https://portal.msrc.microsoft.com/en-us/security-guidance/advisory/CVE-2018-0886

Besides both the client and server being patched, a new Group Policy setting must be applied to define the protection level for the CredSSP configuration; currently the setting defaults to "Vulnerable". The recommendation is to define a group policy that sets it to either "Force updated clients" or "Mitigated" on both client and server.

If you review the options of the group policy setting, you will see that there are 3 states in which the registry setting can exist on clients and servers. Engineers will also want to consider devices in an unpatched state, as seen in the table at the end of this document.

Note: Ensure that you update the Group Policy Central Store (Or if not using a Central Store, use a device with the patch applied when editing Group Policy) with the latest CredSSP.admx and CredSSP.adml. These files will contain the latest copy of the edit configuration settings for these settings, as seen below.
https://support.microsoft.com/en-us/help/4056564/security-update-for-vulnerabilities-in-windows-server-2008

Group Policy

Policy path: Computer Configuration -> Administrative Templates -> System -> Credentials Delegation

Setting name: Encryption Oracle Remediation

Description:

This policy setting applies to applications that use the CredSSP component (for example, Remote Desktop Connection).

Some versions of the CredSSP protocol are vulnerable to an encryption oracle attack against the client. This policy controls compatibility with vulnerable clients and servers. This policy allows you to set the level of protection that you want for the encryption oracle vulnerability.

If you enable this policy setting, CredSSP version support will be selected based on the following options:

Force Updated Clients – Client applications that use CredSSP will not be able to fall back to insecure versions, and services that use CredSSP will not accept unpatched clients.

Note This setting should not be deployed until all remote hosts support the newest version.

Mitigated – Client applications that use CredSSP will not be able to fall back to insecure versions, but services that use CredSSP will accept unpatched clients.

Vulnerable – Client applications that use CredSSP will expose the remote servers to attacks by supporting fallback to insecure versions, and services that use CredSSP will accept unpatched clients.

The Encryption Oracle Remediation Group Policy supports the following three options, which should be applied to clients and servers:

Policy setting: Force updated clients (registry value 0)

Client behavior: Client applications that use CredSSP will not be able to fall back to insecure versions.
Server behavior: Services using CredSSP will not accept unpatched clients.

Note: This setting should not be deployed until all Windows and third-party CredSSP clients support the newest CredSSP version.

Policy setting: Mitigated (registry value 1)

Client behavior: Client applications that use CredSSP will not be able to fall back to insecure versions.
Server behavior: Services that use CredSSP will accept unpatched clients.

Policy setting: Vulnerable (registry value 2)

Client behavior: Client applications that use CredSSP will expose remote servers to attacks by supporting fallback to insecure versions.
Server behavior: Services that use CredSSP will accept unpatched clients.

A second update, tentatively scheduled to be released on May 8, 2018, will change the default behavior from “Vulnerable” to “Mitigated”.

Note: Any change to Encryption Oracle Remediation requires a reboot.

https://support.microsoft.com/en-us/help/4093492/credssp-updates-for-cve-2018-0886-march-13-2018
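
If you want to verify or stage the setting on a test machine before Group Policy applies it, the registry value behind this policy (AllowEncryptionOracle, documented in the KB above) can be read and set from PowerShell. A sketch; remember that any change requires a reboot:

# 0 = Force updated clients, 1 = Mitigated, 2 = Vulnerable
$key = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System\CredSSP\Parameters'

# Read the current value, if present
Get-ItemProperty -Path $key -Name AllowEncryptionOracle -ErrorAction SilentlyContinue

# Set it to "Mitigated" - in production, prefer the Group Policy setting described above
New-Item -Path $key -Force | Out-Null
Set-ItemProperty -Path $key -Name AllowEncryptionOracle -Value 1 -Type DWord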

From the policy description above and with the tentative update and default registry setting coming in May, it is best that you plan a policy to ensure there is no loss in connectivity to your servers from RDP connections.

If you review the table below and consider the tentative update for May, then the updated default registry setting changes from “Vulnerable” to “Mitigated” then the resulting connection is:

  • If the client has the patch applied but the remote host (server) does not, the connection will be blocked
  • If the client is unpatched but the remote host (server) is patched, the session will be vulnerable to the attack
  • If both the client and the remote host (server) are patched, the session will connect in a secure manner


Notice that if both the client and server are patched but the default policy setting is left at "Vulnerable", the RDP connection is still "Vulnerable" to attack. Once the default setting is changed to "Mitigated", the connection becomes "Secure" by default.

Remember, any updates from Group Policy will supersede any local settings applied by the system.

After reading through this document, I strongly urge your enterprise to review the current approach to the CVE-2018-0886 mitigation and establish a policy to ensure that your hosts can continue to establish a secure session following the possibility of any update that might occur. Our goal is to minimize any issues preventing you from RDP access to remote hosts.

You aren’t required to force a “Secure” session if all the hosts haven’t been updated. Just creating a policy that you define, and control is much better than leaving things to chance which is what I did when thinking Raven would never get into the candy in my backpack.

Thanks for reading

Paul and Graeme

Simple PowerShell Network Capture Tool – Update


Hello all. Jacob Lavender here once again for the Ask PFE Platforms team to give you an update on the little sample tool that I put together at the end of last year.

The original post is located here:

https://blogs.technet.microsoft.com/askpfeplat/2017/12/04/simple-powershell-network-capture-tool/

But before you fly off to read that post – as good as it was, let me just inform you that I’ve made some significant updates which include two major improvements:

  • Multiple Target Computers – Yes, now we can target multiple computers at the same time using this tool (single computer still supported)
  • Enhanced Logic for credential validation.

There are a number of other improvements which are made as well, which I’ll continue to tweak as time passes and post in the gallery.

As a note: while you review the sample tool, if you opt to run it and stop it without completing or choosing a provided exit option, make sure that you always run the Clear-Variables function in the sample script. Why, you might ask? Simple: you just don't want those variables lying around, especially the ones with credentials in them.

As a final note: the report provided no longer includes any data on processes. Instead, that collection is performed on the remote machine and stored in a text file on the machine, then moved to the central file share upon completion of the script.

Where is the tool:

https://gallery.technet.microsoft.com/Remote-Network-Capture-8fa747ba

My original post has a great deal of detail on the value of NETSH TRACE and New-NetEventSession, so give it a look if you need some clarification. There are lots of great reference articles provided by other tech gurus way above my level, so make sure to check them out too!

Limitation: PowerShell 3.0 or above is required for full functionality. If you are using PowerShell 2.0 on a target machine, then the trace files will not be moved to the central file share. But c’mon! PowerShell 6.0 is here! Why would you still be hanging on to 2.0? (Yes, I know that there are some applications for it – I get it. Sigh.)

Hyper-V Integration Services – Where Are We Today?


Hyper-V Integration Services provide critical functionality to Guests (virtual machines) running on Microsoft's virtualization platform (Hyper-V). For the most part, virtual machines run in an isolated environment on the Hyper-V host. However, there is a high-speed communications channel between the Guest and the Host that allows the Guest to take advantage of Host-side services. If you have been working with Hyper-V since its initial release, you may recognize this architecture diagram –


As seen in the diagram, the Virtualization Service Client (VSC) running in a Guest communicates with the Virtualization Service Provider (VSP) running in the Host over a communications channel called the Virtual Machine BUS (VMBUS). The Integration Services available to virtual machines today are shown here:


Integration Services are enabled in the Virtual Machine settings in Hyper-V Manager or by using the PowerShell cmdlet Enable-VMIntegrationService. These correspond to services running both in the virtual machine (VSC) itself and in the Host (VSP).
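
Both approaches are scriptable from the Host; a quick sketch (the VM name is just an example):

# List the Integration Services and their enabled/disabled state for a Guest
Get-VMIntegrationService -VMName 'ContosoVM01'

# Enable one of them, e.g. the Guest Service Interface
Enable-VMIntegrationService -VMName 'ContosoVM01' -Name 'Guest Service Interface'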

To ensure the communication flow between the Guest and the Host is as efficient as possible, Integration Services may need to be periodically updated. It has always been a Microsoft 'best practice' to keep Integration Services updated so the functionality in the Guest is matched with that in the Host. There are several ways to accomplish this, including custom scripting, using System Center Configuration Manager (SCCM), using System Center Virtual Machine Manager (SCVMM), and mounting the vmguest.iso file on the Host in the virtual DVD drive in the Guest (Windows Guests only).


Linux Guests use a separate LIS (Linux Integration Services) package. After installing the latest package, you can verify the version for the communications channel (VMBUS):


You can also list out the Integration Services and other devices connecting over the communications channel:


Note: The versioning shown here for LIS is the result of installing LIS v4.2 in a CentOS 7 virtual machine.

More detailed information related to the capabilities of Linux Integrations Services can be found here.

With the release of Windows Server 2016, updating Integration Services in Windows Guests has changed and will primarily be by way of Windows Update (WU) unless otherwise stated here. Up until very recently, this process had not been working, and even now it has not been fully implemented for all Windows Guest operating systems. To date (as of the writing of this blog), the Integration Components for Guests running Windows Server 2012 R2 and Windows Server 2008 R2 SP1 are updated using Windows Update. The latest versions of Integration Components for the down-level Server SKUs, as well as their corresponding Windows Client SKUs, are shown here:


Note: Testing was conducted by deploying virtual machines, in Windows Server 2016 Hyper-V, using ISO media downloaded from a Visual Studio subscription. Each virtual machine was then stepped through the updating process using only Windows Update until it was fully patched. The latest Integration Services for Windows Server 2012 R2 and Windows Server 2008 R2 SP1 are included in KB 4072650.

Integration Services versioning (Windows) information can be obtained using a variety of scripting methods, but a quick way to do it from inside the virtual machine itself is to run one of these commands in PowerShell –
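One such approach (a sketch; inside the Guest, the version is registered under the well-known Virtual Machine\Auto registry key):

# Run inside the Windows Guest
Get-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\Virtual Machine\Auto' -Name IntegrationServicesVersion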


Revisiting the method for updating Integration Services on earlier versions of Hyper-V by mounting the vmguest.iso file from the Host in the virtual machine's DVD drive: if you open any of the *.xml files in the package, you can ascertain version information –


As of this writing, versioning information is older in a vmguest.iso file as compared to what is registered in virtual machines updated by KB 4072650. This seems to indicate that the vmguest.iso file on the Host (prior to Windows Server 2016\Windows 10) is no longer being updated. Instead, virtual machines update their Integration Services using Windows Update. Even if you run setup.exe in the ISO package, the result is an output of the version registered in the Guest.


Thanks for your attention and I hope this information was useful to you.

Charles Timon, Jr.
Senior Premier Field Engineer
Microsoft Corporation

Are My RDP Connections Really Secured by a Certificate?


Hello everyone! Tim Beasley – Platforms PFE coming at you live from the funky fresh jam known as LAS VEGAS! That’s right people! I’m having a blast by the pool at the MGM Grand and loving life!! …writing a blog post for Microsoft. At Vegas. In the sun poolside…writing…a…technical blog post…what’s wrong with me?!

Okay not really. Once again I’m here in Missouri, where it’s cold in the Spring. I’m just wishing I was in Vegas at the moment. Aren’t we all???

Before I go too far off the deep end, let me zip back into focus here and discuss the topic at hand. The other day I was approached with:

“Hey Timmeh, I followed your awesome blog post about ensuring my RDP connections were configured to use a certificate from my internal PKI (found here). I believe everything’s working but I’m just not sure. When I connect to a remote machine on my network/domain, the connection always shows that I’m connected via Kerberos…NOT the certificate. No matter what I try I can’t seem to prove the certificate’s actually being used.”

Anyone ever come across this one before? If so, I have the answer! If not, I still have the answer! Muah ha ha ha! (Quick shout out to Sergey Kuzin – authentication expert in Product Group, who assisted me with tracking all this down.)

Let me enlighten you people on what it is I’m referring to that’s causing said confusion:

  • Step 1. On a client joined to your domain, simply launch the Remote Desktop Connection Client (mstsc.exe) and establish any connection to a machine on the domain.
  • Step 2. Click the little LOCK icon.
  • Step 3. Read what the notification says.

Kerberos?!?

“But Tim, I followed your instructions in your last blog post and I know for a fact that the proper certificate is installed, and the terminal services are set to use the right thumbprint, etc.!!! You know what I think!? I think this is garbage, and Microsoft is full of it…blah blah blah!”

Take a breath (wooo saaahhhh) and relax. I promise it’s not what you think.

Remember that RDP encryption used to be the default (ahhh, but is it still?). You'll find lots of online documentation saying as much; one example is here: https://technet.microsoft.com/en-us/library/ff458357.aspx. Back in the day, sure (2003 and older)…but to my surprise, I recently found out that RDP encryption is NO LONGER THE DEFAULT. It can be used, but it must be enabled on the client side. Say what?! (Yeah, now I'll have to add an update to my previous blog post…) Not to mention a few of the TechNet docs are now a bit outdated… (hey, it happens, stuff doesn't last forever).

“So…. what’s the default encryption method now?”

TLS encryption! Hurray! In a nutshell, if a certificate from a PKI doesn't exist on the machine for RDP sessions to use, the machine will generate a self-signed certificate, and RDP will use that instead to guarantee TLS is always used.

And we can prove it. Just look at my network capture from an RDP session I did in my labs (after I set everything up to use a proper certificate…not the self-signed one).

See the TLS exchanges occurring when the session is established? Feel free to try it yourself in your own environment.
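
If you want to double-check which certificate the RDP listener is actually configured with, one way (a sketch) is to query the listener's WMI class and compare the hash against the thumbprints in the computer's personal certificate store:

# Thumbprint (SHA1 hash) of the certificate bound to the RDP-Tcp listener
Get-WmiObject -Class Win32_TSGeneralSetting -Namespace root\cimv2\TerminalServices -Filter "TerminalName='RDP-Tcp'" |
    Select-Object -ExpandProperty SSLCertificateSHA1Hash

# Compare against the certificates in the local machine store
Get-ChildItem Cert:\LocalMachine\My | Select-Object Thumbprint, Subject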

Now, let’s go re-visit that little LOCK icon from before. I guarantee you it’s working as intended, and the result is as expected. I know it’s confusing on the surface. However, once you understand that it’s related to the AUTHENTICATION method used to establish the session, then it makes more sense. That’s right, authentication. It’s how the session was authenticated to the session host! Not
whether the traffic is encrypted, and how it’s encrypted.

What happens if you follow my advice from my other blog and establish RDP sessions using FQDN and proper certificates? You get this:

Note the difference. Now you will see that the identity was verified by both a certificate AND Kerberos.

To sum up, it's always best to first ensure proper certificates are used, and then connect using the FQDN instead of short names or IP addresses. Thanks for reading! And now it's back to wishing I was actually in Vegas…

Tim Beasley – Platforms PFE

Infrastructure + Security: Noteworthy News (May, 2018)


Hi there! Stanislav Belov is here with the next issue of the Infrastructure + Security: Noteworthy News series!  

As a reminder, the Noteworthy News series covers various areas, to include interesting news, announcements, links, tips and tricks from Windows, Azure, and Security worlds on a monthly basis. Enjoy! 

Microsoft Azure
Azure confidential computing
The Azure team, alongside Microsoft Research, Intel, Windows, and our Developer Tools group, have been working together to bring Trusted Execution Environments (TEEs) such as Intel SGX and Virtualization Based Security (VBS – previously known as Virtual Secure mode) to the cloud. TEEs protect data being processed from access outside the TEE. We’re ready to share more details about our confidential cloud vision and the work we’ve done since the announcement.
The 3 ways Azure improves your security
As we all know, companies worldwide are challenged by the ongoing volume of evolving security threats and with retaining qualified security talent to respond to these threats. In fact, the average large organization gets 17,000 security alerts each week, and it takes an average of 99 days to discover a security breach. That contrasts with the less than 48 hours it takes for a security breach to grow from one compromised system into a significantly broader issue.
Manage virtual machine access using just in time
Just in time virtual machine (VM) access can be used to lock down inbound traffic to your Azure VMs, reducing exposure to attacks while providing easy access to connect to VMs when needed.
Windows Server
Delegate WMI Access to Domain Controllers

Typically, in the Domain Admins group, you’ll see accounts for monitoring, PowerShell queries, etc. Those typically only need WMI access to pull information to monitor/audit. By following the theory of least privilege, it allows you to still give access needed to watch your infrastructure, without potentially compromising access.

Windows Client
What’s new in the Windows 10 April 2018 Update

With this update, available as a free download today, you get new experiences that help minimize distractions and make the most of every moment by saving you time. Our hope is that you’ll have more time to do what matters most to you whether that’s to create, play, work, or simply do what you love.

Features removed or planned for replacement starting with Windows 10, version 1803

Each release of Windows 10 adds new features and functionality; we also occasionally remove features and functionality, usually because we’ve added a better option. Here are the details about the features and functionalities that we removed in Windows 10, version 1803 (also called Windows 10 April 2018 Update).

Security
Enhancing Office 365 Advanced Threat Protection with detonation-based heuristics and machine learning

Office 365 Advanced Threat Protection (ATP) uses a comprehensive and multi-layered solution to protect mailboxes, files, online storage, and applications against a wide range of threats. Machine learning technologies, powered by expert input from security researchers, automated systems, and threat intelligence, enable us to build and scale defenses that protect customers against threats in real-time.

Finally Remove Insecure LDAP and Protect your Credentials with Project VAST
The problem is with how the client asks for the data. Specifically, in how it binds to the DC. Unless you’ve configured the DC to require signing, many clients are returning unsigned traffic, which is susceptible to replay or attacker-in-the-middle attacks. This may result in nefarious activity, such as modified packets, in which a server or even a person makes decisions based on forged data.
Mail flow insights are available in Security & Compliance center
Admins can use mail flow dashboard in the Office 365 Security & Compliance Center to discover trends, insights and take actions to fix issues related to mail flow in their Office 365 organization.
Security baseline for Windows 10 “April 2018 Update” (v1803)
Microsoft on April 30, 2018, announced the final release of the security configuration baseline settings for Windows 10 April 2018 Update, also known as version 1803, Redstone 4, or RS4.
Building a world without passwords
Nobody likes passwords. They are inconvenient, insecure, and expensive. In fact, we dislike them so much that we’ve been busy at work trying to create a world without them – a world without passwords.
Microsoft Advanced Threat Analytics v1.9 released
We are pleased to announce a new release of Microsoft Advanced Threat Analytics (ATA) version 1.9. This release includes numerous new features and performance enhancements, making it an even more powerful security solution.
Vulnerabilities and Updates
Unable to RDP to Virtual Machine: CredSSP Encryption Oracle Remediation

With the release of the March 2018 Security bulletin, there was a fix that addressed a CredSSP, “Remote Code Execution” vulnerability (CVE-2018-0886) which could impact RDP connections.

.NET Framework May 2018 Security and Quality Rollup

A security feature bypass vulnerability exists in Windows which could allow an attacker to bypass Device Guard. An attacker who successfully exploited this vulnerability could circumvent a User Mode Code Integrity (UMCI) policy on the machine. To exploit the vulnerability, an attacker would first have to access the local machine, and then run a malicious program. The update addresses the vulnerability by correcting how Windows validates User Mode Code Integrity policies.

Support Lifecycle
The end of support (EOS) for SQL Server and Windows Server 2008 and 2008 R2 is approaching rapidly:
  • July 9, 2019 – SQL Server 2008 and 2008 R2
  • January 14, 2020 – Windows Server 2008 and 2008 R2
Microsoft Premier Support News
Coming by popular demand from customers who have received the POP-Securing Lateral Account Movement (SLAM) offering, the Onboarding Accelerator – Securing Lateral Account Movement – Premium has now been released. This is a multi-week engagement in which Microsoft Premier Field Engineers support you in increasing your resiliency against critical credential theft attacks by implementing core mitigations into your production environments. Each of the services included in the Premium offering consists of a one-week engagement which matures your overall defense against the use of lateral account movement as a means of a potentially devastating compromise; together, these mitigations result in a defense-in-depth approach. Customers may elect to implement all three services (the Premium offering), any one of the individual services by itself, or any combination of the three.
Check out Microsoft Services public blog for new Proactive Services as well as new features and capabilities of the Services Hub, On-demand Assessments, and On-demand Learning platforms.

How Healthy is your LAPS Environment?


Hi all. I’m Michael Rendino, Senior Premier Field Engineer, based out of the Charlotte, NC campus of Microsoft! Previously, I’ve helped you with some network capture guidance (here and here), but today, I want to talk about something different. Over the last couple of years, one of the hottest tech topics has been security (as it should be). You should be eating, sleeping and breathing it. Part of your security focus should be on mitigating pass-the-hash attacks. You’ve probably heard a ton about them, but if not, venture over to http://aka.ms/pth for a wealth of helpful information.

One great tool that we offer for FREE (yes, really…don't be so sarcastic) is the Local Administrator Password Solution, or LAPS. If you don't believe me, go here and download it. The idea behind this tool is to eliminate those instances where you have multiple computers with the same local admin account password. With LAPS, each machine sets its own random password for the built-in local administrator account (or a different account of your choosing) and populates an attribute on that computer account in Active Directory. It's easy to deploy and works great. The challenge comes in knowing whether it's actually working. How do you know if your machines have ever set the password? Or maybe they set it once and haven't updated it since, even though it's past the designated expiration date? It's definitely worth monitoring to ensure that your machines are operating as expected.

Well, internally, this question was asked long ago, and the creator of LAPS, Jiri Formacek, threw together a small PowerShell script to provide that capability. I have built on what he started and have implemented this script with my customers. Since my PowerShell-fu is not super strong, I got help from Sean Kearney, who helped refine it and make it cleaner. Now my customers can easily see the status of their deployment and troubleshoot those computers that are out of compliance. By default, the LAPS health report is written to the file share you specify, but the script can also email you if you choose. Simply add the -SendMessage switch. Make sure to edit the SMTP settings variables first.

Requirements:

  • A computer to run the script. My customer uses a Windows Server 2012 R2 box, but any computer running PowerShell 3.0 or better should work.
  • The S.DS.P PowerShell module, downloaded from https://gallery.technet.microsoft.com/scriptcenter/Using-SystemDirectoryServic-0adf7ef5 and installed on that computer. If your server has internet connectivity, you can also launch PowerShell as Administrator and run "Install-Module S.DS.P". This requires NuGet 2.8.5.201, so if it isn't already installed, you will be prompted to install it.


  • The script will need to be run using credentials with rights to read the LAPS attributes on the computer objects.

Once you have met those basic requirements and have adjusted the variables for your environment, run this script and get a simple report like this:


Now you can start investigating why these computers are out of compliance.

If you have deployed LAPS, I hope you find this script to be beneficial and can ensure that everything is working as expected. Good luck!

Usage

  1. First, where noted, edit the variables so they reflect your environment.
  2. If you just run the script as-is, no email will be sent. If you want to send one, append the -SendMessage switch, as shown below.
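
For example (assuming you saved the script as LAPSHealthReport.ps1; the file name is arbitrary):

.\LAPSHealthReport.ps1 -SendMessage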

param(

[Switch]$SendMessage=$False

)

<#

.DISCLAIMER

This Sample Code is provided for the purpose of illustration only and is not intended to be used in a production environment. THIS SAMPLE CODE AND ANY RELATED INFORMATION ARE PROVIDED “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY AND/OR FITNESS FOR A PARTICULAR PURPOSE. We grant You a nonexclusive, royalty-free right to use and modify the Sample Code and to reproduce and distribute the object code form of the Sample Code, provided that You agree: (i) to not use Our name, logo, or trademarks to market Your software product in which the Sample Code is embedded; (ii) to include a valid copyright notice on Your software product in which the Sample Code is embedded; and (iii) to indemnify, hold harmless, and defend Us and Our suppliers from and against any claims or lawsuits, including attorneys’ fees, that arise or result from the use or distribution of the Sample Code.

#>

<#Make sure you have installed the S.DS.P module from https://gallery.technet.microsoft.com/scriptcenter/Using-SystemDirectoryServic-0adf7ef5 on the server where you will be running the script.

Thanks to Jiri Formacek for creating the foundation of the script. I just put the cherry on top!

Optimizations tweaked by Sean Kearney, Platforms PFE and ‘Scripting Guy’

#>

#Import the S.DS.P PowerShell module

Import-Module S.DS.P

#Edit the following variable to specify the domain or OU to search (e.g. workstations or servers)

$searchBase="dc=contoso,dc=com"

#Edit the following variable to specify the LDAP server to use. Using domain name will select any DC in the domain

$Server="contoso.com"

#Edit the following variable to specify the share to store the output.

$fileshare="c:\temp"

#Edit the following variable, if necessary, to match the LAPS group policy for the age of passwords (default is 30)

$maxAgeDays=30
$ts=[DateTime]::Now.AddDays(0-$maxAgeDays).ToFileTimeUtc().ToString()

#LDAP queries for LAPS statistics

$enrolledComputers=@(Find-LdapObject -LdapConnection $Server -searchFilter "(&(objectClass=computer)(ms-MCS-AdmPwdExpirationTime=*))" -searchBase $searchBase -PropertiesToLoad @('canonicalname','lastlogontimestamp'))

$nonEnrolledComputers=@(Find-LdapObject -LdapConnection $Server -searchFilter "(&(objectClass=computer)(!(ms-MCS-AdmPwdExpirationTime=*)))" -searchBase $searchBase -PropertiesToLoad @('canonicalname','lastlogontimestamp'))

$expiredNotRefreshed=@(Find-LdapObject -LdapConnection $Server -searchFilter "(&(objectClass=computer)(ms-MCS-AdmPwdExpirationTime<=$ts))" -searchBase $searchBase -PropertiesToLoad @('canonicalname','lastlogontimestamp'))

#Write the LAPS information (summary and detail) to a temporary file in the previously specified share

$Content=@"
COUNTS
——
Enrolled: $($enrolledComputers.Count)
Not enrolled: $($nonEnrolledComputers.Count)
Expired: $($expiredNotRefreshed.Count)
DETAILS
——-
Enrolled
——–
$($enrolledComputers | Select-Object 'canonicalname',@{l='lastlogon'; e={[datetime]::FromFileTime($_.lastlogontimestamp).ToString("MM-dd-yy")}} | Out-String)

Not enrolled
————
$($nonEnrolledComputers | Select-Object 'canonicalname',@{l='lastlogon'; e={[datetime]::FromFileTime($_.lastlogontimestamp).ToString("MM-dd-yy")}} | Out-String)

Expired
——-
$($expiredNotRefreshed | Select-Object 'canonicalname',@{l='lastlogon'; e={[datetime]::FromFileTime($_.lastlogontimestamp).ToString("MM-dd-yy")}} | Out-String)

"@
$FileDate = (Get-Date).ToString("MM-dd-yyyy-hh-mm-ss")
$Filename = $Fileshare + '\' + $FileDate + 'LAPSReport.txt'
Add-Content -Value $Content -Path $Filename
If ($SendMessage)
{
#Edit the variables below to specify the email addresses and SMTP server to use
$EmailFrom = 'lapshealth@tailspintoys.com'
$EmailTo = 'emailaddress@tailspintoys.com'
$today = Get-Date
$EmailSubject = 'LAPS Health Report for ' + $today.ToShortDateString()
$EmailBody = $Content
$smtpserver = 'smtp.tailspintoys.com'

Send-MailMessage -Body $EmailBody -From $EmailFrom -To $EmailTo -Subject $EmailSubject -SmtpServer $smtpserver
}

A Platforms Admin Guide to Setting up Event Rules/Monitors in SCOM


Hello to all who are reading. My name is Nathan Gau. I'm a Microsoft Premier Field Engineer and have been supporting System Center Operations Manager (SCOM) for about 4 years now. Most of my blogging is normally SCOM or cyber security related, but I wanted to put my platforms hat back on for a bit and talk about SCOM's event monitoring capabilities, along with some of the typical mistakes that Windows admins such as myself have made. Not all of these tips and tricks are easy to dig up, and while experts in the SCOM world will know most of them, those of us wearing multiple hats who are occasionally tasked with touching SCOM might be in for a bit of a surprise. I know it's not exciting, but it can be useful.

First, to cover some basic capabilities: most people use SCOM for its alerting capabilities, and in most environments it will generate a lot of alerts out of the box. I'm not going to delve into much there, as I've done so on my blog, but I wanted to point out that SCOM also has the capability to collect and report on events and/or performance data, for things such as performance baselining (say, performance before/after major changes to an application) or collecting events that you need to see a frequency for but not necessarily alert on. This is a very useful, and often overlooked, component of Operations Manager.

That said, I want to take a deeper dive into how SCOM consumes event logs for monitoring. When one looks at an event log, what we see is the general view, designed for human beings. SCOM, however, is a robot and prefers looking at the XML. It's easier to parse, but that also leads to some odd quirks that can have unexpected results, as you may end up in a scenario where you think you're monitoring something and are not. The main reason for this is that the values in the XML sometimes differ from those in the friendly view. Take a look at this 4624 event from my lab:

The friendly view defines the Impersonation Level field, while the XML uses a code (%%1832 in this case). While not terribly common, this can happen with certain events. If a rule or monitor were configured to search the log for the "Identification" impersonation level instead of %%1832, no alert would ever be generated. This can extend to more common fields as well:

In this case, the event source differs. This can be very confusing since the source is often something used to filter out event IDs. Again, this isn’t a common occurrence, but I’ve run into it enough that it’s worth mentioning. Again, the values in the XML view are what matters, not the friendly view.

The last thing I wanted to discuss is parameterization. Most events are parameterized, meaning that the event description is effectively broken down into sections. The easy way to search event logs would be to use a common field such as "EventDescription". SCOM doesn't give its admins the ability to select this parameter; instead, a SCOM administrator must know this particular parameter by name. There's a reason for this: its use is horribly inefficient. It can also be problematic for the SCOM agent, especially if the log being searched happens to be one that fills up rapidly, like, say, the security log.

Effective use of parameterization allows SCOM to search only the relevant portion of the log. Other than being efficient, it can also reduce noise, which is something that any SCOM admin will have to deal with. You have a couple ways of accomplishing this. Take this 4634 event as an example:

I can search this event by a named parameter, such as TargetUserSid. This is easy enough to do, but it's also a place where I could easily make a typo and not catch it. For this purpose, SCOM uses numbered parameters. Numbered parameters only apply to the event data, so in the example above, I collapsed the <System> tag since no numbered parameters exist in it. That leaves only the parameters in the event description. The numbering system is straightforward: TargetUserSid is parameter 1, TargetUserName is parameter 2. All you need to do is count to the field you desire, and that is the parameter number you need to monitor.
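
In a management pack, that positional reference shows up as an XPath query against Params/Param[N]. A hedged sketch of what an expression filter matching parameter 2 (TargetUserName) of our 4634 event might look like (the account name is just an example):

<Expression>
  <SimpleExpression>
    <ValueExpression>
      <!-- Parameter 2 of the event data: TargetUserName -->
      <XPathQuery Type="String">Params/Param[2]</XPathQuery>
    </ValueExpression>
    <Operator>Equal</Operator>
    <ValueExpression>
      <Value Type="String">jsmith</Value>
    </ValueExpression>
  </SimpleExpression>
</Expression>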

While not exciting information, hopefully I’ve added something useful, as these things can pose issues from time to time.

– Nathan 

 

Windows Server 2016 Reverse DNS Registration Behavior


Greetings everyone! Tim Beasley (Platforms PFE) coming back at ya from the infamous Nixa, Missouri! It’s infamous since it’s the home of Jason Bourne (Bourne Identity movies).

Anyways, I wanted to reach out to you all and quickly discuss the behavior changes of Windows Server 2016 when it comes to reverse DNS records. Don’t worry, it’s a good thing! We’ve written the code to follow RFC standards. But if you’re not aware of them, you might run into some wacky results in your environment.

During some discussions with one of my DSE customers, they had a rather large app that ultimately broke when they introduced WS2016 domain controller/DNS servers to their environment. What they saw was some unexpected behavior as the app references hostnames via reverse DNS records (PTRs). Now you might be wondering why this became an issue…

Turns out the app they use expects reverse DNS records in ALL LOWERCASE FORMAT. Basically, their application vendor did something silly, like taking data from a case-insensitive source and using it in a case-sensitive lookup.

Before you all possibly go into panic mode, most applications are written well; they don't care about this and work just fine. It's the apps that were written to expect this specific behavior (and quite frankly don't follow RFC standards) that could experience problems. Speaking of RFC standards, you can read all about DNS case-insensitivity requirements in RFC 4343 here.

Let me give you an example of what I'm talking about. In the screenshot below, you will see "2016-PAMSVR" as a pointer (PTR) record. This was taken from my lab environment running WS2016 1607 with all the latest patches (at this time, the April 2018 updates). Viewing the DNS records in the MMC reflects both uppercase and lowercase. In contrast, prior to 2016 (so 2012 R2 and lower), the behavior was different: ALL registered PTRs show up in LOWERCASE only.

***Note: the client OS level doing the PTR registration does not matter. This behavior will be reflected no matter what version of Windows or other OS you use.***

Here’s another example from an nslookup perspective:

To reiterate, when dynamically registering a PTR record against a DNS Server running Windows Server 2012 R2 or older, the DNS Server will downcase the entry.

Test machine name: WiNdOwS-1709.Contoso.com

When registering it against a DNS Server running Windows Server 2016, we keep the machine name's case.
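
If you prefer PowerShell to nslookup, here is a minimal sketch for checking the case of a registered PTR record (10.0.0.25 is a hypothetical lab address):

# Resolve the reverse record and show the hostname exactly as the
# DNS server returns it, case included
Resolve-DnsName -Name 10.0.0.25 -Type PTR | Select-Object Name, NameHost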

Please keep this behavior in the back of your mind when you're introducing WS2016 domain controllers / DNS servers to your environments for the first time. Chances are you won't run into any problems whatsoever. But if the stars align improperly and this does turn out to be an issue for you, here are some suggestions on how to remediate it:

  1. Involve your app vendor(s) and have them update their code the correct way, following RFC standards.
  2. What #1 says.
  3. Again, do what #1 says.
  4. If the app vendor pushes back and you absolutely have no other choice…you could update all the hostnames in your environment via PowerShell to reflect lowercase. You'd then have to clear out all reverse records and have the devices re-register once their hostnames are down-cased (see the sketch below). An example of this can be found here. Just be careful doing this and make sure you test the PowerShell script first before deploying to a production environment!
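
Here is the sketch referenced in #4, assuming a WS2016 DNS server and a hypothetical reverse zone named 0.0.10.in-addr.arpa; as always, test before running in production:

# On the DNS server: find PTR records whose target hostname contains
# uppercase characters (-cmatch is a case-sensitive match)
Get-DnsServerResourceRecord -ZoneName '0.0.10.in-addr.arpa' -RRType Ptr |
    Where-Object { $_.RecordData.PtrDomainName -cmatch '[A-Z]' }

# On each client, once its hostname has been down-cased: re-register
# its DNS records (the PowerShell equivalent of "ipconfig /registerdns")
Register-DnsClient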

Thanks for reading!

Tim Beasley…out. (for now)

Infrastructure + Security: Noteworthy News (June, 2018)


Hi there! Stanislav Belov is back with the next issue of the Infrastructure + Security: Noteworthy News series!  

As a reminder, the Noteworthy News series covers various areas, including interesting news, announcements, links, tips and tricks from the Windows, Azure, and security worlds on a monthly basis.

Microsoft Azure
General availability: Disaster recovery for Azure IaaS virtual machines
You can easily replicate and protect IaaS-based applications running on Azure to a different Azure region within a geographical cluster without deploying any additional infrastructure components or software appliances in your subscription. The cross-region DR feature is generally available in all Azure public regions where Site Recovery is available.
Azure AD delegated application management roles are in public preview!
If you have granted people the Global Administrator role for things like configuring enterprise applications, you can now move them to this lesser privileged role. Doing so will help improve your security posture and reduce the potential for unfortunate mistakes.
Use Azure Monitor to integrate with SIEM tools
Over the past two years since introducing Azure Monitor, we’ve made significant strides in terms of consolidating on a single logging pipeline for all Azure services. A majority of the top Azure services, including Azure Resource Manager and Azure Security Center, have onboarded to Azure Monitor and are producing relevant security logs.
Why you should bet on Azure for your infrastructure needs, today and in the future
For the last few years, Infrastructure-as-a-service (IaaS) has been the primary service hosting customer applications. Azure VMs are the easiest to migrate from on-premises while still enabling you to modernize your IT infrastructure, improve efficiency, enhance security, manage apps better and reduce costs. And I am proud that Azure continues to be recognized as a leader in this key area.
Eight Essentials for Hybrid Identity: #1 A new identity mindset
Today, analyst firms report that the average enterprise’s employees collectively use more than 300 software-as-a-service applications (and some estimates are much higher). And that number is rapidly expanding. Between the hyper growth of these apps, the rate at which they change and the business demand to harness new cloud capabilities for business transformation, it’s challenging to keep up. What we’ve learned from customers is that relying on an on-premises identity solution as the control point makes connecting to all these cloud applications a nearly impossible task. Then add on all the user devices, guest accounts, and connected things and you have a major management and security nightmare.
Eight Essentials for Hybrid Identity: #2 Choosing the right authentication method
With identities as your control plane, authentication is the foundation for cloud access. Choosing the right authentication method is a crucial decision, but also one that’s different for every organization and might change over time.
Windows Server
Windows Server 2008 SP2 servicing changes

We are moving to a rollup model for Windows Server 2008 SP2. The initial preview of the monthly quality rollup will be released on Tuesday, August 21, 2018. Windows Server 2008 SP2 will now follow a similar update servicing model as later Windows versions, bringing a more consistent and simplified servicing experience. For those of you who manage Windows updates within your organization, it’s important that you understand the choices that will be available.

Windows Client
Making IT simpler with a modern workplace

Complexity is the absolute enemy of security and productivity. The simpler you can make your productivity and security solutions, the easier it will be for IT to manage and secure—making the user experience that much more elegant and useful. We’ve learned from building and running over 200 global cloud services that a truly modern and truly secure service is a simple one.

What is new in Windows 10 1803 for PAW (Privileged Access Workstation)?
Prior to the 1803 release, to start a shielded VM the host had to connect to the HGS server to perform health attestation. One of the top pieces of customer feedback was that PAW devices are sometimes offline, with no network access or no way to reach the HGS server, yet users still need to be able to access their shielded VMs at any time. We introduced the Offline HGS feature in the 1803 release to support this scenario.
Security
Cybersecurity Reference Architecture: Security for a Hybrid Enterprise

The Microsoft Cybersecurity Reference Architecture describes Microsoft’s cybersecurity capabilities and how they integrate with existing security architectures and capabilities. We made quite a few changes in v2 and this post highlights some of what has changed as well as the underlying philosophy of how this document was built.

Detecting script-based attacks on Linux
In April, Azure Security Center (ASC) extended its Linux threat detection preview program to include detection of suspicious processes, suspect login attempts, and anomalous kernel module loads. This post demonstrates how existing Windows detections often have Linux analogs, such as base64-encoded shell and script attacks.
Machine learning vs. social engineering
Machine learning is a key driver in the constant evolution of security technologies at Microsoft. Machine learning allows Microsoft 365 to scale next-gen protection capabilities and enhance cloud-based, real-time blocking of new and unknown threats. Just in the last few months, machine learning has helped us to protect hundreds of thousands of customers against ransomware, banking Trojan, and coin miner malware outbreaks.
Azure Security Center Dashboard Updated
We’ve refreshed the dashboard to make it easier for you to identify new issues with your Azure Virtual Machines and PaaS services; find those issues easily using the New alerts & incidents tile; get to work fast with the ROI on investigations by using the most attacked resources tile; access more information on a single screen.
IT Expert Roundtable: How Microsoft Secures Elevated Access with Tools and Privileged Credentials (Video)
Learn about the strategies Microsoft uses to help secure critical corporate assets and to increase protection against emerging pass-the-hash attacks, credential theft, and credential reuse scenarios.
Vulnerabilities and Updates
Microsoft Guidance for Lazy FP State Restore

On June 13, 2018, an additional vulnerability involving side channel speculative execution, known as Lazy FP State Restore, has been announced and assigned CVE-2018-3665.

Security updates available for Flash Player

Adobe on June 7, 2018, released security updates for Adobe Flash Player for Windows, macOS, Linux and Chrome OS that address critical vulnerabilities in Adobe Flash Player 29.0.0.171 and earlier versions. According to the bulletin, the attacks leverage Office documents with embedded malicious Flash Player content distributed via email.

Microsoft on June 7, 2018, released ADV180014 | June 2018 Adobe Flash Security Update that addresses the following vulnerabilities, which are described in Adobe Security Bulletin APSB18-19: CVE-2018-4945, CVE-2018-5000, CVE-2018-5001, CVE-2018-5002.

Intune moving to TLS 1.2 for encryption

Starting on October 31, 2018, Intune will support only Transport Layer Security (TLS) 1.2, to provide best-in-class encryption, to ensure our service is more secure by default, and to align with other Microsoft services such as Microsoft Office 365. The post provides a list of the devices and browsers that will not be able to work with TLS 1.2.

Support Lifecycle
The end of support (EOS) for SQL Server and Windows Server 2008 and 2008 R2 is approaching rapidly:
  • July 9, 2019 – SQL Server 2008 and 2008 R2
  • January 14, 2020 – Windows Server 2008 and 2008 R2
Microsoft Premier Support News
We are happy to announce the release of Security: Azure Security Center – Fundamentals. Azure Security Center (ASC) provides unified security management and advanced threat protection across hybrid cloud workloads.

This 4-day engagement gets you started with Security Center by teaching you how to create and apply security policies across your workloads, limit your exposure to threats, and detect and respond to attacks, with a Premier Field Engineer guiding you through the technologies, clarifying blockers you may have, and enabling key features of the product.

We are happy to announce the release of Security: Azure Information Protection – Fundamentals. Azure Information Protection (AIP) is a cloud-based solution that helps an organization classify, label, and protect its documents and emails. During this engagement, a Microsoft Premier Field Engineer (PFE) will help your technical staff understand how AIP works in the background, how data is actually encrypted, what the technical requirements for implementation are, and how AIP can be integrated with other cloud or on-premises applications.
Check out Microsoft Services public blog for new Proactive Services as well as new features and capabilities of the Services Hub, On-demand Assessments, and On-demand Learning platforms.

PowerShell Profiles Processing Illustrated


Hello everyone! My name is Preston K. Parsard (Platforms PFE), here again, this time for a procedural review of PowerShell profiles. Now, I realize that this topic is already well documented, and in fact I have included some great references at the end of this post where you can find more information. What I can offer in addition to these sources is an illustrated, step-by-step approach that explains how PowerShell profiles are loaded, processed, and relate to each other.

PURPOSE

Profiles can be used to establish a baseline set of features provided by references to variables, modules, aliases, functions, and even PowerShell drives. Profiles can also enable a common set of experiences, like colors for the various hosts on a system, shared among engineers such as Dana and Tim in our upcoming scenario, who will use the same development server. With these configurations, all users will have access to these resources and experiences by the time their individual user/host profile loads. In addition, individual profiles can be customized to each logged-on user's own preferences. Even if a user or team of engineers needs to log on to multiple systems, such as servers or workstations, they can still leverage a common set of resources by employing remote profiles hosted on a file share.

PRE-REQUISITES

Version
The tests and screenshots shown are based on Windows PowerShell version 5.1. The PowerShell console (Microsoft.PowerShell) and the Integrated Scripting Environment (Microsoft.PowerShellISE) hosts will be covered.

Hosts
Hosts in this context refers to the PowerShell engine used, such as the PowerShell console or the Integrated Scripting Environment (ISE), not the computer as in localhost. While there are other hosts that can be used for PowerShell, such as Visual Studio Code and Visual Studio 2017, we will focus our discussion on the native Windows PowerShell console and ISE, and acknowledge that the process is similar in concept for all hosts. The exceptions are where certain host profiles reside on the file system, and how each host's specific themes, appearance, and layout can be modified with profile settings.

Execution
Scripts cannot be executed under the default PowerShell execution policy, which is Restricted. Since profiles are implemented as PowerShell scripts, this execution policy will have to be changed to allow script execution so that the profile scripts will run. You will also require administrative access in order to change the execution policy on a system.
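
Before changing anything, you can check what you are starting from; this lists the effective policy at every scope:

# Show the execution policy for each scope (MachinePolicy, UserPolicy,
# Process, CurrentUser, LocalMachine)
Get-ExecutionPolicy -List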

SCENARIO
Adatum, a fictitious regional manufacturing company of 100 employees, has a team of Windows systems engineers who have recently been tasked with building their PowerShell script library. This repository will host new and existing scripts to automate routine tasks for the Windows server team.
We will examine the PowerShell profile processing experience for one of the senior members of this team – Dana, who will use the login alias usr.g1.s1 and is a member of the local administrators group on her machine. Dana will be logging on to the development domain, dev.adatum.com. There is also a usr.g2.s2 alias for the other engineer, Tim, but it will not be used for the demos. In this scenario, we will use screenshots taken after Dana logs on to the Windows Server 2016 development server named AZRDEV1001.

Figure 1: Windows PowerShell Profile Processing Flowchart.

PROFILE PROCESSING

Step 1:
Select machine.
Dana decides to examine and edit user profiles on the development server, AZRDEV1001, both for all users who log on to create and run PowerShell scripts and for her own individual profiles for each host.
Step 2:
User logs on (select user).
Dana logs on to AZRDEV1001 and her user profile is loaded. This is the first time she is logging on to AZRDEV1001 as it is a newly provisioned server in Microsoft Azure for the team.
Step 3:
usr.g1.s1 user selects and opens host (Console or ISE).
Dana opens both the PowerShell console and the ISE, since she will be editing profile scripts using the script pane of the ISE. Before editing these profile scripts, however, she needs to determine which ones already exist in each host and which must be created.
In the console host, Dana first sets the execution policy to RemoteSigned as follows.
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Force

Afterwards, she issues the following commands shown below in figure 2.


Figure 2: Listing and verifying console profile path availability.
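
For readers who cannot make out the screenshot, here is a sketch of commands that accomplish the same thing as figure 2: listing the four profile paths the console host knows about and testing which of the scripts already exist.

# $PROFILE carries one NoteProperty per profile scope; print each path
# and whether its script file exists yet
foreach ($p in 'AllUsersAllHosts','AllUsersCurrentHost',
               'CurrentUserAllHosts','CurrentUserCurrentHost') {
    '{0} -> {1} (exists: {2})' -f $p, $PROFILE.$p, (Test-Path -Path $PROFILE.$p)
}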


Figure 3: Creating the AllUsersAllHosts profile.


Figure 4: Creating the AllUsersCurrentHost profile.


Figure 5: The CurrentUserAllHosts path does not yet exist.

At this point, as shown in figure 5, Dana will have to create both the WindowsPowerShell directory and the profile.ps1 file for the CurrentUserAllHosts profile before editing it, since neither currently exists.

New-Item -path $Home\Documents\WindowsPowerShell -ItemType Directory

New-Item -path $profile.CurrentUserAllHosts -ItemType File


Figure 6: Creating the CurrentUserAllHosts profile.


Figure 7: Creating the CurrentUserCurrentHost profile.

Dana now closes the console and opens the ISE using the Run as Administrator option.


Figure 8: The …CurrentHost profiles are not available.

Notice that only the …AllHosts profiles, 1 & 3, were pulled into the current session for the ISE host. Any guesses why?
Well, it turns out that when the ISE opens, it tries to load the …CurrentHost profiles, 2 & 4, for the ISE as a separate host. Dana created the …CurrentHost profiles earlier while she was in the console, not the ISE, so only the console-specific …CurrentHost profiles exist.
This means that while Dana now has the ISE host open, she just needs to create the two ISE …CurrentHost profiles, starting with AllUsersCurrentHost and then CurrentUserCurrentHost.
As the console pane of the ISE shows here, these profiles do not yet exist and must be created before they can be loaded in any subsequent ISE sessions.


Figure 9: AllUsersCurrentHost and CurrentUsersCurrentHost profiles do not yet exist for the ISE host.


Figure 10: Creating the AllUsersCurrentHost and CurrentUserCurrentHost profiles.
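
A sketch of the equivalent commands, run from the elevated ISE session (New-Item -Force also creates any missing parent directories):

# Inside the ISE, $PROFILE points at the ISE-specific profile scripts
New-Item -Path $PROFILE.AllUsersCurrentHost -ItemType File -Force
New-Item -Path $PROFILE.CurrentUserCurrentHost -ItemType File -Force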

Now Dana can edit and customize the …CurrentHost profiles for the ISE using the psedit command.


Figure 11: Editing the AllUsersCurrentHost profile.


Figure 12: Editing the CurrentUserCurrentHost profile.

Dana has configured all profiles now, so she closes the ISE and we can continue to observe the results in the remaining steps.

Step 4c: Console host starts (Microsoft.PowerShell).
When the PowerShell console is invoked, it will first load the AllUsersAllHosts profile, which executes the profile script located at $PsHome\Profile.ps1, usually resolving to C:\Windows\System32\WindowsPowerShell\v1.0\Profile.ps1.
Step 4i:
ISE host starts (Microsoft.PowerShellISE).
When the ISE loads, it will also execute the AllUsersAllHosts profile script as the first in the sequence of all the profile scripts, just as shown above in step 4c since it is common to all hosts.
Step 5:
[1] Host agnostic profile loads for any logged-on user (usr.g1.s1 or usr.g2.s2).
The AllUsersAllHosts profile script also loads for any user who logs on. So even if Tim borrowed Dana's workstation, logged in with his user account usr.g2.s2, and opened either the PowerShell console or the ISE (or both), this PowerShell profile, if available, will execute. It is universal: it loads for any user using any host.


Figure 13: AllUsersAllHosts profiles loaded.
Step 6c:
[2] Console host profile loads for any logged-on user (usr.g1.s1 or usr.g2.s2).
Since Dana did open the PowerShell console, the AllUsersCurrentHost script $PsHome\Microsoft.PowerShell_profile.ps1, which resolves to C:\Windows\System32\WindowsPowerShell\v1.0\Microsoft.PowerShell_profile.ps1, will then execute and load. This is the profile that Dana created and configured in figure 4.

Figure 14: AllUsersCurrentHost profile loaded for the console host.
Step 6i:
[2] ISE host profile loads for current logged-on user (usr.g1.s1).
Dana also opened the ISE, which has its own version of the AllUsersCurrentHost profile script at $PsHome\Microsoft.PowerShellISE_profile.ps1, resolving to C:\Windows\System32\WindowsPowerShell\v1.0\Microsoft.PowerShellISE_profile.ps1, which Dana created previously in figure 11.

Figure 15: AllUsersCurrentHost profile loaded for the ISE host.
Step 7: [3] Host agnostic profile loads for current logged-on user (usr.g1.s1).
Next, the CurrentUserAllHosts profile script, if available, will run and load its configuration from $Home\Documents\WindowsPowerShell\Profile.ps1, which expands to C:\Users\usr.g1.s1\Documents\WindowsPowerShell\Profile.ps1.
Note that this profile script applies to both the PowerShell console host and the ISE for Dana, but only for Dana. Of course, if Tim had logged on to AZRDEV1001, the path would be different, because Tim has a separate user profile at C:\Users\usr.g2.s2, not C:\Users\usr.g1.s1.
Step 8c:
[4] Console host profile loads for current logged-on user (usr.g1.s1).
Finally, the last user- and host-specific profile, CurrentUserCurrentHost, for the PowerShell console will load for Dana from $Home\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1 (C:\Users\usr.g1.s1\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1).

Figure 16: Console host loaded for current user.
Step 8i: [4] ISE host profile loads for current logged-on user (usr.g1.s1).
The CurrentUserCurrentHost ISE-specific profile will also load for Dana, but the profile script filename is slightly different and includes "ISE". The path is $Home\Documents\WindowsPowerShell\Microsoft.PowerShellISE_profile.ps1 (C:\Users\usr.g1.s1\Documents\WindowsPowerShell\Microsoft.PowerShellISE_profile.ps1), and it loads only if it is available, which, as we know for this scenario, it is, because Dana created it in figure 12.

Figure 17: ISE host loaded for current user.
Step 9:
END. All available profiles loaded.
At this point, all four (4) profiles per host are loaded if, and only if, they were available; otherwise they are skipped and the next profile in the processing order is loaded…if available. This is because each profile represents the complete path to a corresponding PowerShell script, which is dot-sourced, or pulled into the current host session, as the host starts up. Therefore, if a profile script in the set of 4 isn't available, it is simply skipped as the host attempts to load each of them.


Figure 18: All profiles loaded for all hosts.

Dana had configured all profiles at the beginning, which is why we see the effective color settings and variable assignments here now.

Figure 19: Profile Loading Sequence and Relationships

Now that we've covered the details, let's summarize by reviewing the simpler Venn diagram in figure 19 above, focusing on just the profile loading sequence and each profile's relationship to the hosts.

[1] We'll start at item 5, which shows that for either the console or the ISE host, the AllUsersAllHosts profile at $PsHome\Profile.ps1 will load. Note in the diagram how this step is shared between both the console and the ISE, since it appears in the intersection of both circles.
[2] If the console host was launched, then at 6c the AllUsersCurrentHost profile, located at $PsHome\Microsoft.PowerShell_profile.ps1, is pulled into the console session.
[2] If, however, the ISE was opened, even if the console was already open, then 6i indicates that AllUsersCurrentHost will load. Although it has the same name as in 6c, it is specific to the ISE host, so the directory will be identical but the filename will differ, reflecting that it is associated with the ISE and not the console. This is represented by the full profile path $PsHome\Microsoft.PowerShellISE_profile.ps1.
[3] Step 7, another profile shared between hosts, is CurrentUserAllHosts, with the path value $Home\[My] Documents\WindowsPowerShell\Profile.ps1, which will be dot-sourced into the loading session of either the console or the ISE, or both if both are opened.
[4] For the console host, step 8c represents the CurrentUserCurrentHost profile, pointing to the path $Home\[My] Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1.
[4] So it's no surprise that 8i, although using the same profile name as 8c (CurrentUserCurrentHost), actually points to $Home\[My] Documents\WindowsPowerShell\Microsoft.PowerShellISE_profile.ps1, because now we're importing an ISE host-specific profile instead of the console version.

User    | Host    | Sequence | Shared between hosts | Path
All     | All     | 1        | Yes                  | $PsHome\Profile.ps1
All     | Current | 2        | No                   | $PsHome\Microsoft.<hostid>_profile.ps1
Current | All     | 3        | Yes                  | $Home\[My] Documents\WindowsPowerShell\Profile.ps1
Current | Current | 4        | No                   | $Home\[My] Documents\WindowsPowerShell\<hostid>_profile.ps1

Figure 20: Profile patterns

Let's see if we can identify some common patterns to reinforce this review. Notice in figure 20 that whenever there is an All hosts reference, as in sequences 1 and 3, the profile script name is Profile.ps1; otherwise, the script name takes the form …<hostid>_profile.ps1, which is host-dependent and reflects the current host, as for the AllUsersCurrentHost and CurrentUserCurrentHost profiles. Whenever we see a CurrentUser part in a profile name, it is specific to the currently logged-on user, and therefore the full path to that user's profile must begin with their home directory at $Home. For example, $Home resolves to C:\Users\usr.g1.s1 for Dana, and it would resolve to C:\Users\usr.g2.s2 for Tim if he were to log on and initialize his profile.

Shared Profiles
Now, what if Dana and Tim both need to log on to various systems in dev.adatum.com but, for convenience, want to use the same set of shared scripts, functions, modules, and variables sourced from a single, common PowerShell profile?
Dana decides to configure this profile, since she knows that she can dot-source a shared profile script in any of their CurrentUser… profiles (for either Dana or Tim) on each machine by adding a line with the following syntax:
. \\<server>\<share>\<SharedProfileScript.ps1>

First, she creates the remote \\azrweb1001\profiles share and sets Modify permissions to grant the group to which she and Tim belong, GS-AutomationTeam, access to the shared profile. Dana then creates the profile on the network file share and edits it as shown in figure 22 below.
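
A sketch of that share setup, assuming C:\Profiles as the local folder on azrweb1001 and DEV as the NetBIOS name of the dev.adatum.com domain (both are assumptions, not taken from the screenshots):

# On azrweb1001: create the folder, share it, and grant the team Modify access
New-Item -Path 'C:\Profiles' -ItemType Directory -Force
New-SmbShare -Name 'profiles' -Path 'C:\Profiles' -ChangeAccess 'DEV\GS-AutomationTeam'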


Figure 22: Creating a remote shared profile.

Next, Dana dot sources the remote profile as shown below in line 3.


Figure 23: Dot sourcing a remote profile in an existing profile for CurrentUserAllHosts.
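
For reference, the dot-source line itself might look like the following; SharedProfile.ps1 is a hypothetical name for the script on the share:

# Pull the shared profile into this session as it loads
. \\azrweb1001\profiles\SharedProfile.ps1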

Since the execution policy was originally set to RemoteSigned on AZRDEV1001, PowerShell perceives network shares as remote locations and insists that any scripts on them be signed. Therefore, Dana must add the server on which the share resides to the local intranet zone in her browser's advanced options.


Figure 25: Adding a server with shares to the local intranet zone.

Now, when Dana opens any host on AZRDEV1001, she sees that the remote profile is loaded.


Figure 26: Remote profile loaded in ISE host.


Figure 27. Remote profile loaded in console host.

Cumulative Effect
Profiles are cumulative: after the common baseline profile AllUsersAllHosts loads, the remaining profiles contribute their values toward the final, individual CurrentUserCurrentHost profile as well. This is why, after a host fully loads, we see all the values for $profile1, $profile2, $profile3, and $profile4 that were assigned in each separate profile, confirming the cumulative effect.

Conflict Behavior
We observed that for the console host, although Dana set the foreground color to green, when the 4th profile, CurrentUserCurrentHost, loaded and specified a foreground color of magenta, the resultant, or effective, color was magenta after all. That's because the most recent profile to load in the sequence wins, or "last wins" for short, whenever there is a conflicting value.
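
A tiny illustration of "last wins", using the console foreground color from the scenario:

# In AllUsersAllHosts (loads first):
$Host.UI.RawUI.ForegroundColor = 'Green'

# In CurrentUserCurrentHost (loads last), so magenta is the effective color:
$Host.UI.RawUI.ForegroundColor = 'Magenta'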

Other Hosts
In case you're still curious, here are the profile paths for some other hosts. Note that the …CurrentHost profiles include the host ID in the profile filenames.

PowerShell Tools for Visual Studio 2017
AllUsersAllHosts : C:\Windows\System32\WindowsPowerShell\v1.0\Profile.ps1
AllUsersCurrentHost : C:\Windows\System32\WindowsPowerShell\v1.0\PoshTools_profile.ps1
CurrentUserAllHosts : C:\Users\prestopa\Documents\WindowsPowerShell\Profile.ps1
CurrentUserCurrentHost : C:\Users\prestopa\Documents\WindowsPowerShell\PoshTools_profile.ps1

PowerShell Language Support for Visual Studio Code
AllUsersAllHosts : C:\Windows\System32\WindowsPowerShell\v1.0\profile.ps1
AllUsersCurrentHost : C:\Windows\System32\WindowsPowerShell\v1.0\Microsoft.VSCode_profile.ps1
CurrentUserAllHosts : C:\Users\prestopa\Documents\WindowsPowerShell\profile.ps1
CurrentUserCurrentHost : C:\Users\prestopa\Documents\WindowsPowerShell\Microsoft.VSCode_profile.ps1

SUMMARY
The combination of hosts and users provides 4 possibilities: AllUsersAllHosts, AllUsersCurrentHost, CurrentUserAllHosts, and CurrentUserCurrentHost. Given two hosts, the console and the ISE, there are a total of 6 profile scripts: 2 that are common to AllHosts, plus 2 that are host-dependent for the console, plus 2 that are host-dependent for the ISE. If it makes it easier, you can try the abbreviations below for the processing order. Also, remember that users must log on before they open a particular host, so we list the user's profile condition before the host condition (AllUsers… or CurrentUser…), and notice that the All conditions always get evaluated first, both for users and then for hosts.

  1. AUAH (AllUsersAllHosts)
  2. AUCH (AllUsersCurrentHost)
  3. CUAH (CurrentUserAllHosts)
  4. CUCH (CurrentUserCurrentHost)

See the pattern now?

Well, that's it for now, friends. I hope that you have a better understanding of how profiles are loaded and relate to each other. If it hasn't sunk in immediately, just keep this post as a handy reference and rely on the visual cues from the diagrams and table as a quick guide. If you do this often enough, and use the references listed below, your comprehension of PowerShell profiles will continue to grow.

Happy scripting!

REFERENCES

Index | Title                                                  | Type                          | Link
1     | about_Profiles                                         | Microsoft Docs                | Link
2     | Understanding the Six PowerShell Profiles              | TechNet                       | Link
3     | How to Use Profiles in Windows PowerShell ISE          | Microsoft Docs                | Link
4     | Persistent PowerShell: the PowerShell Profile          | Simple-Talk (www.redgate.com) | Link
5     | Getting Started with PowerShell Profiles               | YouTube Video                 | Link
6     | Use a Central File to Simplify Your PowerShell Profile | TechNet (Scripting Guy)       | Link