
Deploying Upgrade Readiness without SCCM


Hello everyone! My name is Paul Fitzgerald and I’m a Platforms PFE. I love learning new things and sharing what I have learned with others. My role as a PFE allows me the opportunity to do just that, and I’ll now have the pleasure of sharing with you all as well!

I’ve been working a lot with my customers lately on Windows 10 adoption. One thing that has helped tremendously in providing valuable insight into their environments is Windows Analytics Upgrade Readiness. So today, I thought we’d take a look at deploying Upgrade Readiness to help you migrate from legacy Windows operating systems to Windows 10 and to help you stay current with Windows 10 feature updates.

Many customers use System Center Configuration Manager to deploy Upgrade Readiness to their environment. But what if you don’t have SCCM? That’s what we’ll focus on today – deploying Upgrade Readiness without SCCM. More specifically, we’re going to go over an approach that utilizes Group Policy Preferences and Scheduled Tasks to perform the initial configuration and periodic script execution. Let’s get started, shall we?

Let’s review Upgrade Readiness

Upgrade Readiness is a free solution built on Azure Operations Management Suite (OMS). It provides great insight into your client environment to help you plan and manage your Windows 10 upgrade process. It not only helps you move from Windows 7 and Windows 8.1 to Windows 10, but also helps you align with the Windows as a Service model and keep current with Windows 10 feature updates.

Upgrade Readiness accomplishes this by analyzing your organization’s telemetry data and providing you with a workflow that guides you through the entire process, a detailed computer and application inventory, incredible insights into application and driver compatibility, and more. You can read more about Upgrade Readiness here.

What if you don’t have Azure? No worries, it’s simple to set up and everything we’re talking about today is free! When configured correctly, all data associated with the Upgrade Readiness solution is exempt from billing in both OMS and Azure. Upgrade Readiness data does not count toward OMS daily upload limits. Have a look at the Getting Started with Upgrade Readiness article for more details.

Getting started is easy

We recommend you start with a small pilot to ensure everything is in working order and you’ve met all the prerequisites. Sometimes it takes a bit of time to get approval to enable telemetry and to get firewalls and/or proxy servers configured, so be sure to read through the information on telemetry and connectivity. Once you’ve successfully completed your pilot, you’re ready to deploy at scale. There are essentially three options to accomplish that goal:

  1. Configure settings via Group Policy (Active Directory)
  2. Configure settings via Mobile Device Management (Intune)
  3. Deploy the Upgrade Readiness deployment script

With the first two options, you may have to wait a long time (possibly weeks) before you see data about your devices.

We recommend using the deployment script and further recommend scheduling it to run monthly. Doing so ensures a full inventory is sent monthly and includes various tests to help alert you, through the Upgrade Readiness solution, to potential issues. This is frequently accomplished by creating a package in SCCM and deploying that package to your collection(s).

What you can do if you don’t have SCCM

Not every customer has SCCM or a similar client management solution. And few are content waiting for results after configuring settings via Group Policy or MDM. To avoid having your client administrators walk from PC to PC to manually run the deployment script, you might consider configuring Group Policy Preferences to create a couple of Scheduled Tasks that will automate the initial configuration of Upgrade Readiness and schedule the deployment script to run monthly, as recommended.

Let’s review how to set this up step by step!

Step 1: Prepare an Upgrade Readiness folder on a network share

We’ll store the Upgrade Readiness script on a network share that’s available to clients. This will give us a central location to manage settings and makes it simple to upgrade the script when a new version is released.

First, extract the Upgrade Readiness deployment script, then place the contents of the Deployment folder on your file server. In my case, I placed it in a subdirectory of an existing share. Since the script will run as the Local System account, you’ll next need to ensure all your computer accounts have read access to this folder on this share. I usually do this by granting the Everyone group Change and Read permission on the share and limiting NTFS permissions on the folder in question as shown below. Finally, don’t forget to ensure your RunConfig.bat is configured properly.
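If you prefer to script this preparation, here’s a minimal sketch; the paths, share name, and domain group are examples, so substitute your own:

# Run on the file server
New-Item -Path 'D:\Shares\UpgradeReadiness' -ItemType Directory -Force

# Share permissions: Everyone gets Change and Read; NTFS does the real filtering
New-SmbShare -Name 'UpgradeReadiness' -Path 'D:\Shares\UpgradeReadiness' -ChangeAccess 'Everyone'

# NTFS permissions: computer accounts need Read & Execute, because
# RunConfig.bat runs as Local System on each client
icacls 'D:\Shares\UpgradeReadiness' /grant 'CONTOSO\Domain Computers:(OI)(CI)RX'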

Step 2: Prepare the Group Policy Object

We’re going to use Group Policy Preferences to create two Scheduled Tasks. The first will be an Immediate Task and will be responsible for running the Upgrade Readiness deployment script the first time the Group Policy is applied. The second Scheduled Task will be configured to run the Upgrade Readiness deployment script once per month.

To get started, open the Group Policy Management console, browse to and select Group Policy Objects in the tree view, and finally select New from the Action menu. Next, provide a name for the GPO and click OK. I chose to call mine Upgrade Readiness.

Before we edit the GPO, there’s one more thing I like to take care of. Since we’ll only be working with Computer settings, select the new Upgrade Readiness GPO in the tree view and move to the Details tab. Click the drop-down next to GPO Status and choose User configuration settings disabled.
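Both of these steps can also be scripted with the GroupPolicy module from RSAT; a quick sketch, using the same GPO name as above:

Import-Module GroupPolicy
$gpo = New-GPO -Name 'Upgrade Readiness'
# We only deliver Computer settings, so disable the User half of the GPO
$gpo.GpoStatus = 'UserSettingsDisabled'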

Now right-click the new Upgrade Readiness GPO in the tree view and choose Edit… to open the Group Policy Management Editor. Browse to Computer Configuration > Preferences > Control Panel Settings > Scheduled Tasks. Here’s where we’ll create the two new Scheduled Tasks.

Step 3: Create the initial Scheduled Task

Let’s start by creating the Immediate Task. Right-click on Scheduled Tasks in the tree view, then click New > Immediate Task (At least Windows 7). Then configure it as described below.

General Tab

  • Name: Upgrade Readiness (Initial)
  • Description: This Scheduled Task executes the Upgrade Readiness deployment script once upon initial application.
  • User: NT AUTHORITY\System
  • Run whether user is logged on or not
  • Run with highest privileges
  • Configure for: Windows 7, Windows Server 2008 R2

Actions Tab

  • Action: Start a program
  • Program/script: <Path to RunConfig.bat>

Conditions Tab

  • Start only if the following network connection is available: Any connection

Settings Tab

  • No changes required

Common Tab

  • Apply once and do not reapply

Step 4: Create the recurring Scheduled Task

Now let’s create the recurring Scheduled Task. Right-click on Scheduled Tasks in the tree view, then click New > Scheduled Task and configure it as described below. I’ve chosen to run the deployment script on the first Monday of each month; you can choose whatever schedule best meets your needs. (A single-machine equivalent for piloting follows the settings below.)

General Tab

  • Name: Upgrade Readiness (Recurring)
  • Description: This Scheduled Task executes the Upgrade Readiness deployment script once a month.
  • User: NT AUTHORITY\System
  • Run whether user is logged on or not
  • Run with highest privileges
  • Configure for: Windows 7, Windows Server 2008 R2

Triggers Tab

  • Begin the task: On a schedule
  • Settings: Monthly
  • Months: <Select all months>
  • On: First Monday (Or most appropriate options for your organization)

Actions Tab

  • Action: Start a program
  • Program/script: <Path to RunConfig.bat>

Conditions Tab

  • Start only if the following network connection is available: Any connection

Settings Tab

  • Allow task to be run on demand
  • Run task as soon as possible after a scheduled start is missed

Common Tab

  • No changes required
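
If you’d like to pilot the recurring task on a single machine before rolling out the GPO, a rough schtasks equivalent is shown below (run from an elevated prompt); the UNC path is an example, so point it at your own share:

schtasks.exe /Create /TN "Upgrade Readiness (Recurring)" `
    /TR "\\fileserver\UpgradeReadiness\RunConfig.bat" `
    /SC MONTHLY /MO FIRST /D MON /RU SYSTEM /RL HIGHEST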

We’re done with the Group Policy Management Editor, so you can go ahead and close it and move on to the next section.

Step 5: Link the Group Policy Object to your Organizational Units

Now that the GPO has been created and configured, you’ll need to link it to the Organizational Unit(s) that contain(s) your Windows 7, Windows 8.1, and Windows 10 PCs. NOTE: Do not link this GPO to OUs that contain systems running Windows Server.

Browse to and select an applicable Organizational Unit, then select Link an Existing GPO… from the Action menu. Locate and select the new Upgrade Readiness GPO and click OK. Repeat for each OU that contains your PCs as described above.
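If you have many OUs, the links can be scripted as well; the distinguished name below is an example:

New-GPLink -Name 'Upgrade Readiness' -Target 'OU=Workstations,DC=contoso,DC=com'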

What to do now that you’ve deployed your new GPO

Once your PCs receive the new GPO, the “immediate” Scheduled Task will be executed, and their first full scan will be sent to Microsoft’s telemetry endpoints tagged with your Commercial ID. You should immediately begin to see some details listed under Script Insights within the Upgrade Readiness solution Settings as depicted in the following screenshot.

After that, you can expect to see processed data show up in the Upgrade Readiness solution within the next 48-72 hours as seen in the next screenshot.

Wrapping up

There you have it! Upgrade Readiness is a great tool to help you more quickly and easily upgrade to Windows 10. And as you can see, following the recommended deployment practices doesn’t require a full management platform such as System Center Configuration Manager.

Until next time!

— Paul Fitzgerald, Platforms PFE


Configuring a PowerShell DSC Web Pull Server to use SQL Database


Introduction

Hi! Thank you for visiting this blog to find out more about how you can configure a PowerShell DSC Web Pull Server to use an SQL database instead of the “Devices.edb” solution we currently use.

Since you made it this far, I assume that you’re already familiar with PowerShell and PowerShell Desired State Configuration; if not, I encourage you to read more about PowerShell and PowerShell Desired State Configuration.

Either way, you are probably ready to experiment with Desired State Configuration or ready to implement a Desired State Configuration architecture within your environment (perhaps even production).

I wrote this blog post to show you how you can implement an example Desired State Configuration environment where the Secure Pull Web Server uses a SQL database to store all data.

About me

Before I do so I will tell you a little bit about myself.

My name is Serge Zuidinga and I’m a Dutch Premier Field Engineer with System Center Operations Manager as my core technology.

I started working at Microsoft in September 2014 focusing on supporting customers with their Operations Manager environment(s) and, among other things, the integration with automation products like System Center Orchestrator.

I always had a passion for scripting and application development so this was the ideal situation for me since I could use my passion for PowerShell in combination with Operations Manager and Orchestrator.

I’ve been seriously working with PowerShell ever since and am currently involved with not only System Center Operations Manager and Orchestrator but with Azure in general and Azure Automation, OMS, EMS, Operations Manager Management Pack Authoring, Visual Studio, Visual Studio Team Foundation Server, PowerShell and PowerShell Desired State Configuration in particular.

I currently also support customers in designing and building a Continuous Integration and Continuous Deployment pipeline with Desired State Configuration and Visual Studio Team Foundation Server, besides Operations Manager, Orchestrator and Operations Management Suite.

Let’s get started

Glad to see you made it through the introduction.

So, this is the plan:

  • Step 1: the prerequisites
  • Step 2: implement our example environment
  • Step 3: watch it work
  • Step 4: enjoy our accomplishments

Prerequisites

Windows Server 2019 Technical Preview

To be able to leverage the ability to use an SQL database with our pull server, we need to deploy a Windows Server 2019 Technical Preview server which holds the version of WMF 5.1 that includes the ability to connect to SQL server.

We should make sure that we have the latest version of Windows Server 2019 Technical Preview installed since, at least up until build 17639, the MUI file could be missing required elements to support SQL server.

Note: there is currently no support for SQL with DSC on Windows Server 2016 (or previous Windows Server versions) even though WMF 5.1 is available for Windows Server 2016!

If you want, you can read all about the supported database systems for WMF versions 4.0 and higher at Desired State Configuration Pull Service (see the “Supported database systems” section), and please check out this great post by Raimund Andrée on how to use SQL Server 2016 as the backend database for a Desired State Configuration pull server.

We also need to make sure that we have version 8.2.0.0 (or higher) of the “xPSDesiredStateConfiguration”-module installed on our Windows Server 2019 Technical Preview server.

Hint: Find-Module -Name xPSDesiredStateConfiguration | Install-Module

Note: version 8.3.0.0 is the latest version of the “xPSDesiredStateConfiguration”-module at the time this blog post was written

A certificate for enabling an HTTPS binding within IIS is also required for our example environment to work, so please make sure you have a web server certificate installed on your Windows Server 2019 Technical Preview server along with the “xPSDesiredStateConfiguration”-module.

Finally, you need access to a SQL Server instance to host our database.

From a firewall perspective, we only need access to the TCP port the SQL server instance is listening on from our pull server.

There’s no need to create a database upfront since this will be taken care of by our pull server (the database will always be created with “DSC” as its name), and both SQL and Windows Authentication are supported.

Note: you can use a Domain User account instead of the “Local System”-account the IIS AppPool is configured with by default.

If you want to use a Domain User account, you only need to make sure that it has “dbcreator”-permissions configured for the SQL Server instance that will host the “DSC”-database.
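For example, a sketch using the SqlServer module, with placeholder instance and account names:

Invoke-Sqlcmd -ServerInstance 'SQL\DSC' -Query @"
CREATE LOGIN [CONTOSO\svc-dscpull] FROM WINDOWS;
ALTER SERVER ROLE [dbcreator] ADD MEMBER [CONTOSO\svc-dscpull];
"@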

Let’s get cracking!

Implement a Secure Web Pull Server

Step 1

Install the PowerShell Desired State Configuration Service by using “Add Roles and Features”, available through Server Manager, or from PowerShell: Add-WindowsFeature -Name DSC-Service

Step 2

Get the thumbprint of our web server certificate we are going to use for our HTTPS binding: Get-ChildItem -Path Cert:\LocalMachine\My\ -SSLServerAuthentication

Get a unique GUID that we are going to use as a registration key: (New-Guid).Guid

Get the SQL connection string that will allow our pull server to connect to the appropriate SQL server instance or modify and use one of the following examples:

  • Windows Authentication: Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=master;Data Source=SQL\DSC
  • SQL Authentication: Provider=SQLOLEDB.1;Password="password";Persist Security Info=True;User ID=user;Initial Catalog=master;Data Source=SQL\DSC

Note: you can leave Initial Catalog=master as is because we’ll create and use a specific database (called “DSC”) for use with our pull server.

Step 3

Create a MOF file that we will use to configure our pull server. You can modify and use this example:

# === Arguments ================================================ #
# We got these from step 2 #
$Thumbprint = "BF6E5EFC44A15FE238CDE2A77D9A12B07B0BA200"
$Guid = "5fd98d96-7864-4006-b60d-0a907a676c6a"
# === Arguments ================================================ #

# === Section Secure Web Pull Server with SQL database ========= #
Configuration SecureWebPullServerWithSQLDatabase {
    Param(
        [string]$NodeName = "localhost",
        [string]$Thumbprint = $(Throw "Provide a valid certificate thumbprint to continue"),
        [string]$Guid = $(Throw "Provide a valid GUID to continue")
    )

    Import-DscResource -ModuleName PSDesiredStateConfiguration
    Import-DscResource -ModuleName xPSDesiredStateConfiguration

    Node $NodeName {
        WindowsFeature DSCServiceFeature {
            Ensure = "Present"
            Name   = "DSC-Service"
        }

        xDscWebService SecureWebPullServer {
            Ensure                       = "Present"
            EndpointName                 = "SecureWebPullServer"
            Port                         = 443
            PhysicalPath                 = "C:\Program Files\WindowsPowerShell\DscService\SecureWebPullServer\Website"
            CertificateThumbPrint        = $Thumbprint
            ModulePath                   = "C:\Program Files\WindowsPowerShell\DscService\SecureWebPullServer\Modules"
            ConfigurationPath            = "C:\Program Files\WindowsPowerShell\DscService\SecureWebPullServer\Configuration"
            State                        = "Started"
            DependsOn                    = "[WindowsFeature]DSCServiceFeature"
            RegistrationKeyPath          = "C:\Program Files\WindowsPowerShell\DscService"
            AcceptSelfSignedCertificates = $true
            UseSecurityBestPractices     = $true
            SqlProvider                  = $true
            SqlConnectionString          = "Provider=SQLOLEDB.1;Integrated Security=SSPI;Persist Security Info=False;Initial Catalog=master;Data Source=PUT_DMZ_SQL_SERVER_INSTANCE_HERE"
        }

        WindowsFeature IISMGMTConsole {
            Ensure    = "Present"
            Name      = "Web-Mgmt-Console"
            DependsOn = "[xDscWebService]SecureWebPullServer"
        }

        File RegistrationKeyFile {
            Ensure          = "Present"
            Type            = "File"
            DestinationPath = "C:\Program Files\WindowsPowerShell\DscService\RegistrationKeys.txt"
            Contents        = $Guid
            DependsOn       = "[xDscWebService]SecureWebPullServer"
        }
    }
}
# === Section Secure Web Pull Server with SQL database ========= #

SecureWebPullServerWithSQLDatabase -NodeName PUT_SERVER_FQDN_HERE -Thumbprint $Thumbprint -Guid $Guid -OutputPath C:\Windows\Temp -Verbose

Just open it in your editor of choice (I use Visual Studio Code, but Windows PowerShell ISE or any other editor works fine) and make the necessary modifications (at least the thumbprint and registration key).

Assuming the previous steps went well, we should now have a MOF file in C:\Windows\Temp on our Windows Server 2019 Technical Preview server.

Let’s get our pull server configured by consuming the MOF file we just created: Start-DscConfiguration -Path C:\Windows\Temp -Wait -Verbose

Our pull server has now been configured and we are ready to host (partial) configurations and have clients connect to consume the appropriate configurations.
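As a quick sanity check (an extra step, not required), you can ask the endpoint for its OData service document; replace the FQDN with your own:

Invoke-WebRequest -Uri "https://FQDN_SECURE_WEB_PULL_SERVER/PSDSCPullServer.svc" -UseBasicParsing

A 200 response tells you IIS, the certificate binding and the DSC service are all in place.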

We will create such a partial configuration as an example so that we can serve any connected clients.

So, just as we did for the pull server, we can create a configuration and MOF file that our client(s) will consume. You can modify and use this example:

Configuration TelnetClient {
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node TelnetClient {
        WindowsFeature TelnetClient {
            Name   = 'Telnet-Client'
            Ensure = 'Present'
        }
    }
}

TelnetClient -OutputPath "C:\Program Files\WindowsPowerShell\DscService\SecureWebPullServer\Configuration" -Verbose

New-DscChecksum -Path "C:\Program Files\WindowsPowerShell\DscService\SecureWebPullServer\Configuration" -OutPath "C:\Program Files\WindowsPowerShell\DscService\SecureWebPullServer\Configuration" -Verbose

We are now ready to connect one or more clients to our pull server. You can modify and use the following example on a Windows Server machine that you want to connect to our pull server (for this example, you should not use the pull server itself):

[DscLocalConfigurationManager()]
Configuration PartialConfig {
    Param([string]$NodeName = 'localhost')

    Node $NodeName {
        Settings {
            RefreshFrequencyMins           = 30
            RefreshMode                    = 'PULL'
            ConfigurationMode              = 'ApplyAndAutocorrect'
            AllowModuleOverwrite           = $true
            RebootNodeIfNeeded             = $true
            ConfigurationModeFrequencyMins = 60
        }

        ConfigurationRepositoryWeb PullServer {
            ServerURL          = 'https://FQDN_SECURE_WEB_PULL_SERVER/PSDSCPullServer.svc/'
            RegistrationKey    = '5fd98d96-7864-4006-b60d-0a907a676c6a'
            ConfigurationNames = @('TelnetClient')
            #ConfigurationNames = @('TelnetClient','Web-Mgmt-Console') # Multiple partial configurations
        }

        ReportServerWeb PullServer {
            ServerURL       = 'https://FQDN_SECURE_WEB_PULL_SERVER/PSDSCPullServer.svc/'
            RegistrationKey = '5fd98d96-7864-4006-b60d-0a907a676c6a'
        }

        PartialConfiguration TelnetClient {
            Description         = 'Installs the Telnet Client'
            ConfigurationSource = @('[ConfigurationRepositoryWeb]PullServer')
        }
    }
}

PartialConfig -OutputPath C:\Windows\Temp -Verbose

Assuming the previous steps went well, we should now have a meta MOF file in C:\Windows\Temp on the server we want to connect to the pull server, which allows for configuring the Local Configuration Manager.

To configure the LCM to actually connect to and retrieve (a) configuration(s) from our pull server, we just need to execute: Set-DscLocalConfigurationManager -Path C:\Windows\Temp -Verbose
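Optionally, you can force an immediate pull and check compliance right away (run on the client node):

Update-DscConfiguration -Wait -Verbose
Test-DscConfiguration -Detailed

Update-DscConfiguration triggers a pull from the registered pull server, and Test-DscConfiguration should come back True once the Telnet Client has been installed.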

Step 4

Congratulations on implementing your pull server with a SQL database! Sit back and enjoy your newly installed and configured PowerShell DSC Secure Web Pull server with a SQL database!

Stay tuned for the next post, where I will tell you more about how you can pull reports.

Let’s Build a Switch Embedded Team in SCVMM!


Hello, my name is Michael Godfrey and I am a Platforms Premier Field Engineer (PFE) at Microsoft. I have been a Fabric Administrator for the past few years and have made it a habit of building quite a few Hyper-V clusters with System Center Virtual Machine Manager. I have helped a lot of customers deploy Switch Embedded Teams in SCVMM 2016 over the past year, and like every good engineer, I decided it was time to share that knowledge with the world.

So, in this post, I will be walking you through a deployment of a Switch Embedded Team in SCVMM 2016 or the new SCVMM 1801 edition. The steps are the same in both, so feel free to use SCVMM 1801; if you are not familiar with our Semi-Annual Channel release of System Center, you can read more about it here.

If you are not familiar with it, a Switch Embedded Team, or SET, is a new function in Server 2016 as well as SCVMM 2016/1801 that allows converging multiple network adapters into a single team. Teaming itself is not new since 2012 R2, but SET allows us to simplify the deployment of our teams, with the combined benefits of hardware-accelerated networking features like RDMA and RSS. The SET is managed at the Hyper-V switch level and not the network team or LBFO level, ensuring that we can build multiple vSwitches inside the team while preserving our QoS.

As with every network deployment, it is wise to understand your available networks before you start deploying. In this example, I am using VLANs presented to me by my Network Team that are already created and deployed. I will be taking these networks and creating a matching virtual network in SCVMM and Hyper-V. In this example I have the following networks:

Name                  VLAN  Subnet
Management (Host OS)  10    192.168.10.0/28
Live Migration        11    192.168.11.0/29
Cluster               12    192.168.12.0/29

These are just example networks for this demo; you will need subnets with enough range for all your hosts. I would also include other networks like SMB, guest VLANs for all the Virtual Machines, and backup networks. For the sake of the post, I wanted to keep things simple.

Logical Network

The first thing you need to do is create a Logical Network. You can think of the Logical Network as the definition of all your Hyper-V hosts’ networks for your entire organization. This is the central place to manage our “distributed networking,” if you will, in VMM. In it, we will deploy several Network Sites. The Network Sites will be the boundary for the network segments, and I like to describe them as datacenters. You can use them however you like, as a DMZ, a lower lifecycle, or any other network boundary, but I have found datacenters work best for me.

You will need to visit the Fabric workspace of VMM to get started with Logical Networks; you can find them in the Networking section. Start by creating a new logical network, giving it a name and a description. You will then have a choice between three options for the type of logical network you would like. This is a crossroads: you will not be able to change it later, so pick one carefully (you can always use multiple logical networks if you need more than one type).

The first option is One Connected Network. This is a great option if you are planning on using the same virtual network for all your VMs or if you are planning on implementing Software Defined Networking v2 in Server 2016. This option allows you to create your own network segmentation at a virtual level but will require the deployment of Network Controllers in your environment.

The most popular option I see is the second, VLAN-Based Independent networks. This option is useful for providing VLAN-based segmentation for our VMs and the infrastructure networks. It requires you to add each VLAN to the assigned Network Site in VMM and then create a VM Network. Once the Logical Network is deployed to a host, any change you make, like adding a VM Network and subnet, is automatically associated with the host(s), essentially working in a distributed switch model.

The third option is a Private Network. This is great in a lab scenario, where all the VMs will be able to communicate among themselves; they will, however, not be able to communicate outside their VM Network to other resources outside the cluster.

Network Site

Once you select the Logical Network setting, you will need to create your first, of many, Network Sites. Remember, Network Sites can be any form of network isolation you need; I prefer to separate my sites as datacenter locations. You will give your Network Site a name and then scope it to your Host Groups, which will make sure that network can only be deployed to hosts in that Network Site. This prevents accidental deployments and helps create my favorite word in virtualization: consistency.

You will then need to add the VLAN ID, the subnet, or both. No one will ever fault you for providing both, so I suggest adding both; the more information you provide, the better the design.

Port Profile

The next step in our journey toward a consistent and highly available Switch Embedded Team is to provide a Port Profile. There are two types of Port Profiles: Uplink and Virtual. We will be using Virtual Port Profiles in logical switching but will need to define an Uplink Port Profile for the physical adapters to use in our virtual networks. The Uplink Port Profile also defines the load balancing method and algorithm our physical adapters are subjected to. You have a few choices, but in utilizing Switch Embedded Teaming, we are constrained to using Switch Independent connections for our physical adapters. This means that each of our NICs is connected to a separate physical switch. Most admins connect NICs 1 & 3 to Switch A, and NICs 2 & 4 to Switch B, to provide fault tolerance. This is a best practice and is widely accepted as a good design.

You will see that LACP is another option; while it is great if you can configure your switch with aggregate ports, it is not supported with SET, so we will not use it.

You will also be picking a load balancing option; with SET we will choose Host Default, which provides load balancing for all network traffic in our team, across all NICs. This works best when we utilize things like SMB Multichannel and RDMA (Remote Direct Memory Access) to use the full bandwidth available to our NICs.

The last option in the Port Profile is selecting a Host Group that can utilize it. The great thing about Port Profiles is that they are Logical Network dependent and not site dependent, so you can use just one, or you can make several; the choice is up to you and depends on the type of network traffic you expect.

VM Networks

The Virtual Machines and virtual switches need something to connect to in order to provide their network isolation; these are known as VM Networks. These networks provide the VLAN and subnet separation in VMM and should be a virtual representation of your physical networks. You will need these in the Uplinks section of Logical Switches and can create them in the Fabric workspace. When creating them, give them a name so that when your administrators assign them, they can be confident they chose the right network. Also, be sure to select the correct Logical Network associated with the subnet/VLAN you are creating the VM Network for. In the isolation options, you will be able to select the Network Site, IPv4 subnet, or IPv6 subnet for the VM Network. This will ensure that VMs or virtual network adapters placed in this VM Network are isolated to that VLAN/subnet. If you provided a VLAN ID of 0 in the Network Sites section of the Logical Network, the VLAN will be untagged for the VMs in that VM Network.

Port Profiles

When creating a Custom Port Profile or customizing the ones Microsoft provides, you have several options, including Security, Offload and Bandwidth Settings.

In the offload settings you will be able to enable things like VMMQ, SR-IOV, RSS and RDMA. Virtual Machine Queue is a way of distributing the packet processing among the virtual processors in a VM. The SR-IOV and RDMA options will require Network cards that support these, and SR-IOV cannot be used in a Team, so keep that in mind.

The Security Settings will allow you to block things like MAC address spoofing, or DHCP broadcasts in your VMs. It will also allow NIC teaming in your VM Guests, handy if you want to deploy Virtual SQL Clusters.

The Bandwidth settings allow you to set Network QOS settings. This is the section that allows you to set “speed limits” on your Virtual Networks and even provide lanes, for higher priority traffic, like Live Migrations or Storage.

Logical Switch

The Logical Switch is where we begin to build our Switch Embedded Team. This is a Network Site dependent feature, so you will need one per datacenter in my example, or per Network Site. The logical switch is where we create a network team, create several vSwitches, and then set QoS Port Profiles to define the traffic on those vSwitches.

In the Logical Switches section of the Fabric workspace, you will create a new Logical Switch. You can then give it a name and description. The next step is important: in the Uplink Mode you will have an option for Team or Embedded Team. Since we are doing Switch Embedded Teaming, we will select Embedded Team. The Team option would instead deploy a load balancing and failover (LBFO) team; this is the method we used in Server 2012 & 2012 R2, but with 2016 we will be using Switch Embedded Teaming, and so we want to select Embedded Team.

The next settings are the bandwidth settings; these come into play with our Virtual Port Profiles, or network QoS. You can select Weight or Default, which will use the weight setting in our Virtual Port Profiles; the other options are Absolute and None. The best practice guidance is to use Weight or Default (they are the same setting).

The extensions are used for network filtering; they are not recommended for Switch Embedded Teaming, so we will clear these selections.

The next setting in Logical Switches is where we start to define what type of traffic we will be assigning to our virtual switches. For this we will be using Virtual Port Profiles. These are pre-defined network settings for a NIC, which make network QoS and security settings consistent to deploy. The System Center team has included several pre-built Port Profiles for you, but you can always customize them, either from the Logical Switch Wizard or from the Port Profiles section of the Fabric workspace. We will add a Port Classification for each type of network traffic we expect to use. Each Port Classification will be attached to a Port Profile that defines things like minimum/maximum bandwidth, bandwidth weight, and network security options; more on that later. Add each classification you plan to use, and then connect it to the Virtual Port Profile that matches that classification. In this example, pre-defined Port Profiles are being used, but remember you can always customize.

The last step is to define the Switch Embedded Team. This is what you have been waiting for, so let’s do it. First, add the Uplink Port Profile we created earlier. If you forgot to create it, don’t worry: you can select New Port Profile and create it here as well. You will see the port profile load balancing settings for the team and the Network Site assigned. Then select New Virtual Network Adapter and create the first of many virtual switches. In the example I create a virtual switch each for management traffic (host OS), cluster, and live migration. Each virtual network adapter will have a VM Network assigned to it, which was previously created. Then you can pick the IP pool settings, to let VMM assign the IP address for you or to assign it statically. The last option is to set the Port Profile Classification, which we defined in the previous step of the Logical Switch Wizard.

One important note: the Management network will need to have two additional checkboxes enabled, the one marking the virtual adapter to be used for host management and the one to inherit network connection settings from the adapter. These will allow our Management adapter to function as the primary vSwitch and let us adopt the static IP address of our Hyper-V host.

Once you place all these uplinks in the Logical Switch, it will be deployed to every host with these settings when assigned. This is the key to cluster network consistency. I told you that word would be used a lot.

Congratulations, the Logical Network is designed. This is the hard part. The next step is to assign it to our hosts.
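If you’d rather script the design than click through the wizards, a rough sketch with the VMM cmdlets follows; the names and values are examples, and the complete script is in the GitHub repository linked at the end of this post:

# Logical network plus a site with one VLAN/subnet pair (VLAN-based independent)
$ln   = New-SCLogicalNetwork -Name 'Datacenter-A' -LogicalNetworkDefinitionIsolation $true
$vlan = New-SCSubnetVLan -Subnet '192.168.10.0/28' -VLanID 10
$site = New-SCLogicalNetworkDefinition -Name 'Datacenter-A_Management' -LogicalNetwork $ln -SubnetVLan $vlan

# Uplink port profile: Switch Independent + Host Default, as required for SET
New-SCNativeUplinkPortProfile -Name 'SET-Uplink' -LogicalNetworkDefinition $site `
    -LBFOTeamMode SwitchIndependent -LBFOLoadBalancingAlgorithm HostDefault

# Logical switch in Embedded Team mode with weight-based QoS
New-SCLogicalSwitch -Name 'SET-Switch' -UplinkMode 'EmbeddedTeam' -MinimumBandwidthMode 'Weight'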

Configure the Hosts

The last step of setting up a Switch Embedded Team is to deploy it to the hosts. This is so much easier now in SCVMM 2016 and leads again to my favorite word in virtualization: consistency. This method ensures that every host at the Network Site is deployed with the same configuration, every time.

Start in the Fabric workspace and navigate to the Host Group that you will be working with. Select a host and go to the properties of the host. You will then select Virtual Switches from the navigation bar and begin applying your Logical Network.

Select the New Virtual Switch option, then select New Logical Switch. Select the Logical Switch we created for that site, and in the physical adapters select the physical NICs you will be assigning to the team. Here is a hint I have picked up over the years: name the NICs something that describes their purpose, and, if you can, append a switch name and switch port for easier troubleshooting later.

The rest is done for you, as we defined the uplinks in the Logical Switch configuration, so you will see all the virtual networks that will be deployed to the hosts; in this case Management, Live Migration, and Cluster networks will be added as vSwitches on this host. I also created IP pools to handle IP address assignment for my Live Migration and Cluster networks, and because we told the Management virtual network to inherit the IP settings, it set our static IP as the management address.

Please visit my GitHub repository here to download the PowerShell to build the Switch Embedded Team via SCVMM. You will need to update some variables and the subnets to be your own, but I hope you find it useful.

That’s it, we have deployed Switch Embedded Teaming in a scalable, consistent manner. I hope you enjoyed this post, and I look forward to sharing more.

Michael Godfrey

Premier Field Engineer

@mgodfre3

Breaking into Windows Server 2019: Network Features: Accurate Network Time (Seriously!)


Happy Wednesday Everyone! Brandon Wilson here to give a quick heads up to all of our outstanding readers about a new blog series being posted by the Windows Core Networking team on the Top 10 networking features in Windows Server 2019. This has some seriously cool implications for time services in Windows, in addition to being some awesome content with information you simply do not want to miss! If you have comments or questions on the post, your most direct path for questions will be in the link below. So, without further ado, and in the Product Group’s own words, here you go…

“Today, the Windows Core Networking team kicked off their Top 10 Networking Features in Windows Server 2019 blog series with: #10 Accurate Network Time

Each blog contains a “Try it out” section so be sure to grab the latest Insider’s build and give them some feedback!”

Just to pique some interest, here is an excerpt:

Windows Server 2019 provides regulatory compliance with highly accurate time that is traceable and UTC-compliant, including support of leap seconds.  In this article, we’ll talk about the technical advances we made between Windows Server 2016 and Windows Server 2019 including true UTC-compliant leap second support, a new time protocol called Precision Time Protocol, and end-to-end traceability.

Thanks for reading, and we’ll see you again soon!

Brandon Wilson

 

Infrastructure + Security: Noteworthy News (July, 2018)


Hi there! Stanislav Belov here with the next issue of the Infrastructure + Security: Noteworthy News series!

As a reminder, the Noteworthy News series covers various areas, including interesting news, announcements, links, tips and tricks from the Windows, Azure, and Security worlds on a monthly basis.

Microsoft Azure
Azure DDoS Protection for virtual networks generally available
Distributed Denial of Service (DDoS) attacks are intended to disrupt a service by exhausting its resources (e.g., bandwidth, memory). DDoS attacks are one of the top availability and security concerns voiced by customers moving their applications to the cloud. With extortion and hacktivism being the common motivations behind DDoS attacks, they have been consistently increasing in type, scale, and frequency of occurrence as they are relatively easy and cheap to launch. Azure DDoS Protection Standard service is now available in all public cloud regions. This service is integrated with Azure Virtual Networks (VNet) and provides protection and defense for Azure resources against the impacts of DDoS attacks.
Eight Essentials for Hybrid Identity: #3 Securing your identity infrastructure
The volume of these current threats shows a significant rise, and new threats are emerging as well centered around IoT (Internet of Things), privacy, and consent. While we fight the good fight to ward off threats in your cloud infrastructure, we’d also like to recommend steps that you can take that could immediately protect your hybrid infrastructure. But before we can even start, ensure all your privileged Azure AD roles are protected with multi-factor authentication. Recently Microsoft released a baseline protection policy providing a one-click experience to protect privileged Azure AD roles.
Azure AD Password Protection and Smart Lockout are now in Public Preview!
Many of you know that unfortunately, all it takes is one weak password for a hacker to get access to your corporate resources. Hackers can often guess passwords because regular users are pretty predictable. Regular users create easy to remember passwords, and they reuse the same passwords or closely related ones over and over again. Hackers use brute force techniques like password spray attacks to discover and compromise accounts with common passwords, an attack pattern we told you about back in March.
Announcing public preview of Azure Virtual WAN and Azure Firewall
To help customers with these massive modernization efforts, we are announcing Azure Virtual WAN to simplify large-scale branch connectivity, and Azure Firewall to enforce your network security policies while taking advantage of the scale and simplicity provided by the cloud.
Microsoft Azure launches tamper-proof Azure Immutable Blob Storage for financial services
Azure Immutable Blob Storage is now in public preview – enabling financial institutions to store and retain data in a non-erasable and non-rewritable format – and at no additional cost. Azure Immutable Blob Storage meets the relevant storage requirements of three key financial industry regulations: the CFTC Rule 1.31(c)-(d), FINRA Rule 4511, and SEC Rule 17a-4. Financial services customers, representing one of the most heavily regulated industries in the world, are subject to complex requirements like the retention of financial transactions and related communication in a non-erasable and non-modifiable state. These strict requirements help to provide effective legal and forensic surveillance of market conduct.
Windows Server
Remote Desktop web client now generally available

On July 16th we announced the general availability of the Remote Desktop web client for Windows Server 2016 and Windows Server 2019 Preview. With a few simple PowerShell cmdlets, the client can be added to an existing Remote Desktop Services deployment, side by side with the RDWeb role.

Server Core and Server with Desktop: Which one is best for you

On March 20, 2018 we announced the availability of Windows Server 2019 preview, the next Long-Term Servicing Channel (LTSC) release in the Windows Insider program. Seven weeks later, we released Windows Server, version 1803, the latest release in the Semi-Annual Channel. The Semi-Annual Channel primarily focuses on rapid application development. New cloud-born applications or migrated (“lift-and-shift”) traditional applications benefit significantly from the isolation, predictability, and orchestration offered by containers. Of course, container orchestrators are also cloud-based, which means that there is very little need to run an interactive desktop on the host operating system in these scenarios, so we’ve only included the Server Core installation option in the Semi-Annual Channel. Now that we’re about to release on both channels, and that we’re including the Server with Desktop Experience on only one of the channels, it’s a good time to talk about Server Core versus Server with Desktop Experience.

Introducing the Windows Server Storage Migration Service

The Storage Migration Service is a new feature of Windows Server 2019 Preview which helps you migrate servers and their data without reconfiguring applications or users.

Windows Client
Windows 10 quality updates explained & the end of delta updates

With Windows 10, quality updates are cumulative. Installing the most recent update ensures that you receive any previous updates you may have missed. We used a cumulative update model to reduce ecosystem fragmentation, and to make it easier for IT admins and end users to stay up to date and secure. However, cumulative updates can prove challenging when it comes to the size of the update and the impact that size can have on your organization’s valuable network bandwidth.

Windows Autopilot: What’s new and what’s next

With Windows 10, we are focused on delivering a simpler, more powerful and intelligent IT experience by deepening integration across Microsoft’s products, creating a unified Microsoft 365 solution. Windows Autopilot simplifies the deployment of new Windows 10 devices in your organization by eliminating the need for IT to create, maintain and apply custom images, dramatically reducing the cost and complexity involved with custom imaging. You can now deliver new Windows 10 devices directly to your users without IT having to touch the device. With just a few simple clicks, your users can get up and running. With Windows Autopilot, the experience of deploying new Windows 10 devices is simple for end users and zero touch for IT—seamlessly integrated across Windows 10, Microsoft Intune, and Azure AD.

Security
Hawkeye Keylogger – Reborn v8: An in-depth campaign analysis

Hawkeye Keylogger is an info-stealing malware that’s being sold as malware-as-a-service. Over the years, the malware authors behind Hawkeye have improved the malware service, adding new capabilities and techniques. It was last used in a high-volume campaign in 2016.

Application whitelisting with “AaronLocker”
AaronLocker is designed to make the creation and maintenance of robust, strict, AppLocker-based whitelisting rules as easy and practical as possible. The entire solution involves a small number of PowerShell scripts. You can easily customize rules for your specific requirements with simple text-file edits. AaronLocker includes scripts that document AppLocker policies and capture event data into Excel workbooks that facilitate analysis and policy maintenance.
Microsoft Teams: Protecting against advanced threats
Office 365 Advanced Threat Protection (ATP) can help to safeguard your organization from this threat by “detonating” (executing) files uploaded to Microsoft Teams (specifically the SharePoint/Office 365 Group on the back-end) to validate it is a legitimate file and contains no malicious code that can do harm. This feature comes with Microsoft 365 E5, Office 365 E5, or available as an add-on to an existing Office 365 subscription.
Vulnerabilities and Updates
July 2018 servicing release for Microsoft Desktop Optimization Pack

The July 2018 MDOP servicing release is available for download. This release contains a hotfix package for MBAM 2.5 SP1 which fixes several issues and adds support for SQL Server 2017.

Support Lifecycle
4 month retirement notice: Access Control Service
The Access Control Service, otherwise known as ACS, is officially being retired. ACS will remain available for existing customers until November 7, 2018. After this date, ACS will be shut down, causing all requests to the service to fail.
Announcing new options for SQL Server 2008 and Windows Server 2008 End of Support

It’s incredible how much and how rapidly technology evolves. Microsoft’s server technology is no exception. We entered the 2008 release cycle with a shift from 32-bit to 64-bit computing, the early days of server virtualization and advanced analytics. Fast forward a decade, and we find ourselves in a full-blown era of hybrid cloud computing with exciting innovation in data, artificial intelligence, and more.

Microsoft Premier Support News
We are happy to announce the release of Azure Active Directory Assessment. The purpose of this three-day Azure Active Directory (Azure AD) assessment is to provide you with a recommendation report and plan of action to improve your Azure AD environment based on best practices and expert knowledge. The assessment provides a detailed report for gap analysis of Azure Active Directory capabilities’ status, and guidance on how to utilize them, aiming to accelerate Microsoft Azure AD value to the business, improve consumption and productivity.
Check out Microsoft Services public blog for new Proactive Services as well as new features and capabilities of the Services Hub, On-demand Assessments, and On-demand Learning platforms.

Pulling Reports from a DSC Pull Server Configured for SQL


Hi! Serge Zuidinga here, SCOM PFE, and I would like to thank you for visiting this blog and welcome you to the second post about using a PowerShell DSC Web Pull Server with a SQL database. If you haven’t read through it already, you can find my first post on this topic here: Configuring a PowerShell DSC Web Pull Server to use SQL Database

Now that you’ve installed and configured a pull server, it’s time to do some reporting. After all, you do want to know if all connected nodes are compliant. We have several ways of going about it, and in this post, I will show you how you can get this information from the SQL database.

Disclaimer: The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages.

 

Let’s get started

Using PowerShell to retrieve compliance information

As you can see in the following screenshot, I’ve got my node configured to connect to my pull server that I created earlier:

I can easily check to see if I’m compliant (the Telnet client should be installed):

So far, so good!

You can even do this for multiple nodes that are connected to the pull server:

You can even do something like this:
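The commands behind these checks are the standard DSC cmdlets; roughly, and with example server names:

Get-DscLocalConfigurationManager        # confirm RefreshMode is Pull
Test-DscConfiguration                   # returns True when this node is compliant
# Several nodes at once:
Test-DscConfiguration -CimSession (New-CimSession -ComputerName 'SRV01','SRV02')
Get-DscConfigurationStatus | Select-Object Status, StartDate, Type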

But how do we go about getting compliance information for hundreds of servers?

It’s stored in our SQL database so let’s head over there and get the information!

Prerequisites

We are going to create four different views within the DSC SQL database that we can query to see how our connected nodes are doing.

Before we can create those views and query them, we need to create three functions first.

Let’s get cracking!

Creating the three functions

Let’s open SQL Server Management Studio and connect to the SQL Server instance where the DSC SQL database is hosted.

Execute the following query which will create the three functions we need:

USE [DSC]
GO

CREATE FUNCTION [dbo].[Split] (
    @InputString VARCHAR(8000),
    @Delimiter   VARCHAR(50)
)
RETURNS @Items TABLE (
    Item VARCHAR(8000)
)
AS
BEGIN
    IF @Delimiter = ' '
    BEGIN
        SET @Delimiter = ','
        SET @InputString = REPLACE(@InputString, ' ', @Delimiter)
    END

    IF (@Delimiter IS NULL OR @Delimiter = '')
        SET @Delimiter = ','

    DECLARE @Item VARCHAR(8000)
    DECLARE @ItemList VARCHAR(8000)
    DECLARE @DelimIndex INT

    SET @ItemList = @InputString
    SET @DelimIndex = CHARINDEX(@Delimiter, @ItemList, 0)
    WHILE (@DelimIndex != 0)
    BEGIN
        SET @Item = SUBSTRING(@ItemList, 0, @DelimIndex)
        INSERT INTO @Items VALUES (@Item)

        -- Set @ItemList = @ItemList minus one less item
        SET @ItemList = SUBSTRING(@ItemList, @DelimIndex + 1, LEN(@ItemList) - @DelimIndex)
        SET @DelimIndex = CHARINDEX(@Delimiter, @ItemList, 0)
    END -- End WHILE

    IF @Item IS NOT NULL -- At least one delimiter was encountered in @InputString
    BEGIN
        SET @Item = @ItemList
        INSERT INTO @Items VALUES (@Item)
    END
    -- No delimiters were encountered in @InputString, so just return @InputString
    ELSE INSERT INTO @Items VALUES (@InputString)

    RETURN
END -- End Function
GO

CREATE FUNCTION [dbo].[tvfGetRegistrationData] ()
RETURNS TABLE
AS
RETURN
(
    SELECT NodeName, AgentId,
        (SELECT TOP (1) Item FROM dbo.Split(dbo.RegistrationData.IPAddress, ';') AS IpAddresses) AS IP,
        (SELECT (SELECT [Value] + ',' AS [text()] FROM OPENJSON([ConfigurationNames]) FOR XML PATH(''))) AS ConfigurationName,
        (SELECT COUNT(*) FROM (SELECT [Value] FROM OPENJSON([ConfigurationNames])) AS ConfigurationCount) AS ConfigurationCount
    FROM dbo.RegistrationData
)
GO

CREATE FUNCTION [dbo].[tvfGetNodeStatus] ()
RETURNS TABLE
AS
RETURN
(
    SELECT [dbo].[StatusReport].[NodeName]
        ,[dbo].[StatusReport].[Status]
        ,[dbo].[StatusReport].[Id] AS [AgentId]
        ,[dbo].[StatusReport].[EndTime] AS [Time]
        ,[dbo].[StatusReport].[RebootRequested]
        ,[dbo].[StatusReport].[OperationType]
        ,(SELECT [HostName] FROM OPENJSON(
            (SELECT [value] FROM OPENJSON([StatusData]))
         ) WITH (HostName nvarchar(200) '$.HostName')) AS HostName
        ,(SELECT [ResourceId] + ',' AS [text()]
          FROM OPENJSON(
            (SELECT [value] FROM OPENJSON((SELECT [value] FROM OPENJSON([StatusData]))) WHERE [key] = 'ResourcesInDesiredState')
          )
          WITH (
            ResourceId nvarchar(200) '$.ResourceId'
          ) FOR XML PATH('')) AS ResourcesInDesiredState
        ,(SELECT [ResourceId] + ',' AS [text()]
          FROM OPENJSON(
            (SELECT [value] FROM OPENJSON((SELECT [value] FROM OPENJSON([StatusData]))) WHERE [key] = 'ResourcesNotInDesiredState')
          )
          WITH (
            ResourceId nvarchar(200) '$.ResourceId'
          ) FOR XML PATH('')) AS ResourcesNotInDesiredState
        ,(SELECT SUM(CAST(REPLACE(DurationInSeconds, ',', '.') AS float)) AS Duration
          FROM OPENJSON(
            (SELECT [value] FROM OPENJSON((SELECT [value] FROM OPENJSON([StatusData]))) WHERE [key] = 'ResourcesInDesiredState')
          )
          WITH (
            DurationInSeconds nvarchar(50) '$.DurationInSeconds',
            InDesiredState bit '$.InDesiredState'
          )
        ) AS Duration
        ,(SELECT [DurationInSeconds] FROM OPENJSON(
            (SELECT [value] FROM OPENJSON([StatusData]))
         ) WITH (DurationInSeconds nvarchar(200) '$.DurationInSeconds')) AS DurationWithOverhead
        ,(SELECT COUNT(*)
          FROM OPENJSON(
            (SELECT [value] FROM OPENJSON((SELECT [value] FROM OPENJSON([StatusData]))) WHERE [key] = 'ResourcesInDesiredState')
          )) AS ResourceCountInDesiredState
        ,(SELECT COUNT(*)
          FROM OPENJSON(
            (SELECT [value] FROM OPENJSON((SELECT [value] FROM OPENJSON([StatusData]))) WHERE [key] = 'ResourcesNotInDesiredState')
          )) AS ResourceCountNotInDesiredState
        ,(SELECT [ResourceId] + ':' + ' (' + [ErrorCode] + ') ' + [ErrorMessage] + ',' AS [text()]
          FROM OPENJSON(
            (SELECT TOP 1 [value] FROM OPENJSON([Errors]))
          )
          WITH (
            ErrorMessage nvarchar(200) '$.ErrorMessage',
            ErrorCode nvarchar(20) '$.ErrorCode',
            ResourceId nvarchar(200) '$.ResourceId'
          ) FOR XML PATH('')) AS ErrorMessage
        ,(SELECT [value] FROM OPENJSON([StatusData])) AS RawStatusData
    FROM dbo.StatusReport INNER JOIN
        (SELECT MAX(EndTime) AS MaxEndTime, NodeName
         FROM dbo.StatusReport AS StatusReport_1
         WHERE EndTime > '1.1.2000'
         GROUP BY [StatusReport_1].[NodeName]) AS SubMax
        ON dbo.StatusReport.EndTime = SubMax.MaxEndTime AND [dbo].[StatusReport].[NodeName] = SubMax.NodeName
)
GO

Note: Regarding the following line in the tvfGetNodeStatus function:

SELECT SUM(CAST(REPLACE(DurationInSeconds, ',', '.') AS float)) AS Duration

Depending on your regional settings, this line can throw an error when you execute the script.

Please consult your local SQL expert to fix the error if it is thrown.

 

Creating the four views

With the three functions created, we can now execute the following query to create the views that’ll give us the information about all our connected nodes:

USE [DSC]
GO

CREATE VIEW [dbo].[vRegistrationData]
AS
SELECT GetRegistrationData.*
FROM dbo.tvfGetRegistrationData() AS GetRegistrationData
GO

CREATE VIEW [dbo].[vNodeStatusSimple]
AS
SELECT dbo.StatusReport.NodeName, dbo.StatusReport.Status, dbo.StatusReport.EndTime AS Time
FROM dbo.StatusReport INNER JOIN
    (SELECT MAX(EndTime) AS MaxEndTime, NodeName
     FROM dbo.StatusReport AS StatusReport_1
     GROUP BY NodeName) AS SubMax
    ON dbo.StatusReport.EndTime = SubMax.MaxEndTime AND dbo.StatusReport.NodeName = SubMax.NodeName
GO

CREATE VIEW [dbo].[vNodeStatusComplex]
AS
SELECT GetNodeStatus.*
FROM dbo.tvfGetNodeStatus() AS GetNodeStatus
GO

CREATE VIEW [dbo].[vNodeStatusCount]
AS
SELECT NodeName, COUNT(*) AS NodeStatusCount
FROM dbo.StatusReport
WHERE (NodeName IS NOT NULL)
GROUP BY NodeName
GO

 

Creating a trigger

Almost there! We only need to execute the following code, which will create and enable a trigger to update the status report information when the DSC database gets an update:

USE [DSC]
GO

CREATE TRIGGER [dbo].[DSCStatusReportOnUpdate]
ON [dbo].[StatusReport]
AFTER UPDATE
AS
SET NOCOUNT ON
BEGIN
    DECLARE @JobId nvarchar(50) = (SELECT JobId FROM inserted);
    DECLARE @StatusData nvarchar(MAX) = (SELECT StatusData FROM inserted);
    IF @StatusData LIKE '\[%' ESCAPE '\'
        SET @StatusData = REPLACE(SUBSTRING(@StatusData, 3, LEN(@StatusData) - 4), '\"', '"')

    DECLARE @Errors nvarchar(MAX) = (SELECT [Errors] FROM inserted);
    IF @Errors IS NULL
        SET @Errors = (SELECT Errors FROM StatusReport WHERE JobId = @JobId)

    IF @Errors LIKE '\[%' ESCAPE '\' AND LEN(@Errors) > 4
        SET @Errors = REPLACE(SUBSTRING(@Errors, 3, LEN(@Errors) - 4), '\"', '"')

    UPDATE StatusReport
    SET StatusData = @StatusData, Errors = @Errors
    WHERE JobId = @JobId
END
GO

ALTER TABLE [dbo].[StatusReport] ENABLE TRIGGER [DSCStatusReportOnUpdate]
GO

 

Getting information

We are now ready to get the information from the database!

Registration data

Example: SELECT * FROM [DSC].[dbo].[RegistrationData]

Node status count

Example: SELECT * FROM [DSC].[dbo].[vNodeStatusCount]

Node status (basic information)

Example: SELECT * FROM [DSC].[dbo].[vNodeStatusSimple]

Node status (detailed information)

Example: SELECT * FROM [DSC].[dbo].[vNodeStatusComplex]
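
The same views are also easy to consume from PowerShell for ad-hoc checks or automation; a small sketch using the SqlServer module, where the instance name is an example:

Invoke-Sqlcmd -ServerInstance 'SQL\DSC' -Database 'DSC' `
    -Query 'SELECT NodeName, Status, Time FROM dbo.vNodeStatusSimple' |
    Format-Table -AutoSize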

 

Let’s summarize

Original source

I got the idea and all the SQL stuff from this great post by my esteemed colleague Raimund.

The difference

Whereas Raimund is focusing on visualizing data in Power BI, I am focusing on retrieving the data from SQL server itself and providing queries you can use in SQL Server Reporting Services.

The other difference

I also want to point out the different types of JSON data that are stored within our DSC database.

For example, if you run this code:

USE [DSC]
GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

SELECT FirstStatusReport.NodeName, StatusDataInJSON.*, AdditionalDataInJSON.*
FROM dbo.StatusReport AS FirstStatusReport
INNER JOIN (
SELECT MAX(EndTime) AS EndTime, NodeName
FROM dbo.StatusReport AS SecondStatusReport
GROUP BY SecondStatusReport.NodeName)
AS MostRecentEntry ON FirstStatusReport.EndTime = MostRecentEntry.EndTime AND FirstStatusReport.NodeName = MostRecentEntry.NodeName
CROSS APPLY OPENJSON(StatusData) AS StatusDataInJSON
CROSS APPLY OPENJSON(AdditionalData) AS AdditionalDataInJSON

Run that query and have a look at the result set:

Focusing on the “Type”-columns, you will see that the type for “ResourcesInDesiredState” equals 4 and the type for “MetaConfiguration” equals 5.

So, the “ResourcesInDesiredState”-column holds JSON data of the “array”-data type and the “MetaConfiguration”-column holds JSON data of the “object”-data type.
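
If you want to verify those type values yourself, here is a minimal, self-contained sketch I'm adding for illustration (the sample JSON is invented, and OPENJSON requires SQL Server 2016 or later):

-- The "type" column: 1 = string, 2 = number, 3 = boolean, 4 = array, 5 = object
SELECT [key], [value], [type]
FROM OPENJSON(N'{"ResourcesInDesiredState":[{"InstanceName":"Demo"}],"MetaConfiguration":{"RefreshMode":"Pull"}}');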

The most important reason for me to point out these differences is so that you are aware of them, and of the different ways you can pull data from the DSC database.

How does this affect me?

When creating the functions, views, and trigger, we used a lot of “SELECT”-statements; but when we queried data to show the difference between the JSON data types, we used “CROSS APPLY”-statements.

For example, getting resources in desired state could also be done with this query (as opposed to the function mentioned earlier where we don’t use any “CROSS APPLY”-statements):

USE [DSC]
GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

SELECT FirstStatusReport.NodeName, ResourcesInDesiredStateInJSONResults.*
FROM dbo.StatusReport AS FirstStatusReport
INNER JOIN (
SELECT MAX(EndTime) AS EndTime, NodeName
FROM dbo.StatusReport AS SecondStatusReport
GROUP BY SecondStatusReport.NodeName)
AS MostRecentEntry ON FirstStatusReport.EndTime = MostRecentEntry.EndTime AND FirstStatusReport.NodeName = MostRecentEntry.NodeName
CROSS APPLY OPENJSON(StatusData)
WITH (ResourcesInDesiredState NVARCHAR(MAX) '$.ResourcesInDesiredState' AS JSON) AS ResourcesInDesiredStateInJSON
CROSS APPLY OPENJSON(ResourcesInDesiredStateInJSON.ResourcesInDesiredState)
WITH (Feature NVARCHAR(MAX) '$.InstanceName') AS ResourcesInDesiredStateInJSONResults

The result is:

However, when using this query to pull data from the DSC database the outcome might not be what you expect:

USE [DSC]
GO

SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

SELECT FirstStatusReport.NodeName, ResourcesInDesiredStateInJSONResults.*, AdditionalDataInJSONResults.*
FROM dbo.StatusReport AS FirstStatusReport
INNER JOIN (
SELECT MAX(EndTime) AS EndTime, NodeName
FROM dbo.StatusReport AS SecondStatusReport
GROUP BY SecondStatusReport.NodeName)
AS MostRecentEntry ON FirstStatusReport.EndTime = MostRecentEntry.EndTime AND FirstStatusReport.NodeName = MostRecentEntry.NodeName
CROSS APPLY OPENJSON(StatusData)
WITH (ResourcesInDesiredState NVARCHAR(MAX) '$.ResourcesInDesiredState' AS JSON) AS ResourcesInDesiredStateInJSON
CROSS APPLY OPENJSON(ResourcesInDesiredStateInJSON.ResourcesInDesiredState)
WITH (Feature NVARCHAR(MAX) '$.InstanceName') AS ResourcesInDesiredStateInJSONResults

CROSS APPLY OPENJSON(AdditionalData)
WITH (PSVersion NVARCHAR(MAX) '$.PSVersion' AS JSON) AS AdditionalDataInJSON
CROSS APPLY OPENJSON(AdditionalDataInJSON.PSVersion)
WITH (PSVersion NVARCHAR(MAX) '$.PSVersion') AS AdditionalDataInJSONResults

The outcome is:

 

Conclusion

I hope you have gained more insight into how reporting data is stored in the DSC database, and how to pull data from your DSC database and create reports using just SQL Server, SQL Server Reporting Services, or even Power BI.

Happy reporting and stay tuned on more PowerShell Desired State Configuration!

 

References

These are the sites I used to understand how to get data from the DSC database and how to use it.

JSON in SQL Server

Posts from fellow colleagues

My previous posts

Breaking Into Windows Server 2019: LEDBAT: Latency Optimized Background Transport


Happy Thursday all! Brandon Wilson here to give a heads up to all of our outstanding readers about a new post in the blog series by the Windows Core Networking team on the Top 10 networking features in Windows Server 2019. To say it in the Product Group’s own words, here you go…

“This week, the Windows Core Networking team continues their Top 10 Networking features in Windows Server 2019 blog series with: #9 – LEDBAT – Latency Optimized Background Transport
Each blog contains a “Try it out” section so be sure to grab the latest Insider’s build and give them some feedback!”

Here is an excerpt to give you a brief view of the topic at hand:

“Keeping a network secure is a never-ending job for IT Pros, and doing so requires regularly updating systems to protect against the latest threat vectors.  This is one of the most common tasks that an IT Pro must perform.  Unfortunately, it can result in dissatisfaction for end-users as the network bandwidth used for the update can compete with interactive tasks that the end user requires to be productive.

With Windows Server 2019, we bring a latency optimized, network congestion control provider called LEDBAT which scavenges whatever network bandwidth is available on the network, and uses it”

This is some more information you do not want to miss! If you have comments or questions on the post, your most direct path for questions will be in the link above.

Thanks for reading, and we’ll see you again soon!

Brandon Wilson

How To Find Removable AppxPackages


My name is Benjamin Morgan and today I want to discuss the AppxPackages in Windows 10. Recently I had a customer looking at upgrading to Windows 10 1803; they sent me a list of apps that they wanted to remove and asked what other apps could be removed as well. We have a list of typical apps located at https://docs.microsoft.com/en-us/windows/application-management/apps-in-windows-10, but the list does not include everything. To meet my customer's requirement, I had to resort to writing my own PowerShell script to get them all the information that they needed. When I looked at the properties of one of the apps with ‘Get-AppxPackage -AllUsers *sol*’, I noticed that “SignatureKind” was marked as “Store”.

I ran the same command against Edge, since I knew it is part of the OS and cannot be removed, to see what the deltas were between an app that can be removed and one that cannot. Running ‘Get-AppxPackage -AllUsers *edge*’, I noticed that “SignatureKind” was marked as “System”.
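
As a quick sanity check of that comparison, you can group every package on a machine by this property; this is a small sketch I'm adding for illustration (run it from an elevated PowerShell session, since -AllUsers requires it):

######### Count Packages Per Signature Type (Store, System, Developer, Enterprise, None) ################
Get-AppxPackage -AllUsers | Group-Object -Property SignatureKind | Select-Object Count, Name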

Based upon this finding, I made the hypothesis that any app marked as “Store” could be removed, while an app marked as “System” could not. The next issue I saw was that not every app product name matched the actual name of the app in the Start menu, so I needed a way to marry the name of the AppxPackage with the actual product name of the app. Since PowerShell gives the location of the app's installation directory, I decided to parse that directory for .exe files and then pull the name of the app off the .exe file. The sample script that I used to create a test file showing all the removable AppxPackages is:

<#Disclaimer:
The sample scripts are not supported under any Microsoft standard support program or service. The sample scripts are provided AS IS without warranty of any kind. Microsoft further disclaims all implied warranties including, without limitation, any implied warranties of merchantability or of fitness for a particular purpose. The entire risk arising out of the use or performance of the sample scripts and documentation remains with you. In no event shall Microsoft, its authors, or anyone else involved in the creation, production, or delivery of the scripts be liable for any damages whatsoever (including, without limitation, damages for loss of business profits, business interruption, loss of business information, or other pecuniary loss) arising out of the use of or inability to use the sample scripts or documentation, even if Microsoft has been advised of the possibility of such damages. #>

######### Create Variables ################
$appxpackages = Get-AppxPackage -AllUsers | % {if ($_.SignatureKind -eq "Store") {$_.Name}}
$output = "$env:USERPROFILE\desktop\appxpackages.txt"
######### Get All AppxPackages That Are Removable ################
foreach ($app in $appxpackages){
######### Get The Location Of The AppxPackage ################
foreach ($location in (Get-AppxPackage -Name $app).InstallLocation){
######### Get The Executable Files Of The AppxPackage ################
$exes = Get-ChildItem $location *.exe -Recurse
######### Get Each Individual Executable File Of The AppxPackage ################
foreach ($exe in $exes){
######### Get The Name Of The Executable File Of The AppxPackage ################
$name = ($exe).Name
######### Verify Accessibility To The Executable File Of The AppxPackage ################
$testpath = Test-Path "$location\$name"
if ($testpath -eq $true){
foreach ($n in $name) {
######### Get The Application Name Of The Executable File Of The AppxPackage ################
$appname = (Get-Item "$location\$name").VersionInfo.ProductName
if ($appname -ne $null){
######### Output The Application Name And AppxPackage Name To The User's Desktop ################
if ($appname -ne ""){
"Application Name: $appname" | Out-File $output -NoClobber -Append
"AppxPackage Name: $app" | Out-File $output -NoClobber -Append
"----------------------------------" | Out-File $output -NoClobber -Append
}
}
}
}
}
}
}

 

Using this script, I was able to give my customer a text file showing the application product names and the AppxPackage names of all the AppxPackages that are removable. This script has only been tested with Windows 10 1709 and Windows 10 1803, as it incorporates several items that were first introduced in Windows 10 1709.
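
If you then want to act on the list, removal is a separate step. Here is a hedged sketch (not part of the script above; the *sol* filter is just an example, and the -AllUsers switch on Remove-AppxPackage is only available on recent Windows 10 builds, so test carefully first):

######### Remove A Store-Signed AppxPackage For All Users (Run Elevated) ################
Get-AppxPackage -AllUsers -Name "*sol*" | Remove-AppxPackage -AllUsers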

Hopefully this helps you to understand and manage your own Appx packages. Thanks for reading!


Breaking Into Windows Server 2019: A Faster, Safer Internet


Happy Thursday all! Brandon Wilson back again to introduce you to yet another new post in the blog series by the Windows Core Networking team on the Top 10 networking features in Windows Server 2019. Here is a light overview from the Product Group….

“This week, the Windows Core Networking team continues their Top 10 Networking features in Windows Server 2019 blog series with: #8 – A Faster, Safer Internet

Each blog contains a “Try it out” section so be sure to grab the latest Insider’s build and give them some feedback!”

Here is an excerpt to give you a brief view of the topic at hand:

The Internet is part of our daily lives both at work and at home, in the enterprise and in the cloud.  We are committed to making your Internet experience faster and safer, and in this blog, we discuss how the features in Windows Server 2019 brings those goals to reality.  To do this we:

  • Improved coalescing of connections to deliver an uninterrupted and properly encrypted browsing experience.
  • Upgraded HTTP/2’s server-side cipher suite negotiation for automatic mitigation of connection failures and ease of deployment.
  • Changed our default TCP congestion provider to Cubic to give you more throughput!

If you have comments or questions on the post, your most direct path for questions will be in the link above.

Thanks for reading, and we’ll see you again soon!

Brandon Wilson

Cryptojacking – Leeches of the Internet


Hello, this is Paul Bergson again with another topic on security. The threat of malware continues to impact business with no relief in sight. The latest topic brought back childhood memories of how the “Leeches” of the internet prey upon unsuspecting victims.

It has been a beautiful summer in the Minneapolis, MN area this year, with plenty of opportunities to cool off in one of our thousands of lakes. I remember one day we went as kids; the water was warm but not very clear, and there was plenty of vegetation where we swam. On that day in particular, 2 brothers and 2 cousins of mine were splashing and playing in the water without a care in the world. There weren't any exposed threats of the kind other parts of the country/world have to watch out for, such as jellyfish, sharks, or water snakes.

We hung out and swam for an extended period of time before we decided to swim back to shore. I was the first one out and was drying myself off when I heard a scream from my cousin as he was stepping onto dry land. When I looked over at him, he had what initially looked like a bunch of small black mud spots stuck to his skin, but under closer inspection they were water leeches. The leeches had “hijacked” his circulatory system for food (energy). Initially he yanked a couple off, but that hurt, so someone ran and got some salt. The salt got the leeches to release themselves, but we decided to stay out of the lake for the remainder of the day, as well as stay away from that part of the lake in the future.

Hopefully I haven't lost any readers thinking they are on the wrong technical website. My point in the story above is that Cryptojacking malware authors can be equated to the leeches of the animal kingdom. When someone swims by their malware on the web and is susceptible to attack, malware miners will latch on and start to leech away computer resources.

What is “Cryptojacking” and malware miners you ask? Read on…

In 2017 there was an onslaught of Ransomware with several high-profile attacks, but recently Ransomware has taken a back seat to the assault of Cryptojacking, where attackers are in pursuit of cryptocurrency. This isn't to say that Ransomware has gone away (it hasn't), but Cryptojacking attacks are now reported to be more prevalent than Ransomware attacks.

Cryptocurrencies are based upon solving complex mathematical problems, with miners (machines running to solve these mathematical problems) being rewarded with crypto coins for solving a problem on a blockchain. Bitcoin, for example, has a finite number of coins that get more and more difficult to obtain as the pool of coins begins to exhaust. Since it becomes more difficult to solve the mathematical problems, more CPU/GPU cycles are needed to mine a coin, which leads to a rise in the energy cost of mining. With the rise in demand for CPU/GPU cycles to solve the ever-growing mathematical complexity, most ordinary users can't afford the equipment or the associated energy costs to mine on their own. On average, Bitcoin miners currently mine ~1,800 coins/day, and at the current rate of ~$6,000/coin (7/12/2018) this means there is roughly $10 million in new Bitcoins mined every day. As the compute complexity increases, so does the electrical energy required to complete the task; there are projections that put the price to mine a single Bitcoin by 2022 somewhere between $300,000 and $1.5 million. *1

Since attackers can't afford the compute power or the associated energy costs for cryptocurrency mining, they look for ways to gain access without having to pay for it (steal it). The cryptocurrency creation market is a multi-billion-dollar market, and there are over 1,000 different virtual coins. Some of these coins are more established and used for the exchange of property and/or services.

Bitcoin has the largest cryptocurrency exchange rate from virtual to physical, but the Monero crypto coin is the choice for malware mining, since it is easily mined with CPUs. Monero transactions provide a greater veil of secrecy than Bitcoin and as such are becoming more established in the Dark market. Bitcoin transactions can be tracked, whereas Monero provides a more anonymous transaction. Anonymity is crucial to illegal activities such as Cryptojacking and Ransomware assaults; because of this, the dark markets have seen a rise in the use of Monero. With increased use comes increased demand, which then drives up the value (exchange rate) of the Monero crypto coin.

So why all this talk about cryptocurrencies and how they are mined? “The surge in Bitcoin prices has driven widescale interest in cryptocurrencies”. *2 Attackers need CPU/GPU cycles to mine, and Crypto”Hi”jacking can provide this service. Cryptojacking occurs when a malware attacker hijacks a victim's computer to mine for cryptocurrency without their permission. In many instances it occurs within the browser of the victim (drive-bys). Symptoms can include the computer heating up, the fan running at a high rate when there isn't any real activity occurring on the device, and/or sluggish response times.

The attacker isn't selective about the device; they just want CPU cycles to help them compute the algorithm. Devices could be desktops, laptops, servers, or even mobile devices. There have been reports of Android devices being damaged from the battery overheating, causing it to expand, which results in physical damage to the device. *3

Consumers aren't as apt to report a Cryptojacking attack. They haven't physically lost anything, the increased use of electrical energy would be hard to itemize, and, like other forms of malware, it is very difficult to trace the source back to the malware author. Cryptojacking is growing rapidly; according to a study released by McAfee in June 2018, “coin miner malware grew a stunning 629% to more than 2.9 million known samples in Q1 from almost 400,000 samples in Q4”. *4 Cryptojacking malware kits are now for sale on the Dark market, so many unscrupulous individuals with lesser technical skills can wage an attack.

How it works:

There are two forms in which Cryptojacking can be delivered:

  • Victims inadvertently load malware on their machines from a phishing attack. The code runs a process in the background that is unknown to the victim.
  • Victims visit an infected website that launches a fileless script (Usually JavaScript) within the browser (Drive by attack)
    • When an Advertisement pops up on a legitimate website, many times the owner of the website doesn’t have control over the script that runs in the pop-up. This pop-up can contain a Cryptojack script that can run until all threads of the browser have been terminated.

There is also a semi-legitimate form of remote mining that is being offered as a service. For example, Coinhive provides subscribers a JavaScript miner for the Monero blockchain as an alternative to running ads on their website. Most AdBlockers now block the use of Coinhive, even if the user approves of the coin miner running on their local machine while visiting the host site.

Cryptojacking attacks aren't just a problem for consumers; with cloud usage exploding, businesses need to protect ALL devices they manage. Cryptojacking malware was recently discovered running on an AWS hosted website. Imagine a farm of servers compromised with Cryptojacking malware, where costs for cloud resources are measured by the usage of compute. *5 Left unchecked, this malware infection could have a measurable impact on the budget of the victim's server farm.

Cryptojacking is no different than any other malware. Systems can be protected from it and the steps required are mostly the same as other forms of malware.

Defenses:

  • Ensure systems are up to date on patching
  • Ensure systems are up to date on AV signatures
  • Blacklist known mining sites
    • Chrome, Firefox and Opera users can install the extension “No Coin” (Open source from MIT) to block miner malware
  • Adblockers can prevent the loading of mining scripts, but Malware is learning how to bypass them
  • Disable JavaScript in Browsers
    • Some browsers have extensions to control blockage (whitelist sites) of scripting engines such as JavaScript
      • Example “No Script” *6
  • Remove any browser extensions that may have been compromised
  • Server monitoring
    • Note any unexpected/radical changes in CPU usage (see the sketch after this list)
  • User education
    • Watch for changes in CPU use
    • Device heats up/Fan speed increases
    • Internet browsing/computer response slows down
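
For the server monitoring bullet above, even a simple performance counter sample can surface the sustained CPU saturation that miners cause. A minimal PowerShell sketch:

# Sample total CPU every 5 seconds for one minute; sustained values near 100%
# with no matching workload are worth investigating
Get-Counter -Counter '\Processor(_Total)\% Processor Time' -SampleInterval 5 -MaxSamples 12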

Microsoft Defenses:

  • Windows Defender SmartScreen *7
  • Windows Defender Exploit Guard – Network Protection *8
  • Windows Defender Anti-Virus (WD AV)
    • Signature based malware protection
    • Enable “Potentially Unwanted Applications” (PUA) *9
  • Windows Defender Advanced Threat Protection (WD ATP)
    • Invisible Resource Thieves *10
  • Whitelisting approved scripts, executables, and DLLs
    • AppLocker *11 *12
    • Windows Defender Application Control *13

Hopefully readers are better informed and prepared to protect themselves against these “Leeches of the Internet”. After all, Cryptojacking is just another form of malware that authors use to steal people's money and/or possessions. Please read over and put into practice the defenses called out in this blog to protect your business, family, friends, and your own equipment.

References:

  1. https://www.bloomberg.com/news/articles/2017-11-09/bitcoin-s-exorbitant-energy-costs-may-prove-to-be-biggest-risk
  2. https://cloudblogs.microsoft.com/microsoftsecure/2018/03/13/invisible-resource-thieves-the-increasing-threat-of-cryptocurrency-miners/
  3. http://www.ibtimes.com/android-malware-mines-monero-can-literally-destroy-phones-2629933
  4. https://www.mcafee.com/enterprise/en-us/assets/reports/rp-quarterly-threats-jun-2018.pdf
  5. https://nakedsecurity.sophos.com/2018/02/27/unsecured-aws-led-to-cryptojacking-attack-on-la-times/
  6. https://en.wikipedia.org/wiki/NoScript
  7. https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-defender-smartscreen/windows-defender-smartscreen-overview
  8. https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-defender-exploit-guard/network-protection-exploit-guard
  9. https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-defender-antivirus/detect-block-potentially-unwanted-apps-windows-defender-antivirus
  10. https://cloudblogs.microsoft.com/microsoftsecure/2018/03/13/invisible-resource-thieves-the-increasing-threat-of-cryptocurrency-miners/
  11. https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-defender-application-control/applocker/applocker-overview
  12. https://blogs.msdn.microsoft.com/aaron_margosis/2018/06/26/announcing-application-whitelisting-with-aaronlocker/
  13. https://docs.microsoft.com/en-us/windows/security/threat-protection/windows-defender-application-control/windows-defender-application-control

Breaking Into Windows Server 2019: Network Features: Software Defined Networking (SDN)


Happy Wednesday to all of our great readers! Brandon Wilson here once again to give yet another pointer to some more outstanding content/information from the Windows Core Networking team on the Top 10 networking features in Windows Server 2019. This time around, they are covering some of the new Software Defined Networking (SDN) capabilities in Windows Server 2019, and it's an excellent read in my humble opinion, but don't take my word for it! Here is some initial information straight from the product group:

“This week, the Windows Core Networking team continues their Top 10 Networking features in Windows Server 2019 blog series with: #7 – SDN Goes Mainstream

Each blog contains a “Try it out” section so be sure to grab the latest Insider’s build and give them some feedback!

Here’s an Excerpt:

If you’ve ever deployed Software Defined Networking (SDN), you know it provides great power but is historically difficult to deploy. Now, with Windows Server 2019, it’s easy to deploy and manage through a new deployment UI and Windows Admin Center extension that will enable anyone to harness the power of SDN.  Even better, these improvements also apply to SDN in Windows Server 2016!


As always, if you have comments or questions on the post, your most direct path for questions will be in the link above.

Thanks for reading, and we’ll see you again soon!

Brandon Wilson

Tick Tock, Time to Catch Up: What’s New with Windows Time in Windows Server 2019


Stay a while and….oh wait, wrong place. Welcome! My name is Tim Medina, Senior PFE with Microsoft, and today we are going to look at what's new with Windows Time for Server 2019 (part 1 of a 3-part series). As with everything, time has marched on, and we are looking forward at the way we provide time services to your environments. We previously released some information on what's to come here.

So, building on the introduction of highly accurate time, we have implemented this using the same configuration and gotten accuracy down into the 1s, 50ms, and 1ms ranges. This has a high impact on time-sensitive businesses and requirements. It should also be noted that it may extend to normal operations: keeping control of ticket lifetimes to the millisecond can further the control of an organization's identities.

As with previous releases, the configuration can be controlled via the registry, time commands, and Group Policy. Extensions have been made in the registry for the configuration of highly accurate time as well. Also note that there are some requirements and restrictions.

We have also worked inside the service controls to reduce the possible impact of using SpecialPollInterval and to let it work in conjunction with high-accuracy requirements. This should also move the flag controls to a more uniform usage.

All in all, we are continuing the work we started in Server 2016 and moving things forward from there. In the next blog we will do a dive into the configuration paths outlined above. Then, to wrap things up, we will look into the deeper functions and flows inside the service. So, as the bell tolls, we will see you next time.

 

For some extra information, you can also take a look at the product group’s post on this topic over at https://blogs.technet.microsoft.com/networking/2018/07/18/top10-ws2019-hatime/!

 

Breaking Into Windows Server 2019: Network Features: High Performance SDN Gateways


Another happy Wednesday to all of our great readers! Brandon Wilson here again to give you another pointer to some more information from the Windows Core Networking team on the Top 10 networking features in Windows Server 2019. This time around, they are once again covering some of the new Software Defined Networking (SDN) capabilities in Windows Server 2019, however this time, they are touching on SDN gateways. Here is some initial information straight from the product group:

“This week, the Windows Core Networking team continues their Top 10 Networking features in Windows Server 2019 blog series with: #6 – High Performance SDN Gateways

Each blog contains a “Try it out” section so be sure to grab the latest Insider’s build and give them some feedback!  Don’t forget to check out all the features in the Top 10!

Here’s an Excerpt:

Last week we announced vast improvements to the management and deployment experience for SDN including Windows Admin Center interfaces!  This week we’re excited to announce SDN performance improvements in hybrid connectivity scenarios!

Organizations today deploy their applications across multiple clouds including on-premises private clouds, service provider clouds, and public clouds such as Azure. In such scenarios, enabling secure, high-performance connectivity across workloads in different clouds is essential. Windows Server 2019 brings huge SDN gateway performance improvements for these hybrid connectivity scenarios, with network throughput multiplying by up to 6x!!!


As always, if you have comments or questions on the post, your most direct path for questions will be in the link above.

Thanks for reading, and we’ll see you again soon!

Brandon Wilson

Tick Tock, Time to Configure the Clock: Time Settings and Controls in the Modern Environment


When the clock strikes twelve…it’s time to get to work. Hello, and welcome back to this ongoing series discussion about Windows Time. My name is Tim Medina, and today we are going to look at some of the configuration items for Windows Time and put them in context of the modern infrastructure.

First, for those that may need it, please have a look at the base articles on Windows Time. There is a good deal of information to take in and work with there. So let's look at some of the items of note as they relate to modern infrastructures. By that I mean both an on-prem and an IaaS or cloud-based perspective.

So, when we look at the configuration items, we generally are dealing with stratum (source) targets and how time propagates in the environment. Working with on-prem systems is pretty tried and true: set a DC to target an NTP source and then let organic domain time flow throughout the environment. In the world of 2019 that can mean all systems are within a ~50ms (or less) time sync, making audit transactions and controls more finite than ever before. If we are leveraging the modern, heterogeneous multiplatform environment, we can use the base system as the strata propagation source to push time to non-Windows systems. They will treat it as a regular time source and, with some configuration changes, can keep up with the high-accuracy constraints if configured properly.

With all that in hand, we should mention a few things. It is recommended to use one NTP source inside the domain and have the rest of the members set to NT5DS. It is also recommended to ensure your source configuration is properly set, meaning that the source address should be a URL, not an IP, and the flags set properly. For the flags, you should keep in mind what each of them does. Looking at the parameters, it is generally directed to use 0x1 for the primary source and, if configured, 0x2 for the secondary. The other item of note is the time source type. Generally, in a domain you should see NTP for a single source and NT5DS for all other members. The use of AllSync can cause issues, as it will attempt to build an amalgamation of all the sources the system is configured to use based on responses from each source. With this in mind, the use of AllSync is generally discouraged in the modern environment.
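
To illustrate that guidance, the single in-domain NTP source (typically the PDC emulator) is commonly configured with w32tm along these lines. This is a sketch, and both server names are placeholders for your own sources:

w32tm /config /manualpeerlist:"time.windows.com,0x1 ntp2.example.com,0x2" /syncfromflags:manual /reliable:yes /update
w32tm /resync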

On to IaaS and VM configuration notes. There is scarcely an enterprise today that does not have some form of virtualized system. If on-prem, they generally follow the rules above if inside of a domain. It should be noted that time correction here can sometimes have an issue, and in that event (and only that event) we should look at adjusting the configuration on the hosts and guests to reconfigure the VMICTimeProvider.
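
If you do hit that situation on a domain-joined guest, the commonly documented adjustment is to disable the Hyper-V time synchronization provider inside the guest so that domain time wins. A hedged sketch (this is the standard W32Time provider registry location; restart the time service afterward):

reg add "HKLM\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\VMICTimeProvider" /v Enabled /t REG_DWORD /d 0 /f
net stop w32time
net start w32time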

For cloud-based IaaS solutions, it should be noted that most providers control time via the instance you are building in. This holds true in Azure: IaaS-based systems, unless configured otherwise, will pull time from the Azure datacenter they are housed in. This is also true for PaaS solutions and other cloud-based initiatives.

That about wraps things up for this installment in this three-part series. We will be going over configuration and a deeper technology dive in our final installment. Hope to see you all there!

 

For some extra information on the new Windows Time features for Windows Server 2019, you can also take a look at the product group’s post on this topic over at https://blogs.technet.microsoft.com/networking/2018/07/18/top10-ws2019-hatime/!

 

Thanks for reading!

Tim Medina

Breaking Into Windows Server 2019: Network Features: Network Performance Improvements for Virtual Workloads


Hello World! Brandon Wilson here with another pointer to some of the new networking features in Windows Server 2019 straight from the Windows Core Networking team!

In this week’s posting, the discussion surrounds enhancements to network performance for virtual workloads. Here is an excerpt straight from the product group:

Top 10 Networking Features in Windows Server 2019: #5 Network Performance Improvements for Virtual Workloads (https://blogs.technet.microsoft.com/networking/2018/08/22/netperf4vw/)

Excerpt: Whether you have compute workloads like File, SQL, and VDI, you run an S2D cluster, or perhaps you’re using your SDN environment to bring hybrid cloud to a reality, no doubt we crave network performance – we have a “need for speed” and no matter how much you have you can always use more.  However, high-speed network throughput came at the additional cost of complex planning, baselining, tuning, and monitoring to alleviate CPU overhead from network processing.

In Windows Server 2019, virtual workloads will reach and maintain 40 Gbps while lowering CPU utilization and eliminate the painful configuration and tuning cost previously imposed on you, the IT Pro.

As always, if you have comments or questions on the post, your most direct path for questions will be in the link above.

Thanks for reading, and we’ll see you again next week!

Brandon Wilson


A New Tool for your Toolbox: SCOM Dashboard Report Template in PowerBI


Hello again everyone! Christopher Scott, Premier Field Engineer here. Recently I have been developing a lot of data insight reports for various datasets and customers and thought I would wrap them up and share the wealth. The first report template I am sharing was made to provide a general overview of the SCOM environment. Below are some important configurations that can be tailored to your environment, an outline of the various pages and the data represented within them, and the download links for the template and PowerBI Desktop.

Template can be downloaded from https://gallery.technet.microsoft.com/SCOM-Overview-Dashboard-fd37a6f3

PowerBI Desktop can be downloaded for free from https://powerbi.microsoft.com/en-us/downloads/

Important Configurations:

Data-Source Parameters:

If you are importing from the template file, you will be prompted for the “SCOM DB Instance”, “SCOM DB”, “SCOM DW Instance” and “SCOM DW”. Fill these fields with the appropriate SQL information for your environment and click OK.

If you are using the PBIX file:

Once you open the PowerBI file, the first thing you will need to do is configure the data source by editing the parameters. You can do this simply by clicking the “Edit Queries” button on the Home Toolbar and then selecting “Edit Parameters”.

Replace the “SCOM DB Instance”, “SCOM DB”, “SCOM DW Instance” and “SCOM DW” fields with the appropriate SQL information for your environment and click OK. Continue to click OK or Run to allow the native queries to run and import the data.

Conditional Columns:

There are 3 tables where we have implemented conditional columns to generalize or group data for easier viewing. These settings may need to be altered to meet your needs.

Agents Requiring Attention:

I use a conditional column here to translate the different health state codes into friendly names. Below I outline where to find the specific settings.

Application Group Transforms:

To limit the number of redundant groups listed in the filter views, we created a conditional column to group like groups by display name. These will most likely need to be edited to fit the needs of your environment. Access to these settings is outlined in the images below.

Report Previews:


  • Page 1 is a General Overview page consisting of 2 filters: a date slider and an application group filter that is derived from the Management Groups within SCOM. Also on the page is a pie chart showing alerts by severity (critical and warning only), and a bar graph outlining agent versions within the environment (see Important Configurations for details). On the bottom right I've added a last-refresh timestamp for those of you without PowerBI gateways to schedule refreshes.

  • Page 2 is an Alert Summary details page. On this page of the report I carried over the filter visuals from page 1 (see Important Configurations for details). The page also has an alert severity filter for additional scoping, and an agent list on the bottom right gives a visual representation of the filtered objects (computer FQDNs). At the top right of the page is a count of alerts over the last seven days for trending data. The other two visuals depict the top 20 alert generators and alerts by SCOM Management Pack.

  • Page 3, the last page of the report, provides agent summary data, including the total number of agents, agents requiring attention, and those set to manually installed, along with visual representations of systems going in and out of maintenance mode over the last year, agent health, and version statistics.

I hope these reports give you the necessary data points to empower your organization. Please feel free to comment below with any feedback or ideas that you would like to see templated or improved.

I also want to thank some fellow PFEs in the SCOM field, so shout out to Ted Teknos and Mark Hawver for your guidance and insight into what data is valuable and what is not.

Infrastructure + Security: Noteworthy News (August, 2018)


Hi there! Stanislav Belov here to provide you with the next issue of the Infrastructure + Security: Noteworthy News series!  

As a reminder, the Noteworthy News series covers various areas, to include interesting news, announcements, links, tips and tricks from Windows, Azure, and Security worlds on a monthly basis.

Microsoft Azure
Azure management groups now in general availability
Management groups allow you to organize your subscriptions and apply governance controls, such as Azure Policy and Role-Based Access Controls (RBAC), to the management groups. All subscriptions within a management group automatically inherit the controls applied to the management group. No matter if you have an Enterprise Agreement, Certified Solution Partner, Pay-As-You-Go, or any other type of subscription, this service gives all Azure customers enterprise-grade management at a large scale for no additional cost.
Azure File Sync is now generally available!
Azure File Sync replicates files from your on-premises Windows Server to an Azure file share. With Azure File Sync, you don’t have to choose between the benefits of cloud and the benefits of your on-premises file server – you can have both! Azure File Sync enables you to centralize your file services in Azure while maintaining local access to your data.
New customizations in Azure Migrate to support your cloud migration
Azure Migrate discovers servers in your on-premises environment and assesses each discovered server’s readiness to run as an IaaS VM in Azure. In addition to Azure readiness, it helps you identify the right VM size in Azure after considering the utilization history of the on-premises VM.
Windows Server
Everything you need to know about Windows Server 2019

You should know by now that Windows Server 2019 is available as a preview in the Windows Insiders program. In the last few months, the Windows Server team has been working tirelessly on some amazing new features. We wanted to share the goodness that you can expect in the product through a series of blog posts. This is the first in the series that will be followed by deep-dive blog posts by the engineering experts. Part 1, Part 2.

Windows Client
Windows 10 Servicing and In-Place Upgrades In Microsoft SCCM

In this video guide, we will cover how you can manage Windows as a service using System Center Configuration Manager. This video covers deploying Windows 10 upgrades using the software updates feature. We will also review how you could use task sequences and operating system upgrade packages to upgrade Windows 10 while allowing custom actions. This covers how to service (upgrade) existing Windows 10 machines to the latest builds, as well as how to upgrade Windows 7 to Windows 10 using an in-place upgrade task sequence.

Security
Respond to threats faster with Security Center’s Confidence Score

Azure Security Center provides you with visibility across all your resources running in Azure and alerts you of potential or detected issues. The volume of alerts can be challenging for a security operations team to individually address. Due to the volume of alerts, security analysts have to prioritize which alerts they want to investigate. Investigating alerts can be complex and time consuming, so as a result, some alerts are ignored.

Reduce your exposure to brute force attacks from the virtual machine blade
Attackers commonly target open ports on Internet-facing virtual machines (VMs), spanning from port scanning to brute force and DDoS attacks. In case of a successful brute force attack, an attacker can compromise your VM and establish a foothold into your environment. Once an attacker is in your environment, he can profit from the compute of that machine or use its network access to perform lateral attacks on other networks.
Cybersecurity threats: How to discover, remediate, and mitigate
Constantly evolving threats to your company data can cause even the most conscientious employee to unknowingly open infected files or click on malicious web links. Security breaches are inevitable. You need to discover threats quickly, remediate immediately, and mitigate the impact of malware and breaches. Many common types of threats target attack vectors such as email, network endpoints, and user credentials. In this blog, we explain how Microsoft 365 threat protection solutions interoperate threat detection across these attack vectors.
Protecting the protector: Hardening machine learning defenses against adversarial attacks
Harnessing the power of machine learning and artificial intelligence has enabled Windows Defender Advanced Threat Protection (Windows Defender ATP) next-generation protection to stop new malware attacks before they can get started – often within milliseconds. These predictive technologies are central to scaling protection and delivering effective threat prevention in the face of unrelenting attacker activity.
How Microsoft 365 Security integrates with the broader security ecosystem
Last year at Inspire, we announced Microsoft 365, providing a solution that enables our partners to help customers drive digital transformation. One of the most important capabilities of Microsoft 365 is securing the modern workplace from the constantly evolving cyberthreat landscape. Microsoft 365 includes information protection, threat protection, identity and access management, and security management—providing in-depth and holistic security.
Email Phishing Protection Guide – Enhancing Your Organization’s Security Posture
The Email Phishing Protection Guide is a multi-part blog series written to walk you through the setup of many security focused features you may already own in Microsoft Windows, Microsoft Office 365, and Microsoft Azure. By implementing some or all of these items, an organization will increase their security posture against phishing email attacks designed to steal user identities. This guide is written for system administrators with skills ranging from beginner to expert.
Attack inception: Compromised supply chain within a supply chain poses new risks
A new software supply chain attack unearthed by Windows Defender Advanced Threat Protection (Windows Defender ATP) emerged as an unusual multi-tier case. Unknown attackers compromised the shared infrastructure in place between the vendor of a PDF editor application and one of its software vendor partners, making the app’s legitimate installer the unsuspecting carrier of a malicious payload. The attack seemed like just another example of how cybercriminals can sneak in malware using everyday normal processes.
Protecting the modern workplace from a wide range of undesirable software
To protect our customers from the latest threats, massive amounts of security signals and threat intelligence from the Microsoft Intelligent Security Graph are processed by security analysts and intelligent systems that identify malicious and other undesirable software. Our evaluation criteria describe the characteristics and behavior of malware and potentially unwanted applications and guide the proper identification of threats. This classification of threats is reflected in the protection delivered by the Windows Defender Advanced Threat Protection (Windows Defender ATP) unified endpoint security platform.
Vulnerabilities and Updates
System Center 1807 available now

Earlier this year, we added a semi-annual release cadence to System Center so that we can bring new capabilities to customers at a faster pace. We made the first semi-annual release, System Center 1801, available on February 8, 2018. Semi-Annual Channel releases have an 18-month support policy. In addition, we will continue to release in the Long-Term Servicing Channel (LTSC). The LTSC will continue to provide 5 years of mainstream support followed by 5 more years of extended support. Keeping with the promise of feature updates in each Semi-Annual Channel (SAC) release, today we are delighted to announce the release of System Center 1807.

Exchange 2010 SP3 RU23 Released

August 2018 update cycle contains a security advisory bulletin for Exchange 2010. Due to the way that Exchange 2010 is serviced, security updates are released as a new update rollup (RU). Separate updates were also released for Exchange 2013 and Exchange 2016.

Update 1806 for Configuration Manager current branch is now available

With the 1806 update for Configuration Manager current branch, we continue to invest in providing cloud powered value to your existing Configuration Manager implementation with additional co-management workloads and simplified cloud services. We’re also very excited to announce a powerful new capability that we call CMPivot, building off our real-time script capability. CMPivot is a new in-console utility that provides access to real-time state of devices in your environment.

Support Lifecycle
Announcing new options for SQL Server 2008 and Windows Server 2008 End of Support

It’s incredible how much and how rapidly technology evolves. Microsoft’s server technology is no exception. We entered the 2008 release cycle with a shift from 32-bit to 64-bit computing, the early days of server virtualization and advanced analytics. Fast forward a decade, and we find ourselves in a full-blown era of hybrid cloud computing with exciting innovation in data, artificial intelligence, and more.

Microsoft Premier Support News
Check out Microsoft Services public blog for new Proactive Services as well as new features and capabilities of the Services Hub, On-demand Assessments, and On-demand Learning platforms.

Breaking Into Windows Server 2019: Network Features: Security with Software Defined Networking (SDN)


Hello, and a happy Wednesday to our outstanding readers! Brandon Wilson here with a pointer to some more of the new networking features in Windows Server 2019 coming to you straight from the Windows Core Networking team!

In this week’s posting, the discussion surrounds software defined networking (SDN) security. Here is an excerpt straight from the product group:

Top 10 Networking Features in Windows Server 2019: #4 Security with SDN

https://blogs.technet.microsoft.com/networking/2018/08/29/sdnsecurity/

Excerpt: In this modern era of cloud computing, more and more customers are looking to move their workloads to public, private or hybrid clouds. Security is one of their main inhibitors in moving to cloud. How secure are their workloads in the cloud? Is their data safe from theft and tampering? Windows Server 2019 SDN delivers new SDN security features to increase customer confidence whether running workloads on-premises or as a service provider in the cloud.

As always, if you have comments or questions on the post, your most direct path for questions will be in the link above.

Thanks for reading, and we’ll see you again next week!

Brandon Wilson

Is Group Policy Slowing Me Down?


Hello all, Nathan Penn back again to cover one of the more frequently brought up topics: system bootup and user logon performance. Specifically, “Is Group Policy slowing me down in my environment?” Luckily, if we know where to look, we can get right to that answer quickly.

To start, let's clarify a couple of semantics so that we are all on the same page. For our purposes, boot time is how long it takes for the computer to load the Windows operating system from power on, or reboot, and get to the interactive logon screen (Ctrl + Alt + Del / Windows Hello). Logon time is how long it takes, after a user initiates the interactive logon with appropriate credentials/biometrics, to present the desktop and applications for use. With that out of the way, let's jump into how we can identify if Group Policy is causing delays… off to the Windows Event Logs.

First stop is the Windows System log. Here we can find Event ID 12 from the Kernel-General source, that details the system start time following a power on or reboot.

Scrolling up from the Event ID 12, we will notice several Event ID 7036 entries from the Service Control Manager source. One of these 7036 events will contain the description “The Group Policy Client service entered the running state.”
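
If you prefer pulling these two events with PowerShell instead of scrolling through the Event Viewer, here is a quick sketch:

# Most recent system start (Kernel-General, Event ID 12)
Get-WinEvent -FilterHashtable @{ LogName = 'System'; ProviderName = 'Microsoft-Windows-Kernel-General'; Id = 12 } -MaxEvents 1

# Most recent 7036 entry for the Group Policy Client service
Get-WinEvent -FilterHashtable @{ LogName = 'System'; ProviderName = 'Service Control Manager'; Id = 7036 } |
    Where-Object { $_.Message -like '*Group Policy Client service entered the running state*' } |
    Select-Object -First 1 TimeCreated, Message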

In this instance, the computer started at 1:51:12 and the Group Policy Client service started at 1:51:25, or 13 seconds later. Group Policy has had no effect on the boot time, but now things are about to change. To see what effect Group Policy has on system boot time, we need to move to the Group Policy Operational log, found in the Event Viewer under Applications and Services -> Microsoft -> Windows -> Group Policy -> Operational.

In the Group Policy Operational log, if we go to the time the Group Policy Client service started, we will find several 5320 Event IDs, one of which will have the description “Initializing and reading current service configuration for the Group Policy Client service.”

Now we know the system is powering up and starting services, the Group Policy Client service has started and has initialized its startup configuration, and we are ready to begin processing some GPOs. To simplify things a bit, here are the four primary events contained in the Group Policy Operational log that I am sure everyone is most interested in (a quick PowerShell sketch for pulling them follows the list); we will also walk through some of the other important ones further below:

  • Event ID 4000 – Starting computer boot policy processing
  • Event ID 8000 – Completed computer boot policy processing for XXX in # seconds.
  • Event ID 4001 – Starting user logon policy processing
  • Event ID 8001 – Completed user logon policy processing for XXX in # seconds.
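
Here is that sketch for pulling the four events:

# Start (4000/4001) and completion (8000/8001) events for computer and user policy processing
Get-WinEvent -FilterHashtable @{ LogName = 'Microsoft-Windows-GroupPolicy/Operational'; Id = 4000, 4001, 8000, 8001 } -MaxEvents 20 |
    Select-Object TimeCreated, Id, Message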

With these events, we can see exactly when both computer and user policy processing started and finished, as well as how long each took. I'm sure you are all thinking that is fantastic by itself, right? But what if we want a little more detail, like why did it take X number of seconds? That's where the events in between matter, so let's walk through a few of the important ones and the details provided.

Following Event ID 4000 (Starting computer boot policy processing) and 4001 (Starting user logon policy processing), we need to identify where the associated computer or user objects are located in the Active Directory (AD) structure to determine which policies will be applied. This system call to a Domain Controller (DC) is captured in an Event ID 5017, which also details how long it took.

The next thing that needs to happen is to discover and connect to a DC that we can use to perform the Group Policy requests against. Event ID 5326 captures this and defines the time it took to make that connection.

Moving on from there, we have an Event ID 5310 that details what we have found and will be using: the location of the computer or user object in AD, as well as the DC that will be used for the GPO requests.

A look at the following Event ID 5312 shows a list of all policies that apply to the computer or user object based on its AD placement.

Event ID 5313 will show if any policies have been filtered out as not applicable due to a security filter (i.e. A policy that is targeted to a specific list or group of computer or user objects).

As you can see, by reviewing the Group Policy Operational log there are several events that can give a clear picture of whether Group Policy is causing delays. There are many, many more entries in the Group Policy Operational log that I didn't have time to cover in this post, which can determine how long an individual policy took to process, logon/startup script execution time, and system timeouts. Hopefully, this helps and removes some of the mystery of whether Group Policy is slowing you down.

Thanks for reading everyone!

Breaking Into Windows Server 2019: Network Features: Azure Network Adapter


Happy Saturday to our outstanding readers! Brandon Wilson here with a pointer to some more of the new networking features in Windows Server 2019 coming to you straight from the Windows Core Networking team!

In this week’s posting, the discussion surrounds something whose importance will become more and more visible over time: the Azure Network Adapter. Here is an excerpt straight from the product group:

Top 10 Networking Features in Windows Server 2019: #3 Azure Network Adapter

https://blogs.technet.microsoft.com/networking/2018/09/05/azurenetworkadapter/

More and more on-premises workloads require connectivity to Azure resources.  Connecting these on-premises workloads to their Azure resources traditionally requires an Express Route, Site-to-Site VPN, or Point-to-Site VPN connection.  Each of these options require multiple steps and expertise in both networking and certificate management, and in some cases, infrastructure setup and maintenance.

Now, Windows Admin Center enables a one-click experience to configure a point-to-site VPN connection between an on-premises Windows Server and an Azure Virtual Network.  This automates the configuration for the Azure Virtual Network gateway as well as the on-premises VPN client. Windows Admin Center and the Azure Network Adapter makes connecting your on-premises servers to Azure a breeze!

As always, if you have comments or questions on the post, your most direct path for questions will be in the link above.

Thanks for reading, and we’ll see you again next week!

Brandon Wilson
