Monday, 26 September 2016

Pain-points, Pester and Perseverance

Listen up dudes! We need to write some unit tests for all these PowerShell functions we have been writing. Why? Because we need to make sure our code does what we think it does and only what we think it does. Yeah, I know we test the functions manually when we use them, but we are pretty limited in our ability to monitor what our code is doing under the hood unless we get some kind of verbose output. Even then, we don't know what we don't know. With a testing framework like Pester we can test each function in isolation, controlling its inputs and checking its outputs, to make sure that thing does only what it is told!
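To make that concrete, here is a minimal sketch of a Pester test. The Add-Numbers function and the values are made up purely for illustration; normally the function would live in its own .ps1 file and the test in a matching *.Tests.ps1 file that dot-sources it:

# a trivial function to test (hypothetical - swap in your own)
function Add-Numbers
{
    Param([Int]$First, [Int]$Second)
    return $First + $Second;
}

Describe 'Add-Numbers' {
    It 'adds two integers' {
        Add-Numbers -First 2 -Second 3 | Should Be 5;
    }

    It 'returns an integer' {
        (Add-Numbers -First 2 -Second 3) -is [Int] | Should Be $true;
    }
}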

If you have got this far like I did a while ago - you have watched a few intro videos (like this one https://www.youtube.com/watch?v=gssAtCeMOoo and this one https://www.youtube.com/watch?v=0fFrWIxVDl0 ), downloaded Pester from GitHub (https://github.com/pester/Pester) and copied the extracted folder to the PowerShell modules directory - then you may just hit these issues working in Visual Studio:

  1. VS Test Runner fails your embryonic unit test because it can't find the Pester module:
    Check the name of the folder in the PS Modules directory and make sure it is called "Pester" rather than "pester-master", which is what the archive extracts to.
  2. VS Test Runner fails your test because PowerShell's Execution Policy doesn't allow Pester's scripts to run:
    The PS execution policy is causing the issue. Unblock the Pester ps1 files using the Unblock-File cmdlet (see the snippet after this list).
  3. Your test doesn't show up in the VS Test Explorer:
    Check that you are not using dynamic names for your Describe block; the test explorer doesn't know what to do with those.
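For issues 1 and 2, something like the following sorts it out. This is a rough sketch: it assumes you extracted Pester into the system-wide modules directory, so adjust the path to wherever your modules actually live.

# rename the extracted folder so PowerShell can find the module by name
$modules_path = "$env:ProgramFiles\WindowsPowerShell\Modules";
Rename-Item -Path "$modules_path\pester-master" -NewName 'Pester';

# unblock the downloaded scripts so the execution policy stops complaining
Get-ChildItem -Path "$modules_path\Pester" -Recurse -Include *.ps1, *.psm1, *.psd1 | Unblock-File;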
Now to write some grown-up unit tests...

Thursday, 22 September 2016

Pulling teeth!

Ok, let me just say up front, the Pull Server rocks! The centralised management and scale that it delivers are great. It is all good, dude, but...

When learning how to use a Pull Server to distribute node configuration you have to keep a few things in mind in order to make things work. This is what I have learnt so far:

  • The event log is your friend; get to know it well
    • There is a resource available to help diagnose DSC issues called xDscDiagnostics - it is a good place to start.
  • Remember that publishing a new configuration to the pull server does not mean it is just going to get applied on the target the next time the target node LCM fires up; it depends on the state of the Pending Configuration.
    • Remove-DscConfigurationDocument can be used to pull the plug on a misbehaving config (see the sketch after this list)
    • Removing a misbehaving config will not remove a misbehaving Resource (ermm, deleting the resource folder on the target node seems an easy way to fix this but not necessarily the right way)
      • You can delete a Resource from the module directory on the Target Node but the old one will still be in memory (it's cached) so you may have to bounce the node for it to start using the new one {I'm open to suggestions here}
      • I am playing with versioning to resolve this one too
      • DebugMode = 'All' breaks the LCM with Custom Resources that are classes :-(
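Here is a rough sketch of the commands I reach for when a config misbehaves. It assumes xDscDiagnostics is installed on the machine you are running from, $target_node holds the node name, and the SequenceID is just an example value:

# see the most recent DSC operations and whether they succeeded (xDscDiagnostics)
Get-xDscOperation -ComputerName $target_node -Newest 5;

# drill into the events of one particular operation
Trace-xDscOperation -ComputerName $target_node -SequenceID 3;

# pull the plug on a pending configuration that refuses to behave
Remove-DscConfigurationDocument -Stage Pending -CimSession $target_node -Force;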
So here is my plan of action for a Pull Server scenario (with a command sketch after the list):
  1. Configure the LCM on the Target Node for PULL
  2. Use Get-DscLocalConfigurationManager cmdlet to verify LCM settings
  3. Publish necessary Resources to Pull Server
  4. Publish the Target Node Configuration to the Pull Server
  5. Use Update-DscConfiguration cmdlet to get LCM on Target Node to look for the configuration
  6. Use Get-DscLocalConfigurationManager cmdlet to check that LCM is Busy applying the configuration
  7. Use xDscDiagnostics to query the event log of the Target Node to see what is going on
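Steps 2, 5 and 6 boil down to something like this. It is only a sketch; it assumes $target_node holds the node name and that the LCM meta-configuration has already been applied:

# step 2: verify the LCM is configured for pull
Get-DscLocalConfigurationManager -CimSession $target_node | Select-Object RefreshMode, RefreshFrequencyMins, ConfigurationMode;

# step 5: tell the LCM to go and fetch its configuration now
Update-DscConfiguration -ComputerName $target_node;

# step 6: check whether the LCM is busy applying it
(Get-DscLocalConfigurationManager -CimSession $target_node).LCMState;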
This is all good except for the last step. One thing I really like about the PUSH scenario is that you can use the -Verbose and -Wait parameters of the Start-DscConfiguration cmdlet to watch what is going on as it happens. I decided to write a function that would give me similar output in a PULL scenario so that I don't have to use different tools and techniques for diagnostics just because I changed from PUSH to PULL.

So far, this helps:

function Get-DscOperationsEventInfo
{
    Param(
        [String]$ComputerName = $env:COMPUTERNAME,

        [Int]$Range = 30
    )

    # ensure the firewall allows remote event log management
    [String]$firewall_group = 'Remote Event Log Management';
    if (Get-NetFirewallRule -CimSession $ComputerName -DisplayGroup $firewall_group | Where-Object { $_.Enabled -ne $true })
    {
        Write-Host "Enabling Firewall for $firewall_group on $ComputerName" -ForegroundColor Cyan;
        Enable-NetFirewallRule -CimSession $ComputerName -DisplayGroup $firewall_group;
    }

    # get all dsc events from the operational log
    $dsc_events = [System.Array](Get-WinEvent -ComputerName $ComputerName -LogName 'Microsoft-Windows-Dsc/Operational');

    # keep only the events raised in the last $Range minutes, oldest first
    $low_boundary_date = (Get-Date).AddMinutes(-$Range);
    $filtered_dsc_events = $dsc_events | Where-Object { $_.TimeCreated -gt $low_boundary_date } | Sort-Object TimeCreated;

    # write each event out, with errors in red so they stand out
    foreach($event in $filtered_dsc_events){
        $output = "$($event.TimeCreated):`n`r$($event.Message)`n`r";
        if($event.LevelDisplayName -eq 'Error'){
            Write-Host $output -ForegroundColor Red;
        } else {
            Write-Host $output -ForegroundColor Cyan;
        }
    }
}
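Called like this (with $target_node holding the node name), it gives me a rough equivalent of the -Verbose output I get from Start-DscConfiguration in a PUSH scenario:

PS   C:\>Get-DscOperationsEventInfo -ComputerName $target_node -Range 60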

Wednesday, 14 September 2016

Check it out...err I mean 'in', dude!

I love source control (not agape love or anything like it but I really do dig it). It gives me great peace of mind knowing that if I make a stupid mistake in my code (and let's face it, it's gonna happen) I can compare the current version of the code with any previous version and revert if I need to.

Thing is, I don't like having to do that - I want my code to be sharp, readable, cool but most of all right. And I want this to be the case before I check it in (commit). The problem is that there is not always a bud available who has the time and is in the frame of mind to review my code when I need him/her to. Don't even think about scheduling code reviews; that is just a pain in the rectum - it messes with my mojo when someone interrupts me when I am in the zone just because 4:30 has arrived on their watch.

Then there is the problem of dudes who haven't read Code Complete and like to write 2000 line functions because they haven't learnt abstraction. Who wants to be the poor sod who has to review that mountain so that Harry fancy pants can do his check in? No thanks mate, I got a life.

Ok, so let's assume I've done my unit tests and I am happy with my code and am ready to check it in, but want a second pair of eyes to quickly review it before I do. Well, this is DevOps baby, just automate it! "How?" you say. Well, the most excellent dudes on the PowerShell Team have written a script analyzer that will do just that.

PS   C:\>Install-Module -Name PSScriptAnalyzer
Once you've got that installed, you run the following command:

PS   C:\>Invoke-ScriptAnalyzer -Path $script_path


and presto, you get a list of best practice rules that you have violated. I had to google each one of them but there is usually an explanation handy that will tell you what to do to correct your code. And, if you don't agree with said advice you can tell the analyzer to exclude certain rules like so:

PS   C:\>Invoke-ScriptAnalyzer -Path $script_path -ExcludeRule PSUseShouldProcessForStateChangingFunctions
Neat huh?

Dude! Where's my config?

One of the first things I have learnt working with DSC is that the Local Configuration Manager (LCM) does things on a schedule and that it is quite a patient little service. This is most obvious in a Pull server setup.

I copy a config (and checksum file) to the Pull server and then wonder why nothing is happening on the target node. The first thing that comes to mind is "my Pull server ain't configured right". So, naturally I have a look at the LCM on the target node:

PS   C:\>Get-DscLocalConfigurationManager -CimSession $target_node
The LCM says it is configured for Pull and looks happy, so why the heck is my configuration not being applied? Well, the answer is in the RefreshFrequencyMins property of the LCM (which is 30 by default). I could end up waiting for 30 minutes for anything to happen, or I could just tell the LCM to get on with it:

PS   C:\>Update-DscConfiguration -ComputerName $target_node
That will output some details on the job that is carrying out the work, which you can interrogate like so:

PS   C:\>Receive-Job -Id $job_id -Keep

Tuesday, 13 September 2016

PS C:\>$Future = DevOps | where{$_.Attitude -eq $ForwardThinking}

Four years ago I sat in a conference room in Barcelona and listened to a Gartner analyst talk about the emerging practice of DevOps. I was excited because the conflict between developers and the infrastructure/operations guys was not new to me. I was sitting next to my manager, the Group IT Manager, and fully expected to have a really positive discussion about it afterwards. We both walked away scratching our heads.

The problem with paradigm shifts in IT is that the success or uptake of the change really depends on who delivers the message. If it is couched in too much highfalutin mumbo-jumbo then the people who really make it happen (the folks doing the daily grind) just miss it and carry on as normal, or they get it wrong.

Today I believe I understand what DevOps is (at least from one perspective) and the thing that has helped me to understand it is PowerShell Desired State Configuration. DSC is the embodiment of declarative configuration automation for IT infrastructure. In learning how to build upon this simple, yet powerful framework I have learnt the reason to embrace DevOps.

DevOps, from the infrastructure dude's perspective, is the practice and discipline of delivering Infrastructure as Code, and it is made possible by the codification of infrastructure interfaces, which in the Microsoft world means PowerShell. By enabling infrastructure such as servers, virtual machines, databases and web services to be installed and configured through code, rather than by a mouse, IT professionals (deliberately generic) are able to write re-usable, scalable, robust code to manage their environments that is orders of magnitude more efficient and less error prone than the manual approach. This practice then transforms infrastructure and operations people into 'developers' who gather requirements, design and develop code-based solutions, use source control and unit testing frameworks, and deploy their infrastructure in the same way that app developers do...hence the paradigm shift.
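To make 'declarative' concrete, here is a minimal sketch of a DSC configuration (the configuration name, node name and feature are just examples). You state what the end result should look like and the LCM works out how to get there:

Configuration WebServerBaseline
{
    Node 'SERVER01'
    {
        # declare the end state: the Web-Server feature must be present
        # how it gets installed is the LCM's problem, not mine
        WindowsFeature IIS
        {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
    }
}

# compiling the configuration produces a MOF file that can be pushed or pulled
WebServerBaseline -OutputPath 'C:\DscConfigs';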

So, I think I get it...what now? Well, there is inevitably a learning curve involved here and I have started with the guy who invented PowerShell, Jeffrey Snover (@jsnover):



The 2-day DSC course on Microsoft Virtual Academy is a great place to start:
https://mva.microsoft.com/en-US/training-courses/getting-started-with-powershell-desired-state-configuration-dsc-8672