Friday, 16 December 2016

Bonkers Build Server

Setting up a VSTS Build Server to do automated builds and run unit tests. Life is good, I have provisioned the VM and installed all the necessary tools required to build and test my solution. I queue a new build and watch helplessly as my build fails on the step to execute my lovely unit tests (SSDT unit tests if you must know).

This is the exception that I get:

Microsoft.Build.Exceptions.InvalidProjectFileException: The imported project "C:\Program Files (x86)\MSBuild\Microsoft\VisualStudio\v11.0\SSDT\Microsoft.Data.Tools.Schema.SqlTasks.targets" was not found. Confirm that the path in the <Import> declaration is correct, and that the file exists on disk.  ... Aborting test execution.

But wait, I have logged on to the build server vm and built and tested my solution manually so I know all the bits are on the machine, so what's up dude, you're ruining my day!!

The key word in the exception is "<Import>". My test project is configured to deploy a database project before executing the tests. If I unload my database project and search for 'Import' this is what I find:



On the build server the SSDT targets live under the v14.0 folder, and the test project doesn't specify a Visual Studio version when it references the database project, so all I needed to do was change the default VisualStudioVersion in the database project file from 11.0 to 14.0 for the build to find the targets and run my tests. Yeehhhaa!!
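
If you would rather script the change than hand-edit the .sqlproj, something along these lines does the trick (the project path is obviously specific to my solution):

# flip the default VisualStudioVersion in the database project from 11.0 to 14.0
$sqlproj = 'C:\Source\MySolution\MyDatabase\MyDatabase.sqlproj';
$content = Get-Content -Path $sqlproj -Raw;
$content = $content.Replace('>11.0</VisualStudioVersion>', '>14.0</VisualStudioVersion>');
Set-Content -Path $sqlproj -Value $content;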

Now to figure out why the Build Agent can't upload the test results to VSTS :-(

Later dudes and dudettes.

Thursday, 15 December 2016

VSTS Agent Proxy Hell

Well, not quite fire and damnation but quite a lot of grief.

Let me start by saying how much I appreciate Visual Studio Team Services (aka VSTS) - it is a fantastic product and although stuff keeps changing (and you need to remember that) it is usually for the better.

For example: the build agent that you can download and install on a local build server is often changing. I can tell this because the folder structure changes almost every time I set up a new build server. I think this is great! It means that the VSTS team are continually improving the product.

However, one issue relating to agents that can be quite annoying and time-consuming is figuring out how to tell the agent to use a proxy server to connect to VSTS. In the old days an agent would use the default IE proxy settings; then came the 'modify the agent config file' approach, which meant adding the following to the VsoAgent.exe.config and VsoAgentService.exe.config files:



But now there is a different approach (and it is a little Git-ty) where you simply add a .proxy file to the root of your agent folder. Really simple if you know to do it and rather painful if you don't because the previous approaches just won't work :-(

Check this out https://www.visualstudio.com/en-us/docs/build/admin/agents/v2-windows#web_proxy
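
In short, you drop a file called .proxy containing the proxy URL into the agent's root folder and restart the agent - something like this (the agent folder and proxy URL are obviously mine, use your own):

PS   C:\>Set-Content -Path 'C:\agent\.proxy' -Value 'http://proxy.mydomain.com:8080'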

Happy days dudes and dudettes, it now works!

Monday, 5 December 2016

"Microsoft" -and "Licensing" -eq "Pain"

This is new to me...

Apparently, you can't point Reporting Services at a Database Engine edition that doesn't meet its edition requirements: https://blogs.msdn.microsoft.com/psssql/2013/02/20/rs-database-engine-does-not-meet-edition-requirements/

If you try to, you get this lovely exception in SSRS:


The interesting thing is that I got this exception trying to set up a SQL Express Reporting Services data source connecting to a Developer edition of the database server. I thought Express was free, so why would they put restrictions on which database server it connects to?

Ahh well...time to install SSRS Developer edition I guess :-)

Monday, 28 November 2016

Splat! Ahhhaaaa!!!

Ever had a situation where you need to execute a cmdlet in PowerShell but also need to pass in different parameters depending on what was passed to your code? So the first thing you try is a whole lot of if/else statements...but that is butt ugly and verbose, right?

Well, PowerShell uses a technique called Splatting to resolve this. Here is a simple enough post to explain it: https://ramblingcookiemonster.wordpress.com/2014/12/01/powershell-splatting-build-parameters-dynamically/
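
As a taster, here is roughly what that looks like - the cmdlet, variables and parameters below are purely illustrative:

$target_node = 'myvm';    # assumption: a reachable test server
$Credential  = $null;     # or Get-Credential if you need alternate creds

# build the parameter set dynamically in a hashtable
$params = @{
    ComputerName = $target_node;
    ScriptBlock  = { Get-Service -Name 'W3SVC' };
};

# only add -Credential when the caller actually supplied one
if ($Credential) { $params.Credential = $Credential }

# the @ sign splats the hashtable onto the cmdlet as named parameters
Invoke-Command @params;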

Thanks rambling cookie monster!

Monday, 21 November 2016

Migrating from TFS to Git

Ok, so I want to migrate my TFS source code to a Git repository but I don't want to lose any history - how do I do it? Git-tf will do the job quite nicely and this blog post does a decent job of explaining it: https://chriskirby.net/blog/migrate-an-existing-project-from-tfs-to-github-with-changeset-history-intact

Only thing to keep in mind is that Git-tf uses Java so that will need to be installed first.
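
The clone itself is a one-liner from a command prompt; something like this pulls the full changeset history across (the collection URL and server path are placeholders):

git tf clone http://myserver:8080/tfs/DefaultCollection $/MyTeamProject/Main --deep

The --deep switch is the important bit - without it you only get the latest changeset.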

Happy days!

Thursday, 3 November 2016

Deploying Master Data Services (MDS) Models using VSTS!

If you need to know how and why and what when it comes to MDS model deployment then this article is the best I have read to help you understand the basics: http://www.sqlchick.com/entries/2015/3/16/how-to-deploy-master-data-services-models-between-environments

However, what it doesn't cover is how to get a Build or Release engine like VSTS to deploy the models remotely. Fortunately VSTS provides a PowerShell on Target Machines Build Task that enables us to execute the MDSModelDeploy.exe command remotely.

I have a PowerShell script in my TFS repository that I copy to the target machine using a Copy Files Build Task and then invoke that script using the PowerShell on Target Machines task.
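
For what it's worth, the guts of that script are little more than a wrapper around MDSModelDeploy.exe - something like this, where the path, package, model and service names are assumptions for my environment:

param(
    [String]$PackagePath = 'C:\Drop\MyModel.pkg',   # package produced by MDSModelDeploy createpackage
    [String]$ModelName   = 'MyModel',
    [String]$ServiceName = 'MDS1'                   # run 'MDSModelDeploy listservices' to find yours
)

# default install location for SQL Server 2016 Master Data Services
$mds_exe = Join-Path $env:ProgramFiles 'Microsoft SQL Server\130\Master Data Services\Configuration\MDSModelDeploy.exe';

& $mds_exe deploynew -package $PackagePath -model $ModelName -service $ServiceName;

if ($LASTEXITCODE -ne 0) { throw "MDSModelDeploy failed with exit code $LASTEXITCODE" }

(deploynew creates the model from scratch; for pushing changes to an existing model, deployupdate is the command you want.)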


Now all I need to do is figure out why PS-Remoting is not working on my MDS servers...oh well, next blog post dudes :-)

Friday, 28 October 2016

Let's talk about Nano baby!

Just watched a demo of a Windows Server 2016 Nano VM deployment using PowerShell by one of our very own DevOps Dudes (Si - you da man)...and it rocked! I absolutely love the idea of Nano and to see it up and running within a few minutes with a few lines of Posh code...well, let's just say the brain was on fire!

If you haven't looked into Nano yet, you might want to take a look at Channel 9's dedicated channel: https://channel9.msdn.com/Series/Nano-Server-Team

This is the future dudes and dudettes!

Thursday, 20 October 2016

CredSSP...say what?

I'm working on a DSC configuration for SQL Server Reporting Services and it is quite a simple configuration with very few required resources. However, one of the resources (xSQLServerRSConfig) does something rather odd in its Set-TargetResource function - it uses Invoke-Command to loop back to the same server and specifies the -Authentication parameter as CredSSP.

Now up until the other day I had no idea what CredSSP was and the exception that was raised by the LCM was rather new to me:



After spending ages trying to learn what the CredSSP protocol is and why someone would use it (see https://4sysops.com/archives/using-credssp-for-second-hop-powershell-remoting/ and https://technet.microsoft.com/en-us/library/hh849872.aspx ) I decided to check for a DSC resource to enable it...and there is one, xCredSSP. Happy days!

Then came the joy and agony of seeing DSC in action. Joy because the resource is so simple to configure:

xCredSSP Client
{
 Ensure               = 'Present';
 Role                 = 'Client';
}
Or
xCredSSP Server
{
 Ensure               = 'Present';
 Role                 = 'Server';
}
Seems simple enough, and it just works (the resource that is) but the resource that needed it still failed with the same error. I tried setting my node as the client and then as the server and then as both (by specifying both configurations above) and still nada, nothing, zip - the exception was the same.

Then finally (after getting on my knees) it dawned on me that specifying the node as both client and server was the right thing to do because of the loop-back nature of the resource's Invoke-Command, but what was missing was the delegation rights that the xCredSSP resource needed to apply. So I added the following:

xCredSSP Client
{
 Ensure               = 'Present';
 Role                 = 'Client';
 DelegateComputers    = 'myvm.mydomain.com';
}
Well, that did it. By telling the security provider that implements CredSSP what servers were allowed to be delegated to, it just worked. It is worth noting that the DelegateComputers property supports an array of computer names or even the *.mydomain.com wildcard.

Getting to grips with Git

If you are new to version control or have grown up using something other than Git (like TFS) then this Build Conference video is going to help a lot!


Be warned, there are some very important conceptual differences between Git and TFS, so watch it dude! (or dudette :-)

Monday, 17 October 2016

Who Ate All The Disk Space? (Yeah, It Was BizTalk)

This post is for newbie BizTalk users who have installed their environment using the Next, Next, Finish approach.

Today you have your shiny new development sandbox* with a nice chunk of disk space. Tomorrow you will have slightly less disk space. Next week a little less still. In a month, no disk space. This will be a bad place to be.

There are a number of things that eat up disk space in the world of BizTalk:
  1. BizTalk uses a number of SQL Server databases at its heart. These databases are NOT backed up by default and you should NOT use database maintenance plans or third party backup software to back them up. You'll see why in a minute.
  2. BizTalk normally reads messages, processes them, delivers them and then deletes them. It keeps the databases surprisingly small. However if your BizTalk solutions have issues the messages will be "Suspended" which means "saved to the database until the problem is fixed". At this point your database is now growing.
  3. BizTalk normally reads messages, processes them, delivers them and then deletes them. This is great until someone asks you what happened to a message from, for example, the last fiscal month. At this stage you will turn on various tracing options, add logging to your process and add archiving ports to keep copies of messages. At this point you are likely to persist multiple copies of every message flowing through BizTalk which is going to get very big, very quickly.
Before you do any other work on your sandbox, get your house in order. I would heartily recommend performing these steps as a minimum:
  1. If you use the BizTalk Administration Console to switch on tracking of messages for any BizTalk component, use it sparingly and then switch it off as soon as you have diagnosed your issue (later we will see how debugging works and how to avoid using tracking anyway). Never use the tracking options in your production environment.
  2. Create a folder, for example, C:\FileDrop, to be the single location for all incoming and outgoing files on your sandbox. Then create a Windows Scheduled Task to keep that folder free of old, junk files (a sketch follows this list).
  3. Configure the BizTalk SQL Agent jobs that the BizTalk installer created and then disabled.
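
For the scheduled clean-up, a one-off bit of PowerShell like this does the job - the folder, age and schedule are just examples:

# delete anything in the file drop older than 7 days, every night at 2am
$action  = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument '-NoProfile -Command "Get-ChildItem C:\FileDrop -Recurse -File | Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-7) } | Remove-Item -Force"';
$trigger = New-ScheduledTaskTrigger -Daily -At 2am;
Register-ScheduledTask -TaskName 'Clean FileDrop' -Action $action -Trigger $trigger;
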
The rest of this post covers what to do with the BizTalk backups and why.

First of all BizTalk uses multiple databases. There are a number of reasons for this, including the theoretical option of putting the databases on multiple servers. The downside is that a normal backup script will back up the databases sequentially, meaning you have no single point in time to recover to. The backups will be out of sync and potentially useless in the event of a disaster.

Out of the box, BizTalk provides backup jobs to work around this complexity.
  1. Backup BizTalk Server (BizTalkMgmtDb)
  2. DTA Purge and Archive (BizTalkDTADb)
The BizTalk backup jobs deal with this using a SQL Server feature known as transaction log marks. The first step performed by a BizTalk backup job is to create a single, simultaneous log mark across all the BizTalk databases. The next steps then back up each database as at that mark.

This is all pre-written for you in a bunch of stored procedures which are called from the two jobs I've mentioned above. However the job steps are not configured automatically. You need to edit the steps to fill in a couple of parameters, namely the backup location and the retention period. Here's an example of how those steps may look for the Backup BizTalk Server (BizTalkMgmtDb) job.
  • exec [dbo].[sp_BackupAllFull_Schedule] 'd', 'BTS' , 'E:\Backup'
  • exec [dbo].[sp_MarkAll] 'BTS', 'E:\Backup'
  • exec [dbo].[sp_DeleteBackupHistory] @DaysToKeep=2
Here we are backing up to the E:\Backup folder, backing up every day ('d') and creating a log mark called 'BTS'. We are also going to delete any backup history over two days old.

These steps will back up all of your critical, frequently changing BizTalk databases. The stored procedures take more options but for a sandbox server these should do you well. For the "less critical" tracking databases here is an example of how to configure the DTA Purge and Archive (BizTalkDTADb) job.
  • exec dtasp_BackupAndPurgeTrackingDatabase 0, 1, 7, 'E:\Backup'
These jobs will keep your databases backed up. There are now a couple of final tasks you will need to do.
  1. Create a Windows Scheduled Task to delete old backup files
  2. Periodically shrink the databases if required (unlikely and usually not essential)
Finally a little tip for you. How does BizTalk know which databases to actually back up when it runs these stored procedures? It has a list, stored in the BizTalkMgmtDb database. And, even better, it has an extra list for adding custom databases to the overall backup job. Now you can keep your own solution databases in sync with the BizTalk databases for perfect point-in-time recovery from disaster.

* If you have wedged BizTalk on a pre-loved, tried and trusted sandbox see my upcoming post, "Why Not To Wedge BizTalk Onto A Pre-loved, Tried And Trusted Sandbox"

* There are actually two jobs which deal with "priority" and "non-priority" databases

Wednesday, 12 October 2016

Stuff about SCVMM Templates

Revisiting previous code projects can be an exercise in hair-pulling. For example, in my previous blog post I mentioned creating a VM deployment script. Due to some issues with binaries, the need to revisit that code came up recently, to add an option for creating VMs with a GUI rather than as Core. The code worked when it was just dealing with one choice, but adding a very simple -InstallGui switch turned out to be a little more complex than originally thought.

Constant errors stating the following were the main contention:

 New-SCVMTemplate : VMM is unable to process one or more of the provided cmdlet parameters. (Error ID: 1600)  

Needless to say, this is a bit non-specific! After a lot of frantic searching of blog posts and whatnot, and complete inability to find a relevant fix, I decided to scrape through the code and remove parameters one by one. Eventually, the Template parameter seemed to show up as a potential problem.

Hmm.

Inspecting the template I was intending to use, I noticed something - there was no OS Configuration section, therefore no ability to pass parameters such as Domain or DomainJoinCredential. Fixing this resolved the above error.

Now, on to simultaneous deployments!

Friday, 7 October 2016

What's up with my Config dude?

When you want to know the current configuration of a DSC node you use the very aptly named Get-DscConfiguration cmdlet like so:
PS C:\>Get-DscConfiguration -CimSession $target_node
This cmdlet will execute the Get-TargetResource function for every configured resource on the node and return details on its current state like so:

And life is good until you add a new resource to your configuration, push it to the target node and run the cmdlet again, only to be faced with an ugly exception like this:

I know what you're thinking, "say whaaaat!!". You just want to see what the current config is, and because of one dodgy resource you can't see anything - that sucks. However, the solution is quite simple. The words that stand out are "key is not a valid property in the corresponding DSC resource schema file". The preceding word (AvailabilityGroupNameDatabase in this case) is the property that the resource's Get-TargetResource function is trying to return but which is not defined in the schema.mof file for the resource. What you need to do is add the property to the schema.mof file or modify the Get-TargetResource function of the offending resource.

On learning new skills...


I'm your typical infrastructure guy - design the platform, run through the installers, use PowerShell to do post-setup and admin tasks, maybe script something to make a boring job easier - you know the drill. This DevOps thing is kinda cool to see working, but how does that fit in with what I'm trying to do in my projects? As it happens, it fits in really, really well. We already use Agile methods in our project delivery team, and the Ops team have just adopted Scrum as well. The DevOps mindset isn't terribly difficult to adopt when you're already used to delivering small improvements often. What is different about all of this, for me anyway, is the Dev part of DevOps.

I've had my PowerShell knowledge tested and expanded whilst working with this - first, by writing a deployment function to take what System Center Virtual Machine Manager does and making it fit into our deployment requirements and under source control; secondly, working on the SharePoint 2016 platform we've been tasked to deploy by using Desired State Configuration. I've had to learn about source control, about commits and why you attach them to work items, about injection, about the separation of infrastructure and application, the whole nine yards, and I'm not even halfway done, if you follow some of the guidance out there. 

One of the things I've learned during this entire process is that red in your PowerShell console isn't a Bad Thing. Not even close - it's usually helpful when dealing with a complex beast like SharePoint. Granted, some of the errors during testing haven't been PowerShell-based, and some have been really, really odd. 

For example, the Event Viewer on the primary application server logs Event ID 3351 in the Application Log (SQL Error 18456, State 5 (Invalid user ID) in the SQL logs), stating that the SP Farm account is a bad login. However, the account is present as a security principal in the SQL instance AND the SP_Config database. What? 

Turns out the issue was caused by the script execution speed beating Active Directory replication and adding in a tombstoned SID to the database. Slowing down the reset process during testing was all that we needed, but it caused a lot of head-scratching! 

But what we do is hard, right? As my esteemed colleague said to me, if you run it and it goes perfectly, first time - what did you learn? The fact that we can review and change the code, chipping away at the problem one red line at a time, that we can repeat it again and again and again until we get it right - I've found that it's really important. We learn by doing. 

I've been involved with this DevOps methodology for a month now, and I'm enjoying every minute of it. 

Wednesday, 5 October 2016

Do you know what a Paradigm Shift is?

Well, it'll take you 5 seconds to google it but here you go:
Paradigm Shift: a fundamental change in approach or underlying assumptions.

The first time I encountered that phrase was with the release of Visual Studio Team Edition for Database Professionals. This excellent VS add-in was the very first tool that enabled SQL developers and DBAs to put their database schemas under source control using an integrated development environment and to validate the schema before deploying it to a server! This was something that application developers had been working with for ages, and it gave the database the opportunity to join all stages of the software development life-cycle. The problem, however, was that SQL developers and DBAs were not accustomed to using source control, to 'compiling' their code or to 'kicking off a build'. What was needed was some significant upskilling and a complete change of mindset - they needed to start thinking like application developers.

DevOps is to infrastructure and operations guys the same kind of shift in thinking and skills. The things you need to learn are:

  • obviously PowerShell - including how to write advanced functions, modules and package management concepts
  • Desired State Configuration - including class-based custom resources, composite resources and composite configurations
  • Source Control - which should really be Git rather than TFS, if you want to be in with the crowd
  • Unit Testing with Pester
  • Continuous Integration
  • Release Management
...and that is not exhaustive. However, if you are thinking "no way José!" then just hold your horses dude, it ain't that bad. Just start with PowerShell and slowly build up your knowledge; within a few months you'll be cooking with gas!


As for the mind-shift, well that is really down to you and how adaptable you are to change. This is where I reckon most will struggle. Building a pipeline for automation is easy to understand (if not to implement) and difficult to argue against (as anything else must involve repetitive, error-prone manual effort), but continual, frequent releases and relying on open-source DSC resources to configure infrastructure? If all the other stuff hasn't pushed your neurons to the limit, then the perceived risk of these two approaches may just push you over. But that would be an error in judgement, because the risks involved in the alternative are far greater.

Think about it this way, which is riskier: to deploy a large change including months of effort using a mostly manual process that is difficult to repeat exactly or to deploy a small change using an automated, 100% repeatable process? No rocket scientist required.

And as for relying on community-based, open-source PowerShell resource modules to install and configure infrastructure...ask yourself which is better: to manually install software using wizards that are mostly inflexible and time consuming, or to use pre-written PowerShell scripts that are readable, easy to edit and much faster to use?

So far we have used about 10 DSC resource modules to deploy a clustered SQL Server and SharePoint, and so far I have found 4 syntax errors in 3 different modules - but the thing is, they are easy to find and to fix, and I have the opportunity to contribute to their improvement using GitHub. That is a powerful mechanism for improvement given the hundreds of thousands of PowerShell users there are in the world.

Monday, 26 September 2016

Pain-points Pester and Perseverence

Listen up dudes! We need to write some unit tests for all these PowerShell functions we have been writing. Why? Because we need to make sure our code does what we think it does and only what we think it does. Yeah, I know we test the functions manually when we use them but we are pretty limited in our ability to monitor what our code is doing under-the-hood unless we get some kind of verbose output. Even then, we don't know what we don't know. With the use of a testing framework (like Pester) we can test each function and isolate its inputs and outputs to make sure that thing does only what it is told!

If you have got this far, like I did a while ago, and you have watched a few intro videos (like this one https://www.youtube.com/watch?v=gssAtCeMOoo and this one https://www.youtube.com/watch?v=0fFrWIxVDl0 ) and you have downloaded Pester from GitHub (https://github.com/pester/Pester) and you have copied the extracted folder to the PowerShell modules directory; then you may just hit these issues working in Visual Studio:

  1. VS Test Runner fails your embryonic unit test because it can't find the Pester module:
    Check the name of the folder in the PS Module directory and make sure it is called "Pester" rather than "pester-master" which is what it extracts to
  2. VS Test Runner fails your test because PowerShell's Execution Policy doesn't allow Pester's scripts to run:
    The PS execution policy is causing the issue. Unblock the Pester ps1 files using the Unblock-File cmdlet.
  3. Your test doesn't show up in the VS Test Explorer
    Check that you are not using dynamic names for your Describe block, the test explorer doesn't know what to do with those.
Now to write some grown up unit tests...
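
For the record, the embryonic test that finally ran looked something like this (the function under test is made up purely for illustration, and the syntax is Pester 3's Should Be style):

# Get-Square.Tests.ps1 - a bare-bones Pester test, run with Invoke-Pester or the VS Test Runner
Describe 'Get-Square' {

    function Get-Square([int]$Number) { return $Number * $Number }

    It 'multiplies a number by itself' {
        Get-Square -Number 4 | Should Be 16;
    }
}

Note that the Describe name is a plain string - see issue 3 above.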

Thursday, 22 September 2016

Pulling teeth!

Ok, let me just say up front, the Pull Server rocks! The centralised management and scale that it delivers is great. It is all good dude, but...

When learning how to use a Pull Server to distribute node configuration you have to keep a few things in mind in order to make things work. This is what I have learnt so far:

  • The event log is your friend, get to know it well
    • There is a module available to help diagnose DSC issues called xDscDiagnostics - it is a good place to start.
  • Remember that publishing a new configuration to the pull server does not mean it is just going to get applied on the target the next time the target node LCM fires up, it depends what the state of the Pending Configuration is.
    • Remove-DscConfigurationDocument can be used to pull the plug on a misbehaving config
    • Removing a misbehaving config will not remove a misbehaving Resource (ermm, deleting the resource folder on the target node seems an easy way to fix this but not necessarily the right way)
      • You can delete a Resource from the module directory on the Target Node but the old one will still be in memory (it's cached) so you may have to bounce the node for it to start using the new one {I'm open to suggestions here}
      • I am playing with versioning to resolve this one too
      • DebugMode = 'All' breaks the LCM with Custom Resources that are classes :-(
So here is my plan of action for a Pull Server Scenario
  1. Configure the LCM on the Target Node for PULL
  2. Use Get-DscLocalConfigurationManager cmdlet to verify LCM settings
  3. Publish necessary Resources to Pull Server
  4. Publish the Target Node Configuration to the Pull Server (see the sketch just after this list)
  5. Use Update-DscConfiguration cmdlet to get LCM on Target Node to look for the configuration
  6. Use Get-DscLocalConfigurationManager cmdlet to check that LCM is Busy applying the configuration
  7. Use xDscDiagnostics to query the event log of the Target Node to see what is going on
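
For step 4, the publish amounts to little more than this - the GUID and paths are examples, and the GUID must match the ConfigurationID configured on the target node's LCM:

# copy the compiled MOF to the pull server's configuration store and generate its checksum
$guid        = '9f3c5e0a-0000-0000-0000-000000000000';   # example ConfigurationID
$pull_config = "$env:ProgramFiles\WindowsPowerShell\DscService\Configuration";

Copy-Item -Path '.\MyNodeConfig\MyNode.mof' -Destination (Join-Path $pull_config "$guid.mof");
New-DscChecksum -Path (Join-Path $pull_config "$guid.mof") -OutPath $pull_config -Force;
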
This is all good except for the last step. One thing I really like about the PUSH scenario is that you can use the -verbose and -wait parameters of the Start-DscConfiguration cmdlet to watch what is going on as it happens. I decided to write a function that would give me similar output in a PULL scenario so that I don't have to use different tools and techniques for diagnostics just because I changed from PUSH to PULL.

So far, this helps:

function Get-DscOperationsEventInfo
{
    Param(
        [String]$ComputerName = $env:COMPUTERNAME,
        
        [Int]$Range = 30
    )

    # ensure the remote event log firewall rules are enabled on the target
    [String]$firewall_group = 'Remote Event Log Management';
    if (Get-NetFirewallRule -CimSession $ComputerName -DisplayGroup $firewall_group | Where-Object { $_.Enabled -ne $true })
    {
        Write-Host "Enabling Firewall for $firewall_group on $ComputerName" -ForegroundColor Cyan;
        Enable-NetFirewallRule -CimSession $ComputerName -DisplayGroup $firewall_group;
    }

    # get all dsc events from the operational log
    $dsc_events = [System.Array](Get-WinEvent -ComputerName $ComputerName -LogName "Microsoft-Windows-Dsc/Operational");

    # keep only the events raised in the last $Range minutes, oldest first
    $low_boundary_date = (Get-Date).AddMinutes(-$Range);
    $filtered_dsc_events = $dsc_events | Where-Object { $_.TimeCreated -gt $low_boundary_date } | Sort-Object TimeCreated;

    # write errors in red, everything else in cyan
    foreach ($event in $filtered_dsc_events)
    {
        $output = "$($event.TimeCreated):`n`r$($event.Message)`n`r";
        if ($event.LevelDisplayName -eq 'Error')
        {
            Write-Host $output -ForegroundColor Red;
        }
        else
        {
            Write-Host $output -ForegroundColor Cyan;
        }
    }
}
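
And calling it is as simple as this (the computer name is one of my lab VMs):

PS   C:\>Get-DscOperationsEventInfo -ComputerName 'myvm' -Range 60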

Wednesday, 14 September 2016

Check it out...err I mean 'in', dude!

I love source control (not agape love or anything like it but I really do dig it). It gives me great peace of mind knowing that if I make a stupid mistake in my code (and let's face it, it's gonna happen) I can compare the current version of the code with any previous version and revert back if I need to.

Thing is, I don't like having to do that - I want my code to be sharp, readable, cool but most of all right. And I want this to be the case before I check it in (commit). The problem is that there is not always a bud available who has the time and is in the frame of mind to review my code when I need him\her to. Don't even think about scheduling code reviews, that is just a pain in the rectum - it messes with my mojo when someone interrupts me when I am in the zone just because 4:30 has arrived on their watch.

Then there is the problem of dudes who haven't read Code Complete and like to write 2000 line functions because they haven't learnt abstraction. Who wants to be the poor sod who has to review that mountain so that Harry fancy pants can do his check in? No thanks mate, I got a life.

Ok, so let's assume I've done my unit tests and I am happy with my code and am ready to check it in but want a second pair of eyes to quickly review it before I do. Well, this is DevOps baby, just automate it! "How?" you say. Well the most excellent dudes on the PowerShell Team have written a script analyzer that will do just that.

PS   C:\>Install-Module -Name PSScriptAnalyzer
Once you've got that installed you run the following command:

PS   C:\>Invoke-ScriptAnalyzer -Path $script_path


and presto, you get a list of best practice rules that you have violated. I had to google each one of them but there is usually an explanation handy that will tell you what to do to correct your code. And, if you don't agree with said advice you can tell the analyzer to exclude certain rules like so:

PS   C:\>Invoke-ScriptAnalyzer -Path $script_path -ExcludeRule PSUseShouldProcessForStateChangingFunctions
Neat huh?

Dude! Where's my config?

One of the first things I have learnt working with DSC is that the Local Configuration Manager (LCM) does things on a schedule and that it is quite a patient little service. This is most obvious in a Pull server setup.

I copy a config (and checksum file) to the Pull server and then wonder why nothing is happening on the target node. The first thing that comes to mind is "my Pull server ain't configured right". So, naturally I have a look at the LCM on the target node:

PS   C:\>Get-DscLocalConfigurationManager -CimSession $target_node
The LCM says it is configured for Pull and looks happy so why the heck is my configuration not being applied? Well, the answer is in the RefreshFrequencyMins property of the LCM (which is 30 by default). I could end up waiting for 30 minutes for anything to happen, or I could just tell the LCM to get on with it:

PS   C:\>Update-DscConfiguration -ComputerName $target_node
That will output some details on the job that is carrying out the work, which you can interrogate like so:

PS   C:\>Receive-Job -Id $job_id -keep
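
As an aside, Update-DscConfiguration (on WMF 5 at least) also takes -Wait and -Verbose, so if you would rather sit and watch the pull happen you can do this:

PS   C:\>Update-DscConfiguration -ComputerName $target_node -Wait -Verbose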

Tuesday, 13 September 2016

PS C:\>$Future = DevOps | where{$_.Attitude -eq $ForwardThinking}

Four years ago I sat in a conference room in Barcelona and listened to a Gartner analyst talk about the emerging practice of DevOps. I was excited because the conflict between developers and the infrastructure\operations guys was not new to me. I was sitting next to my manager, the Group IT Manager, and fully expected to have a really positive discussion about it after. We both walked away scratching our heads.

The problem with paradigm shifts in IT is that the success or uptake of the change really depends on who delivers the message. If it is couched in too much highfalutin mumbo-jumbo then the people who really make it happen (the folks doing the daily grind) just miss it and carry on as normal, or they get it wrong.

Today I believe I understand what DevOps is (at least from one perspective) and the thing that has helped me to understand it, is Powershell Desired State Configuration. DSC is the embodiment of declarative configuration automation for IT infrastructure. In learning how to build upon this simple, yet powerful framework I have learnt the reason to embrace DevOps.

DevOps, from the infrastructure dude's perspective, is the practice and discipline of delivering Infrastructure as Code, and it is made possible by the codification of infrastructure interfaces, which in the Microsoft world means PowerShell. By enabling infrastructure such as servers and virtual machines, databases and web services to be installed and configured through code, rather than by a mouse, IT professionals (deliberately generic) are able to write re-usable, scalable, robust code to manage their environments that is orders of magnitude more efficient and less error-prone than the manual approach. This practice then transforms infrastructure and operations people into 'developers' who gather requirements, design and develop code-based solutions, use source control and unit testing frameworks, and deploy their infrastructure in the same way that app developers do...hence the paradigm shift.

So, I think I get it...what now? Well, there is inevitably a learning curve involved here and I have started with the guy who invented PowerShell, Jeffrey Snover (@jsnover):



The 2 day DSC course on Microsoft Virtual Academy is a great place to start:
https://mva.microsoft.com/en-US/training-courses/getting-started-with-powershell-desired-state-configuration-dsc-8672