Saturday, 30 September 2017

Project Honolulu "as a server"


So with MS Ignite behind us and with some great new revelations, I thought I'd take a look at the newly announced "Project Honolulu".  Project Honolulu (which can be downloaded here) is going to be the replacement for Microsoft's administrative MMC snap-in tools like Device Manager, Disk Manager etc.  It's not replacing the RSAT tools or the System Center suite but complementing them, filling the void in simple and basic server management, particularly for Windows Server Core.  Before I begin, please remember that any thoughts/opinions in this blog are my own and are not influenced in any way, but feel free to comment - I love "nerd talk" - and any code posted comes "AS-IS"; you, the executor, are responsible for your own stuff.

Project Honolulu can be installed on Windows 10 as an application. This installs as a running process and relies on the application running in the background. The alternative method is to install Project Honolulu on a server, which installs two components: a gateway (collecting data from server nodes) and a webserver (presenting the web app).  Today I'm going to look at a server installation.
 
Environment
To get PH up and running in gateway mode you will need…. A server.

Host: Hyper-V
CPU: 2
Memory: 2-4GB Dynamic
HDD: 40GB vhdx
OS: Windows Server 2016

Simples!

Installation
My first instinct was to secure the webserver with my own enterprise CA so, using the remote Certificates snap-in (ironically), I generated a certificate request file. There are other methods, i.e. certreq.exe, but this for me was the quickest.  I logged onto my CA and completed the request, then imported the certificate into the server's personal certificate store.
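For anyone who prefers to stay in PowerShell, the same sort of request can be made with the Get-Certificate cmdlet. This isn't what I used above - it's just a sketch, and the template name, DNS name and store location are assumptions you would swap for your own environment:

# Request a web server certificate from the enterprise CA via AD enrollment.
# 'WebServer' is a hypothetical template name - substitute your own.
Get-Certificate -Template 'WebServer' `
                -DnsName 'honolulu.mydomain.local' `
                -CertStoreLocation 'Cert:\LocalMachine\My'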

Now I launched the MSI and agreed to the licensing agreement. I also agreed to allow my server to update its TrustedHosts list - this is needed for managing workgroup servers.  After this, I was prompted to provide a port for the web application and to either allow the application to generate a self-signed certificate or specify one of my own.  As I'd generated my own certificate, I chose to use that.
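If you need to fish the thumbprint out of the store, PowerShell will do it quickly (the subject filter below is just an example):

Get-ChildItem Cert:\LocalMachine\My |
    Where-Object { $_.Subject -like '*honolulu*' } |
    Select-Object Subject, Thumbprint, NotAfter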

I specified my preferred port, took the thumbprint of my generated certificate and entered it. Then I hit my first hurdle… :(

Hmmm? 

I attempted the install with other ports, both registered and ephemeral, but with no joy. With very little detail available on the webserver and its configuration, my troubleshooting was pretty limited.  Just as an off-chance (and with time against me) I decided to install the application using the built-in self-signed certificate option, which installed with no errors…. Odd - thoughts, @Microsoft?

After this, the installation went through with no further issues.  Upon completion I opened a web browser and browsed to https://<PH SERVER FQDN>:<CONFIGURED PORT>.  There's a short tour to skip through and that's it.



Add Managed Endpoints

At this point I am now ready to add endpoints to be managed.  By clicking on the plus sign on the right-hand side you can enter server details and the credentials to be used to manage it - awesome!  The PH server will then gather data from and manage the server using the credentials provided.

Alternatively, you can give the Honolulu server's AD object permission to manage the endpoints.  As the Honolulu service runs as NT SERVICE\Network Service, configuring the computer account with delegated permissions to manage the endpoints allows them to be managed automatically.   However, my immediate reaction was that there appears to be no sense of role-based access control, so if a Honolulu server has access to manage other servers on a network, sysadmins would quickly lose sight of what admins can do…. For example, a low-level summer intern would have the same abilities as a second-line support engineer.

This, thankfully, is not the case.  The NT SERVICE\Network Service account simply discovers the endpoints.  The administration of those endpoints is executed via remote PowerShell and WinRM, but the admin executing the commands still has their own credentials passed through.  So, as long as our low-level summer intern doesn't have the rights to shut down that business-critical ERP server, they won't be able to do it using Honolulu either.

To give the Honolulu server the delegated permissions needed to discover the endpoints, it needs to be added to each endpoint's PrincipalsAllowedToDelegateToAccount setting.  MS have kindly documented this here.  I, however, have taken it one step further.

$gateway = "HONOLULU SERVER"
$gatewayObject = Get-ADComputer -Identity $gateway

$nodeObjects = Get-ADComputer -Filter * -SearchBase "TOP LEVEL AD OU OF ENDPOINTS"

foreach ($nodeObject in $nodeObjects){

    Set-ADComputer -Identity $nodeObject -PrincipalsAllowedToDelegateToAccount $gatewayObject
    $output += ($nodeObject.DNSHostName + ",")
}
$filePath = $Env:USERPROFILE  + '\Desktop\UploadToHonolulu.txt'
$output | Out-File -FilePath $filePath -NoClobber

This PS script will add the Honolulu server to the PrincipalsAllowedToDelegateToAccount setting of every server in the desired OU, then take the FQDNs of all the endpoints and compile them into a .txt file output to the user's desktop.  Open the Honolulu console and add all the servers by importing the .txt file.

"Voila!"


Managing an End Point
Now that your endpoints have been discovered, simply click on one to see the vast amount of administration that can be done from this simple console.  Along with a simple but effective overview of the endpoint's performance, it's quick to see how this tool will help many sysadmins going forward.



Conclusion
It has been a while since Microsoft have developed a tool that fills a genuine gap.  Since the release of Windows Server Core, uptake has been slow; many companies lack the confidence to deploy it to production environments because of the learning curve involved in achieving simple/basic administration tasks on an endpoint.  The MMC snap-ins fill the majority of that gap, but not entirely, and they are clunky at best.  With Project Honolulu, sysadmins can now perform most (if not all) administration tasks from a web console….

Good Work Mr. Microsoft!


Wednesday, 13 September 2017

SSDT - to script or not to script!

I have been using SSDT for years, through its various incarnations, and I am a huge fan. I can say I have fond memories of Gert the Data Dude posting his way to blogger awesomeness and me being extremely grateful that he did. Gert has moved on to other parts of the Microsoft universe but the product has survived and seems to be a fully-fledged senior citizen in the Visual Studio landscape. Worryingly, Visual Studio has also started to entertain a new suitor, Red-Gate, and their devops offering is quite different from the model-based SSDT project...we shall see what happens there.

Anyway, the reason for the post is that I have just learned something rather interesting about how SSDT, VS, MSBuild and SqlPackage.exe co-operate to get scripts added to the beginning and end of a database change script.

The Requirement:
I have just started using tSQLt to write database unit tests after years of using the SQL Server Unit Test in the Visual Studio Test Project and my plan is to integrate the two different frameworks so that I can benefit from the fakes and assertion utilities in tSQLt but still have all my tests visible in the Visual Studio Test Explorer. I needed to have tSQLt deployed as part of the database project to make this happen and I wanted it to be extremely easy to upgrade tSQLt when a new version is released.

The Plan:
Add the tSQLt.class.sql downloaded from tSQLt.org as a Post-Deployment script and have the project decide whether to include it based on a project variable. Sounds simple but there is a catch - you can't simply add some conditional logic to the Post-Deployment script like this:

IF ('$(UseTestFramework)' = 'true')
:r .\tSQLt.class.sql

The Problem:
It would be nice if you could, but by adding a T-SQL statement around the :r directive, SSDT treats tSQLt.class.sql as embedded T-SQL and throws an exception because the file is crammed with GO statements. So you may try this:

:r .\$(TestFrameworkFileName).sql

In this case the sqlcmd variable value can be set differently in each environment's publish settings file, and for the environments where it is not needed an empty substitute file can be used. The problem is that SqlPackage.exe uses the DEFAULT value of the sqlcmd variable to evaluate the expression, not the value set in a publish settings file, so you end up with the same result whatever you do.

The Solution:
It is similar but with a twist: you need to set the value of a sqlcmd variable in the database project file using an MSBUILD variable that can be determined at build time. The legendary Gert describes the solution here: https://social.msdn.microsoft.com/Forums/sqlserver/en-US/745a2189-62db-4739-8437-8aac16affa05/conditional-post-deployment-for-unit-test?forum=ssdt

So, the steps to use conditional logic to determine if a particular script is included are quite simple:

  1. Add a database project variable named Configuration
  2. Unload the project
  3. Set the value of the variable to $(Configuration) - that's the MSBuild variable
  4. Reload the project
  5. Add a Debug.sql and a Release.sql file as post-deployment scripts
  6. Use the $(Configuration) sqlcmd variable in the post-deployment script to include the correct file based on the configuration of the build
The downside is that your optional execution paths are determined by the number of configuration types you have, rather than by the content of your publish settings file...but it is better than nothing!
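Putting it together end to end, the flow looks something like this - a sketch only; the project and profile names are made up and your SqlPackage.exe path may differ:

# Build the Release configuration; MSBuild passes Configuration into the project,
# which sets the default of the Configuration sqlcmd variable baked into the dacpac.
& msbuild .\MyDatabase.sqlproj /p:Configuration=Release

# Publish the dacpac; the post-deployment script resolves :r .\$(Configuration).sql
# to Release.sql (which, in this sketch, is the file that includes tSQLt.class.sql).
& "C:\Program Files (x86)\Microsoft SQL Server\130\DAC\bin\SqlPackage.exe" `
    /Action:Publish `
    /SourceFile:".\bin\Release\MyDatabase.dacpac" `
    /Profile:".\Test.publish.xml"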

Thursday, 10 August 2017

The Dreaded 403 PowerShell Remoting Error

If you have worked with PowerShell remoting then you will have seen this error before:

: Connecting to remote server BLAH failed with the following error message : The WinRM client received an HTTP status code of 403 from the remote WS-Management service. For more information...

It is not a happy message, especially when you have been using PowerShell to remotely manage that particular server for ages! So then you try remoting from another client and it works! You go back to your original client and try remoting to anything else and it fails...dohh! "But this worked just yesterday!" you scream.

Ahh, little things can make a big difference and in my case the issue was related to a VSTS Agent update that I did the day before. In order for the new version of the agent to communicate with VSTS in the cloud I needed to set a WinHTTP proxy. Once the agent was configured I could use a .proxy file in the agent directory instead...but I forgot to remove the WinHTTP proxy and in so doing broke PS Remoting.

Here is the story in a nutshell:


Without a proxy I can enter a remote PS session, but with one I cannot and I get the error - which sucks if you forgot that you had set the proxy. Best to remember that WinRM uses the HTTP protocol, so proxy settings matter.
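If you suspect this is what's going on, the machine-wide WinHTTP proxy is easy to check and clear from an elevated prompt:

# Show whether a WinHTTP proxy is configured on the box
netsh winhttp show proxy

# Remove it and go back to direct access
netsh winhttp reset proxy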

Later dudes.

Friday, 14 July 2017

MSDTC in Windows Core

Simple one this...

How can we enable inbound transactions in MSDTC on a core machine? Well we use PowerShell of course! :-)

PS C:\>Set-DtcNetworkSetting -InboundTransactionsEnabled $True
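And a quick sanity check that the setting took ("Local" is the default DTC instance):

PS C:\>Get-DtcNetworkSetting -DtcName "Local" | Select-Object InboundTransactionsEnabled, OutboundTransactionsEnabled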
Nice one.

Monday, 3 July 2017

The curious case of the missing stored procedure

We sometimes use the Biztalk WCF adapter to talk to SQL Server and for that we need there to be an endpoint stored procedure in the target database for BTS to execute. When the adapter needs to execute the procedure to get or post data to the database, it will first execute a metadata query to confirm the stored procedure's existence - this is unusual behavior and it can lead to some head scratching if the lights are not all on :-)

This is an example of the query that BTS executes:
exec sp_executesql N'SELECT sp.type AS [ObjectType], modify_date AS [LastModified]FROM sys.all_objects AS sp WHERE (sp.name=@ORIGINALOBJECTNAME and SCHEMA_NAME(sp.schema_id)=@ORIGINALSCHEMANAME);SELECT [param].name AS [ParameterName], usrt.name AS [DataType], SCHEMA_NAME(usrt.schema_id) AS DataTypeSchemaName, baset.name AS [SystemType], usrt.is_table_type as IsTableType, usrt.is_assembly_type as IsAssemblyType, CAST(CASE WHEN baset.name IN (N''nchar'', N''nvarchar'') AND param.max_length <> -1 THEN param.max_length/2 ELSE param.max_length END AS int) AS [Length], CAST(param.precision AS int) AS [NumericPrecision], CAST(param.scale AS int) AS [NumericScale], param.is_output AS [IsOutputParameter], AT.assembly_qualified_name AS AssemblyQualifiedName FROM sys.all_objects AS sp INNER JOIN sys.all_parameters AS param ON param.object_id=sp.object_id LEFT OUTER JOIN sys.types AS usrt ON usrt.user_type_id = param.user_type_id LEFT OUTER JOIN sys.types AS baset ON (baset.user_type_id = param.system_type_id and baset.user_type_id = baset.system_type_id) or ((baset.system_type_id = param.system_type_id) and (baset.user_type_id = param.user_type_id) and (baset.is_user_defined = 0) and (baset.is_assembly_type = 1))  LEFT JOIN sys.assembly_types AT ON AT.[name] = usrt.name AND AT.schema_id = usrt.schema_id WHERE (sp.type = ''P'' OR sp.type = ''RF'' OR sp.type=''PC'') AND (sp.name=@ORIGINALOBJECTNAME and SCHEMA_NAME(sp.schema_id)=@ORIGINALSCHEMANAME) ORDER BY param.parameter_id ASC; ',N'@ORIGINALOBJECTNAME nvarchar(12),@ORIGINALSCHEMANAME nvarchar(3)',@ORIGINALOBJECTNAME=N'MyEndPointProc',@ORIGINALSCHEMANAME=N'dbo'
Now, should it transpire that the adapter is unable to execute the endpoint stored procedure and an error is logged in Windows on the BTS server, you may want to confirm the following:

  1. Is Biztalk trying to connect to the right SQL Server? A SQL Profiler trace should show the above metadata query if it is
  2. Does the Biztalk service account have execute permission on the stored procedure?
You may well see the metadata query in a trace output and assume all is well but still end up with the following exception being raised on the Biztalk server which says the proc doesn't exist:


A message sent to adapter "WCF-Custom" on send port "Send Message to MyDb" with URI "mssql://MyServer//MyDb?MyEndPointProc" is suspended. 
 Error details: Microsoft.ServiceModel.Channels.Common.MetadataException: Object [dbo].[MyEndPointProc] of type StoredProcedure does not exist

Server stack trace: 
   at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)
   at System.ServiceModel.Channels.ServiceChannel.SendAsyncResult.End(SendAsyncResult result)
   at System.ServiceModel.Channels.ServiceChannel.EndCall(String action, Object[] outs, IAsyncResult result)
   at System.ServiceModel.Channels.ServiceChannel.EndRequest(IAsyncResult result)

Exception rethrown at [0]: 
   at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
   at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
   at System.ServiceModel.Channels.IRequestChannel.EndRequest(IAsyncResult result)
   at Microsoft.BizTalk.Adapter.Wcf.Runtime.WcfClient`2.RequestCallback(IAsyncResult result) 
 MessageId:  {69B1CA00-8526-4D70-A6D3-C82093BEC0A1}
 InstanceID: {1F3766A8-8AA9-435E-BFB6-2D785C8D34FB}
"What you talk'n about Willis!? The proc is there dude!!"

This happens because the Biztalk service account has permission to execute the metadata query but not the stored procedure, so the metadata query returns no records, Biztalk decides the proc isn't there and raises its own exception - rather than what you might expect, which is for Biztalk to try to execute the proc and have SQL Server generate a security exception (which would be obvious and easy to address).
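The fix, then, is simply to grant execute on the endpoint proc to the Biztalk host account. A rough sketch (the account name is made up, and it assumes the SqlServer PowerShell module is available):

Invoke-Sqlcmd -ServerInstance 'MyServer' -Database 'MyDb' -Query @"
GRANT EXECUTE ON OBJECT::[dbo].[MyEndPointProc] TO [MYDOMAIN\BtsHostSvc];
"@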

So, there you have it.

Later dudes!

Wednesday, 28 June 2017

Shared Database Code

If you use SQL Server Data Tools and deploy your database code using the SQL Server Database Project in Visual Studio, then read on.

If you need to deploy a database to multiple instances and have some objects only deploy to certain instances, read on.

If you didn't know that you can reference a database project within another database project and have the reference database schema included as part of the database, read on.

If you didn't know that you need to specifically configure your deployment to include composite objects when you have a 'composite' database project, you do now :-)
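For the record, when deploying with SqlPackage.exe the switch in question looks something like this (paths and names are made up; in Visual Studio it is the "Include composite objects" option in the advanced publish settings):

& "C:\Program Files (x86)\Microsoft SQL Server\130\DAC\bin\SqlPackage.exe" `
    /Action:Publish `
    /SourceFile:".\bin\Release\MyCompositeDb.dacpac" `
    /TargetServerName:"MyServer" `
    /TargetDatabaseName:"MyDb" `
    /p:IncludeCompositeObjects=True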

Later dudes!

Tuesday, 27 June 2017

Getting a DACPAC from a misbehaving database

We use SQL dacpacs for database references in our SQL Data Tools projects. Sometimes the database we need to reference is a vendor database with loads of issues in it, so if we try and extract a dacpac using management studio, it just bombs out with an ugly exception. I have seen all kinds of issues when doing this.

Your first thought may be to re-create the database using scripts and only include the objects that you need. That works but it's a bit of a pain to look after. What we really need is a way to extract the dacpac, warts and all, and the only way I know how to do that is by using the SqlPackage.exe command line tool.

Use the following to extract a simple dacpac for a reference:

& "C:\Program Files (x86)\Microsoft SQL Server\130\DAC\bin\SqlPackage.exe" /Action:Extract /SourceServerName:SQLCLUSTER3 /SourceDatabaseName:$(DbName) /TargetFile:"C:\Temp\$(DbName).dacpac"

Simples dudes!

Monday, 12 June 2017

ODBC, DSNs, SSIS Code Pages, metadata and BIML

The scenario:

GIVEN a SQL 2000 data source
AND a SQL 2016 destination
AND a metadata driven, BIML generated SSIS package to move data from source to destination
WHEN you try to build the SSIS package using an OLEDB connection
THEN SSIS says it can't connect because SQL 2000 is not supported

So, what do you do? Well obviously ODBC comes to mind and so you try that avenue (a la the ODBC Driver for SQL Server), only to find you're presented with another unfriendly message:



Now what? Well, simply use the SQL Server provider and that will work. SSIS is able to use it with an ODBC connection, so all is good, until you execute your package and get the most excellent of exceptions, the dreaded VS_NEEDSNEWMETADATA! Or, in my words "AAAAHHHHGGGRRRRHHH!!!" What just happened? My package built without any issues and didn't throw any warnings so why does this happen when I run it? Weird ODBC behavior I guess.

Turns out that the ODBC connection defaults to UNICODE (whereas the OLEDB defaults to ANSI). Now, in yesterday's world of hand-cranking your SSIS packages, you would just set the BindCharColumnAs property to ANSI and everything would just work. BIML doesn't give you the option to set this, so you need to find another way to use the connection if you want to automate the generation of the package.

Linked servers, my friend! Yes, by simply setting up a linked server from the target to the source SQL Server instance, we are able to access the source table via the linked server reference and only need to make a slight tweak to the BIML. Job's a good'n!
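I haven't shown the linked server definition itself, but one possible shape - the provider, DSN and server names below are assumptions on my part, a DSN-backed MSDASQL linked server being one way to reach a SQL 2000 box from a modern instance - is:

# Hypothetical sketch: a linked server on the 2016 target pointing at the 2000 source
# via a system ODBC DSN (DSN name, server names and provider are assumptions).
Invoke-Sqlcmd -ServerInstance 'Sql2016Target' -Query @"
EXEC master.dbo.sp_addlinkedserver
     @server     = N'LEGACY2000',
     @srvproduct = N'',
     @provider   = N'MSDASQL',
     @datasrc    = N'Sql2000Dsn';
"@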

Later dudes!


Friday, 19 May 2017

if(effort -neq time){velocity -eq effort\time}

I have been doing Scrum for years and have had some great teachers (Richard Hundhausen, Derek Davidson and even Jeff Sutherland - well, I read his book :-)) but one very important aspect of the framework has continued to haunt me until now.

I have always struggled to explain the difference between effort and work required and no matter how many times I suggested that effort was related to complexity I was never able to come up with an analogy or explanation that would suffice. The team would constantly revert to the assumption that effort meant how much of the sprint (therefore time) would be needed to complete the work; I would say, "no, that is what we use work remaining for" and the team would invariably reply "what's the point of effort then?".

The other day while explaining the importance of doing design in the planning meeting to a bunch of colleagues, one very astute colleague asked the question "won't we run out of time in the planning if we are getting into the detail of design in the meeting?". I realized that I had not explained the point of estimating effort for each backlog item and then it suddenly dawned on me...what does effort actually mean? Here is the quick google definition:










Ahha! Given that effort is about how much energy or determination is required to complete something, then it must be dependent on the amount of complexity and unknowns contained within the change; and the time it will take is dependent on the tasks required to complete it.

But the thing is we only know what tasks are involved (and therefore the work hours involved) after we have done planning (if we do it right) and by then we will have dealt with the complexity and unknowns through the design discussions had during planning.

So, it becomes obvious that the estimation of effort should only be used to determine what changes to attempt during planning. Once planning is done it is all down to the time estimated for each task to determine if the team has the capacity to complete the planned work.

If the team has estimated poorly, planning will bring it to light and if you don't plan properly the sprint will bring it to light. So if your team is constantly violating the burn down it can only be a result of poor planning.

"Simples Sergei!"

DNS -eq Remoting Pain

Just provisioned a bunch of nice shiny new VMs with Windows Server 2016 Core, joined to the domain, proxy set and Windows Updates applied. All ready for DSC except that I can't connect remotely! The following red text annoys me:













So I try a few things...

1) Try remoting from a VM on the same host - nada
2) Try removing the proxy - nada (just got a different exception)
3) Reboots - nada

So then I decided to talk to a network\infrastructure dude to get help (caus I is a dev).

A huff and a puff and some jiggery-pokery later, it turned out DNS was holding static records for the previous incarnations of the machines. You see, the VMs were shiny and new but their names weren't. Once the stale static DNS records were deleted I could remote! It adds weight to the argument that VMs should be named generically.
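For anyone hitting the same thing, the check and the fix are both quick (names below are hypothetical, and the removal needs the DnsServer module plus rights on the zone):

# Ask the DC what it thinks the name resolves to
Resolve-DnsName -Name 'newvm01.mydomain.com' -Server 'mydc01'

# Remove the stale static A record from the zone
Remove-DnsServerResourceRecord -ZoneName 'mydomain.com' -RRType 'A' -Name 'newvm01' -Force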

Shout out to Si for helping me with this :-)

Thursday, 4 May 2017

Ermm, where's the door?

This one is just for me :-)

Using git via the command line will often (but not always??) change the context of the command window to Vim, which is lovely and all but it is not that obvious how to get back to the normal command prompt. Sadly, 'Esc' on its own does not do it; you have to press Esc, then type :wq (write and quit) - or :q! to bail without saving - and hit Enter.

Not that intuitive but hey...
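If you'd rather not land in Vim at all, you can point git at a different editor - a matter of taste, but handy to know:

# Make git open Notepad (or your editor of choice) for commit messages and the like
git config --global core.editor "notepad"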

Kicked to the 'kurb'

Ever had that feeling that gremlins are at work? Keep going round in circles and start contemplating a career change? Well...

I have been struggling with getting Reporting Services 2016 configured to use Windows Authentication, a la Kerberos (as my previous post alluded to). In the past this has been a relatively trivial task but this last week I have found new depths of despair and frustration. So let me explain the scenario:

GIVEN a reporting services server that is built by DSC
AND the ReportServer service is running on port 80
AND the ReportServer.config file has been modified (see docs)
WHEN the HTTP/mymachine.mydomain.com SPN is added for myaccount
THEN Windows Authentication is enabled for SSRS :-)
AND PowerShell remote sessions (WinRM) is broken :-(











So, thinking I was clever, I simply added port 80 to the SPN definition, which fixed WinRM and broke SSRS. That sucks! It seems you can only have one or the other.
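For reference, the two shapes of SPN in play look like this with setspn.exe (the machine and account names are the placeholders from above):

# SSRS on port 80 happy, WinRM broken
setspn -S HTTP/mymachine.mydomain.com MYDOMAIN\myaccount

# WinRM happy, SSRS broken
setspn -S HTTP/mymachine.mydomain.com:80 MYDOMAIN\myaccount

# See what's currently registered against the account
setspn -L MYDOMAIN\myaccount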

Tuesday, 7 February 2017

Database Stats and Stuff

I thought I knew how SQL Server manages statistics and started writing some maintenance procedures to augment the built-in stats maintenance. All was well until I sat down to test my code and applied the principles of black-box testing. This made me think about what I needed to happen rather than what I was expecting to write in the implementation of the code. I wrote the following acceptance criteria:

Table statistics are updated whenever the number of records in a table increases by a percentage defined at the instance level.

Simple enough, so I started by investigating how SQL does things by default and I came across this excellent SQLBits presentation by Maciej Pilecki: https://sqlbits.com/Sessions/Event7/Lies_Damned_Lies_And_Statistics_Making_The_Most_Out_of_SQL_Server_Statistics

After watching this I realised that what my code needed to do was provide a mechanism for overriding the default threshold for updating stats, not simply schedule routine stats update statements. Maciej also makes it clear that the rate and volume of change in a table is a far better indicator of stale stats than STATS_DATE, which is what I have always used, and that the rate of change can be calculated from the [modification_counter] column returned by the sys.dm_db_stats_properties function.

So, the result of this is that my code now simply checks, every hour, whether any tables have exceeded a defined percentage change threshold (stored as an extended property), rather than relying on the default of 500 records + 20% (which for very large tables means that your stats won't get updated very often).
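To give a feel for the kind of check involved, here is a rough sketch of that query - illustrative only, not my production code; the instance/database names are placeholders and the threshold handling is simplified:

Invoke-Sqlcmd -ServerInstance 'MyInstance' -Database 'MyDb' -Query @"
SELECT  OBJECT_NAME(s.object_id) AS TableName,
        s.name                   AS StatsName,
        sp.rows,
        sp.modification_counter,
        CAST(100.0 * sp.modification_counter / NULLIF(sp.rows, 0) AS decimal(9,2)) AS PercentChanged
FROM    sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE   sp.modification_counter > 0
ORDER BY PercentChanged DESC;
"@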

I don't think I have bottomed this one out yet but the SQLBits presentation really helped so I thought I'd share it.

Later Dudes and Dudettes!

Friday, 20 January 2017

What's that saying about assumptions?

I wonder if there is a word that invokes more feelings of frustration for an IT dude than kerberos?

This shouldn't be the case, because there are a great number of useful resources out there to help you understand and troubleshoot the issues that occur in kerberos land. However, none of these troubleshooting guides remind you of one very important thing: be patient!

This is because kerberos is a security protocol that deals with distributed objects and services that involve synchronization and expiration and unless your name is Mark Russinovich and you have every Windows related command at your disposal, you are sometimes just going to have to wait a bit for certain things to synchronize or expire.

Don't assume that adding an SPN will immediately cause kerberos authentication to start working, and do read the documentation (that last bit was for me).

Yesterday I was working on a custom DSC resource for adding SPNs and my code ended up generating a couple of really weird looking SPNs which not only failed to enable kerberos authentication but also disabled my ability to establish a PowerShell CIM session with the machine in question. I removed the dubious SPNs but still could not start a CIM session. I was ready to trash the machine and rebuild it because I assumed it was toast but a still quiet voice reminded me that I would not learn anything by doing so. I thought "tomorrow is another day" and left it.

When I tried again today everything was back to normal and I realised what had happened: the domain controllers had synchronized, the old kerberos tickets had expired and my machine was back to normal. I added the correct SPNs, waited a few minutes and all was good in the land of kerberos :-)
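A couple of bog-standard commands can take some of the waiting (and guessing) out of this - not something I did at the time, just worth keeping handy:

# Flush the current user's cached Kerberos tickets so fresh ones get requested
klist purge

# List the SPNs actually registered against the service account (placeholder name)
setspn -L MYDOMAIN\myserviceaccount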

This helped:


and this was the documentation I referred to:
https://msdn.microsoft.com/library/cc281253.aspx

Later dudes

Monday, 9 January 2017

From Tree to Tea

I recently wrote some help comments for a PowerShell function and needed to show an example directory structure, and because I have, ermm, issues, I wanted a way to do it exactly the same in all my help comments. So, the inevitable Google search began and I soon discovered tree.exe, which did the trick.

I am blogging about this because tree.exe comes with a nasty little issue that I want to remember (and you can benefit - how nice of me ;-)). Running the following produces a tree structure with garbage encoding:

PS:\>Tree $Path /F | Out-File C:\Temp\tree.txt




And adding the -Encoding parameter doesn't help. The only way I managed to make this useful was to pipe the tree output to the clipboard and paste it into Notepad, like so:

PS:\>Tree $Path /F | Clip


Result!
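Another option worth a mention (it sidesteps the encoding issue rather than fixing it): tree's /A switch draws the structure with plain ASCII characters, so the redirected file is readable whatever the code page:

PS:\>Tree $Path /F /A | Out-File C:\Temp\tree.txt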

This stackoverflow thread helps explain the issue nicely: http://stackoverflow.com/a/139302

Friday, 6 January 2017

SQL Service SIDs

This is a new one to me...

I set up an SMB Share on the Backup directory for a SQL Server 2016 instance (using DSC of course) and assigned the SQL Server AD account full access, only to find that I was getting an "Access Denied" exception when I tried backing up using the share but not when backing up using the local path. This made no sense because the AD account had full access to the folder and the share (or so I thought).

However, Microsoft has been doing stuff with security accounts and promoting Managed Service Accounts as a best practice so I figured "what the heck" and tried replacing the AD account with the SQL Server Service (i.e. NT Service\MSSQLSERVER) on the share...and what do you know, it worked.

Turns out that the SID for the AD account and the SID for the SQL Server service are not the same, so even though the account does have access to both the folder and the share, it only has access to the folder via the SQL Server service. To sort this out you need to use the same principal for both - either the SQL Server service or the AD account - but you can't mix the two.
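If you want to grant the per-service SID on both the share and the folder from PowerShell, it looks something like this (the share name and path are made up):

# Share permission for the SQL Server service SID
Grant-SmbShareAccess -Name 'SqlBackups' -AccountName 'NT SERVICE\MSSQLSERVER' -AccessRight Full -Force

# NTFS permission on the underlying folder
$acl  = Get-Acl 'E:\SQLBackups'
$rule = New-Object -TypeName System.Security.AccessControl.FileSystemAccessRule -ArgumentList `
            'NT SERVICE\MSSQLSERVER', 'FullControl', 'ContainerInherit,ObjectInherit', 'None', 'Allow'
$acl.AddAccessRule($rule)
Set-Acl -Path 'E:\SQLBackups' -AclObject $acl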

IT huh, almost 20 years on and still learning the basics. Later dudes!