We use SQL Server Data Tools almost daily and have done so since 2005, when Microsoft released Visual Studio for Database Professionals. It was the first tool to put SQL databases under source control using a fully integrated IDE. Not only that, it also validated our code during the build by deploying it to a SQL Express instance that was installed as part of the tool. For the first time we could start treating database code in the same way as our .Net assembly code and the like.
So you can imagine the look on my face yesterday when, while I was making a change to a SQL Server 2016 database project, SSDT decided to throw an exception because I had used the STRING_SPLIT() function in my code. After all these years of trusting the tool to keep me right, it was now telling me lies because it didn't recognize the function. I had tested my code in Management Studio and knew that it was definitely valid, so I assumed that SSDT was not fully compliant with the latest SQL version.
I checked my project properties to ensure that the Target platform was set to SQL Server 2016 and Compatibility level set to 130. I did another build just to make sure but sure enough, the exception persisted:
Either the object does not exist or the reference is ambiguous because it could refer to any of the following objects: [dbo].[STRING_SPLIT].[value] or [STRING_SPLIT].[value].
I googled, naturally, and found this: https://connect.microsoft.com/SQLServer/feedback/details/2906200/ssdt-failing-to-compile-if-string-split-is-used
Which, to save you the read, basically says 'yup, we know it's a problem', and at the bottom notes that it has been fixed in the latest release of SSDT. Phew! I downloaded the latest version for VS 2015 and, whaddayaknow, it works!
Later Dudes!
Saturday, 30 September 2017
Project Honolulu "as a server"
So with MS Ignite behind us and with some great new revelations, I thought I'd take a look at the newly announced "Project Honolulu". Project Honolulu (which can be downloaded here) is going to be the replacement for Microsoft's administrative MMC snap-in tools like Device Manager, Disk Manager etc. It's not replacing the RSAT tools or the System Center suite, but complementing them and filling the void in simple and basic server administration, particularly for Windows Server Core. Before I begin, please remember that any thoughts/opinions in this blog are my own and are not influenced in any way, but feel free to comment - I love "nerd talk" - and any code posted comes "AS-IS"; you, the executor, are responsible for your own stuff.
Project Honolulu can be installed on Windows 10 as an application. It then runs as a process and relies on the application running in the background. The alternative method is to install Project Honolulu on a server, where it installs as two components: a gateway (collecting data from server nodes) and a web server (presenting the web app). Today I'm going to look at a server installation.
Environment
To get PH up and running in gateway mode you will need…. A server.
Host: Hyper-V
CPU: 2
Memory: 2-4GB Dynamic
HDD: 40GB vhdx
OS: Windows Server 2016
Simples!
Installation
My first instinct was to secure the web server with my own enterprise CA, so, using the remote Certificates snap-in (ironically), I generated a certificate request file. There are other methods, e.g. certreq.exe, but this for me was the quickest. I logged onto my CA and completed the request, then imported the certificate into the server's personal certificate store.
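As an aside, if you prefer to do the same from a console, a rough sketch of the certreq.exe route is below - the subject name and certificate template are assumptions, so substitute your gateway's FQDN and whatever web server template your CA publishes:

@"
[NewRequest]
Subject = "CN=honolulu.mydomain.local"
KeyLength = 2048
KeySpec = 1
MachineKeySet = TRUE
Exportable = FALSE
[RequestAttributes]
CertificateTemplate = WebServer
"@ | Out-File .\honolulu.inf -Encoding ascii

certreq -new .\honolulu.inf .\honolulu.req      # build the request file
certreq -submit .\honolulu.req .\honolulu.cer   # submit to the enterprise CA (prompts for a CA if there is more than one)
certreq -accept .\honolulu.cer                  # install the issued certificate into the machine's Personal store

# grab the thumbprint ready to paste into the Honolulu installer
Get-ChildItem Cert:\LocalMachine\My | Select-Object Subject, Thumbprint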
Now I launched the MSI and agreed to the licensing agreement. I also agreed to allow my server the ability to update its TrustedHosts list - this is needed for managing workgroup servers. After this, I was prompted to provide a port for the web application and to either allow the application to generate a self-signed certificate or specify one of my own. As I'd generated my own certificate, I chose to use that.
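Incidentally, if you ever want to check or tweak that TrustedHosts entry yourself, the WSMan drive exposes it directly - a minimal sketch (the server name below is just an example):

Get-Item WSMan:\localhost\Client\TrustedHosts                                       # see the current list
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "workgroup-svr01" -Concatenate # append a workgroup server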
I specified my preferred port, entered the thumbprint of my generated certificate, and then hit my first hurdle… :(
Hmmm?
I attempted to install with other ports, registered and ephemeral, but with no joy. With very little detail available on the web server and its configuration, my troubleshooting was pretty limited. Just on the off-chance (and with time against me) I decided to install the application using the built-in self-sign function, which installed with no errors…. Odd - thoughts, @Microsoft?
After this, the installation went through with no further issues. Upon completion I opened a web browser and browsed to https://<PH SERVER UNC>:<CONFIGURED PORT>. There's a short tour to skip through and that's it.
Add Managed Endpoints
At this point I am ready to add endpoints to be managed. By clicking on the plus sign on the right-hand side you can enter server details and the credentials to be used to manage it - awesome! At which point the PH server will gather data and manage the server using the credentials provided.
Alternatively, you can give the Honolulu server's AD object permission to manage the endpoints. As the Honolulu service runs as NT SERVICE\Network Service, configuring the computer account with delegate permissions to manage the endpoints allows them to be managed automatically. However, my immediate reaction to this was that there appears to be no sense of role-based access control, so if a Honolulu server has access to manage other servers on a network, sysadmins would quickly lose sight of what admins can do…. For example, a low-level summer intern would have the same abilities as a second-line support engineer.
This, thankfully, is not the case. The NT SERVICE\Network Service account simply discovers the endpoints. The administration of those endpoints is executed via remote PowerShell and WinRM, but the admin executing the commands still has their own credentials passed through. So, as long as our low-level summer intern doesn't have the rights to shut down that business-critical ERP server, they won't be able to do it using Honolulu.
To give the Honolulu server the delegated permissions to discover the endpoints, the Honolulu server needs to be added to each endpoint's PrincipalsAllowedToDelegateToAccount setting. MS have kindly documented this here. I, however, have taken it one step further.
$gateway = "HONOLULU SERVER"
$gatewayObject = Get-ADComputer -Identity $gateway
$nodeObjects = Get-ADComputer -Filter * -SearchBase "TOP LEVEL AD OU OF ENDPOINTS"
$output = ''
foreach ($nodeObject in $nodeObjects)
{
    # allow the Honolulu gateway computer account to delegate to this endpoint
    Set-ADComputer -Identity $nodeObject -PrincipalsAllowedToDelegateToAccount $gatewayObject
    # build a comma-separated list of FQDNs for the Honolulu import file
    $output += ($nodeObject.DNSHostName + ",")
}
$filePath = $Env:USERPROFILE + '\Desktop\UploadToHonolulu.txt'
$output | Out-File -FilePath $filePath -NoClobber
This PS script will add the Honolulu server to the delegation setting of all servers in a desired OU, then take the FQDNs of all the endpoints, compile them into a .txt file and output it to the user's desktop. Open the Honolulu console and add all the servers by importing the .txt file.
"Voila!"
Managing an End Point
Now that your endpoints have been discovered, simply click on one to see the vast amount of administration that can be done from this simple console. Along with a simple but effective overview of the endpoint's performance, it's quick to see how this tool will help many sysadmins going forward.
Conclusion
It has been a while since Microsoft have developed a tool that fills a genuine gap. Since Windows Server Core appeared, the number of companies having the confidence to deploy it to production environments has been low due to the large learning curve needed to achieve simple/basic administration tasks on an endpoint. The MMC snap-ins fill the majority of that gap, but not entirely, and they are clunky at best. With Project Honolulu, sysadmins can now perform most (if not all) administration tasks from a web console….
Good Work Mr. Microsoft!
Further info found here:
Wednesday, 13 September 2017
SSDT - to script or not to script!
I have been using SSDT for years, through its various incarnations, and I am a huge fan. I can say I have fond memories of Gert the Data Dude posting his way to blogger awesomeness and me being extremely grateful that he did. Gert has moved on to other parts of the Microsoft universe but the product has survived and seems to be a fully-fledged senior citizen in the Visual Studio landscape. Worryingly, Visual Studio has also started to entertain a new suitor, Red-Gate, and their devops offering is quite different from the model-based SSDT project...we shall see what happens there.
Anyway, the reason for the post is that I have just learned something rather interesting about how SSDT, VS, MSBuild and SqlPackage.exe co-operate to get scripts added to the beginning and end of a database change script.
The Requirement:
I have just started using tSQLt to write database unit tests after years of using the SQL Server Unit Test in the Visual Studio Test Project and my plan is to integrate the two different frameworks so that I can benefit from the fakes and assertion utilities in tSQLt but still have all my tests visible in the Visual Studio Test Explorer. I needed to have tSQLt deployed as part of the database project to make this happen and I wanted it to be extremely easy to upgrade tSQLt when a new version is released.
The Plan:
Add the tSQLt.class.sql downloaded from tSQLt.org as a Post-Deployment script and have the project decide whether to include it based on a project variable. Sounds simple, but there is a catch - you can't simply add some conditional logic to the Post-Deployment script like this:
IF ('$(UseTestFramework)' = 'true')
:r .\tSQLt.class.sql
The Problem:
It would be nice if you could, but by adding a T-SQL statement to the script, SSDT treats the referenced tSQLt.class.sql as embedded T-SQL and throws an exception because the file is crammed with GO statements. So you may try this:
:r .\$(TestFrameworkFileName).sql
In this case the sqlcmd variable value can be set differently in each environment's publish settings file, and for the environments where it is not needed an empty substitute file can be used. The problem is that SqlPackage.exe uses the DEFAULT value of the sqlcmd variable to evaluate the expression, not the value set in a publish settings file; so you end up with the same result whatever you do.
The Solution:
It is similar but with a twist: you need to set the value of a sqlcmd variable in the database project file using an MSBUILD variable that can be determined at build time. The legendary Gert describes the solution here: https://social.msdn.microsoft.com/Forums/sqlserver/en-US/745a2189-62db-4739-8437-8aac16affa05/conditional-post-deployment-for-unit-test?forum=ssdt
So, the steps to use conditional logic to determine if a particular script is included are quite simple:
- Add a database project variable named Configuration
- Unload the project
- Set the value of the variable to $(Configuration) - that's the MSBuild variable
- Reload the project
- Add a Debug.sql and a Release.sql file as post-deployment scripts
- Use the $(Configuration) sqlcmd variable in the post-deployment script to include the correct file based on the configuration of the build
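To make that concrete, here is a minimal sketch (with a made-up project name) of how the configuration flows through: the configuration you build with becomes the value of the Configuration sqlcmd variable baked into the dacpac, which in turn decides whether Debug.sql or Release.sql gets pulled into the post-deployment script:

# hypothetical project name - build once per configuration
msbuild .\MyDatabase.sqlproj /p:Configuration=Debug     # dacpac's post-deployment includes Debug.sql (deploys tSQLt)
msbuild .\MyDatabase.sqlproj /p:Configuration=Release   # dacpac's post-deployment includes Release.sql (no test framework)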
The downside is that your optional execution paths are determined by the number of configuration types you have, rather than by the content of your publish settings file...but it is better than nothing!
Thursday, 10 August 2017
The Dreaded 403 PowerShell Remoting Error
If you have worked with PowerShell remoting then you will have seen this error before:
: Connecting to remote server BLAH failed with the following error message : The WinRM client received an HTTP status code of 403 from the remote WS-Management service. For more information...
It is not a happy message, especially when you have been using PowerShell to remotely manage the particular server for ages! So then you try remoting from another client and it works! You go back to your original client and try remoting to anything else and it fails...dohh! "But this worked just yesterday!" you scream.
Ahh, little things can make a big difference and in my case the issue was related to a VSTS Agent update that I did the day before. In order for the new version of the agent to communicate with VSTS in the cloud I needed to set a WinHTTP proxy. Once the agent was configured I could use a .proxy file in the agent directory instead...but I forgot to remove the WinHTTP proxy and in so doing broke PS Remoting.
Here is the story in a nutshell:
Without a proxy I can enter a remote PS session, but with one I cannot and I get the error - which sucks if you've forgotten that you set the proxy. Best to remember that WinRM uses the HTTP protocol, so proxy settings matter.
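If the same thing has bitten you, the machine-wide proxy is quick to check and clear (the proxy address below is just an example):

netsh winhttp show proxy                     # is a machine-wide WinHTTP proxy set?
netsh winhttp reset proxy                    # clear it and WinRM traffic goes direct again
# or, if something still needs the proxy, at least bypass local addresses:
netsh winhttp set proxy proxy-server="myproxy.mydomain.local:8080" bypass-list="<local>"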
Later dudes.
Friday, 14 July 2017
MSDTC in Windows Core
Simple one this...
How can we enable inbound transactions in MSDTC on a core machine? Well we use PowerShell of course! :-)
PS C:\>Set-DtcNetworkSetting -InboundTransactionsEnabled $True
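And to confirm the change has taken effect, something like this should do it (DtcName defaults to the local instance):

PS C:\>Get-DtcNetworkSetting -DtcName Local | Select-Object InboundTransactionsEnabled, OutboundTransactionsEnabled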
Nice one.
Monday, 3 July 2017
The curious case of the missing stored procedure
We sometimes use the Biztalk WCF adapter to talk to SQL Server and for that we need there to be an endpoint stored procedure in the target database for BTS to execute. When the adapter needs to execute the procedure to get or post data to the database, it will first execute a metadata query to confirm the stored procedure's existence - this is unusual behavior and it can lead to some head scratching if the lights are not all on :-)
This is an example of the query that BTS executes:
exec sp_executesql N'SELECT sp.type AS [ObjectType], modify_date AS [LastModified]
FROM sys.all_objects AS sp
WHERE (sp.name=@ORIGINALOBJECTNAME and SCHEMA_NAME(sp.schema_id)=@ORIGINALSCHEMANAME);
SELECT [param].name AS [ParameterName], usrt.name AS [DataType], SCHEMA_NAME(usrt.schema_id) AS DataTypeSchemaName,
    baset.name AS [SystemType], usrt.is_table_type as IsTableType, usrt.is_assembly_type as IsAssemblyType,
    CAST(CASE WHEN baset.name IN (N''nchar'', N''nvarchar'') AND param.max_length <> -1 THEN param.max_length/2 ELSE param.max_length END AS int) AS [Length],
    CAST(param.precision AS int) AS [NumericPrecision], CAST(param.scale AS int) AS [NumericScale],
    param.is_output AS [IsOutputParameter], AT.assembly_qualified_name AS AssemblyQualifiedName
FROM sys.all_objects AS sp
INNER JOIN sys.all_parameters AS param ON param.object_id=sp.object_id
LEFT OUTER JOIN sys.types AS usrt ON usrt.user_type_id = param.user_type_id
LEFT OUTER JOIN sys.types AS baset ON (baset.user_type_id = param.system_type_id and baset.user_type_id = baset.system_type_id) or ((baset.system_type_id = param.system_type_id) and (baset.user_type_id = param.user_type_id) and (baset.is_user_defined = 0) and (baset.is_assembly_type = 1))
LEFT JOIN sys.assembly_types AT ON AT.[name] = usrt.name AND AT.schema_id = usrt.schema_id
WHERE (sp.type = ''P'' OR sp.type = ''RF'' OR sp.type=''PC'')
    AND (sp.name=@ORIGINALOBJECTNAME and SCHEMA_NAME(sp.schema_id)=@ORIGINALSCHEMANAME)
ORDER BY param.parameter_id ASC;',
N'@ORIGINALOBJECTNAME nvarchar(12),@ORIGINALSCHEMANAME nvarchar(3)',
@ORIGINALOBJECTNAME=N'MyEndPointProc',@ORIGINALSCHEMANAME=N'dbo'
Now, should it transpire that the adapter is unable to execute the endpoint stored procedure and an error is logged in Windows on the BTS server, you may want to confirm the following:
- Is Biztalk trying to connect to the right SQL Server? A SQL Profiler trace should show the above metadata query if it is
- Does the Biztalk service account have execute permission on the stored procedure?
A message sent to adapter "WCF-Custom" on send port "Send Message to MyDb" with URI "mssql://MyServer//MyDb?MyEndPointProc" is suspended.
Error details: Microsoft.ServiceModel.Channels.Common.MetadataException: Object [dbo].[MyEndPointProc] of type StoredProcedure does not exist
Server stack trace:
at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)
at System.ServiceModel.Channels.ServiceChannel.SendAsyncResult.End(SendAsyncResult result)
at System.ServiceModel.Channels.ServiceChannel.EndCall(String action, Object[] outs, IAsyncResult result)
at System.ServiceModel.Channels.ServiceChannel.EndRequest(IAsyncResult result)
Exception rethrown at [0]:
at System.Runtime.Remoting.Proxies.RealProxy.HandleReturnMessage(IMessage reqMsg, IMessage retMsg)
at System.Runtime.Remoting.Proxies.RealProxy.PrivateInvoke(MessageData& msgData, Int32 type)
at System.ServiceModel.Channels.IRequestChannel.EndRequest(IAsyncResult result)
at Microsoft.BizTalk.Adapter.Wcf.Runtime.WcfClient`2.RequestCallback(IAsyncResult result)
MessageId: {69B1CA00-8526-4D70-A6D3-C82093BEC0A1}
InstanceID: {1F3766A8-8AA9-435E-BFB6-2D785C8D34FB}
"What you talk'n about Willis!? The proc is there dude!!"This happens because the Biztalk service account has permissions to execute the metadata query but not the stored procedure, so the metadata query returns no records and Biztalk thinks it isn't there and raises it's own exception rather than what you might expect which for Biztalk to try and execute the proc and cause SQL Server to generate a security exception (which would be so obvious and easy to address).
So, there you have it.
Later dudes!
Wednesday, 28 June 2017
Shared Database Code
If you use SQL Server Data Tools and deploy your database code using the SQL Server Database Project in Visual Studio, then read on.
If you need to deploy a database to multiple instances and have some objects only deploy to certain instances, read on.
If you didn't know that you can reference a database project from within another database project and have the referenced database's schema included as part of the database, read on.
If you didn't know that you need to specifically configure your deployment to include composite objects when you have a 'composite' database project, you do now :-)
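For illustration, a command-line publish of a composite project would need the IncludeCompositeObjects property set - a sketch with made-up server and database names (in Visual Studio the equivalent is the 'Include composite objects' checkbox in the advanced publish settings):

# publish the dacpac and include objects from any referenced ("composite") database projects
SqlPackage.exe /Action:Publish `
    /SourceFile:".\bin\Release\MyCompositeDb.dacpac" `
    /TargetServerName:"MyServer" `
    /TargetDatabaseName:"MyCompositeDb" `
    /p:IncludeCompositeObjects=True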
Later dudes!