
One thing I like to do as part of my applications that are running is to have an information page that gives some basic information about the runtime environment the application is running under.

For example, here’s what I add to my info page in my AlbumViewer Sample:

I find it useful – especially these days with SDKs and runtimes switching so frequently – to be able to quickly determine what versions the application actually runs under and where it’s hosted (Windows/Linux/container) without having to actually look at the source code.

This should be easy, right? Well, it may not be difficult, but obvious it is not.

Getting the Runtime Version

You would think it would be easy to get runtime version information – after all the runtime is… well… it’s running. But nooooo… Microsoft has never made it easy to get proper runtime version information that’s suitable for display in an application. Hell, in full framework you had to resort to checking the registry and then translating magic partial version numbers to an official release version number (like 4.7.1). You know you’re doing it wrong when you can’t tell what version of a runtime you have installed without looking in the registry and then consulting an obscure lookup table to resolve the version number everyone actually expects to see.
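For reference, here’s roughly what that registry dance looks like on the full framework – a minimal sketch using an abbreviated version of Microsoft’s documented Release-to-version lookup table (Windows only; check the official table for the complete list of magic numbers):

using Microsoft.Win32;

public static class NetFxVersion
{
    public static string Get()
    {
        const string subkey = @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full";
        using (var key = Registry.LocalMachine.OpenSubKey(subkey))
        {
            int release = (int?)key?.GetValue("Release") ?? 0;

            // magic numbers from the official release table (abbreviated)
            if (release >= 461808) return "4.7.2 or later";
            if (release >= 461308) return "4.7.1";
            if (release >= 460798) return "4.7";
            if (release >= 394802) return "4.6.2";
            return "4.6.1 or earlier";
        }
    }
}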

This trend continues in .NET Core. There’s no direct API that returns a version number like 1.1, 2.0 or 2.1 etc. Because why make the obvious easy?

There are lots of APIs that you might think would work, but they either don’t return anything or return some unrelated version number – like the system-installed full framework version, but not the version of the runtime that’s actually running.

I’m not going to bore you with all the false starts I’ve had here. If you want to find the .NET Core Runtime that your application is targeting, you can look at the TargetFrameworkAttribute in your application’s startup assembly:

var framework = Assembly
    .GetEntryAssembly()?
    .GetCustomAttribute<TargetFrameworkAttribute>()?
    .FrameworkName;

var stats = new
{                
    OsPlatform = System.Runtime.InteropServices.RuntimeInformation.OSDescription,
    AspDotnetVersion = framework
};

It seems pretty hacky, but it should be fairly reliable since every application has to have a TargetFramework associated with it. This value comes from the project file:

<Project Sdk="Microsoft.NET.Sdk.Web">
  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>
</Project>

and the project build process turns that into an attribute attached to the startup assembly.

It’s a shame that this isn’t exposed more logically, like the property that actually exists for this purpose but is not set:

string runtime = System.Runtime.InteropServices.RuntimeInformation.FrameworkDescription;

Sadly that value very usefully returns null.

Use the force wisely, Luke!

OK, so now you can get the version number, but you should be wary of using it for anything other than displaying it for reference information. Version numbers are not really meant to be used for feature detection or other application critical comparisons, so use this version number only in cases where you absolutely have no other way. I suspect this is one of the reasons the .NET teams refuse to build runtime version numbers directly into the frameworks.

Personally, I tend to use compiler directives to differentiate broad feature ‘versions’ – I’ve talked about this a bit in my previous posts on building components that target both full framework and .NET Standard. This works great for broad frameworks, which in most cases is all you need to differentiate, but it can also be used for more specific targets.
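A minimal sketch of that approach, assuming a custom NETFULL compilation constant is defined for full framework targets in the project file (constants like NETSTANDARD2_0 are emitted automatically by the SDK):

public static string HtmlEncode(string text)
{
#if NETFULL
    // full framework: System.Web is available
    return System.Web.HttpUtility.HtmlEncode(text);
#else
    // .NET Standard / .NET Core: System.Web isn't, so use WebUtility instead
    return System.Net.WebUtility.HtmlEncode(text);
#endif
}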

Summary

Version detection in .NET Runtimes is not obvious for most versions and there’s no direct built-in way to get a friendly description of the version you’re running.

So, the above hack is it – it works, for now.








I’m in the process of re-organizing a ton of mostly static content on several of my Web sites in order to make it easier to manage the boatload of ancient content I have sitting around in many places. Writing content – even partial page content – as Markdown is a heck of a lot more comfortable than writing HTML tag soup.

So to make this easier I’ve been thinking about using Markdown more generically in a number of usage scenarios lately, and I wrote last week’s post on Creating a Markdown TagHelper for AspNetCore and an earlier one on doing the same thing with classic ASP.NET WebForms pages. These controls allow for embedding Markdown content directly into ASP.NET Core MVC Views or Pages and WebForms HTML content respectively.

Serving Markdown Files as HTML

But in a lot of scenarios even these controls add a lot of unnecessary cruft – it would be much nicer to simply dump some Markdown files and serve those files as content along with a proper content template so those pages fit into the context of the greater site. This typically means access to a layout page by way of a generic template into which the Markdown content is rendered.

By using plain Markdown files it’s easier to edit the files, and when you host them in a repo like GitHub they can just be displayed as rendered Markdown. In short, it’s a similar use case, but meant for content-only displays, which is ideal for documentation sites or even things like a file-only blog.

So in this post I’ll describe a generic Middleware implementation that allows you to drop Markdown files into a folder and get them served – either as .md extension files, or as extensionless Urls based on the filename without the extension.

Get it

If you want to try out the middleware I describe in this post, you can install the NuGet package from here:

PM> Install-Package Westwind.AspNetCore.Markdown

or take a look at the source code on Github:

Generic Markdown Processing Middleware

The idea to process Markdown files directly is nothing new – it’s a common feature in standalone documentation and CMS/Blog generators.

But wouldn’t it be nice to have this functionality as a simple, drop-in feature that you can attach to any folder that is part of your existing Web application? In many of my dynamic Web sites, I often have a handful of information pages (like About, Terms of Service, Contact us, Support etc.) that are essentially static pages. And for those simple Markdown formatting is a perfect fit.

Additionally, many sites I work on also need documentation, and it’s nice to have a separate area to document a site with simple Markdown files. You use only Markdown text, and leave the site chrome to a generic configured template that renders the reusable part of the site. When creating content all you do then is write Markdown – you can focus on content and forget the layout logistics.

What do we need to serve Markdown Pages?

Here are the requirements for serving ‘static’ markdown pages:

  • A ‘wrapper’ page that provides the site chrome
  • A content area into which the markdown gets dropped
  • The actual rendered Markdown text from the file
  • Optional Yaml Parsing for title and headers
  • Optional title parsing based on a header or the file name

So, today I sat down to build the start of some generic middleware that processes Markdown content from disk and renders it into a configurable MVC View. That View provides the ‘container’ page with the styling and site chrome you are likely to need in order to display your Markdown. The template can contain self-contained HTML page content, or it can reference a _Layout page to provide the same site chrome that the rest of your site uses.

The idea is that I can set up one or more folders (or the entire site) for serving markdown files with an .md extension or extensionless Urls and then serve the Markdown files into a configurable View template.

The middleware is a relatively simple implementation that looks for a configured folder and extensionless urls within (think Docs for documentation or Posts folder for Blog posts) or .md files in the configured folder. When it finds either, the URL is processed by loading the underlying Markdown file, rendering it to HTML and simply embedding it into the specified View template.

Getting Started With the MarkdownPageProcessorMiddleWare

To use this feature you need to do the following:

  • Create a Markdown View Template (default is: ~/Views/__MarkdownPageTemplate.cshtml)
  • Use AddMarkdown() to configure the page processing
  • Use UseMarkdown() to hook up the middleware
  • Create .md files for your content

Basic Configuration

The first step is to configure the MarkdownPageProcessor by telling it which folders to look at. You specify a site relative folder, an optional MVC View or Page Template (the template has to exist) and a few optional parameters.

As usual for ASP.NET Core Middleware, you need to both hook up ConfigureServices() configuration and engage the Middleware in Configure().

The following configures up a /posts/ folder for processing for Markdown files:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMarkdown(config =>
    {
        // Simplest: Use all default settings - usually all you need
        config.AddMarkdownProcessingFolder("/posts/", "~/Pages/__MarkdownPageTemplate.cshtml");
    });

    // We need MVC so we can use a customizable Razor template page
    services.AddMvc();
}

You then also need to hook up the Middleware in the Configure() method:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseMarkdown();

    app.UseStaticFiles();
    
    // we need MVC for the customizable Razor template
    app.UseMvc();
}

Create a Razor Host Template

Next we need a Razor template that will host the rendered Markdown. This template is the "site chrome" that surrounds a rendered Markdown page. Each folder you configure can have its own template, so it’s possible to vary the template. The template is just a Razor page that receives a MarkdownModel, which includes – among other things – a Model.RenderedMarkdown property that you can embed into the page.

The simplest template you can create looks like this:

@model Westwind.AspNetCore.Markdown.MarkdownModel
@{
    ViewBag.Title = Model.Title;
    Layout = "_Layout";
}
@Model.RenderedMarkdown

The template really has nothing in it except the rendered Markdown. All the rest of the ‘site chrome’ will be picked up by the _Layout.cshtml page which provides the overall look and feel of the page.

Note that you can do whatever you want in the template. You don’t have to use a _Layout page – you can create a standalone page, or a page with partials and sections or whatever you want. All you have to make sure of is the following:

  • Make sure you have a @model Westwind.AspNetCore.Markdown.MarkdownModel
  • Make sure you call @Model.RenderedMarkdown to embed the rendered HTML
  • Pick up the page title from Model.Title

Note that the title parsing is optional, but it is enabled by default. The middleware checks for a YAML header with a title: property, or for a # header in the first 10 lines of content.

Test it out

With this basic configuration code in place you should now be able to place a Markdown file with an .md extension anywhere in the /posts/ folder and render it. I took my last Weblog post’s Markdown file and simply dumped it into a folder like this:

I can now go to:

http://localhost:59805/posts/2018/03/23/MarkdownTagHelper.md

or the extensionless version:

http://localhost:59805/posts/2018/03/23/MarkdownTagHelper

The default configuration works both with an .md extension or no extension. When no extension is specified, the middleware looks at each extensionless request, appends .md, and checks whether a matching file exists; if so, it renders it.

With this in place you can now render the page like this:

Keep in mind this is pretty much a stock ASP.NET Core project – it uses the stock Bootstrap template and I haven’t made any other changes to the layout or page templates, yet the markdown file just works as a drop in file.

Cool, n’est-ce pas?

More Cowbell

OK, the above covers the basics; let’s look at a few more configuration and customization options. You can:

  • Customize the Razor template
  • Configure folders that are handled
  • Configure each folder’s options

Let’s take a look

A better Template: Adding Syntax Coloring

Most likely you’ll want to spruce up things a little bit. If you’re writing software-related content like documentation or blog posts, one of the first things you’ll want is syntax highlighting.

I’m a big fan of highlightjs which comes with most common syntax languages I care about, and provides a number of really nice themes including vs2015 (VS Code Dark), visualstudio, monokai, twilight and a couple of github flavors.

The code below explicitly uses the Visual Studio (Code) Dark theme (vs2015) that I also use on my Weblog site:

@model Westwind.AspNetCore.Markdown.MarkdownModel
@{
    Layout = "_Layout";
}
@section Headers {
    <style>
        h3 {
            margin-top: 50px;
            padding-bottom: 10px;
            border-bottom: 1px solid #eee;
        }
        /* vs2015 theme specific*/
        pre {
            background: #1E1E1E;
            color: #eee;
            padding: 0.7em !important;
            overflow-x: auto;
            white-space: pre;
            word-break: normal;
            word-wrap: normal;
        }

            pre > code {
                white-space: pre;
            }
    </style>
}
@Model.RenderedMarkdown
@section Scripts {
    <script src="~/lib/highlightjs/highlight.pack.js"></script>
    <link href="~/lib/highlightjs/styles/vs2015.css" rel="stylesheet" />
    <script>
        setTimeout(function () {
            var pres = document.querySelectorAll("pre>code");
            for (var i = 0; i < pres.length; i++) {
                hljs.highlightBlock(pres[i]);
            }
        });
    </script>
}

HighlightJs from CDN

The provided highlight JS package includes a customized set of languages that I use most commonly and it also includes a custom language (FoxPro) that doesn’t ship on the CDN. You can however also pick up HighlightJs directly off a CDN with:

<script src="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/highlight.min.js"></script>
<link href="https://cdnjs.cloudflare.com/ajax/libs/highlight.js/9.12.0/styles/vs2015.min.css" rel="stylesheet" />

Here’s what the page looks like with the Syntax highlighting enabled:

Better right?

Configuration Options

If you want more control over how the Markdown processing is done you can explicitly configure each of the folders you set up for Markdown handling. You can:

  • Configure .md file and extensionless processing
  • Specify whether you want to extract for title in the Markdown content
  • Hook in pre-processing code that is passed to the host template
  • Configure the Markdown Parser (Markdig)

The following sets up the /posts folder with some of the options explicitly set:

services.AddMarkdown(config =>
{
    // Simplest: Use all default settings - usually all you need
    config.AddMarkdownProcessingFolder("/docs/", "~/Pages/__MarkdownPageTemplate.cshtml");
    
    // Customized Configuration: Set FolderConfiguration options
    var folderConfig = config.AddMarkdownProcessingFolder("/posts/", "~/Pages/__MarkdownPageTemplate.cshtml");

    // Optional configuration settings
    folderConfig.ProcessExtensionlessUrls = true;  // default
    folderConfig.ProcessMdFiles = true; // default

    // Optional pre-processing
    folderConfig.PreProcess = (folder, controller) =>
    {
        // controller.ViewBag.Model = new MyCustomModel();
    };

    // optional custom MarkdigPipeline (using MarkDig; for extension methods)
    config.ConfigureMarkdigPipeline = builder =>
    {
        builder.UseEmphasisExtras(Markdig.Extensions.EmphasisExtras.EmphasisExtraOptions.Default)
            .UsePipeTables()
            .UseGridTables()                        
            .UseAutoIdentifiers(AutoIdentifierOptions.GitHub) // Headers get id="name" 
            .UseAutoLinks() // URLs are parsed into anchors
            .UseAbbreviations()
            .UseYamlFrontMatter()
            .UseEmojiAndSmiley(true)                        
            .UseListExtras()
            .UseFigures()
            .UseTaskLists()
            .UseCustomContainers()
            .UseGenericAttributes();
    };
}    

If you want to improve performance a little, don’t use extensionless URLs for the Markdown files. The way the implementation currently works, extensionless URLs require intercepting every extensionless request and checking for a Markdown file with an .md extension. Using just .md files only affects requests that actually have an .md extension.

This can be mitigated with some caching behavior – I come back to that a bit later in this post.

The default Markdig configuration has most of the pipeline extensions enabled so most things just work, but if you want optimal performance for your Markdown processing explicitly whittling the list down to just what you need can yield better performance.

Creating the Markdown File Middleware

So how does all of this work? As you might expect the process of creating this is actually not very difficult, but it does involve quite a few moving pieces as is fairly standard when you’re creating a piece of middleware.

Here’s what is required

  • Actual Middleware implementation to handle the request routing
  • Middleware extensions that hook into the Startup class’s ConfigureServices() and Configure()
  • MVC Controller that handles the actual render request
  • The Razor template to render the actual rendered Markdown HTML

A quick review of Middleware

The core bit is the actual Middleware extension that is hooked into the ASP.NET Core middleware pipeline.

Middleware is simply a class that implements a Task InvokeAsync(HttpContext context) method. Alternately, middleware can also be implemented directly in the Startup class or as part of a middleware extension using app.Use(), or for terminating middleware using app.Run().

The idea behind middleware is quite simple: you implement a middleware handler that receives a context object and calls next(context), which passes the context forward to the next middleware defined in the chain; that one calls the next, and so on until all of the middleware components have been called. Then the chain reverses and each of those calls returns its task status back up the chain.

image credit: Microsoft Docs

Middleware can handle both ‘before’ and ‘after’ operations. The ‘before’ part runs before you pass on the call to next() and provides the inbound functionality, which is where most middleware does its main work. The ‘after’ processing fires after the call to next() completes, which happens after the chain has reversed. If you’ve used HttpModules in ASP.NET previously, you can think of middleware as a module that can handle both pre-handler and post-handler operations in a single interface.

The ‘after’ handling can be used for cleanup or for post-processing the final output (although that’s actually pretty tricky, given that the output stream is likely complete already).

If any middleware component wants to terminate the pipeline after its inbound processing, it can simply not call next() and the chain reaction ends and reverses out. All the previously hooked up middleware still fires its post-next() code.
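As an illustration of the before/after pattern and early termination, here’s a contrived inline example (not part of the Markdown middleware) written directly in Configure() with app.Use(); the "/blocked" path check is purely hypothetical:

app.Use(async (context, next) =>
{
    // 'before' logic - runs on the way in
    if (context.Request.Path.StartsWithSegments("/blocked"))
    {
        context.Response.StatusCode = 403;
        return;   // don't call next() - the pipeline terminates and reverses here
    }

    await next(); // pass control to the next middleware in the chain

    // 'after' logic - runs once the rest of the pipeline has completed
    Console.WriteLine($"Finished processing {context.Request.Path}");
});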

In this scheme the order of middleware components is very important since they fire in the order declared. For example, it’s crucial that things like the Authentication, Routing and CORS middleware bits are hooked up before the MVC middleware executes.

Implementing a dedicated middleware component usually involves creating the actual middleware component as well as a couple of middleware extensions that allow it to be called in ConfigureServices() for configuration (ie. services.Add<MiddlewareName>()), and in Configure() for actually attaching the middleware to the pipeline (ie. app.Use<MiddlewareName>()). Yeah I know – talk about misnamed handlers: configuration usually happens in ConfigureServices(), where you configure the dependency-injected components either directly or via callbacks that fire on each request, while attaching the middleware is done in Configure() – no configuration usually happens in Configure() according to the patterns Microsoft has laid out. Go figure.
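As a simplified sketch of what those extensions typically boil down to (the actual Westwind.AspNetCore.Markdown extensions contain more configuration plumbing than this): register the configuration for injection and attach the middleware class.

public static class MarkdownMiddlewareExtensions
{
    public static IServiceCollection AddMarkdown(this IServiceCollection services,
                                                 Action<MarkdownConfiguration> configure = null)
    {
        var config = new MarkdownConfiguration();
        configure?.Invoke(config);

        // make the configuration available to the middleware and controller via DI
        services.AddSingleton(config);
        return services;
    }

    public static IApplicationBuilder UseMarkdown(this IApplicationBuilder app)
    {
        return app.UseMiddleware<MarkdownPageProcessorMiddleware>();
    }
}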

Implementing Markdown Page Handling as Middleware

So lets look at the actual implementation of the Markdown Middleware.

The primary job of the middleware is to figure out whether an incoming request is a Markdown request by checking the URL. If the request is to an .md Markdown file, the middleware effectively rewrites the request URL and routes it to a custom, well-known Controller endpoint that is provided as part of this component library.

public class MarkdownPageProcessorMiddleware
{
    private readonly RequestDelegate _next;
    private readonly MarkdownConfiguration _configuration;
    private readonly IHostingEnvironment _env;

    public MarkdownPageProcessorMiddleware(RequestDelegate next, 
                                           MarkdownConfiguration configuration,
                                           IHostingEnvironment env)
    {
        _next = next;
        _configuration = configuration;
        _env = env;
    }

    public Task InvokeAsync(HttpContext context)
    {
        var path = context.Request.Path.Value;
        if (path == null)
            return _next(context);

        bool hasExtension = !string.IsNullOrEmpty(Path.GetExtension(path));
        bool hasMdExtension = path.EndsWith(".md");
        bool isRoot = path == "/";
        bool processAsMarkdown = false;

        var basePath = _env.WebRootPath;
        var relativePath = path;
        relativePath = PathHelper.NormalizePath(relativePath).Substring(1);
        var pageFile = Path.Combine(basePath, relativePath);

        // process any Markdown file that has .md extension explicitly
        foreach (var folder in _configuration.MarkdownProcessingFolders)
        {
            if (!path.StartsWith(folder.RelativePath, StringComparison.InvariantCultureIgnoreCase))
                continue;

            if (isRoot && folder.RelativePath != "/")
                continue;

            if (context.Request.Path.Value.EndsWith(".md", StringComparison.InvariantCultureIgnoreCase))
            {
                processAsMarkdown = true;
            }
            else if (path.StartsWith(folder.RelativePath, StringComparison.InvariantCultureIgnoreCase) &&
                 (folder.ProcessExtensionlessUrls && !hasExtension ||
                  hasMdExtension && folder.ProcessMdFiles))
            {
                if (!hasExtension && Directory.Exists(pageFile))
                    continue;

                if (!hasExtension)
                    pageFile += ".md";

                if (!File.Exists(pageFile))
                    continue;

                processAsMarkdown = true;
            }

            if (processAsMarkdown)
            {             
                context.Items["MarkdownPath_PageFile"] = pageFile;
                context.Items["MarkdownPath_OriginalPath"] = path;
                context.Items["MarkdownPath_FolderConfiguration"] = folder;

                // rewrite path to our controller so we can use _layout page
                context.Request.Path = "/markdownprocessor/markdownpage";
                break;
            }
        }

        return _next(context);
    }
}

The key bit in this middleware uses Context.Request.Path.Value to grab the request path and, based on that path, determines whether the request needs to be passed on to the Markdown controller that renders the template. Most of the code above deals with checking the URL; if it is a match (processAsMarkdown), the path is simply rewritten to point at the MVC controller that renders the Markdown.

Middleware constructors can inject requested components via Dependency Injection, and I capture the active request delegate (next) in order to call the next middleware component. I also capture the Markdown configuration that was set up during startup so that I can look at the folder configuration and map it to the current path, if any. The configuration holds a few global settings as well as the configuration for each of the folders mapped in the configuration.

The code first tries to match the path to a configured path. If a matching path is found, it looks for a .md extension in the URL path. If it finds that it simply forwards the request to the controller by rewriting the URL to a fixed path that the controller is generically listening on.

Rewriting is simple – just set the Request.Path to a new value:

context.Request.Path = "/markdownprocessor/markdownpage";

If the URL is an extensionless URL things are a bit trickier. The code first has to check whether the request points at a physical directory – if it does, it’s not a Markdown file. It then appends the .md extension and checks for the file’s existence to determine whether a matching file can be found. If not, the request is ignored and passed on in the middleware pipeline. If there is a matching Markdown file on disk, that request too gets rewritten to the Markdown controller’s route path.

If the URL is to be processed, the original, un-rewritten URL and the actual filename are written into Context.Items along with the folder configuration that was matched. This makes these values available in the controller so we don’t have to repeat the expensive lookup logic there.

The Generic Markdown Controller

The request is forwarded to a controller that’s implemented in the library. The controller has a single Action method that has a fixed and well-known attribute route:

[Route("markdownprocessor/markdownpage")]
public async Task<IActionResult> MarkdownPage()

This fixed route is found even though it lives in a library. Note that this route only works in combination with the middleware because it depends on preset Context.Items values that were stored by the middleware earlier in the request.

Here’s the main action method in the controller (full code on Github):

public class MarkdownPageProcessorController : Controller
{
    public MarkdownConfiguration MarkdownProcessorConfig { get; }
    private readonly IHostingEnvironment hostingEnvironment;

    public MarkdownPageProcessorController(IHostingEnvironment hostingEnvironment,
        MarkdownConfiguration config)
    {
        MarkdownProcessorConfig = config;
        this.hostingEnvironment = hostingEnvironment;
    }

    [Route("markdownprocessor/markdownpage")]
    public async Task<IActionResult> MarkdownPage()
    {            
        var basePath = hostingEnvironment.WebRootPath;
        var relativePath = HttpContext.Items["MarkdownPath_OriginalPath"] as string;
        if (relativePath == null)
            return NotFound();

        var folderConfig = HttpContext.Items["MarkdownPath_FolderConfiguration"] as MarkdownProcessingFolder;
        var pageFile = HttpContext.Items["MarkdownPath_PageFile"] as string;
        if (!System.IO.File.Exists(pageFile))
            return NotFound();
        
        // string markdown = await File.ReadAllTextAsync(pageFile);
        string markdown;
        using (var fs = new FileStream(pageFile, FileMode.Open, FileAccess.Read))
        using (StreamReader sr = new StreamReader(fs))
        {                
            markdown = await sr.ReadToEndAsync();                
        }
        
        if (string.IsNullOrEmpty(markdown))
            return NotFound();

        var model = ParseMarkdownToModel(markdown);
    
        if (folderConfig != null)
        {
            folderConfig.PreProcess?.Invoke(folderConfig, this);
            return View(folderConfig.ViewTemplate, model);
        }
        
        return View(MarkdownConfiguration.DefaultMarkdownViewTemplate, model);
    }

    private MarkdownModel ParseMarkdownToModel(string markdown, MarkdownProcessingFolder folderConfig = null)
    {
        var model = new MarkdownModel();

        if (folderConfig == null)
            folderConfig = new MarkdownProcessingFolder();

        if (folderConfig.ExtractTitle)
        {
            var firstLines = StringUtils.GetLines(markdown, 30);
            var firstLinesText = String.Join("\n", firstLines);

            // Assume YAML 
            if (markdown.StartsWith("---"))
            {
                var yaml = StringUtils.ExtractString(firstLinesText, "---", "---", returnDelimiters: true);
                if (yaml != null)
                    model.Title = StringUtils.ExtractString(yaml, "title: ", "\n");
            }

            if (model.Title == null)
            {
                foreach (var line in firstLines.Take(10))
                {
                    if (line.TrimStart().StartsWith("# "))
                    {
                        model.Title = line.TrimStart(new char[] {' ', '\t', '#'});
                        break;
                    }
                }
            }
        }

        model.RawMarkdown = markdown;
        model.RenderedMarkdown = Markdown.ParseHtmlString(markdown);

        return model;
    }
}

The main controller code reads the path from Context.Items and then checks to ensure the file exists. If it does, it reads the Markdown from disk and passes it to a helper that populates the model.

The ParseMarkdownToModel() helper tries to extract a title and parses the markdown to HTML and stores those values on the model. The resulting model is then fed to the view specified in the folder configuration.

Et voilà! We have rendered Markdown documents.

Markdown Parsing

The actual Markdown Parsing is handled by helper functionality in the library that includes the logic for configuring and implementing a specific Markdown parser. This library uses the popular Markdig parser by default.

I covered how the Markdown Parsing is handled in my last post and if you’re interested how the Markdown pipeline is set up you can review it there.

Performance

As I mentioned earlier, this middleware has some overhead because it has to look at every request for the folders you have configured and check either for the .md extension or – worse, for extensionless URLs – whether a matching file exists on disk. Therefore I recommend that you are very specific about the folders you set up to serve Markdown from rather than making this a global hookup in the root folder. Use specific directories like /docs/ or /posts/ rather than setting up the entire site off the root /.

There’s some opportunity for optimization here as well. Output caching on the controller is one thing that would help, but I couldn’t actually get this to work with server side caching – ResponseCache doesn’t seem to work on the controller most likely due to the fact that the path is rewritten from the original URL. For caching to work it would have to be handled in code using a Memory (or other) Cache object directly and mapping to specific URL paths.

It would also help to cache file lookups to avoid the disk hit for file existence checks which are relatively slow. Keeping track of files that were previously checked could avoid that process. One advantage of the way things work now is that you don’t have to worry about updating Markdown files on the server because currently there is no caching. Change the file and it will be picked up immediately in the next request.
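As a rough sketch of what such a lookup cache might look like – this is not part of the current implementation – a small helper based on IMemoryCache with a short expiration would keep the "edit the file and refresh" workflow mostly intact. IMemoryCache would need to be registered (services.AddMemoryCache()) and the helper injected wherever the existence check currently happens.

using System;
using System.IO;
using Microsoft.Extensions.Caching.Memory;

public class MarkdownFileLookupCache
{
    private readonly IMemoryCache _cache;

    public MarkdownFileLookupCache(IMemoryCache cache)
    {
        _cache = cache;
    }

    public bool MarkdownFileExists(string pageFile)
    {
        // cache the existence check for a few seconds to avoid a disk hit per request
        return _cache.GetOrCreate("md_exists_" + pageFile, entry =>
        {
            entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(15);
            return File.Exists(pageFile);
        });
    }
}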

Summary

There’s still stuff to do with this library, but I’ve thrown it into a few internal projects and so far it works great. These projects are applications that have lots of dynamic content, but also have several sections that are mostly static text which previously was hand-coded HTML. I was able to throw out a bunch of those HTML pages and convert them to Markdown in Markdown Monster, as they were Markdown-friendly simple HTML to start with. It greatly simplifies editing, and I’ve been able to pass off these documents to non-coder types to edit, where previously it was just easier for me or somebody else on my team to write the HTML ourselves.

This is nothing overly complex, but I find this drop in Markdown functionality incredibly useful and I’m sure I’ll be using it extensively in the future. I hope some of you find this useful as well. Enjoy.









Practical .NET

Leveraging Raw SQL in Entity Framework Core

The ability to use SQL with Entity Framework Core has always been a cool feature. There’s a new feature in Entity Framework Core that opens the door for whole new set of SQL-related functionality.

Entity Framework Core provides a new tool for sending raw SQL to your database: FromSql. Initially, FromSql might sound like a replacement for Entity Framework’s SqlQuery. However, that’s missing the point (and the power) of FromSql.

The Differences and Replacements

It’s critical to understand that FromSql is much more limited than SqlQuery in terms of what it can return. SqlQuery, which was available from the DbContext’s Database object, allowed you to submit any SQL statement to create any set of objects you wanted. Typical code looked like this:

var cNames = db.Database.SqlQuery<CustomerName>("Select FirstName, LastName from Customers");

FromSql, on the other hand, is called from one of the entity object collections on your DbContext object. Typical FromSql code looks like this:

var custs = db.Customers
.FromSql("Select * from Customers")
.ToList();

Effectively, then, FromSql is tied to a specific DbContext collection, so you can only return entity objects from that collection (in my example, that would be my Customer object). You’re not allowed to skip properties, either: your SQL statement must provide a value for every property on your entity class.

One other difference: With SqlQuery the names used in your SQL statement’s Select clause had to match the names of the properties on your entity object. That means if you’d used the Column attribute to tie a property called CustomerId to a column in the table called pkCID, then the Select clause in your SQL statement would need to include an item named after the property: CustomerId. With FromSql, the items in your Select clause must use the column names that properties are mapped to. Using my previous example, that means the Select clause in my SQL statement must include an item named pkCID.
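To make the difference concrete, here’s a hedged sketch using the CustomerId/pkCID example from the text (the entity shape is an assumption, and db is a DbContext with a Customers collection as in the other samples):

using System.ComponentModel.DataAnnotations.Schema;

public class Customer
{
    [Column("pkCID")]
    public int CustomerId { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

// EF6 SqlQuery: items in the Select clause had to match the *property* names
// var custs = db.Database.SqlQuery<Customer>("Select pkCID as CustomerId, FirstName, LastName from Customers");

// EF Core FromSql: items must use the *column* names the properties map to
var custs = db.Customers
    .FromSql("Select pkCID, FirstName, LastName from Customers")
    .ToList();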

Ad hoc queries like the ones that SqlQuery supported are on the roadmap for Entity Framework Core 2.1. However, as I write this, the current version of EFC is 2.0. If, in the meantime, you’re looking for a substitute (and are feeling brave) you can consider using EFC’s RelationalCommand class. I say "brave" because the documentation points out that RelationalCommand is for internal use only and might be altered or disappear at any time. You can also still use Context.Database.Connection to work with ADO using the ADO Command and DataReader objects (though, since those don’t return objects, that seems to be missing the point of using Entity Framework). ExecuteSqlCommand is also available for submitting Insert, Update and Delete statements.

The Benefits

If you’ve been using SqlQuery this must all sound pretty awful … but the reason those limitations are in place is so that FromSql can integrate with LINQ.

While you could use LINQ with SqlQuery, the result was inherently inefficient. In this code, for example, SqlQuery would return all of the "Status 1" Customers to my application and then use LINQ to filter the results in memory:

var custs = from c in db.Database.SqlQuery<Customer>("Select * from Customers Where Status = 1")
            where c.LastOrderDate < DateTime.Now
            select c;

On the other hand, the equivalent FromSql query collapses the LINQ statement and the FromSql into a single query sent to the database. The result is that, in this example, only the Customers that meet all the criteria in both the SQL statement and the LINQ Where method will be returned to my client:

var custs = db.Customers
            .FromSql("Select * from Customers Where Status = 1")
            .Where(c => c.LastOrderDate < DateTime.Now)
            .ToList();

This integration allows you to retrieve related data by using an Include statement in your LINQ code. This example returns Customers with each Customer object’s SalesOrders property populated with the Customer’s SalesOrders:

var custs = db.Customers
            .FromSql("Select * from Customers")
            .Include(c => c.SalesOrders)
            .ToList();

You can also use FromSql to submit stored procedure calls (just begin the SQL statement with the Execute keyword), provided the stored procedure meets the requirements of FromSql.
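For example, assuming hypothetical GetActiveCustomers and GetCustomersByStatus stored procedures that return every column the Customer entity needs, the call is just another FromSql statement (the SqlParameter requires using System.Data.SqlClient):

var custs = db.Customers
            .FromSql("Execute GetActiveCustomers")
            .ToList();

// parameters work the same way as with plain SQL text
var filtered = db.Customers
            .FromSql("Execute GetCustomersByStatus @status", new SqlParameter("status", 1))
            .ToList();

Keep in mind that a stored procedure result can’t be wrapped in a subquery, so LINQ operators that need server-side composition won’t compose on top of it.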

Efficiency aside, FromSql is exciting for another reason: It gives you access to SQL features that LINQ doesn’t support, including proprietary extensions. I’ve written some columns in the recent past where I bemoaned that lack in LINQ (temporal tables, for example). FromSQL is going to solve that problem. For example, you can integrate table-valued functions with your LINQ statement, provided you include the function in a Select statement:

var custs = db.Customers
            .FromSql("Select * from CustomersWithOrdersOver(10000)")
            .Include(c => c.SalesOrders)
            .ToList();

EFC works its magic by tucking your SQL statement into a subquery. To ensure that EFC can do that, you’ll need to avoid doing anything in your SQL statement that would prevent it from being used in a subquery (for example: putting a semicolon at the end of your statement).

To be able to use FromSql, you’ll need to add the EntityFrameworkCore.Relational package to your project (another example of how EFC splits tools specific to working with relational databases out of the core EFC package). However, if you’re accessing a relational database, you’ve probably already done that.

Managing Parameters

As always when working with raw SQL, there’s a temptation to just concatenate raw input into the SQL statement, creating opportunities for SQL injection attacks. FromSql provides a variety of ways to integrate parameters, all of which (under the hood) generate a DbParameter object … and DbParameter objects protect you against SQL injection.

You can use string interpolation to insert a value into your SQL statement by prefixing the string holding the statement with a dollar sign ($) and wrapping the name of the variable holding the value in braces ({ }). That’s what this code does:

int cLimit;
cLimit = 10000;
var custs = db.Customers
            .FromSql($"Select * from Customers Where CreditLimit = {cLimit}")
            .ToList();

You can also use placeholders and values as you would with the string object’s Format command. That’s what this code does:

var custs = db.Customers
            .FromSql("Select * from Customers Where CreditLimit = {0}", cLimit)
            .ToList();

If you’re comfortable with the @ syntax that T-SQL uses for parameters, you can use a DbParameter object. This code does that:

DbParameter cLimit = new SqlParameter("limit",110);
var custs = db.Customers
            .FromSql("Select * from Customers Where CreditLimit = @limit", cLimit)
            .ToList();

That’s all cool but, let me be clear: FromSql isn’t a replacement for SqlQuery (though I will be glad when EFC acquires similar functionality). FromSql is the tool that allows you to mix SQL and LINQ to create efficient queries while giving you access to the full power of SQL. That’s not better but it is good to have.



Problem

What are some of the unique features in the new tool SQL Operations Studio tool?

Solution

Microsoft has developed a new tool for SQL Server called SQL Operations Studio. I am going to call it SOS for short, and I think that acronym is appropriate as it helps with managing SQL Server.  Also, keep in mind that the current iteration of the product is in the very early stages of its lifecycle (Preview mode) and in fact Microsoft could even discontinue it at some point (I do not think so though).  The name selected for this new tool is telling: SQL Operations Studio is directly related to the DevOps movement and to designing a tool that is focused on the DevOps areas of organizations.

For this particular tip, we are going to focus on a few areas that standout on the feature list:

  • Integrated Terminal Window
  • User Settings
  • Custom Widgets for Dashboards

Of course our first step is to download Microsoft’s SQL Operations Studio at: https://docs.microsoft.com/en-us/sql/sql-operations-studio/download. I would recommend taking a look at Rajendra Gupta’s tip on the install process.

On the download site, a few items may pop out as being unusual from the start.  SQL Operations Studio, SOS, can be installed on multiple operating systems including, of course, Windows, and also MacOS and Linux. The specific list of supported OS’s includes Windows 7, 8, & 10, Windows Server 2008-2016, MacOS 10.12 & 10.13, Redhat 7.3 & 7.4 (Enterprise), Suse Linux v12SP2, and Ubuntu 16.04. Certainly, be on the lookout for changes to this list, which should generally be shown on the download page.  Once SOS is downloaded, the install process on a Windows based system follows the standard “next, next, next, etc.” install. Now we are ready to start exploring this new tool.

Using SQL Operations Studio

As shown below, the display area for the tool is very basic and focuses on keyboard-based commands for quick ease of use.  The first step in the process is to set up a new connection. Just to the right of the Server heading we have three buttons, the leftmost of which is the Add Connection button.

For the connection, the server name and the authentication details must be completed. The database name is optional as is the Server Group. The Server Group allows you to group similar servers into category groups.

After hitting connect, the Server Dashboard is displayed with several widgets.

Each of these widgets is portable and can be moved around the main body of SOS by clicking on the edit button, as shown below.

This setup also allows SOS to be extensible with the addition of other widgets to complement the existing ones. Furthermore, because it is .NET based and uses a microservice architecture along with the SQL Tools Service (https://github.com/Microsoft/sqltoolsservice), the overall setup of the application is designed for customization and additional functionality.

SQL Operations Studio Integrated Terminal Window

One of the very neat features of the tool is the integration with various command line tools. Simply selecting View and Integrated Terminal opens the command editor.

In this case, we can execute commands without having to leave the tool. In the below example, we are using the DOS command line, but depending on your operating system, you can also execute PowerShell (Windows) or Bash (Mac / Linux).

As needed and as shown below, multiple terminal sessions can be running at the same time. Furthermore, using the plus button adds additional sessions whereas the trash can button removes a session.

If PowerShell is the preferred tool for the command line, then the settings can be changed to reflect PowerShell as the default tool. First the settings must be opened via the preference option on the File Menu.

Next, we need to set the terminal.integrated.shell.windows element to the location of the PowerShell exe.
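For example, the relevant entry in the user settings JSON might look something like this (the exact path to powershell.exe may differ on your machine):

{
    "terminal.integrated.shell.windows": "C:\\Windows\\System32\\WindowsPowerShell\\v1.0\\powershell.exe"
}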

Now, when a new integrated terminal session is opened, PowerShell will be used.

SQL Operations Studio User Settings

You may have noticed in the above example that the settings area is actually a JSON file with various elements and line items that allow you to change each item manually. With the tool designed to be lightweight and flexible, these settings allow for detailed control of your SOS tool.  Furthermore, on the left side of the settings area, under Default Settings, you will find the default settings and commonly used options for many of the settings. To make changes you will want to adjust the value in the right-side pane.

Additionally, settings can be set at two levels: the user level or the workspace level. Workspace-scoped settings take precedence and override any user settings that may be set.

Finally, you can easily share your settings.json file, commonly located in %APPDATA%\sqlops\User on Windows, with your coworkers or on other machines which makes setup much easier. Just copy the file from one machine to another.

In the event you do something “crazy” with the user settings, you always have the default values to fall back on. In fact, there is a quick way to copy a default setting back to your user settings if needed. In the below example, I changed the editor.tabSize setting for the user to 25. As you can see in the left panel, the default is 4.

Simply hover over the editor.tabSize option, then click on the pencil edit button, and you are given the option to Replace in Settings.

Selecting this option overwrites the current user setting.

Of course, care must be taken with these options, as you could easily overwrite other options that you would prefer to not be changed.

SQL Operations Studio Custom Dashboard Widgets

In a way similar to Power BI Customizations, SOS also provides the ability to add custom widgets to server and database dashboards.

Our first step in the widget creation process is to create a query that will be used as a basis for the widget. In the below example, the average, minimum, and maximum backup times, in seconds, are recorded for each database. Once you are satisfied with the query, be sure to save it.
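The tip doesn’t reproduce the query itself, but a sketch of such a query against msdb’s backup history might look like the following (assuming backup history is available in msdb.dbo.backupset):

-- average / minimum / maximum backup duration in seconds, per database
SELECT  bs.database_name,
        AVG(DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date)) AS AvgDurationSec,
        MIN(DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date)) AS MinDurationSec,
        MAX(DATEDIFF(SECOND, bs.backup_start_date, bs.backup_finish_date)) AS MaxDurationSec
FROM    msdb.dbo.backupset bs
GROUP BY bs.database_name
ORDER BY bs.database_name;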

Next, we can see the results in chart format by clicking on the View as Chart button.

Here is a copy of the chart results. For each database, we have the Average, Minimum, and Maximum duration time in seconds.

The next step is to click on the Create Insight option in the chart results.

SOS creates the JSON details for the widget.

This text must then be copied into the user settings; specifically, it is added as one of the elements under the dashboard.server.widgets section. Once added, the settings file will need to be saved (Ctrl + S is the keyboard equivalent).

Now the custom widget has been added to the server dashboard. It can be resized or moved within the dashboard very easily.

The new Microsoft SQL Operations Studio tool, SOS, is a great addition to the SQL Server tool set.  Although only in beta or preview mode, it is quickly proving its value for DBAs, developers, and other database support personnel.




Problem

Usually when a new database server gets built, it has the latest operating system service pack, and SQL Server is also installed with the latest service pack. Unfortunately, when there are many servers and instances, it’s easy to forget to patch them all, both at the OS level and at the SQL level. Also, it is hard to determine if all of them have been patched to the latest version. And when there are issues, the OS and SQL build numbers are needed for the provider to troubleshoot.

Solution

Below you’ll find a PowerShell script that checks the OS version details and the SQL Server build, which then can be compared against the latest build to see if it needs to be patched or not. To speed up the process, it queries the servers in parallel, but constructs a single query to update the Inventory database table in a single connection.

In the script there are some variables you need to modify:

  • $server: the name of the server where you have stored your Inventory table
  • $database: the database where you have stored your Inventory table
  • $query: modify this depending on the names of your columns
  • $Maxthreads: query 20 servers at a time; if you’re tight on resources you can lower this number, but the results may take longer to appear

The way it works is as follows:

  1. Query table “Inventory” in database “Inventory”, which contains one entry for each instance with the details: ServerName, InstanceName, Version, BuildNumber, ServicePack, OSVersion, OSBuildNumber, OSServicePack and DatetimeLastUpdate.
  2. In parallel and for each server, query the SQL information (BuildNumber, ServicePack) and OS information (OSVersion, OSBuildNumber, OSServicePack), and construct a SQL update statement.
  3. When all threads have finished, we have a list of SQL update statements to run in our Inventory database. Note that if you don’t want to run these commands immediately, you need to comment the last line of the script.
  4. Finally, we can run a query to identify if we have the latest SQL Server build numbers, and a similar approach can be done to identify if we have the latest OS build number.

Script to create the inventory table and insert test records

The following script creates our Inventory table where we will list each server we need to connect to.  The script also adds a few sample records for testing.  You would need to insert the names of your servers to test in your environment.

CREATE TABLE [Inventory] (
   [ServerName] VARCHAR(128),
   [InstanceName] VARCHAR(128),
   [Version] VARCHAR(20),
   [BuildNumber] VARCHAR(20), 
   [ServicePack] VARCHAR(20), 
   [OSVersion] VARCHAR(20), 
   [OSBuildNumber] VARCHAR(20), 
   [OSServicePack] VARCHAR(20), 
   [DatetimeLastUpdate] DATETIME) 
 
INSERT INTO [Inventory] ([ServerName], [InstanceName], [Version]) VALUES ('nonexist', 'nonexist', 'Microsoft SQL Server 2000')
INSERT INTO [Inventory] ([ServerName], [InstanceName], [Version]) VALUES ('WIND01', 'WIND01', 'Microsoft SQL Server 2014')
INSERT INTO [Inventory] ([ServerName], [InstanceName], [Version]) VALUES ('WINT01', 'WINT01\TEST01', 'Microsoft SQL Server 2008')
INSERT INTO [Inventory] ([ServerName], [InstanceName], [Version]) VALUES ('WINP04', 'WINP04\PROD04', 'Microsoft SQL Server 2016')
INSERT INTO [Inventory] ([ServerName], [InstanceName], [Version]) VALUES ('WINP04', 'WINP04\PROD05', 'Microsoft SQL Server 2017')

PowerShell Script

Here is the script.  As mentioned, you will need to adjust the first couple of variables to match your environment.  Also, this script uses Windows authentication to connect to the servers.

$ErrorActionPreference = "Stop" #stop when an error is encountered
# Declare variables
$server = "."
$database = "Inventory"
$query = @"
SELECT [ServerName], [InstanceName]
  FROM [Inventory]
"@
$SleepTimer = 1000 #after X milliseconds, check if the jobs have finished. 1000 is every second.
$MaxResultTime = 300 #after X seconds, all jobs are killed. 300 is 5 minutes.
$Maxthreads = 20 #number of parallel jobs
# import modules
Import-Module SqlPs -DisableNameChecking
$error.clear() #clear error generated by last command
# get list of instances
$objects = @(Invoke-Sqlcmd -ServerInstance $server -Database $database -Query $query)
# environment setup
$output = ""
$errors = ""
$Jobs = @()
$ISS = [system.management.automation.runspaces.initialsessionstate]::CreateDefault()
$RunspacePool = [runspacefactory]::CreateRunspacePool(1, $MaxThreads, $ISS, $Host)
$RunspacePool.Open()
$script = {
   Param ([string]$serverName, [string]$instanceName)
   $ErrorActionPreference = "Stop"
   $query = "
      SET NOCOUNT ON --No 'rows returned' message
      SELECT SERVERPROPERTY('productversion') [ProductVersion], SERVERPROPERTY('productlevel') [ProductLevel]"
   try {
      $command = "UPDATE [Inventory] SET "
      # Get SQL info
      $data = sqlcmd -S $instanceName -Q $query -h-1 -W -b -r1 -s"\t" 2>&1 #no header, no whitespaces, break on error, errors to output, tab separator, stderr to stdout
      if (!$data -or $data.length -eq 0) { return $instanceName + ": Unable to query server" | Out-String } #server may not exist anymore
      $data = $data.split('\t')
      $command += "[BuildNumber] = '"+$data[0]+"'"
      # Get OS info
      $data = Get-WmiObject -class Win32_OperatingSystem -computername $serverName | select Version, BuildNumber, ServicePackMajorVersion
      $command += ", [OSVersion] = '"+$data."Version"+"', [OSBuildNumber] = '"+$data."BuildNumber"+"', [OSServicePack] = '"+$data."ServicePackMajorVersion"+"'"
      "" #indicate there was no error
      $command+", [DatetimeLastUpdate] = GETDATE() WHERE [InstanceName] = '"+$instanceName+"';"
   }
   catch {
      return $instanceName + ": " + $LastExitCode + " " + $_ | foreach { $_.ToString() } | Out-String
   }   
}
function CreateThread() {
   param ([string]$serverName, [string]$instanceName, [ref]$Jobs)
   $PowershellThread = [powershell]::Create().AddScript($script) #scriptToRun
   $PowershellThread.AddArgument($serverName) | out-null
   $PowershellThread.AddArgument($instanceName) | out-null
   $PowershellThread.RunspacePool = $RunspacePool
   $Handle = $PowershellThread.BeginInvoke()
   $Job = "" | select Handle, Thread, object
   $Job.Handle = $Handle; $Job.Thread = $PowershellThread
   $Jobs.value += $Job
}
$ResultTimer = Get-Date #start time
#start processing first task for each instance
for ($i=0; $i -lt $objects.length; $i++) {
   CreateThread $objects[$i].ServerName $objects[$i].InstanceName ([ref]$Jobs)
}
while (@($Jobs | where {$_.Handle -ne $Null}).count -gt 0) {
   #update completed jobs, get errors and result, and dispose them
   foreach ($Job in @($Jobs | where {$_.Handle -ne $Null -and $_.Handle.IsCompleted -eq $True})) {
      $results = $Job.Thread.EndInvoke($Job.Handle)
      if ($results[0] -and $results[0] -ne "") {
         $errors += $results[0]
      }
      for ($i = 1; $i -lt $results.count; $i++) {
         $output += $results[$i]+"`r`n"
      }
      #end thread
      $Job.Thread.Dispose()
      $Job.Thread = $Null
      $Job.Handle = $Null
   }
   #show progress
   $inProgress = @($Jobs | where {$_.Handle.IsCompleted -eq $False}).count
   Write-Progress `
      -Activity "Gathering data" `
      -PercentComplete (($objects.length - $inProgress) * 100 / $objects.length) `
      -Status "$inProgress pending"
   #exit on timeout
   $currentTime = Get-Date
   if (($currentTime - $ResultTimer).totalseconds -gt $MaxResultTime) {
      Write-Error "Child script appears to be frozen, try increasing MaxResultTime"
      break
   }
   #sleep
   Start-Sleep -Milliseconds $SleepTimer
}
#dispose thread pools
$RunspacePool.Close() | Out-Null
$RunspacePool.Dispose() | Out-Null
$errors #return errors
Invoke-Sqlcmd -ServerInstance $server -Database $database -Query $output
			
			

The output generated by the script is similar to the following. This is stored in the $output variable, and the last line of the script executes these commands to update the Inventory table.  If you don’t want to update the data, you can comment out the last line and add $output as the last line to see what is stored in the $output variable.

PS C:\Users\PabloEcheverria\Desktop> .\UpdateInventory.ps1
nonexist: 1 HResult 0x102, Level 16, State 1
 
UPDATE [Inventory] SET [BuildNumber] = '12.0.4100.1', [OSVersion] = '6.1.7601', [OSBuildNumber] = '7601', [OSServicePack] = '1', [DatetimeLastUpdate] = GETDATE() WHERE [InstanceName] = 'DEV01';
UPDATE [Inventory] SET [BuildNumber] = '10.0.5890.0', [OSVersion] = '6.1.7601', [OSBuildNumber] = '7601', [OSServicePack] = '1', [DatetimeLastUpdate] = GETDATE() WHERE [InstanceName] = 'TEST01';
UPDATE [Inventory] SET [BuildNumber] = '13.0.4466.4', [OSVersion] = '6.1.7601', [OSBuildNumber] = '7601', [OSServicePack] = '1', [DatetimeLastUpdate] = GETDATE() WHERE [InstanceName] = 'PROD04';
UPDATE [Inventory] SET [BuildNumber] = '14.0.3022.28', [OSVersion] = '6.1.7601', [OSBuildNumber] = '7601', [OSServicePack] = '1', [DatetimeLastUpdate] = GETDATE() WHERE [InstanceName] = 'PROD05';
			

After the information has been updated in the Inventory database, you can run the following query to determine if you have the latest SQL Server versions. Note that you need to update the “Build” column values to match the latest version at the moment you’re running the query. I personally use this link to get the latest build numbers for each version of SQL Server.

USE [Inventory]
GO
 
CREATE TABLE #Info ([Version] VARCHAR(128), [Build] VARCHAR(20))
INSERT INTO #Info VALUES ('Microsoft SQL Server 2000',    '8.00.2305')
INSERT INTO #Info VALUES ('Microsoft SQL Server 2005',    '9.00.5324.00')
INSERT INTO #Info VALUES ('Microsoft SQL Server 2008',    '10.0.6556.0')
INSERT INTO #Info VALUES ('Microsoft SQL Server 2008 R2', '10.50.6560.0')
INSERT INTO #Info VALUES ('Microsoft SQL Server 2012',    '11.0.7462.0')
INSERT INTO #Info VALUES ('Microsoft SQL Server 2014',    '12.0.5579.0')
INSERT INTO #Info VALUES ('Microsoft SQL Server 2016',    '13.0.4474.0')
INSERT INTO #Info VALUES ('Microsoft SQL Server 2017',    '14.0.3023.8')
 
SELECT [msl].[ServerName], [msl].[InstanceName], [DatetimeLastUpdate], [msl].[BuildNumber], [i].[Build] [LatestBuild], CASE WHEN ISNULL([msl].[BuildNumber], '') <> [i].[Build] THEN 'NO' ELSE 'YES' END [Latest?] 
FROM [Inventory] [msl]
LEFT JOIN #Info [i] ON [i].[Version] = [msl].[Version]
ORDER BY [latest?], [msl].[InstanceName]
 
DROP TABLE #Info

And here is the output from running the above query:

ServerName  InstanceName   DatetimeLastUpdate  BuildNumber   LatestBuild   Latest?
nonexist    nonexist                                         8.00.2305     NO
WIND01      WIND01         3/14/2018 18:28     12.0.4100.1   12.0.5571.0   NO
WINT01      WINT01\TEST01  3/14/2018 18:28     10.0.5890.0   10.0.6556.0   NO
WINP04      WINP04\PROD04  3/14/2018 18:28     13.0.4466.4   13.0.4466.4   YES
WINP04      WINP04\PROD05  3/14/2018 18:28     14.0.3022.28  14.0.3022.28  YES

Note that this script can easily be converted into a SQL Server Agent job that runs on a schedule to check whether all servers are patched to the latest version and notify you about the ones that aren’t, or it can be run manually whenever you know there have been updates or you want to refresh the information. A similar approach can also be followed to identify whether you have the latest OS patch level.
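
If you do decide to schedule it, a SQL Server Agent job with a single PowerShell step is enough. The following is a minimal sketch using the msdb job procedures; the job name, schedule and script path are placeholders you would adapt to your environment, and it assumes the script above is saved as UpdateInventory.ps1 on the server running the job:

USE [msdb]
GO
 
-- Sketch only: adjust the job name, script path and schedule to your environment
EXEC dbo.sp_add_job
   @job_name = N'Update SQL Server Inventory';
 
EXEC dbo.sp_add_jobstep
   @job_name  = N'Update SQL Server Inventory',
   @step_name = N'Run UpdateInventory.ps1',
   @subsystem = N'PowerShell',
   @command   = N'& ''C:\Scripts\UpdateInventory.ps1''';   -- placeholder path
 
EXEC dbo.sp_add_jobschedule
   @job_name  = N'Update SQL Server Inventory',
   @name      = N'Weekly - Sunday 11 PM',
   @freq_type = 8,                -- weekly
   @freq_interval = 1,            -- Sunday
   @freq_recurrence_factor = 1,
   @active_start_time = 230000;   -- 23:00:00
 
EXEC dbo.sp_add_jobserver
   @job_name = N'Update SQL Server Inventory';
GO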

Next Steps
  • Start by creating your inventory database, then run the process to update the information and check whether your instances are on the latest version.
  • You can learn more about SERVERPROPERTY in SQL 2016 by following this link.
  • You can learn more about PowerShell Get-WmiObject function by following this link.
  • You can automate the part to get the latest build numbers by following this link.

Last Update:
2018-04-09

About the author

I’ve worked for more than 10 years as a software programmer and analyst. Last year I switched jobs to a DBA position, where I’ve had the opportunity to implement new processes and optimize existing ones.

View all my tips

Tuesday, 29 May 2018 / Published in Uncategorized

Problem

In a previous tip we learned how to import JSON files into SQL Server using SSIS.  However, I have been supplied a JSON file which contains multiple nested JSON objects. Hence, I would like to learn how to import nested JSON objects into SQL Server using SQL Server Integration Services.

Solution

In this tip, I will show how to import two nested JSON object files into SSIS using a couple of examples.

Import JSON File into SQL Server – Example #1

The below image represents a simple JSON object which contains a nested JSON object "links". The "links" JSON object has 5 attributes namely "self", "first", "last", "next" and "prev".
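
Since the screenshot of the JSON viewer is not reproduced here, the following sketch illustrates the structure being described; the URL values are placeholders rather than the actual file contents:

{
  "links": {
    "self":  "https://petition.example.com/petitions.json?page=1",
    "first": "https://petition.example.com/petitions.json?page=1",
    "last":  "https://petition.example.com/petitions.json?page=15",
    "next":  "https://petition.example.com/petitions.json?page=2",
    "prev":  null
  }
}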

Notice that the attribute "prev" has a value of "null"; in JSON, null or missing values are expressed as "null". This example is the same as the “Orders” JSON file mentioned in the previous tip, so we will follow a similar procedure to load the file.

As a first step, let’s create a data flow task and add a script component to source the JSON file. Once this is done, let’s add the output columns for the JSON object.

The columns "First", "Last", "Next", "Prev" and "Self" have been added as output columns with the datatype string.

Now we need to define a class to store the value of the "Links" object at runtime. If you observe the JSON viewer closely, it is evident that there is a root-level JSON object.

The root-level JSON object contains the "links" JSON object. As there are two objects, we need to create two classes. A "LinkSubItem" class will be defined to store the values of the "links" attributes.

A Petition class will be defined to represent the root-level JSON object. The Petition class will have a property to store the values of links. In this way, an object of type Petition will store the root-level object with "links" as its innermost object. Hence, once the JSON file has been loaded and stored as a Petition object, we can access all the properties using .NET libraries.

 using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace PetitionNamespace
{
   class Petition
   {
      public LinkSubItem links { get; set; }
   }
   public class LinkSubItem
   {
      public string self { get; set; }
      public string first { get; set; }
      public string last { get; set; }
      public string next { get; set; }
      public string prev { get; set; }
   }
}			

The above definition declares the Petition class for the root-level object, along with the LinkSubItem class for the nested "links" object.

Deserialization

We learned about deserialization in the last tip. Now we can deserialize the JSON file and load it into an object of type Petition. As there is only one JSON object at the root, we don’t need to define an array; a simple Petition object is enough. Once the Petition object has been created, we can access the innermost object “links” using the notation "petition.links".

Hence the attributes of the links object can be accessed as "petition.links.self". The script below deserializes the JSON object.

public override void CreateNewOutputRows()
{
   /*
   Add rows by calling the AddRow method on the member variable named "<Output Name>Buffer".
   For example, call MyOutputBuffer.AddRow() if your output was named "MyOutput".
   */
   // Note: this assumes the script component's namespace section includes
   // "using System.IO;" and "using System.Web.Script.Serialization;" (for File and JavaScriptSerializer).
   String jsonFileContent = File.ReadAllText(@"C:\SSIS_JSON\Petition.JSON");
   JavaScriptSerializer js = new JavaScriptSerializer();
   Petition petition = js.Deserialize<Petition>(jsonFileContent);
   Output0Buffer.AddRow();
   Output0Buffer.Self = petition.links.self;
   Output0Buffer.First = petition.links.first;
   Output0Buffer.Last = petition.links.last;
   Output0Buffer.Next = petition.links.next;
   Output0Buffer.Prev = petition.links.prev;
}			

After the successful execution of the package, we can see a record in the data pipeline as shown in the picture below.

Import JSON File into SQL Server – Example #2

In example #1, we had a quick look at a simple nested JSON document. Now let’s look at a more complex example of a nested JSON file.

In this example, at the root level we have a single object "data". The data object contains an array with two petition objects. Each petition object has three attributes, namely type, id and links. The type and id attributes are of string datatype and the links attribute is of type object. The JSON object viewer in the below image represents the object structure.
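
Since the JSON viewer screenshot may not be visible here, the following sketch illustrates the described structure; the id and URL values are placeholders:

{
  "data": [
    {
      "type": "petition",
      "id": "100001",
      "links": { "self": "https://petition.example.com/petitions/100001.json" }
    },
    {
      "type": "petition",
      "id": "100002",
      "links": { "self": "https://petition.example.com/petitions/100002.json" }
    }
  ]
}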

Overall, we have three property details to collect from the supplied JSON: "id", "type" and "Link". So, let’s create output columns for these attributes with the data type "string" as shown in the picture below.

By closely observing the JSON viewer, we can see that there are three objects in the supplied JSON document. The innermost object is "links", which has the property "self" holding the actual link for the petition. The next level in the hierarchy is the data array item; each item has three attributes: "type", "id" and "links". The root-level object holds the array of data sub-items as its "data" property.

To deserialize this JSON, we need three classes. The innermost class "Selflink" represents the "links" item. The "DataSubItem" class represents each item in the array and has a property "links" that returns an object of type "Selflink" containing the link details.

Finally, the "DataItem" class represents the root-level object in the JSON file. This root object has a property "data" which returns the collection of DataSubItem objects.

 using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Linq;
using System.Text;

namespace PetitionNamespace
{
   public class DataItem
   {
      public Collection<DataSubItem> data { get; set; }
   }
   public class DataSubItem
   {
      public string type { get; set; }
      public string id { get; set; }
      public Selflink links { get; set; }
   }
   public class Selflink
   {
      public string self { get; set; }
   }
}			

Once the classes have been defined, we can deserialize the JSON file and store its content in an object of type "DataItem". The DataItem object contains a collection of DataSubItems, so we need to iterate through them to collect the attribute details. This is achieved using a foreach loop in C#. As "type" and "id" are simple properties, they can be extracted directly from the DataSubItem. However, the "links" object is an attribute of the DataSubItem, so we need to extract it from the DataSubItem to collect the "self" link.

The below script will help you to deserialize and extract all the data contents of the JSON file.

 public override void CreateNewOutputRows()
{
   /*
   Add rows by calling the AddRow method on the member variable named "<Output Name>Buffer".
   For example, call MyOutputBuffer.AddRow() if your output was named "MyOutput".
   */
   String jsonFileContent = File.ReadAllText(@"C:\SSIS_JSON\Petition.JSON");
   JavaScriptSerializer js = new JavaScriptSerializer();
   DataItem dataItem = js.Deserialize<DataItem>(jsonFileContent);

   foreach (DataSubItem dataSubItem in dataItem.data)
   {
      Output0Buffer.AddRow();
      Output0Buffer.type = dataSubItem.type;
      Output0Buffer.id = dataSubItem.id;
      Selflink selflink = dataSubItem.links;
      Output0Buffer.Link = selflink.self;
   }
}			

Summary

In this tip, we learned about importing nested JSON data files using SQL Server Integration Services. We also learned how to deserialize nested JSON into .NET objects at runtime.

Next Steps
  • Learn JSON basics with this tip
  • Challenge your JSON knowledge with this tip

Last Update:
2018-04-10

About the author

Nat Sundar is working as an independent SQL BI consultant in the UK with a Bachelors Degree in Engineering.

View all my tips

Tuesday, 29 May 2018 / Published in Uncategorized

Problem

SQL Server security monitoring is a critical part of the Database Administrator’s job. Some security-related alerts can be set up very easily, but others require third-party tools or extra setup steps. For example, SQL Server Audit can be used to monitor login or user modifications, but it requires reviewing the audit log. We would like to get real-time alerts every time a login or a user is created or added to a server or a database role. How can this be done?

Solution

In one of our previous tips, we explained how to set up WMI alerts to monitor database changes. The setup consists of SQL Server Agent configuration steps, Database Mail configuration, and creation of the alert and a SQL Server Agent job.

In this tip we will provide steps and scripts for setting up WMI alerts, and the jobs responding to these alerts, to monitor the creation and removal of users and logins as well as server and database role membership changes.

Make sure that Database Mail is configured and SQL Server Agent is set up to allow replacing tokens, as per this tip.

We will provide job steps and alert screenshots, with a complete script for all of the jobs and alerts at the end of the tip.

Note – the jobs have no schedules and are not meant to be run manually; they only run in response to the WMI alerts.

Create SQL Server Alert for Create Login and Drop Login Events

In this section we will create a SQL Server Agent Job and an Alert for when logins are created or dropped.

Create SQL Server Job for Create Login and Drop Login Events

The following job ("WMI Response – Audit Add/Remove Login Event") will be responding to the WMI event every time a login is created or deleted.

To create a SQL Server Job, expand SQL Server Agent in SQL Server Management Studio and right click on Jobs and select New Job.

The image below displays the job’s step. You will need to update @profile_name and @recipients parameters with your values (@profile_name will be the "Mail profile" that you created during Database Mail configuration):

Here is the script for the job step above (you will need to update the @profile_name and @recipients parameters with your values):

DECLARE @p_subject NVARCHAR(255), @p_action INT 

SELECT @p_action = $(ESCAPE_SQUOTE(WMI(EventSubClass)))

SELECT  @p_subject = N'WMI Alert: Login [$(ESCAPE_SQUOTE(WMI(ObjectName)))] ' + 
      CASE WHEN  @p_action = 1 THEN 'created on' 
           WHEN  @p_action = 3 THEN 'dropped from' 
           ELSE 'changed on' 
      END + 
      ' [$(ESCAPE_SQUOTE(WMI(ComputerName)))\$(ESCAPE_SQUOTE(WMI(SQLInstance)))].' ;

EXEC msdb.dbo.sp_send_dbmail
   @profile_name = 'DBServerAlerts', -- update with your values
   @recipients = '[email protected]', -- update with your values
   @subject = @p_subject,
   @body = N'Time: $(ESCAPE_SQUOTE(WMI(StartTime))); 
ComputerName: $(ESCAPE_SQUOTE(WMI(ComputerName)));
SQL Instance: $(ESCAPE_SQUOTE(WMI(SQLInstance))); 
Database: $(ESCAPE_SQUOTE(WMI(DatabaseName)));
Target Login Name: $(ESCAPE_SQUOTE(WMI(ObjectName)));
Source Application Name: $(ESCAPE_SQUOTE(WMI(ApplicationName)));
Source Host Name: $(ESCAPE_SQUOTE(WMI(HostName)));
Source Login Name: $(ESCAPE_SQUOTE(WMI(LoginName)));
Source Session Login Name: $(ESCAPE_SQUOTE(WMI(SessionLoginName)));
EventSubClass: $(ESCAPE_SQUOTE(WMI(EventSubClass)));
';
GO	

Create WMI Event for Create Login and Drop Login Events

To create a SQL Server Alert, expand SQL Server Agent in SQL Server Management Studio and right click on Alerts and select New Alert.

Now we will setup the WMI alert:

  • Set the alert type to "WMI event alert"
  • Make sure you use the correct WMI namespace:

Note: The namespace will be different for the default instance and for the named instance. Here are some examples.

-- DEFAULT instance's namespace ("DEMOSQL1" SQL Server):
\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER
 
--NAMED instance's namespace ("DEMOSQL1\SQLINSTANCE1" SQL Server):
\\.\root\Microsoft\SqlServer\ServerEvents\SQLINSTANCE1			

Here is the WMI query for this alert:

select * from AUDIT_SERVER_PRINCIPAL_MANAGEMENT_EVENT where EventSubClass= 1 or EventSubClass = 3

Set the response in the alert’s properties to execute the SQL Server job we created earlier:
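
If you prefer to script the alert rather than use the SSMS dialogs, the alert and its job response can also be created with msdb.dbo.sp_add_alert. The following is a minimal sketch only; the alert name, WMI namespace and job name are placeholders, and the complete, tested script is available in the download at the end of this tip:

USE [msdb]
GO
 
EXEC dbo.sp_add_alert
   @name = N'WMI - Audit Add/Remove Login Event',
   @wmi_namespace = N'\\.\root\Microsoft\SqlServer\ServerEvents\MSSQLSERVER', -- use your instance's namespace
   @wmi_query = N'select * from AUDIT_SERVER_PRINCIPAL_MANAGEMENT_EVENT where EventSubClass = 1 or EventSubClass = 3',
   @job_name = N'WMI Response – Audit Add/Remove Login Event';   -- the job created above
GO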

Create SQL Server Alert for Add Member and Drop Member Server Role Events

In this section we will create a SQL Server Agent Job and an Alert for when logins are added or dropped from SQL Server server roles.

Create SQL Server Job for Add Member and Drop Member Server Role Events

Here is the "WMI Response – Audit Add/Remove Server Role Member Event " job’s step for the server roles membership changes monitoring response:

Here is the script for the job step:

DECLARE @p_subject NVARCHAR(255), @p_action INT 

SELECT @p_action = $(ESCAPE_SQUOTE(WMI(EventSubClass)))

SELECT @p_subject = N'WMI Alert: Login [$(ESCAPE_SQUOTE(WMI(TargetLoginName)))] ' + 
      CASE WHEN  @p_action = 1 THEN 'added to the' 
           WHEN @p_action = 2 THEN 'removed from the' 
           ELSE 'changed on' 
      END + 
      ' [$(ESCAPE_SQUOTE(WMI(RoleName)))] Server Role 
        on [$(ESCAPE_SQUOTE(WMI(ComputerName)))\$(ESCAPE_SQUOTE(WMI(SQLInstance)))].' ;

EXEC msdb.dbo.sp_send_dbmail
   @profile_name = 'DBServerAlerts', -- update with your values
   @recipients = '[email protected]', -- update with your values
   @subject = @p_subject,
   @body = N'Time: $(ESCAPE_SQUOTE(WMI(StartTime))); 
Computer Name: $(ESCAPE_SQUOTE(WMI(ComputerName)));
SQL Instance: $(ESCAPE_SQUOTE(WMI(SQLInstance))); 
Database: $(ESCAPE_SQUOTE(WMI(DatabaseName)));
Target Login Name: $(ESCAPE_SQUOTE(WMI(TargetLoginName)));
Target Server Role Name: $(ESCAPE_SQUOTE(WMI(RoleName)));
Source Application Name: $(ESCAPE_SQUOTE(WMI(ApplicationName)));
Source Host Name: $(ESCAPE_SQUOTE(WMI(HostName)));
Source Login Name: $(ESCAPE_SQUOTE(WMI(LoginName)));
Source Session Login Name: $(ESCAPE_SQUOTE(WMI(SessionLoginName)));
SQL Statement: $(ESCAPE_SQUOTE(WMI(TextData)));
EventSubClass: $(ESCAPE_SQUOTE(WMI(EventSubClass)));
';	

Create WMI Event for Add Member and Drop Member Server Role Events

Now we will create the WMI alert as follows:

Here is the WMI query for this alert:

select * from AUDIT_ADD_LOGIN_TO_SERVER_ROLE_EVENT			

If you want to audit only "sysadmin" role membership changes you can update the alert’s WMI query above with this:

select * from AUDIT_ADD_LOGIN_TO_SERVER_ROLE_EVENT where RoleName='sysadmin'			

Set the response in the alert’s properties to execute the job we created earlier:

Create SQL Server Alert for Create User and Drop User Events

In this section we will create a SQL Server Agent Job and an Alert for when users are created or dropped for a database. 

Create SQL Server Job for Create User and Drop User Events

This job ("WMI Response – Audit Add/Remove Database User Event") will respond to the alerts triggered when a user in a database is created or if a user is deleted from the database:

Here is the script for the job step:

DECLARE @p_subject NVARCHAR(255), @p_action INT 

SELECT @p_action = $(ESCAPE_SQUOTE(WMI(EventSubClass)))

SELECT  @p_subject = N'WMI Alert: Database User [$(ESCAPE_SQUOTE(WMI(TargetUserName)))] ' + 
      CASE WHEN  @p_action = 3 THEN 'added to' 
           WHEN @p_action = 4 THEN 'removed from' 
           WHEN @p_action = 1 THEN 'added to the [$(ESCAPE_SQUOTE(WMI(RoleName)))] Database Role on'  
           WHEN @p_action = 2 THEN 'removed from the [$(ESCAPE_SQUOTE(WMI(RoleName)))] Database Role on' 
           ELSE 'changed on' 
      END + 
      ' [$(ESCAPE_SQUOTE(WMI(ComputerName)))\$(ESCAPE_SQUOTE(WMI(SQLInstance)))]:[$(ESCAPE_SQUOTE(WMI(DatabaseName)))].' ;

EXEC msdb.dbo.sp_send_dbmail
   @profile_name = 'DBServerAlerts', -- update with your values
   @recipients = '[email protected]', -- update with your values
   @subject = @p_subject,
   @body = N'Time: $(ESCAPE_SQUOTE(WMI(StartTime))); 
Computer Name: $(ESCAPE_SQUOTE(WMI(ComputerName)));
SQL Instance: $(ESCAPE_SQUOTE(WMI(SQLInstance))); 
Database: $(ESCAPE_SQUOTE(WMI(DatabaseName)));
Target Login Name: $(ESCAPE_SQUOTE(WMI(TargetLoginName)));
Target DB User Name: $(ESCAPE_SQUOTE(WMI(TargetUserName)));
Source Application Name: $(ESCAPE_SQUOTE(WMI(ApplicationName)));
Source Host Name: $(ESCAPE_SQUOTE(WMI(HostName)));
Source Login Name: $(ESCAPE_SQUOTE(WMI(LoginName)));
Source Session Login Name: $(ESCAPE_SQUOTE(WMI(SessionLoginName)));
EventSubClass: $(ESCAPE_SQUOTE(WMI(EventSubClass)));
';	

Create WMI Event Job for Create User and Drop User Events

Here is the WMI alert for database user creation or deletion:

Here is the WMI query for this alert:

select * from AUDIT_ADD_DB_USER_EVENT			

Set the response in the alert’s properties to execute the job we created earlier:

Create SQL Server Alert for Add Member and Drop Member Database Role Events

In this section we will create a SQL Server job and an Alert for when users are added or dropped from database roles.

Create SQL Server Job for Add Member and Drop Member Database Role Events

This job ("WMI Response – Audit Add/Remove DB Role Member Event") will respond to the database roles membership modification events:

Here is the script for the job step:

DECLARE @p_subject NVARCHAR(500), @p_action INT 

SELECT @p_action = $(ESCAPE_SQUOTE(WMI(EventSubClass)))

SELECT  @p_subject = N'WMI Alert: User [$(ESCAPE_SQUOTE(WMI(TargetLoginName)))] ' + 
      CASE WHEN  @p_action = 1 THEN 'added to the' 
           WHEN @p_action = 2 THEN 'removed from the' 
           ELSE 'changed on' 
      END + 
      ' [$(ESCAPE_SQUOTE(WMI(RoleName)))] Database Role on [$(ESCAPE_SQUOTE(WMI(DatabaseName)))]:[$(ESCAPE_SQUOTE(WMI(ComputerName)))\$(ESCAPE_SQUOTE(WMI(SQLInstance)))].' ;

EXEC msdb.dbo.sp_send_dbmail
   @profile_name = 'DBServerAlerts', -- update with your values
   @recipients = '[email protected]', -- update with your values
   @subject = @p_subject,
   @body = N'Time: $(ESCAPE_SQUOTE(WMI(StartTime))); 
Computer Name: $(ESCAPE_SQUOTE(WMI(ComputerName)));
SQL Instance: $(ESCAPE_SQUOTE(WMI(SQLInstance))); 
Database: $(ESCAPE_SQUOTE(WMI(DatabaseName)));
Target User Name: $(ESCAPE_SQUOTE(WMI(TargetUserName)));
Target Database Role Name: $(ESCAPE_SQUOTE(WMI(RoleName)));
Source Application Name: $(ESCAPE_SQUOTE(WMI(ApplicationName)));
Source Host Name: $(ESCAPE_SQUOTE(WMI(HostName)));
Source Login Name: $(ESCAPE_SQUOTE(WMI(LoginName)));
Source Session Login Name: $(ESCAPE_SQUOTE(WMI(SessionLoginName)));
EventSubClass: $(ESCAPE_SQUOTE(WMI(EventSubClass)));
';

Create WMI Event for Add Member and Drop Member Database Role Events

Here is the WMI alert for database role membership changes:

Here is the WMI query for this alert:

select * from AUDIT_ADD_MEMBER_TO_DB_ROLE_EVENT			

Set the response in the alert’s properties to execute the job we created earlier:

Testing SQL Server Security Alerts

Now we should be able to receive email notifications every time somebody creates or deletes logins or database users or adds or removes them from server or database roles.

Let’s create a test login:

USE [master]
GO
CREATE LOGIN [_demo_user] 
   WITH PASSWORD=N'[email protected]', 
   DEFAULT_DATABASE=[master], 
   CHECK_EXPIRATION=ON, 
   CHECK_POLICY=ON
GO	

You should get an email like the following one:

Now we will add this login to a server role:

USE [master]
GO
ALTER SERVER ROLE [bulkadmin] ADD MEMBER [_demo_user]
GO			

Here is an email notification:

We will add this login as a database user on the Contoso database:

USE [Contoso]
GO
CREATE USER [_demo_user] FOR LOGIN [_demo_user]
GO			

Here is the email:

Now, we will add this user to the db_datareader database role:

USE [Contoso]
GO
ALTER ROLE [db_datareader] ADD MEMBER [_demo_user]
GO			

The email looks like this:

Removing a user from a database role:

USE [Contoso]
GO
ALTER ROLE [db_datareader] DROP MEMBER [_demo_user]
GO			

Here is an email example:

Let’s remove the user from the database:

USE [Contoso]
GO
DROP USER [_demo_user]
GO			

Here is the email:

Now, let’s remove the login from the server role:

USE [master]
GO
ALTER SERVER ROLE [bulkadmin] DROP MEMBER [_demo_user]
GO			

Here is the email:

And, finally, delete the login completely from the SQL Server:

USE [master]
GO
DROP LOGIN [_demo_user]
GO			

Here is the email:

Complete Script

The script for all of the jobs and alerts can be downloaded here.

Please note that you may need to update the following parts of the script:

  • @wmi_namespace for the alerts (see the examples above)
  • @profile_name and @recipients parameters with your values (@profile_name will be the "Mail profile" that you created during Database Mail configuration)
  • Replace the job owner in the script if the "sa" login is renamed on your SQL Server:
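
For example, assuming the jobs were created with "sa" as the owner and you need to change it, something along these lines can be used after the script has run (the job name and login are placeholders):

EXEC msdb.dbo.sp_update_job
   @job_name = N'WMI Response – Audit Add/Remove Login Event',
   @owner_login_name = N'YourRenamedSaLogin';   -- placeholder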

Next Steps

Last Update:
2018-04-11

About the author

Svetlana Golovko is a DBA with 13 years of the IT experience (including SQL Server and Oracle) with main focus on performance.

View all my tips

Tuesday, 29 May 2018 / Published in Uncategorized

Problem

In part 1, we described the requirements for calculating attrition and also demonstrated one method that doesn’t rely on writing DAX code at all. In the second part of this tip, we introduce alternative methods of creating a calculation in DAX to calculate the number of employees that have left the company.

Solution

If you want to follow along with the code examples in this tip, you can check out part 1 for the test set-up and the creation of sample data. We already calculated attrition using the distinct count of employee keys and the auto-generated date hierarchy of Power BI Desktop. This method has some drawbacks, which we try to remedy in the alternative methods of calculation presented in the following sections.

Adding a Date Dimension

One of the downsides of the auto-generated date hierarchy is that it is limited: there are only 4 levels in the hierarchy (year, quarter, month and date) and there are no other descriptive attributes such as weeks, fiscal hierarchies, holidays and so on. In typical data warehouse scenarios, a date dimension is used. Luckily, it’s very easy to create one directly in the model. Let’s try the DAX code introduced in the tip Scenarios for Using Calculated Tables in SSAS Tabular 2016 or Power BI Desktop – Part 2.

You can add extra columns if you desire. Don’t forget to set additional properties such as a sorting order:

After adding this calculated table, you have the following model:

Method 2 – Using DAX and Unrelated Date Dimension

You can find method 1 in this tip.

As you can see in the screenshot of the model, there’s no relationship between the EmployeeRetention table and the Date table. This is actually quite common in typical data warehouse implementations. In a traditional star schema, the dimensions are related to the fact tables. The data of the EmployeeRetention table can usually be found in the employee dimension: the employees and their respective termination dates. Since both tables are dimensions, it’s not unusual to not have any relationships between them. If you would create a relationship, you would introduce snowflaking into the model, which makes it more complex. Furthermore, in a normal star schema, Power BI Desktop doesn’t allow you to create such a relationship because it would make the model ambiguous. The following diagram explains the issue for a star schema with a single fact table:

The red arrow is the new relationship between employee and date. If you filter on the employee dimension, the filter directly filters the fact table. The filter also propagates to the date dimension, filters that table, and then propagates to the fact table. So, there are now two paths by which a filter on employee can reach the fact table. This makes the model ambiguous, and Power BI Desktop will avoid such a scenario.

Since we cannot create a relationship in this type of scenario, we need to enforce the relationship virtually in DAX. There are multiple ways of achieving this in DAX. The following measure uses the FILTER function to filter the employeeretention table using the minimum and maximum date of the date dimension:

EmployeesGone = VAR MinDate = MIN('Date'[Date])
                VAR MaxDate = MAX('Date'[Date])
                VAR Exits   = CALCULATE(DISTINCTCOUNT(EmployeeRetention[EmployeeKey])
                                ,FILTER(EmployeeRetention,EmployeeRetention[EndDate] >= MinDate && EmployeeRetention[EndDate] <= MaxDate)
                )
RETURN IF(ISBLANK(Exits),0,Exits)
			

For every row of the EmployeeRetention table, we check if the EndDate is between the minimum and maximum date of the date dimension. The min and max date are determined by the current filter context. If the date dimension is filtered on the year 2018, the min date is 2018-01-01 and the max date is 2018-12-31. Every employee with an end date in 2018 will be counted towards the attrition. If the current filter context is April 2017, the min date is 2017-04-01 and the max date is 2017-04-30.

The final line of the measure turns any blank result into a 0. This makes sure there are no gaps in the date sequence. For example, if we want to plot the trend line of the attrition over the months of the year and no employees have left in April, the result set needs to contain a result for April as well or it would be omitted. The downside is that now the measure returns a result for every date in the date dimension:

With a bit of filtering we can focus on the relevant months (you can add, for example, an IsCurrentYear column to your date dimension):

We can verify the measure works and returns the same results as the method used in part 1. However, highlighting a month gives an incorrect result in the bar chart:

It looks like the highlighted bars forget the filter context of the selected month. In fact, if we hover over a bar, the tooltip shows those long bars have the value 244, which is the total number of employees that do not have a termination date.

To fully understand what is going on, we can turn on Profiler to capture the generated queries. The blog post Connecting SQL Server Profiler to Power BI Desktop walks you through this process. The following DAX query is created when we highlight a certain month:

Let’s analyze what is going on:

The H0FilterTable contains all existing filters on the table, together with the month we clicked on. The DS0FilterTable contains any existing filter on the bar chart. In the EVALUATE query, the bar chart is calculated again: years and months on the vertical axis + the existing filters + the original measure (which will be put in the light color) and the same measure but now with the H0FilterTable applied. That new measure corresponds with the highlighted values, which are in this case 244. The IGNORE option is part of SUMMARIZECOLUMNS and specifies how to handle blank values. Since our measure doesn’t return blank values, we can ignore this.

But why does every month return 244, the number of employees that do not have a termination date? The current filter context is also applied. For example, we have a single bar for October 2018. This means the data is filtered on Year and Month (the first two columns specified in SUMMARIZECOLUMNS). The final filter thus becomes: return the measure for which the termination date is in October 2018 as well as in September 2018 (the highlighted month). No actual date can be in two months at the same time, so all the employees with a termination date are filtered out. However, if the termination date is NULL, Boolean logic returns “unknown” and the data is returned to the bar chart.

Basically, the measure has a problem when termination dates are empty. In the following two methods, we will try to solve this issue.

Method 3 – Using DAX and Relations

The first option is to create a relationship between the two tables. However, we need to make sure the EmployeeRetention table is modeled as a fact table and not as a dimension, otherwise we get an ambiguous model as stated before. In the scenario where you only have the two tables like in this test set-up, there’s not a problem of course and we can create the relationship as follows:

Because now the date table actually filters the EmployeeRetention table, the measure from the previous section behaves as expected:

Method 4 – Using DAX and Unrelated Date Dimension – Revisited

In the case where the EmployeeRetention table cannot be modeled as a fact table – maybe all the necessary data is stored in the employee dimension and it’s not feasible to create a new fact table – we need to write a measure forcing the relationship like we did before. However, now we need to fix the issue with the empty end dates. We have two options:

  • Assign a dummy date for the termination date for employees who haven’t left the company, for example 2999-12-31. We can easily filter this value out of the visuals.
  • We fix the measure itself.  This can be achieved by filtering out the blank values of the end date.

The measure now becomes:

EmployeesGone_Correct = VAR MinDate = MIN('Date'[Date])
                        VAR MaxDate = MAX('Date'[Date])
                        VAR Exits = CALCULATE(DISTINCTCOUNT(EmployeeRetention[EmployeeKey])
                                        ,   EmployeeRetention[EndDate] >= MinDate
                                        &&  EmployeeRetention[EndDate] <= MaxDate
                                        &&  NOT(ISBLANK(EmployeeRetention[EndDate]))
                )
RETURN IF(ISBLANK(Exits),0,Exits)
			

It works as expected:

Conclusion

In this two-part tip we looked at how we can calculate attrition numbers using the DAX query language. There are 3 distinct methods:

  • Simply count the employee keys on the retention dates. This method has the disadvantages that you have to work with the limited auto-generated date hierarchy and that gaps aren’t shown.
  • The second option is to use a date table and create a relationship between the two tables. However, you need to make sure your data model is modeled correctly to avoid an ambiguous model.
  • The last option is to use an unrelated date dimension and use a DAX measure. To have correct behavior, you need to take blank values into account.
Next Steps
  • Try it out yourself. You can follow the steps described in this tip. Don’t forget part 1 where we create the sample data.
  • If you want to learn more about DAX, you can follow the DAX tutorial.
  • You can find more Analysis Services tips in this overview.

Last Update:
2018-05-08

About the author

Koen Verbeeck is a BI professional, specializing in the Microsoft BI stack with a particular love for SSIS.

View all my tips

Tuesday, 29 May 2018 / Published in Uncategorized

Problem

You probably know what SQL Server temporal tables are by now, but do you know all of the benefits of using them? In this tip we cover some aspects to help you make an informed decision when building your next application about why and how to use SQL Server temporal tables.

Solution

Temporal tables are useful in applications where tracking of data changes is required. Let’s learn about some of the key benefits of using temporal tables in this tip.

This is part 1 of a series of tips to explain various benefits of using temporal tables. In each tip in this series, we are going to present a different example of using SQL Server temporal tables and from it learn about the usefulness of this feature in detail.

Each scenario will be tagged with one or more benefits from the following list:

  1. Speedy coding
  2. Built in optimization
  3. Easy maintenance
  4. Granular security
  5. Easy auditing
  6. Transparent implementation
  7. Quick data recovery

Using SQL Server Temporal Tables for Quick Data Recovery from Accidental Updates or Deletes

We will create a test database and a temporal table. Then we will insert a few rows into the table and then we will do some DML operations and finally query both tables to see the data.

USE master
GO

DROP DATABASE IF EXISTS TestTemporal;

CREATE DATABASE TestTemporal;
GO
 
USE TestTemporal
GO
 
CREATE TABLE Customer (
   CustomerId INT IDENTITY(1,1)  PRIMARY KEY
  ,FirstName VARCHAR(30) NOT NULL
  ,LastName VARCHAR(30) NOT NULL
  ,Amount_purchased DECIMAL NOT NULL
  ,StartDate datetime2 generated always as row START NOT NULL
  ,EndDate datetime2 generated always as row END NOT NULL
  ,PERIOD FOR SYSTEM_TIME (StartDate, EndDate)
)
WITH(SYSTEM_VERSIONING = ON (HISTORY_TABLE = dbo.CustomerHistory)) ;
GO

INSERT INTO dbo.Customer (FirstName, LastName, Amount_Purchased)
VALUES('Frank', 'Sinatra', 20000.00),('Shawn', 'McGuire', 30000.00),('Amy', 'Carlson', 40000.00);
GO
 
-- Now make some changes in the table
WAITFOR DELAY '00:00:30';

-- insert a row
INSERT INTO Customer (FirstName, LastName, Amount_purchased)
VALUES ('Peter', 'Pan', 50000);
GO

WAITFOR DELAY '00:00:30';

-- delete a row
DELETE FROM dbo.Customer WHERE CustomerId = 2;
GO

WAITFOR DELAY '00:00:30';

-- update a row
UPDATE Customer SET Lastname = 'Clarkson' WHERE CustomerId = 3;
 
-- Let us query both temporal and history tables
SELECT * FROM dbo.Customer;
SELECT * FROM dbo.CustomerHistory;

Here is what the data in the tables looks like.

After running the above code, we will see all the changes reflected in Customer and CustomerHistory tables.

Recover Deleted Record for SQL Server Temporal Table

If we want to recover the data we deleted, we simply have to find the record id and the time the delete operation happened to bring the data back to the main temporal table. Here is how it is done.

-- recover one row that we deleted			
-- this table has an identity column so we need to allow inserts using this command
SET IDENTITY_INSERT dbo.Customer ON 
  
INSERT INTO dbo.Customer(CustomerId, FirstName, LastName, Amount_purchased) 
SELECT CustomerId, FirstName,LastName, Amount_purchased 
FROM dbo.Customer  
   FOR SYSTEM_TIME AS OF '2018-04-19 18:16:43.3351187' 
WHERE CustomerId =2 

-- this table has an identity column so now we need to turn off inserts using this command  
SET IDENTITY_INSERT dbo.Customer OFF 

The time value used here is one at which this customer record was valid (for example, at the time of the insert). The FOR SYSTEM_TIME AS OF clause made it a breeze to get the data back from the history table and insert it into the Customer table; we did not have to perform any joins. If the table does not have an identity column (CustomerId), then you don’t need to set IDENTITY_INSERT ON and OFF.

Recover Updated Data for SQL Server Temporal Table

Now let’s recover the old last name of Amy (Carlson) that was updated to “Clarkson”. Here, the FOR SYSTEM_TIME clause acts as a history table joined to the Customer table to get the pre-update value.

-- Let's look at the old value of CustomerID =3 
SELECT * 
FROM dbo.Customer  
   FOR SYSTEM_TIME AS OF '2018-04-19 18:16:43.3351187'  
WHERE CustomerId = 3 
  
-- Let's look at the current value of CustomerID =3 
SELECT * 
FROM dbo.Customer  
   FOR SYSTEM_TIME AS OF '2018-04-19 18:18:13.3820395' 
WHERE CustomerId = 3 
			

Here are the results.

Here is the command we can use to get the data back.

-- Recover old value of the updated row
UPDATE dbo.Customer 
SET LastName= history.Lastname
FROM dbo.Customer 
   FOR SYSTEM_TIME AS OF '2018-04-19 18:16:43.3351187' as history
WHERE history.CustomerId = 3 and Customer.CustomerId = 3
 
-- Let us query both temporal and history tables
SELECT * FROM dbo.Customer;
SELECT * FROM dbo.CustomerHistory;	

Here are the results.

Again we see how easy it is to update the values in the current temporal table with values from the history table. One thing to note is that this is actually a second update to the CustomerId = 3 record, and hence you will see one more row in the history table. In the earlier example of recovering deleted data, it was an insert statement into the Customer temporal table, so no history row was generated for it. We can interpret that the values in the history table were valid during the period between their StartDate and EndDate values.
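
As a side note, besides AS OF, the FOR SYSTEM_TIME clause also supports range forms such as BETWEEN ... AND ..., which return every row version that was active at some point within the window. Here is a minimal sketch, using timestamps close to the ones from this example:

-- All row versions (current and historical) active at any point in this window
SELECT CustomerId, FirstName, LastName, StartDate, EndDate
FROM dbo.Customer
   FOR SYSTEM_TIME BETWEEN '2018-04-19 18:16:43' AND '2018-04-19 18:18:14'
ORDER BY CustomerId, StartDate;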

Show list of all changes made to a SQL Server Temporal Table

Now let’s say we want to audit the data to show all changes for all records in a table, or for just one record. You need to use the ALL clause of FOR SYSTEM_TIME.

SELECT * 
FROM dbo.Customer 
   FOR SYSTEM_TIME ALL 
ORDER BY StartDate; 

-- All records for Amy 
SELECT * 
FROM dbo.Customer 
   FOR SYSTEM_TIME ALL 
WHERE CustomerId = 3 
ORDER BY StartDate; 
  
-- All records for Shawn 
SELECT * 
FROM dbo.Customer 
   FOR SYSTEM_TIME ALL 
WHERE customerId = 2 
ORDER BY StartDate; 

Here are the results.

Summary

In this tip, we saw that SQL Server temporal tables are an excellent feature to use where data tracking for a mission-critical application is required. They are easy to set up, and retrieving data for auditing as well as recovering from accidental updates and deletes is fast and simple.

Next Steps
  • Please read more about temporal table query syntax here.
  • Review this tip for another example of retrieving data from temporal tables.

Last Update:
2018-05-08

About the author

Ameena Lalani is a MCSA on SQL Server 2016. She is a SQL Server veteran and started her journey with SQL Server 2000. She has implemented numerous High Availability and Disaster Recovery solutions.

View all my tips

Tuesday, 29 May 2018 / Published in Uncategorized



The SOLID design principles were promoted by Robert C. Martin and are some of the best-known design principles in object-oriented software development. SOLID is a mnemonic acronym for the following five principles: Single Responsibility, Open/Closed, Liskov Substitution, Interface Segregation, and Dependency Inversion.

Each of these principles can stand on its own and aims to improve the robustness and maintainability of object-oriented applications and software components. But they also build on each other, so that applying all of them makes the implementation of each principle easier and more effective.

I explained the first four design principles in previous articles. In this one, I will focus on the Dependency Inversion Principle. It is based on the Open/Closed Principle and the Liskov Substitution Principle. You should, therefore, at least be familiar with these two principles, before you read this article.

Definition of the Dependency Inversion Principle

The general idea of this principle is as simple as it is important: High-level modules, which provide complex logic, should be easily reusable and unaffected by changes in low-level modules, which provide utility features. To achieve that, you need to introduce an abstraction that decouples the high-level and low-level modules from each other.

Based on this idea, Robert C. Martin’s definition of the Dependency Inversion Principle consists of two parts:

  1. High-level modules should not depend on low-level modules. Both should depend on abstractions.
  2. Abstractions should not depend on details. Details should depend on abstractions.

An important detail of this definition is that high-level and low-level modules both depend on the abstraction. The design principle does not just change the direction of the dependency, as you might have expected when you read its name for the first time. It splits the dependency between the high-level and low-level modules by introducing an abstraction between them. So in the end, you get two dependencies:

  1. the high-level module depends on the abstraction, and
  2. the low-level module depends on the same abstraction.

Based on other SOLID principles

This might sound more complex than it often is. If you consistently apply the Open/Closed Principle and the Liskov Substitution Principle to your code, it will also follow the Dependency Inversion Principle.

The Open/Closed Principle requires a software component to be open for extension, but closed for modification. You can achieve that by introducing interfaces for which you can provide different implementations. The interface itself is closed for modification, and you can easily extend it by providing a new interface implementation.

Your implementations should follow the Liskov Substitution Principle so that you can replace them with other implementations of the same interface without breaking your application.

Let’s take a look at the CoffeeMachine project in which I will apply all three of these design principles.

Brewing coffee with the Dependency Inversion Principle

You can buy lots of different coffee machines. Rather simple ones that use water and ground coffee to brew filter coffee, and premium ones that include a grinder to freshly grind the required amount of coffee beans and which you can use to brew different kinds of coffee.

If you build a coffee machine application that automatically brews you a fresh cup of coffee in the morning, you can model these machines as a BasicCoffeeMachine and a PremiumCoffeeMachine class.

Implementing the BasicCoffeeMachine

The implementation of the BasicCoffeeMachine is quite simple. It only implements a constructor and two public methods. You can call the addGroundCoffee method to refill ground coffee, and the brewFilterCoffee method to brew a cup of filter coffee.

import java.util.Map;

public class BasicCoffeeMachine {

    private Configuration config;
    private Map<CoffeeSelection, GroundCoffee> groundCoffee;
    private BrewingUnit brewingUnit;

    public BasicCoffeeMachine(Map<CoffeeSelection, GroundCoffee> coffee) {
        this.groundCoffee = coffee;
        this.brewingUnit = new BrewingUnit();
        this.config = new Configuration(30, 480);
    }

    public Coffee brewFilterCoffee() {
        // get the coffee
        GroundCoffee groundCoffee = this.groundCoffee.get(CoffeeSelection.FILTER_COFFEE);
        // brew a filter coffee  
       return this.brewingUnit.brew(CoffeeSelection.FILTER_COFFEE, groundCoffee, this.config.getQuantityWater());
    }

    public void addGroundCoffee(CoffeeSelection sel, GroundCoffee newCoffee) throws CoffeeException { 
        GroundCoffee existingCoffee = this.groundCoffee.get(sel);
        if (existingCoffee != null) {
            if (existingCoffee.getName().equals(newCoffee.getName())) {
                existingCoffee.setQuantity(existingCoffee.getQuantity() + newCoffee.getQuantity());
            } else {
                throw new CoffeeException("Only one kind of coffee supported for each CoffeeSelection.");
            }
        } else {
            this.groundCoffee.put(sel, newCoffee);
        }
    }  
}

Implementing the PremiumCoffeeMachine

The implementation of the PremiumCoffeeMachine class looks very similar. The main differences are:

  • It implements the addCoffeeBeans method instead of the addGroundCoffee method.
  • It implements the additional brewEspresso method.

The brewFilterCoffee method is identical to the one provided by the BasicCoffeeMachine.

import java.util.HashMap;
import java.util.Map;

public class PremiumCoffeeMachine {
    private Map<CoffeeSelection, Configuration> configMap;
    private Map<CoffeeSelection, CoffeeBean> beans;
    private Grinder grinder;
    private BrewingUnit brewingUnit;

    public PremiumCoffeeMachine(Map<CoffeeSelection, CoffeeBean> beans) {
        this.beans = beans;
        this.grinder = new Grinder();
        this.brewingUnit = new BrewingUnit();
        this.configMap = new HashMap<>();
        this.configMap.put(CoffeeSelection.FILTER_COFFEE, new Configuration(30, 480));
        this.configMap.put(CoffeeSelection.ESPRESSO, new Configuration(8, 28));
    }

    public Coffee brewEspresso() {
        Configuration config = configMap.get(CoffeeSelection.ESPRESSO);
        // grind the coffee beans
        GroundCoffee groundCoffee = this.grinder.grind(
            this.beans.get(CoffeeSelection.ESPRESSO),
            config.getQuantityCoffee());
        // brew an espresso
        return this.brewingUnit.brew(CoffeeSelection.ESPRESSO, groundCoffee,
            config.getQuantityWater());
    }

    public Coffee brewFilterCoffee() {
        Configuration config = configMap.get(CoffeeSelection.FILTER_COFFEE);
        // grind the coffee beans
        GroundCoffee groundCoffee = this.grinder.grind(
            this.beans.get(CoffeeSelection.FILTER_COFFEE),
            config.getQuantityCoffee());
        // brew a filter coffee
        return this.brewingUnit.brew(CoffeeSelection.FILTER_COFFEE, groundCoffee,
            config.getQuantityWater());
    }

    public void addCoffeeBeans(CoffeeSelection sel, CoffeeBean newBeans) throws CoffeeException {
        CoffeeBean existingBeans = this.beans.get(sel);
        if (existingBeans != null) {
            if (existingBeans.getName().equals(newBeans.getName())) {
                existingBeans.setQuantity(existingBeans.getQuantity() + newBeans.getQuantity());
            } else {
                throw new CoffeeException("Only one kind of coffee supported for each CoffeeSelection.");
            }
         } else {
             this.beans.put(sel, newBeans); 
         }
    }
}

To implement a class that follows the Dependency Inversion Principle and can use the BasicCoffeeMachine or the PremiumCoffeeMachine class to brew a cup of coffee, you need to apply the Open/Closed and the Liskov Substitution Principle. That requires a small refactoring during which you introduce interface abstractions for both classes.

Introducing abstractions

The main task of both coffee machine classes is to brew coffee. But they enable you to brew different kinds of coffee. If you use a BasicCoffeeMachine, you can only brew filter coffee, but with a PremiumCoffeeMachine, you can brew filter coffee or espresso. So, which interface abstraction would be a good fit for both classes?

As all coffee lovers will agree, there are huge differences between filter coffee and espresso. That’s why we use different machines to brew them, even though some machines can do both. I therefore suggest creating two independent abstractions:

  • The FilterCoffeeMachine interface defines the Coffee brewFilterCoffee() method and gets implemented by all coffee machine classes that can brew a filter coffee.
  • All classes that you can use to brew an espresso, implement the EspressoMachine interface, which defines the Coffee brewEspresso() method.

As you can see in the following code snippets, the definition of both interfaces is pretty simple.

 
public interface CoffeeMachine {
    Coffee brewFilterCoffee();
}

public interface EspressoMachine {
    Coffee brewEspresso();
}

In the next step, you need to refactor both coffee machine classes so that they implement one or both of these interfaces.

Refactoring the BasicCoffeeMachine class

Let’s start with the BasicCoffeeMachine class. You can use it to brew a filter coffee, so it should implement the CoffeeMachine interface. The class already implements the brewFilterCoffee() method. You only need to add implements CoffeeMachine to the class definition.

public class BasicCoffeeMachine implements CoffeeMachine {
    private Configuration config;
    private Map<CoffeeSelection, GroundCoffee> groundCoffee;
    private BrewingUnit brewingUnit;

    public BasicCoffeeMachine(Map<CoffeeSelection, GroundCoffee> coffee) {
        this.groundCoffee = coffee;
        this.brewingUnit = new BrewingUnit();
        this.config = new Configuration(30, 480);
    }

    @Override
    public Coffee brewFilterCoffee() {
        // get the coffee
        GroundCoffee groundCoffee = this.groundCoffee.get(CoffeeSelection.FILTER_COFFEE);
        // brew a filter coffee
        return this.brewingUnit.brew(CoffeeSelection.FILTER_COFFEE, groundCoffee, this.config.getQuantityWater());
    }

    public void addGroundCoffee(CoffeeSelection sel, GroundCoffee newCoffee) throws CoffeeException {
        GroundCoffee existingCoffee = this.groundCoffee.get(sel);
        if (existingCoffee != null) {
            if (existingCoffee.getName().equals(newCoffee.getName())) {
                existingCoffee.setQuantity(existingCoffee.getQuantity() + newCoffee.getQuantity());
            } else {
             throw new CoffeeException("Only one kind of coffee supported for each CoffeeSelection.");
           }
        } else {
            this.groundCoffee.put(sel, newCoffee);
        }
    } 
}

Refactoring the PremiumCoffeeMachine class

The refactoring of the PremiumCoffeeMachine also doesn’t require a lot of work. You can use the coffee machine to brew filter coffee and espresso, so the PremiumCoffeeMachine class should implement the CoffeeMachine and the EspressoMachine interfaces. The class already implements the methods defined by both interfaces. You just need to declare that it implements the interfaces.

import java.util.HashMap;
import java.util.Map;

public class PremiumCoffeeMachine implements CoffeeMachine, EspressoMachine {
    private Map<CoffeeSelection, Configuration> configMap;
    private Map<CoffeeSelection, CoffeeBean> beans;
    private Grinder grinder;
    private BrewingUnit brewingUnit;

    public PremiumCoffeeMachine(Map<CoffeeSelection, CoffeeBean> beans) {
        this.beans = beans;
        this.grinder = new Grinder();
        this.brewingUnit = new BrewingUnit();
        this.configMap = new HashMap<>();
        this.configMap.put(CoffeeSelection.FILTER_COFFEE, new Configuration(30, 480));
        this.configMap.put(CoffeeSelection.ESPRESSO, new Configuration(8, 28)); 
    }

    @Override
    public Coffee brewEspresso() {
        Configuration config = configMap.get(CoffeeSelection.ESPRESSO);
        // grind the coffee beans
        GroundCoffee groundCoffee = this.grinder.grind(
           this.beans.get(CoffeeSelection.ESPRESSO),
           config.getQuantityCoffee());
       // brew an espresso
       return this.brewingUnit.brew(CoffeeSelection.ESPRESSO, groundCoffee,
           config.getQuantityWater());
    }

    @Override
    public Coffee brewFilterCoffee() {
        Configuration config = configMap.get(CoffeeSelection.FILTER_COFFEE);
        // grind the coffee beans
        GroundCoffee groundCoffee = this.grinder.grind(
            this.beans.get(CoffeeSelection.FILTER_COFFEE),
            config.getQuantityCoffee());
        // brew a filter coffee
        return this.brewingUnit.brew(CoffeeSelection.FILTER_COFFEE, 
            groundCoffee,config.getQuantityWater());
    }

    public void addCoffeeBeans(CoffeeSelection sel, CoffeeBean newBeans) throws CoffeeException {
        CoffeeBean existingBeans = this.beans.get(sel);
        if (existingBeans != null) {
            if (existingBeans.getName().equals(newBeans.getName())) {
                existingBeans.setQuantity(existingBeans.getQuantity() + newBeans.getQuantity());
            } else {
                throw new CoffeeException("Only one kind of coffee supported for each CoffeeSelection.");
            }
        } else {
            this.beans.put(sel, newBeans);
        }
    }
}

The BasicCoffeeMachine and the PremiumCoffeeMachine classes now follow the Open/Closed and the Liskov Substitution principles. The interfaces enable you to add new functionality without changing any existing code by adding new interface implementations. And by splitting the interfaces into CoffeeMachine and EspressoMachine, you separate the two kinds of coffee machines and ensure that all CoffeeMachine and EspressoMachine implementations are interchangeable.

Implementing the coffee machine application

You can now create additional, higher-level classes that use one or both of these interfaces to manage coffee machines without directly depending on any specific coffee machine implementation.

As you can see in the following code snippet, due to the abstraction of the CoffeeMachine interface and its provided functionality, the implementation of the CoffeeApp is very simple. It requires a CoffeeMachine object as a constructor parameter and uses it in the prepareCoffee method to brew a cup of filter coffee.

public class CoffeeApp {
    private CoffeeMachine coffeeMachine;

    public CoffeeApp(CoffeeMachine coffeeMachine) {
     this.coffeeMachine = coffeeMachine;
    }

    public Coffee prepareCoffee(CoffeeSelection selection)
        throws CoffeeException {
        Coffee coffee = this.coffeeMachine.brewFilterCoffee();
        System.out.println("Coffee is ready!");
        return coffee;
    }  
}

The only code that directly depends on one of the implementation classes is the CoffeeAppStarter class, which instantiates a CoffeeApp object and provides an implementation of the CoffeeMachine interface. You could avoid this compile-time dependency entirely by using a dependency injection framework, like Spring or CDI, to resolve the dependency at runtime.

import java.util.HashMap;
import java.util.Map;

public class CoffeeAppStarter {
    public static void main(String[] args) {
        // create a Map of available coffee beans
        Map<CoffeeSelection, CoffeeBean> beans = new HashMap<CoffeeSelection, CoffeeBean>();
        beans.put(CoffeeSelection.ESPRESSO, new CoffeeBean(
            "My favorite espresso bean", 1000));
        beans.put(CoffeeSelection.FILTER_COFFEE, new CoffeeBean(
             "My favorite filter coffee bean", 1000))
        // get a new CoffeeMachine object
        PremiumCoffeeMachine machine = new PremiumCoffeeMachine(beans);
        // Instantiate CoffeeApp
        CoffeeApp app = new CoffeeApp(machine);
        // brew a fresh coffee
        try {
           app.prepareCoffee(CoffeeSelection.ESPRESSO);
        } catch (CoffeeException e) {
            e.printStackTrace();
        }
    }
}

Summary

The Dependency Inversion Principle is the fifth and final design principle that we discussed in this series. It introduces an interface abstraction between higher-level and lower-level software components to remove the dependencies between them.

As you have seen in the example project, you only need to consistently apply the Open/Closed and the Liskov Substitution principles to your code base. After you have done that, your classes also comply with the Dependency Inversion Principle. This enables you to change higher-level and lower-level components without affecting any other classes, as long as you don’t change any interface abstractions.

If you enjoyed this article, you should also read my other articles about the SOLID design principles.
