Friday, 31 August 2018 / Published in Uncategorized

Disclaimer: I’m not a Git guru, so the information in this post might not be perfectly accurate; it works on my machine and I wanted to share my experience.

I take for granted that you are a Visual Studio user, that you use Git through the Visual Studio plugin and that, like me, you need to share code hosted in its own separate repository among different solutions. I know there are several solutions for this, for example using NuGet packages for shared code, but none of them reaches the flexibility that using source code offers, both in terms of debugging and real-time bug fixing. Ok, not ‘technically correct’ I know, but it works, and I love things that work and make my life easier. Since I’m not the only one with this need, I checked online and found that, among different alternatives, the most voted one (with several opponents too, indeed) is to use Git submodules which, to keep it simple, are nothing more than repositories embedded inside a main repository. In this case, the submodule repository is cloned into a subfolder of the main one, and information about the original module is also added to the main repository; this means that when you clone the main project, all submodules are cloned too.

Submodules in action

Let’s create our first project that uses a Git submodule, fasten your seat belt and enjoy the journey.

I’ve created two UWP projects (but you can use any kind of project, of course…): a main UWP application, SubModulesApp, and a UWP library named SubModulesLib, each one with its own repository hosted on github.com. Now SubModulesApp must use some services contained inside SubModulesLib, and once I start using the lib it is evident that both repos will have a strong relationship. So, even if I could keep them separated and just reference the local SubModulesLib project from the main app, the best solution is to create a submodule; this also gives us the benefit of keeping the submodule on a different commit than the ‘master’ one in case we need it.

Let’s start and open our empty app in Visual Studio:

Now open a Visual Studio command prompt at the solution folder; if you use Visual Studio PowerCommands, just right-click the solution node and select Power Commands -> Open Command Prompt. Now type: git submodule add <path to your git repository> <foldername> and the repository will be cloned into <foldername>. Here’s an example of what I’ve got on my machine
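For example, using the library repository from this post (the URL is hypothetical) and MyAwesomeLib as the folder name:

git submodule add https://github.com/yourname/SubModulesLib.git MyAwesomeLib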

and here’s project folder structure

Now you can add the Lib project inside the MyAwesomeLib folder to the SubModulesApp solution

Consume the Lib services and push everything back to GitHub.

Let’s now make an important test: what if the lib author updates the code in SubModulesLib? Will I get the changes when I pull SubModulesApp? To test it I added a comment to the MyCalculator.cs class and pushed the change back to the original repository; I then pulled the SubModulesApp that uses the lib as a submodule and, unfortunately, the change is not there. It looks like what we get here is a copy or, to put it better, something not pointing to the latest commit. To see the changes we need to open the solution inside our nested folder (in this case MyAwesomeLib) and pull the changes from there: totally annoying stuff that could be avoided if the Git plugin for Visual Studio supported multiple repositories (please vote for this feature here: https://visualstudio.uservoice.com/forums/121579-visual-studio-ide/suggestions/8960629-allow-multiple-git-repositories-to-be-active-at-on). What about the opposite? If I modify code inside a submodule, will it be pushed back to the original repository (in our case, from inside the SubModulesApp solution)? Unfortunately not: as before, you need to push the changes from the instance of Visual Studio that hosts the SubModulesLib residing inside the MyAwesomeLib folder; doing that properly aligns the original source repository.
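Incidentally, outside Visual Studio you can also pull the latest submodule commits from the command line; with the folder name used above:

git submodule update --remote MyAwesomeLib

After it runs, the main repository sees the submodule pointer as modified, and committing that change records the new submodule commit.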

All this works because we are working on the project where the submodule was created; if someone else needs to clone and work on the same project, the following steps must be done (see the sketch after the list):

1. Clone the project from Visual Studio (or manually if you’re a hipster…)
2. Open a VS Command Prompt at solution level and issue this command: git submodule update --init --recursive
3. Open each submodule under the main solution, and check out the associated branch using the Visual Studio plugin (you will see that it starts out in a detached state)
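As a concrete sketch of steps 1 and 2 from the command line (the repository URL here is hypothetical):

git clone https://github.com/yourname/SubModulesApp.git
cd SubModulesApp
git submodule update --init --recursive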

Now your cloned solution’s submodules are attached to the original repositories and everything works as previously described.

A bit tricky for sure, at least until the Visual Studio Git plugin supports multiple repositories, but once the project is properly initialized it is just a matter of remembering to open the submodule project each time you need to interact with Git.

Friday, 31 August 2018 / Published in Uncategorized

This little project is a practical implementation of a blog post I wrote about implementing daemons in .NET Core. This daemon is a .NET Core console app that uses a Generic Host to host an MQTT server based on MQTTnet, which has become my go-to MQTT library for .NET apps of all kinds. In reality, this isn’t really creating the server: MQTTnet already supplies much of the plumbing needed to do this. This is putting a wrapper around MQTTnet’s server API, which by itself can act as a service because it does implement the IHostedService interface we talked about in our last post.

Download the complete project here.

To make this work, I started with the same basic structure from the previous project with a service class, a config class, and the Program.cs that acts as the entry point for the daemon. I renamed the project to MQTTDaemon, the service to MQTTService and the config to MQTTConfig. Likewise, I added the needed dependencies from NuGet to include MQTTnet. Here are the files respectively:

Program.cs

This simply wires up the renamed services under their new names. Notice the config still gets its configuration from the CLI or the environment. In this case, it’s looking for a port.

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using MQTTnet.Server;

namespace MQTTDaemon
{
    class Program
    {
        public static async Task Main(string[] args)
        {
            var builder = new HostBuilder()
                // Pull configuration from environment variables and the command line.
                .ConfigureAppConfiguration((hostingContext, config) =>
                {
                    config.AddEnvironmentVariables();
                    if (args != null)
                    {
                        config.AddCommandLine(args);
                    }
                })
                // Bind the "MQTT" config section and register the hosted service.
                .ConfigureServices((hostContext, services) =>
                {
                    services.AddOptions();
                    services.Configure<MQTTConfig>(hostContext.Configuration.GetSection("MQTT"));
                    services.AddSingleton<IHostedService, MQTTService>();
                })
                .ConfigureLogging((hostingContext, logging) => {
                    logging.AddConsole();
                });

            // Run as a console app and block until the host shuts down (Ctrl+C).
            await builder.RunConsoleAsync();
        }
    }
}

MQTTConfig.cs

Again, this is just another POCO class with properties that carry the configuration for the server.

using System;

namespace MQTTDaemon
{
    public class MQTTConfig 
    {
        public int Port { get; set; } 

    }
}

MQTTService.cs

This class is not doing anything particularly special, but take a moment to notice StartAsync. Rather than returning Task.CompletedTask, it returns the task from the MQTTnet server’s own StartAsync.

using System;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;
using Microsoft.Extensions.Options;
using MQTTnet;
using MQTTnet.Server;

namespace MQTTDaemon
{

    public class MQTTService : IHostedService, IDisposable
    {
        private readonly ILogger _logger;
        private readonly IOptions<MQTTConfig> _config;
        private IMqttServer _mqttServer;
        public MQTTService(ILogger<MQTTService> logger, IOptions<MQTTConfig> config)
        {
            _logger = logger;
            _config = config;
        }

        public Task StartAsync(CancellationToken cancellationToken)
        {
            _logger.LogInformation("Starting MQTT Daemon on port " + _config.Value.Port);

            //Building the config
            var optionsBuilder = new MqttServerOptionsBuilder()
                .WithConnectionBacklog(1000)
                .WithDefaultEndpointPort(_config.Value.Port);


            //Getting an MQTT Instance
            _mqttServer = new MqttFactory().CreateMqttServer();

            //Wiring up all the events...

            _mqttServer.ClientSubscribedTopic += _mqttServer_ClientSubscribedTopic;
            _mqttServer.ClientUnsubscribedTopic += _mqttServer_ClientUnsubscribedTopic;
            _mqttServer.ClientConnected += _mqttServer_ClientConnected;
            _mqttServer.ClientDisconnected += _mqttServer_ClientDisconnected;
            _mqttServer.ApplicationMessageReceived += _mqttServer_ApplicationMessageReceived;

            //Now, start the server -- Notice this is returning the MQTT server's StartAsync, which is a Task.
            return _mqttServer.StartAsync(optionsBuilder.Build());
        }

        private void _mqttServer_ApplicationMessageReceived(object sender, MqttApplicationMessageReceivedEventArgs e)
        {
            _logger.LogInformation(e.ClientId + " published message to topic " + e.ApplicationMessage.Topic);
        }

        private void _mqttServer_ClientDisconnected(object sender, MqttClientDisconnectedEventArgs e)
        {
            _logger.LogInformation(e.ClientId + " Disconnected.");
        }

        private void _mqttServer_ClientConnected(object sender, MqttClientConnectedEventArgs e)
        {
            _logger.LogInformation(e.ClientId + " Connected.");
        }

        private void _mqttServer_ClientUnsubscribedTopic(object sender, MqttClientUnsubscribedTopicEventArgs e)
        {
            _logger.LogInformation(e.ClientId + " unsubscribed from " + e.TopicFilter);
        }

        private void _mqttServer_ClientSubscribedTopic(object sender, MqttClientSubscribedTopicEventArgs e)
        {
            _logger.LogInformation(e.ClientId + " subscribed to " + e.TopicFilter);
        }

        public Task StopAsync(CancellationToken cancellationToken)
        {
            _logger.LogInformation("Stopping MQTT Daemon.");
            return _mqttServer.StopAsync();
        }

        public void Dispose()
        {
            _logger.LogInformation("Disposing....");

        }
    }
}

Using the Daemon with a Client

Now, you can build and run the app. To do so, first run dotnet build in the root directory of the app. Next, run dotnet run --MQTT:Port=1883. This will start the server on port 1883, the standard port for an MQTT server.
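Because the host also calls AddEnvironmentVariables, you could set the port through the environment instead; configuration maps the section separator to a double underscore, so (on Linux/macOS) something like:

MQTT__Port=1883 dotnet run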

With that running, you can test it. There are dozens of tools for testing MQTT servers; one such tool is MQTTLens, a Chrome plugin. The app is pretty straightforward. Click the + button next to Connections to add a connection, then set the Hostname to localhost and the Port to 1883. Click Create Connection at the bottom of the config to connect to the server.

MQTTLens Config

You’ll notice in the shell where your server is running that a new connection was added. Now, you can subscribe to a topic. Enter the name of a topic and click Subscribe. This creates a new topic on the server, and MQTTLens is now listening on it.

info: MQTTDaemon.MQTTService[0]
      Starting MQTT Daemon on port 1883
Application started. Press Ctrl+C to shut down.
Hosting environment: Production
Content root path: D:\source\dotnet-daemon\mydaemon\bin\Debug\netcoreapp2.1\
info: MQTTDaemon.MQTTService[0]
      lens_tlcrOhH5HWEGFb3egEdX0uTRTzm Connected.

With MQTTLens you can also publish to the topic you created. Simply enter the same topic name, enter a message, then click Publish. The message will show up in MQTTLens.

MQTTLens

Back in the shell running the server, you can see where the topic was subscribed to and where the message was published.

info: MQTTDaemon.MQTTService[0]
      lens_tlcrOhH5HWEGFb3egEdX0uTRTzm subscribed to MyTopic@AtMostOnce
info: MQTTDaemon.MQTTService[0]
      lens_tlcrOhH5HWEGFb3egEdX0uTRTzm published message to topic MyTopic
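If you’d rather exercise the daemon from code instead of a GUI client, here is a rough sketch of a publisher built with MQTTnet’s client API. Namespaces and builder methods shift a little between MQTTnet versions, and the topic and payload below are placeholders, so treat this as a separate little test program rather than part of the daemon:

using System.Threading.Tasks;
using MQTTnet;
using MQTTnet.Client;

class PublishTest
{
    public static async Task Main()
    {
        // Create a client and point it at the daemon.
        var client = new MqttFactory().CreateMqttClient();
        var options = new MqttClientOptionsBuilder()
            .WithTcpServer("localhost", 1883)
            .Build();
        await client.ConnectAsync(options);

        // Publish a test message to the same topic used in MQTTLens.
        var message = new MqttApplicationMessageBuilder()
            .WithTopic("MyTopic")
            .WithPayload("Hello from code")
            .Build();
        await client.PublishAsync(message);

        await client.DisconnectAsync();
    }
}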

Conclusion

This little demo shows you how to wire up a daemon and do something practical with it. As mentioned, MQTTnet itself can act as a service too. You don’t need the wrapper, as the library has its own configuration classes, logger interfaces, service classes, etc., so you can wire it up right in Program.cs and literally have a one-class MQTT daemon at your disposal. In the next installment in this series, we’ll look at how to use a .NET Core daemon in Docker containers.

Thursday, 30 August 2018 / Published in Uncategorized

Shared frameworks have been an essential part of .NET Core since 1.0. ASP.NET Core shipped as a shared framework for the first time in 2.1. You may not have noticed if things are working smoothly, but there have been some bumps and ongoing discussion about its design. In this post, I will dive deep into the shared frameworks and talk about some common developer pitfalls.

The Basics

.NET Core apps run in one of two modes: framework-dependent or self-contained. On my MacBook, a minimal self-contained ASP.NET Core application is 93 MB and has 350 files. By contrast, a minimal framework-dependent app is 239 KB and has 5 files.

You can produce both kinds of apps with these command line instructions.

dotnet new web
dotnet publish --runtime osx-x64 --output bin/self_contained_app/
dotnet publish --output bin/framework_dependent_app/

When the app runs, it is functionally equivalent in both modes. So why are there different modes? As the docs explain well:

framework-dependent deployment relies on the presence of a shared system-wide version of .NET Core…. [A] self-contained deployment doesn’t rely on the presence of shared components on the target system. All components…are included with the application.

This document does a great job of explaining the advantages of each mode.

The shared framework

To put it simply, a .NET Core shared framework is a folder of assemblies (*.dll files) that are not in the application folder. These assemblies version and release together. This folder is one part of the “shared system-wide version of .NET Core”, and is usually found in C:/Program Files/dotnet/shared.

When you run dotnet.exe WebApp1.dll, the .NET Core host must

  1. discover the names and versions of your app dependencies
  2. find those dependencies in common locations.

These dependencies are found in a variety of locations, including, but not limited to, the shared frameworks. In a previous post, I briefly explained how the deps.json and runtimeconfig.json files configure the host’s behavior. See that post for more details.

The .NET Core host reads the *.runtimeconfig.json file to determine which shared framework(s) to load. Its contents may look like this:

{ "runtimeOptions": { "framework": { "name": "Microsoft.AspNetCore.App", "version": "2.1.1" } } } 

The shared framework name is just that – a name. By convention, this name ends in “.App”, but it could be anything, like “FooBananaShark”.

The shared framework version represents the minimum version. The .NET Core host will never run on a lower version, but it may try to run on a higher one.

Comparing Microsoft.NETCore.App, AspNetCore.App, and AspNetCore.All

As of .NET Core 2.2, there are three shared frameworks.

  • Microsoft.NETCore.App: The base runtime. It supports things like System.Object, List<T>, string, memory management, file and network IO, threading, etc.
  • Microsoft.AspNetCore.App: The default web runtime. It imports Microsoft.NETCore.App, and adds APIs to build an HTTP server using Kestrel, Mvc, SignalR, Razor, and parts of EF Core.
  • Microsoft.AspNetCore.All: Integrations with third-party stuff. It imports Microsoft.AspNetCore.App. It adds support for EF Core + SQLite, extensions that use Redis, config from Azure Key Vault, and more. (Will be deprecated in 3.0.)

Relationship to the NuGet package

The .NET Core SDK generates the runtimeconfig.json file. In .NET Core 1 and 2, it uses two pieces from the project configuration to determine what goes in the framework section of the file:

  1. the MicrosoftNETPlatformLibrary property. By default this is set to "Microsoft.NETCore.App" for all .NET Core projects.
  2. NuGet restore results, which must include a package by the same name.

The .NET Core SDK adds an implicit package reference to Microsoft.NETCore.App to all projects. ASP.NET Core overrides the default by setting MicrosoftNETPlatformLibrary to "Microsoft.AspNetCore.App".

The NuGet package, however, does not provide the shared framework. I repeat, the NuGet package does not provide the shared framework. (I’ll repeat once more below.) The NuGet package only provides a set of APIs used by the compiler and a few other SDK bits. The shared framework files come from runtime installers found on https://aka.ms/dotnet-download, or bundled in Visual Studio, Docker images, and some Azure services.

Version roll-forward

As mentioned above, runtimeconfig.json is a minimum version. The actual version used depends on a rollforward policy documented in great detail by Microsoft. The most common way this applies is:

  • If an app minimum version is 2.1.0, the highest 2.1.* version will be loaded.

Layered shared frameworks

This feature was added in .NET Core 2.1.

Shared frameworks can depend on other shared frameworks. This was introduced to support ASP.NET Core, which converted from a package runtime store to a shared framework.

For example, if you look inside the $DOTNET_ROOT/shared/Microsoft.AspNetCore.All/$version/ folder, you will see a Microsoft.AspNetCore.All.runtimeconfig.json file.

$ cat /usr/local/share/dotnet/shared/Microsoft.AspNetCore.All/2.1.2/Microsoft.AspNetCore.All.runtimeconfig.json
{
  "runtimeOptions": {
    "tfm": "netcoreapp2.1",
    "framework": {
      "name": "Microsoft.AspNetCore.App",
      "version": "2.1.2"
    }
  }
}

Multi-level lookup

This feature was added in .NET Core 2.0.

The host will probe several locations to find a suitable shared framework. It starts by looking in the dotnet root, which is the directory containing the dotnet executable. This can also be overridden by setting the DOTNET_ROOT environment variable to a folder path. The first location probed is:

$DOTNET_ROOT/shared/$name/$version

If a folder is not there, it will attempt to look in pre-defined global locations using multi-level lookup. This can be turned off by setting the environment variable DOTNET_MULTILEVEL_LOOKUP=0. The global locations are OS-specific (for example, C:/Program Files/dotnet on Windows or /usr/local/share/dotnet on macOS), and within them the host probes for directories in:

$GLOBAL_DOTNET_ROOT/shared/$name/$version

ReadyToRun

The assemblies in the shared frameworks are pre-optimized with a tool called crossgen. This produces “ReadyToRun” versions of .dll’s which are optimized for specific operating systems and CPU architectures. The primary performance gain is that this reduces the amount of time the JIT spends preparing code on startup.

Pitfalls

I think every .NET Core developer has fallen into one of these pitfalls at some point. I’ll attempt to explain how this happens.

HTTP Error 502.5 Process Failure

This is by far the most common pitfall when hosting ASP.NET Core in IIS or running on Azure Web Services. It typically happens after a developer upgrades a project, or when an app is deployed to a machine which hasn’t been updated recently. The real error is often that a shared framework cannot be found, and the .NET Core application cannot start without it. When dotnet fails to launch the app, IIS issues the HTTP 502.5 error, but does not surface the internal error message.

“The specified framework was not found”

It was not possible to find any compatible framework version
The specified framework 'Microsoft.AspNetCore.App', version '2.1.3' was not found.
  - Check application dependencies and target a framework version installed at:
      /usr/local/share/dotnet/
  - Installing .NET Core prerequisites might help resolve this problem:
      http://go.microsoft.com/fwlink/?LinkID=798306&clcid=0x409
  - The .NET Core framework and SDK can be installed from:
      https://aka.ms/dotnet-download
  - The following versions are installed:
      2.1.1 at [/usr/local/share/dotnet/shared/Microsoft.AspNetCore.App]
      2.1.2 at [/usr/local/share/dotnet/shared/Microsoft.AspNetCore.App]

This error is often found lurking behind HTTP 502.5 errors or Visual Studio Test Explorer failures.

This happens when the runtimeconfig.json file specifies a framework name and version, and the host cannot find an appropriate version using the multi-level lookup and rollforward policies, as explained above.
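As a quick diagnostic, dotnet --list-runtimes (available from .NET Core 2.1 onwards) prints the shared frameworks installed on a machine, with their versions and paths. On the machine from the error output above, it would print something like:

dotnet --list-runtimes
Microsoft.AspNetCore.App 2.1.1 [/usr/local/share/dotnet/shared/Microsoft.AspNetCore.App]
Microsoft.AspNetCore.App 2.1.2 [/usr/local/share/dotnet/shared/Microsoft.AspNetCore.App]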

Updating the NuGet package for Microsoft.AspNetCore.App

The NuGet package for Microsoft.AspNetCore.App does not provide the shared framework. It only provides the APIs used by the C#/F# compiler and a few SDK bits. You must download and install the shared framework separately. See https://aka.ms/dotnet-download.

Also, because of rollforward policies, you don’t need to update the NuGet package version to get your app to run on a new shared framework version.

It was probably a design mistake on the part of the ASP.NET Core team (which I’m on) to represent the shared framework as a NuGet package in the project file. The packages which represent shared frameworks are not normal packages. Unlike most packages, they are not self-sufficient. It is reasonable to expect that when a project uses a <PackageReference>, NuGet is able to install everything needed, and frustrating that these special packages deviate from the pattern. Various proposals have been made to fix this. I’m hopeful one will land soon-ish.

New project templates and docs for ASP.NET Core 2.1 showed users that they only needed to have this line in their project.

<PackageReference Include="Microsoft.AspNetCore.App" />

All other <PackageReference>’s must include a Version attribute. The version-less package reference only works if the project begins with <Project Sdk="Microsoft.NET.Sdk.Web">, and only works for the Microsoft.AspNetCore.{App, All} packages. The Web SDK will automatically pick a version of these packages based on other values in the project, like <TargetFramework> and <RuntimeIdentifier>.

This magic does not work if you specify a version on the package reference element, or if you’re not using the Web SDK. It’s hard to recommend a good solution because the best approach depends on your level of understanding and the project type.

Publish trimming

When you run dotnet publish to create a framework-dependent app, the SDK uses the NuGet restore result to determine which assemblies should be in the publish folder. Some assemblies are copied from NuGet packages, and others are not, because they are expected to be in a shared framework.

This can easily go wrong because ASP.NET Core is available as a shared framework and as NuGet packages. The trimming attempts to do some graph math to examine transitive dependencies, upgrades, etc., and pick the right files accordingly.

Take for example this project:

<PackageReference Include="Microsoft.AspNetCore.App" Version="2.1.1" />
<PackageReference Include="Microsoft.AspNetCore.Mvc" Version="2.1.9" />

MVC is actually part of Microsoft.AspNetCore.App, but when dotnet publish runs, it sees that your project has decided to upgrade “Microsoft.AspNetCore.Mvc.dll” to a version which is higher than what Microsoft.AspNetCore.App 2.1.1 includes, so it will put Mvc.dll into your publish folder.

This is less than ideal because your application grows in size and you don’t get the ReadyToRun-optimized version of Microsoft.AspNetCore.Mvc.dll. This can happen unintentionally if you get upgraded transitively through a ProjectReference or via a third-party dependency.

Confusing the target framework moniker with the shared framework

It’s easy to think that "netcoreapp2.0" == "Microsoft.NETCore.App, v2.0.0". This is not true. A target framework moniker (aka TFM) is specified in a project using the <TargetFramework> element. “netcoreapp2.0” is meant to be a human-friendly way to express which version of .NET Core you would like to use.

The pitfall of a TFM is that it is too short. It cannot express things like multiple shared frameworks, patch-specific versioning, version rollforward, output type, and self-contained vs framework-dependent deployment. The SDK will attempt to infer many of these settings from the TFM, but it cannot infer everything.

So, more accurately, "netcoreapp2.0" implies "Microsoft.NETCore.App, at least v2.0.0".

Confusing project settings

The final pitfall I will mention is about project settings. There are many, and the terminology and setting names don’t always line up. It’s a confusing set of terms, so it isn’t your fault if you get them mixed up.

Below, I’ve listed some common project settings and what they actually mean.

<PropertyGroup>
  <TargetFramework>netcoreapp2.1</TargetFramework>
  <!-- Actual meaning: * The API set version to use when resolving compilation references from NuGet packages. -->

  <TargetFrameworks>netcoreapp2.1;net471</TargetFrameworks>
  <!-- Actual meaning: * Compile for two different API version sets. This does not represent multi-layered shared frameworks. -->

  <MicrosoftNETPlatformLibrary>Microsoft.AspNetCore.App</MicrosoftNETPlatformLibrary>
  <!-- Actual meaning: * The name of the top-most shared framework -->

  <RuntimeFrameworkVersion>2.1.2</RuntimeFrameworkVersion>
  <!-- Actual meaning: * version of the implicit package reference to Microsoft.NETCore.App which then becomes the _minimum_ shared framework version. -->

  <RuntimeIdentifier>win-x64</RuntimeIdentifier>
  <!-- Actual meaning: * Operating system kind + CPU architecture -->

  <RuntimeIdentifiers>win-x64;win-x86</RuntimeIdentifiers>
  <!-- Actual meaning: * A list of operating systems and CPU architectures which this project _might_ run on. You still have to select one by setting RuntimeIdentifier. -->

</PropertyGroup>

<ItemGroup>

  <PackageReference Include="Microsoft.AspNetCore.App" Version="2.1.2" />
  <!-- Actual meaning: * Use the Microsoft.AspNetCore.App shared framework. * Minimum version = 2.1.2 -->

  <PackageReference Include="Microsoft.AspNetCore.Mvc" Version="2.1.2" />
  <!-- Actual meaning: * Use the Microsoft.AspNetCore.Mvc package. * Exact version = 2.1.2 -->

  <FrameworkReference Include="Microsoft.AspNetCore.App" />
  <!-- Actual meaning: * Use the Microsoft.AspNetCore.App shared framework. (This is new and unreleased...see https://github.com/dotnet/sdk/pull/2486) -->

</ItemGroup>

Closing

The shared framework is an optional feature of .NET Core, and I think it’s a reasonable default for most users despite the pitfalls. I still think it’s good for .NET Core developers to understand what goes on under the hood, and hopefully this was a good overview of the shared frameworks feature. I tried to link to official docs and guidance where possible so you can find more info. If you have more questions, leave a comment below.

More info

https://github.com/dotnet/cli/blob/v2.1.400/Documentation/specs/runtime-configuration-file.md

Thursday, 30 August 2018 / Published in Uncategorized



Introduction

CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is a security check that distinguishes humans from computers. Computers or bots are not capable of solving a Captcha challenge. A Captcha is automatically generated from a random string, generally rendered as an image of text, numbers, or a combination of both. A human has to enter the correct Captcha code to pass through the security check. Captcha technology is mostly used to block spammers who try to sign up and hack information from Web sites, blogs, or forums.

Google provides reCAPTCHA, a free service to create Captchas that protect a Web site from spam and abuse. In this article, I will explain how to use a Captcha in a sample ASP.NET Web application.

ASP.NET Captcha Application

To create a Captcha application in ASP.NET, open Visual Studio 2015 -> File menu -> New, then Project. It will open a new project window. Choose ASP.NET Web Application, specify the project name "CaptchaSample", and then click OK. You can see this in Figure 1.

Figure 1: .NET New Web Application

From the next screen of the window, choose Web Forms and click OK. The system will create a basic Web Form application solution for you (see Figure 2).

Figure 2: Select Web Form

Next, add a new ASP.NET Web Form page, "BankAccountRegistration.aspx". To add a new page, right-click the project and choose Add, New Item. From the New Item dialog, choose Web, then Web Form with Master Page. Provide the specific name for the page and click OK, as shown in Figure 3.

Figure 3: Select Web Form with Master Page

After that, select your master page for the page and click the OK button (see Figure 4).

Figure 4: Select Master Page

Next, to add Captcha, I have added another ASP.NET page where I will create a Captcha image and use it on the BankAccountRegistration.aspx page.

So, after adding the "Captcha.aspx" page, my solution will look similar to Figure 5.

Figure 5: Solution Screen Shot

BankAccountRegistration.aspx

Refer to the following HTML code of the BankAccountRegistration page created earlier in this article.

<%@ Page Title="" Language="C#" MasterPageFile="~/Site.Master"
   AutoEventWireup="true"
   CodeBehind="BankAccountRegistration.aspx.cs"
   Inherits="CaptchaSample.Registration" %>
<asp:Content ID="Content1" ContentPlaceHolderID="MainContent"
      runat="server">
   <table>
      <tr>
         <td colspan="2">Online Bank Account Creation</td>
      </tr>
      <tr>
         <td >Account Holder Name</td>
         <td>
            <asp:TextBox runat="server" ID="txtFullName">
            </asp:TextBox>
         </td>
      </tr>
      <tr>
         <td>Email Id</td>
         <td>
            <asp:TextBox runat="server" ID="txtEmail">
            </asp:TextBox>
         </td>
      </tr>
      <tr>
         <td>Account Type
         </td>
         <td>
            <asp:TextBox runat="server" ID="txtaccounttype">
            </asp:TextBox>
         </td>
      </tr>
      <tr>
         <td>PAN Number</td>
         <td>
            <asp:TextBox runat="server" ID="txtpan">
            </asp:TextBox>
         </td>
      </tr>
      <tr>
         <td>User Name</td>
         <td>
            <asp:TextBox runat="server" ID="txtUserName">
            </asp:TextBox>
         </td>
      </tr>
      <tr>
         <td>Password</td>
         <td>
            <asp:TextBox runat="server" ID="txtPassword"
               TextMode="Password"></asp:TextBox>
         </td>
      </tr>
      <tr>
         <td>ReEnter Password</td>
         <td>
            <asp:TextBox runat="server" ID="txtPassword1"
               TextMode="Password"></asp:TextBox>
         </td>
      </tr>
      <tr>
         <td>Verification Code</td>
         <td>
            <asp:Image ID="Image2" runat="server" Height="55px"
               ImageUrl="~/SampleCaptcha.aspx" Width="186px" />
            <br />
            <asp:Label runat="server" ID="lblCaptchaMessage">
            </asp:Label>
         </td>
      </tr>
      <tr>
         <td>Enter Verification Code</td>
         <td>
            <asp:TextBox runat="server" ID="txtVerificationCode">
            </asp:TextBox>
         </td>
      </tr>
      <tr>
         <td colspan="2">
            <asp:Button runat="server" ID="btnSubmit"
               Text="Submit"
               OnClick="btnSubmit_Click" />
         </td>
      </tr>
   </table>

</asp:Content>

BankAccountRegistration.aspx.cs

The following C# server-side code checks that the value entered in the textbox matches the value stored in session; if they are the same, the Captcha is right, otherwise it is wrong.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;


namespace CaptchaSample
{
   public partial class Registration : System.Web.UI.Page
   {
      protected void Page_Load(object sender, EventArgs e)
      {

      }
      protected void btnSubmit_Click(object sender, EventArgs e)
      {
         if (txtVerificationCode.Text.ToLower() ==
            Session["CaptchaVerify"].ToString().ToLower())
         {
            Response.Redirect("Default.aspx");
         }
         else
         {
            lblCaptchaMessage.Text = "You have entered wrong Captcha. Please enter correct Captcha!";
            lblCaptchaMessage.ForeColor = System.Drawing.Color.Red;
         }
      }

   }
}
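The page that actually renders the Captcha image is not shown here (the markup above points its Image control at ~/SampleCaptcha.aspx). As a rough sketch only, with the class name, font, and image size being my assumptions rather than the original article's code, its code-behind could generate a random string, store it in Session["CaptchaVerify"], and write the rendered bitmap to the response:

using System;
using System.Drawing;
using System.Drawing.Imaging;
using System.IO;

namespace CaptchaSample
{
   public partial class SampleCaptcha : System.Web.UI.Page
   {
      protected void Page_Load(object sender, EventArgs e)
      {
         // Generate a random 6-character code and keep it for later verification.
         string code = Guid.NewGuid().ToString("N").Substring(0, 6);
         Session["CaptchaVerify"] = code;

         // Draw the code onto a bitmap sized to match the <asp:Image> control.
         using (var bitmap = new Bitmap(186, 55))
         using (var graphics = Graphics.FromImage(bitmap))
         using (var font = new Font("Arial", 24, FontStyle.Bold | FontStyle.Italic))
         {
            graphics.Clear(Color.LightGray);
            graphics.DrawString(code, font, Brushes.DarkBlue, 10, 10);

            // PNG encoding needs a seekable stream, so buffer it first.
            using (var ms = new MemoryStream())
            {
               bitmap.Save(ms, ImageFormat.Png);
               Response.ContentType = "image/png";
               Response.BinaryWrite(ms.ToArray());
            }
         }
      }
   }
}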

Default.aspx

The user is redirected to this default page when the Captcha entry matches successfully. Refer to the following HTML code.

<%@ Page Title="Home Page"
   Language="C#" MasterPageFile="~/Site.Master"
   AutoEventWireup="true" CodeBehind="Default.aspx.cs"
   Inherits="CaptchaSample._Default" %>

<asp:Content ID="BodyContent" ContentPlaceHolderID="MainContent"
      runat="server">
   <br />
      <asp:Label runat="server" ID="lblCaptchaMessage"
         ForeColor="Green">
      </asp:Label>

</asp:Content>

Default.aspx.cs

using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using System.Web.UI;
using System.Web.UI.WebControls;

namespace CaptchaSample
{
   public partial class _Default : Page
   {
      protected void Page_Load(object sender, EventArgs e)
      {
         lblCaptchaMessage.Text = "You have entered correct Captcha code";
         lblCaptchaMessage.ForeColor = System.Drawing.Color.Green;
      }
   }
}

Let’s execute the application now. Press F5 to run the project; it will open the bank account registration page, as shown in Figure 6. Entering the wrong Captcha will show a validation error.

Figure 6: Bank Account Registration Page with Captcha

Conclusion

There are a few disadvantages to using Captcha. Many people consider it an annoyance, and visually impaired individuals cannot use an image-based Captcha, while audio Captcha is often difficult to understand. It has also been observed that adding a Captcha can cause a considerable drop in incoming traffic.

That’s all for today. Happy coding!

Thursday, 30 August 2018 / Published in Uncategorized

In this demo, I will show how to utilize the AutoMapper library efficiently. AutoMapper makes our lives easy with minimal steps. In a nutshell, AutoMapper is an object-object mapper: it transforms an input object of one type into an output object of another type.

Requirements

  • Visual Studio 2017.

  • Auto Mapper NuGet Package

  • Auto Mapper Dependency Injection Package

In this example, I’ve taken two classes, Employee and EmployeeModel.

namespace ASPNETCORE_AUTOMAPPER.Models
{
    public class Employee
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Designation { get; set; }
        public string City { get; set; }
        public string State { get; set; }
    }
}

namespace ASPNETCORE_AUTOMAPPER.Models
{
    public class EmployeeModel
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Designation { get; set; }
        public Address Address { get; set; }
    }

    public class Address
    {
        public string City { get; set; }
        public string State { get; set; }
    }
}

We receive an Employee object from the end user and assign it to an EmployeeModel property by property, like below. This is a tedious job, and in real-world scenarios we may have plenty of properties as well as complex types, so this is a real problem.

[HttpPost]
public EmployeeModel Post([FromBody] Employee employee)
{
    EmployeeModel empmodel = new EmployeeModel();
    empmodel.Id = employee.Id;
    empmodel.Name = employee.Name;
    empmodel.Designation = employee.Designation;
    empmodel.Address = new Address()
    {
        City = employee.City,
        State = employee.State
    };
    return empmodel;
}

To overcome this situation, we have a library called AutoMapper.

Incorporate this library into your application by following the below steps.

Open Visual Studio, click File – New Project, and select ASP.NET Core Web Application.

Click OK and you’ll get the next window, where you have to select Web Application (MVC).

As soon as you click the OK button, your application is ready.

Now, the actual AutoMapper part takes place. For that, we need to add NuGet references to the solution. Make sure to add these two references:

  • Add the main AutoMapper NuGet package to the solution.

  • Now, add the AutoMapper dependency injection package (the AutoMapper.Extensions.Microsoft.DependencyInjection NuGet package).

  • Now, call AddAutoMapper from the Startup.cs file, as shown below:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddAutoMapper();
}

  • Now, create the MappingProfile.cs file under the root project and write the below snippet

public class MappingProfile : Profile
{
    public MappingProfile()
    {
        CreateMap<Employee, EmployeeModel>();
    }
}

  • Here the CreateMap method is used to map data between Employee and EmployeeModel.

Later we will also call the ForMember method, which is used when the source and destination classes have differently shaped members.

The Employee class should look like this:

namespace ASPNETCORE_AUTOMAPPER.Models
{
    public class Employee
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Designation { get; set; }
        public string City { get; set; }
        public string State { get; set; }
    }
}

EmployeeModel.cs should look like this:

namespace ASPNETCORE_AUTOMAPPER.Models
{
    public class EmployeeModel
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Designation { get; set; }
        public Address Address { get; set; }
    }

    public class Address
    {
        public string City { get; set; }
        public string State { get; set; }
    }
}

Employee.cs has City and State properties, but in EmployeeModel.cs we have an Address type instead. So if we try to map these two models, we end up with a missing type configuration error. To overcome that issue we have to use the ForMember method, which tells the mapper which properties it should map to that particular Address field. So we have to tweak the MappingProfile.cs file like below:

public class MappingProfile : Profile
{
    public MappingProfile()
    {
        CreateMap<Employee, EmployeeModel>()
            .ForMember(dest => dest.Address, opts => opts.MapFrom(src => new Address
            {
                City = src.City,
                State = src.State
            }));
    }
}

The next step is to hook this up from our controller; just follow the below snippet:

namespace ASPNETCORE_AUTOMAPPER.Controllers
{
    public class EmployeeController : Controller
    {
        private readonly IMapper _mapper;

        public EmployeeController(IMapper mapper)
        {
            _mapper = mapper;
        }

        public IActionResult Index()
        {
            return View();
        }

        [HttpPost]
        public EmployeeModel Post([FromBody] Employee employee)
        {
            EmployeeModel empmodel = _mapper.Map<Employee, EmployeeModel>(employee);
            return empmodel;
        }
    }
}

Here we have injected IMapper into EmployeeController and performed the mapping operation between Employee and EmployeeModel.

If you pass data to the Employee object, it will be mapped directly to the EmployeeModel object with the help of the mapper.

Now if you observe, EmployeeModel is filled in with all the property data with the help of the mapper. This is the actual beauty of AutoMapper.

So if you come across a requirement to map data from your DTOs to domain objects, choose AutoMapper and it will do all your work with less code.

Thursday, 30 August 2018 / Published in Uncategorized

Introduction

Given some implementation exposed through an interface (like a collection), where "IsSynchronized" selects the implementation type, either synchronized or not, this simple pattern inserts either a "real" lock or a no-op.

The result is a single implementation, with either selected synchronization behavior.

Using the code

The example use case for the pattern is, again, providing an implementation of an interface that may be either "synchronized" or not. The single implementation always enters a lock; that lock is simply a no-op in the non-synchronized case.

For example, this interface:

public interface IBag
{
    bool IsSynchronized { get; }

    void Invoke();
}

… which reports whether it IsSynchronized or not, can be implemented with a single class:

public class Bag
        : IBag
{
    private readonly ISyncLock syncLock;


    /// <summary>
    /// Constructor.
    /// </summary>
    public Bag(bool isSynchronized)
    {
        // SyncLock(null) locks on itself; the cast gives the ternary a common type.
        syncLock = isSynchronized
                ? (ISyncLock)new SyncLock(null)
                : NoOpSyncLock.Instance;
        IsSynchronized = isSynchronized;
    }


    public bool IsSynchronized { get; }

    public void Invoke()
    {
        using (syncLock.Enter()) {
            // Act ...
        }
    }
}
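A hypothetical caller then picks the behavior once, at construction time:

var sharedBag = new Bag(isSynchronized: true);   // Invoke() enters a real Monitor
var localBag = new Bag(isSynchronized: false);   // Invoke() pays only a no-op Dispose

sharedBag.Invoke();
localBag.Invoke();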

And so … the implementation is IOC, and DRY.

The ISyncLock implementation is like this:

/// <summary>
/// Defines a Monitor that is entered explicitly, and
/// exited when a returned <see cref="IDisposable"/> object is Disposed.
/// </summary>
public interface ISyncLock
{
    /// <summary>
    /// Enters the Monitor. Dispose the result to exit.
    /// </summary>
    IDisposable Enter();
}


/// <summary>
/// Implements <see cref="ISyncLock"/> with <see cref="Monitor.Enter(object)"/>
/// and <see cref="Monitor.Exit"/>.
/// </summary>
public sealed class SyncLock
        : ISyncLock,
                IDisposable
{
    private readonly object syncLock;


    /// <summary>
    /// Constructor.
    /// </summary>
    /// <param name="syncLock">Optional: if null, <see langword="this"/>
    /// is used.</param>
    public SyncLock(object syncLock)
        => this.syncLock = syncLock ?? this;


    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public IDisposable Enter()
    {
        Monitor.Enter(syncLock);
        return this;
    }

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public void Dispose()
        => Monitor.Exit(syncLock);
}


/// <summary>
/// Implements a do-nothing <see cref="ISyncLock"/>.
/// </summary>
public readonly struct NoOpSyncLock
        : ISyncLock,
                IDisposable
{
    /// <summary>
    /// Provides a "singleton" instance.
    /// </summary>
    public static readonly NoOpSyncLock Instance = new NoOpSyncLock();


    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public IDisposable Enter()
        => this;

    [MethodImpl(MethodImplOptions.AggressiveInlining)]
    public void Dispose() { }
}

The NoOpSyncLock should be very cheap.

Notice that you do not lock on the ISyncLock object, and you do not Dispose it directly: use the Enter method to get the correct IDisposable. That detail ensures correct encapsulation, and it is why the interface does not extend IDisposable.

Expanding:

Notice also that the lock implementation can be instrumented: if you want to enter using TryEnter and optionally throw exceptions, you can change that in a single place:

    public IDisposable Enter()
    {
        // myTimeout here is an assumed timeout field added to SyncLock.
        if (Monitor.TryEnter(syncLock, myTimeout))
            return this;
        throw new TimeoutException("Failed to acquire the lock.");
    }

OR, augment the interface with a TryEnter method:

    public IDisposable TryEnter(out bool gotLock)
    {
        if (Monitor.TryEnter(syncLock, myTimeout)) {
            gotLock = true;
            return this;
        }
        gotLock = false;
        return NoOpSyncLock.Instance;
    }

And then your implementation MUST check the gotLock out result, and do nothing unsafe if the lock was not acquired.
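For instance, at a hypothetical call site:

using (syncLock.TryEnter(out bool gotLock)) {
    if (gotLock) {
        // Act ...
    }
}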

Points of Interest

  • The lock implementation can be instrumented, and changed globally
  • The complete implementation selector is implemented in a single class (assuming this pattern can be used to implement your specific scenario) … DRY
  • The implementation is now using Inversion Of Control
  • The implementation cost for the no-op should be very cheap

History

First posting.

Thursday, 30 August 2018 / Published in Uncategorized

Virtually all web applications use some form of user analytics to determine which aspects of an application are popular and which are causing issues for users. Probably the best known is Google Analytics, but there are other similar services that offer additional options and features. One such service is Segment, which can act as a funnel into other analytics engines such as Google Analytics, Mixpanel, or Salesforce.

In this post I show how you can add the Segment analytics.js library to your ASP.NET Core application, to provide analytics for your application.

I’m only looking at how to add client-side analytics to a server-side rendered ASP.NET Core application, i.e. an MVC application using Razor. If you want to add analytics to an SPA app that uses Angular, for example, see the Segment documentation.

Client-side vs. Server-side tracking

Segment supports two types of tracking: client-side and server-side. The difference should be fairly obvious:

  • Client-side tracking uses JavaScript to make calls to the Segment API, to track page views, sign-ins, page clicks etc.
  • Server-side tracking happens on the server. That means you can send data that’s only available on the server, or that you wouldn’t want to send to a client.

Whether you want server-side tracking, client-side tracking, or both, depends on your requirements. Segment has a good breakdown of the pros and cons of both approaches on their docs.

In this post I’m going to add client-side tracking using Segment to an ASP.NET Core application.

Fetching an API key

I’ll assume you already have a Segment account; if not, head to https://app.segment.com/signup and sign up.

Signup page

Once you have configured your account, you’ll need to obtain an API key for your app. If you haven’t already, create a new source by clicking Add Source on the Home screen. Select the JavaScript source, and enter all the required fields.

Connect source

Once the source is configured, view the API keys for the source, and make a note of the Write key. This is the API key you will provide when calling the Segment API.

write key

With the Segment side complete, we can move on to your application. Even though we’re doing client-side tracking here, we need to do some work on the server.

Configuring the server-side components

Given I said I’m only looking at client-side tracking, you might be surprised that you need any server-side components. However, if you’re rendering your pages server side using Razor, you need a way of passing the API key and the user’s ID to the JavaScript code. The easiest way is to write the values directly into the JavaScript rendered in your layout.

Adding the configuration

First things first, you need somewhere to store the API key. The simplest place would be to dump it in appsettings.json, but you shouldn’t put values like API keys in there. The Segment key isn’t really that sensitive (we’ll be exposing it in JavaScript anyway) but, out of principle, it just shouldn’t be there.

Never store API keys in appsettings.json – store them in User Secrets, environment variables, or a password vault like Azure Key Vault.

Store the API key in the User Secrets JSON file for now, using a suitably descriptive key name:

{
  "Segment": {
    "ApiKey": "56f7fggjsGGyishfuknvyfGFDfg3643"
  }
}

Assuming you’re using the default web host builder (or similar) this value will be added to your IConfiguration object. Create a strongly-typed settings object for good measure:

public class SegmentSettings
{
    public string ApiKey { get; set; }
}

And bind it to your configuration in Startup.ConfigureServices:

public class Startup
{
    public Startup(IConfiguration configuration)
    {
        Configuration = configuration;
    }

    public IConfiguration Configuration { get; set; }
    public void ConfigureServices(IServiceCollection services)
    {
        services.Configure<SegmentSettings>(Configuration.GetSection("Segment"));
    }
}

Now that you’ve got the Segment API key available in your application, you can look at rendering the analytics.js JavaScript code.

Rendering the analytics code in Razor

The Segment JavaScript API is exposed as the analytics.js library. This library lets you send all sorts of analytics to Segment from a client, but at its simplest you just need to do three things:

  1. Load the analytics.js library
  2. Initialise the library with your API key
  3. Call page() to track a page view.

You can read about this and all the other options available in the quickstart guide in Segment’s documentation. I’m going to create a partial view called _SegmentPartial.cshtml, for rendering the JavaScript snippet. You can add this partial to your application by adding the following to your _Layout.cshtml.

@await Html.PartialAsync("_SegmentPartial")

The Razor partial itself consists almost entirely of the JavaScript snippet provided by Segment:

@inject IOptions<SegmentSettings> Settings
@{
    var apiKey = Settings.Value.ApiKey;
}
 !function(){var analytics=window.analytics=window.analytics||[];if(!analytics.initialize)if(analytics.invoked)window.console&&console.error&&console.error("Segment snippet included twice.");else{analytics.invoked=!0;analytics.methods=["trackSubmit","trackClick","trackLink","trackForm","pageview","identify","reset","group","track","ready","alias","debug","page","once","off","on"];analytics.factory=function(t){return function(){var e=Array.prototype.slice.call(arguments);e.unshift(t);analytics.push(e);return analytics}};for(var t=0;t
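The minified snippet above is cut off in this capture. The part that matters for this post is its tail, where the Razor-rendered key is actually used: the standard snippet ends by loading the library with your write key and recording the initial page view:

analytics.load("@apiKey");
analytics.page();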

There’s a couple of things to note here. We’re injecting the API key using the strongly-typed SegmentSettings options object directly into the view, and then writing the key out using @apiKey. This will HTML-encode the output, but given we know the API key is alphanumeric, this shouldn’t be an issue.

This is a special case, as we know the key is not coming from user input and contains a known set of safe values, but it’s bad practice really. Generally speaking, you should use one of the techniques discussed in the docs to inject values into JavaScript code.

If you reload your website, you should see the JavaScript snippet rendered to the page, and if you look in the debugger of your Segment Source you should see a tracking event for the page:

The Segment Debugger after tracking a page

Associating page data with a user

You’ve now got basic page analytics, but what if you want to send more information? The analytics.js library lets you track a variety of different properties and events, but often one of the most important is tracking individual users. This is extremely powerful, as it lets you track a user’s flow through your application, and see where they hit stumbling blocks, for example.

User tracking and privacy is obviously a hot-topic at the moment, but I’m going to just avoid that for now. You should always take into consideration your user’s expectation of privacy, especially with the recent GDPR legislation.

To associate multiple analytics.page() and analytics.track() calls with a specific user, you must first call analytics.identify() in your page. You should add this call just after the analytics.load() call and just before analytics.page() in our JavaScript snippet.

In order to track a user, you need a unique identifier. If a user is browsing anonymously, then Segment will assign an anonymous ID automatically; you don’t need to do anything. However, if a user has logged in to your app, you can associate your Segment data with them by providing a unique ID.

In this example, I’m going to assume you’re using a default ASP.NET Core Identity setup, so that when a user logs in to your app, a ClaimsPrincipal is set which contains two claims:

  • ClaimTypes.NameIdentifier: the unique identifier for the user
  • ClaimTypes.Name : the name of the user (often an email address)

For privacy/security reasons, you may not want to expose the unique id of your users to a third-party API (and the client browser). You can work around this by creating an additional unique GUID for each user, and adding an additional Claim to the ClaimsPrincipal on login. That’s beyond the scope of this post, so I’ll just use the two main claims for now.

The following Razor uses the User property on the page to check if the current user is authenticated. If they are, it extracts the id and email of the principal, and creates an anonymous "traits" object, with the details about the user we’re going to send to Segment. Finally, after loading the snippet and assigning the API key, we call analytics.identify(), passing in the user id, and the serialized traits object.

@inject IOptions<SegmentSettings> Settings
@using System.Security.Claims
@using System.Text.Encodings.Web
@{
    var apiKey = Settings.Value.ApiKey;
    var isAuthenticated = User?.Identity?.IsAuthenticated ?? false;
    if (isAuthenticated)
    {
        var id = User.Claims.First(x => x.Type == ClaimTypes.NameIdentifier).Value;
        var name = User.Claims.First(x => x.Type == ClaimTypes.Name).Value;
        var traits = new {username = name, email = name};
    }
}
 !function(){var analytics=window.analytics=window.analytics||[];if(!analytics.initialize)if(analytics.invoked)window.console&&console.error&&console.error("Segment snippet included twice.");else{analytics.invoked=!0;analytics.methods=["trackSubmit","trackClick","trackLink","trackForm","pageview","identify","reset","group","track","ready","alias","debug","page","once","off","on"];analytics.factory=function(t){return function(){var e=Array.prototype.slice.call(arguments);e.unshift(t);analytics.push(e);return analytics}};for(var t=0;t
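The snippet is again cut off in this capture. Its tail slots the identify call between analytics.load() and analytics.page(), serializing the traits for authenticated users. A rough sketch of that tail (the @Json.Serialize helper and the exact Razor escaping are my assumptions, not the original listing):

analytics.load("@apiKey");
@if (isAuthenticated)
{
    // Note: id and traits must be declared in a scope visible here.
    @:analytics.identify('@id', @Json.Serialize(traits));
}
analytics.page();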

Now if you log in to your application, you should see an additional identify call in the Segment debugger, containing the id and the additional traits. Actions taken by that user will be associated together, so you can easily follow the steps a user took before they ran into an issue, for example.

There’s rather more logic in this partial than I like to see in a view, so I suggest encapsulating it somewhere else, perhaps by converting it to a ViewComponent.
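As a rough sketch of that refactoring (the names here are made up, not from the post), a view component could own the claim-reading and hand a small model to its view:

using System.Linq;
using System.Security.Claims;
using Microsoft.AspNetCore.Mvc;

// Hypothetical model handed to the component's Razor view.
public class SegmentSnippetModel
{
    public string UserId { get; set; }
    public string Username { get; set; }
}

public class SegmentSnippetViewComponent : ViewComponent
{
    public IViewComponentResult Invoke()
    {
        var model = new SegmentSnippetModel();
        var principal = HttpContext.User;
        if (principal?.Identity?.IsAuthenticated ?? false)
        {
            model.UserId = principal.Claims
                .FirstOrDefault(c => c.Type == ClaimTypes.NameIdentifier)?.Value;
            model.Username = principal.Claims
                .FirstOrDefault(c => c.Type == ClaimTypes.Name)?.Value;
        }
        // Renders Views/Shared/Components/SegmentSnippet/Default.cshtml,
        // which would contain the script block shown above.
        return View(model);
    }
}

The layout would then render it with @await Component.InvokeAsync("SegmentSnippet"), keeping the page itself free of claims logic.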

There are many more things you can do to provide analytics for your application, but I’ll leave you to explore the excellent Segment documentation if you want to do more.

Summary

In this post I showed how you can use Segment’s analytics.js library to add client-side analytics to your ASP.NET Core application. Adding analytics is as simple as including a JavaScript snippet and providing an API key. I also showed how you can associate page actions with users by reading Claims from the ClaimsPrincipal and calling analytics.identify().

Thursday, 30 August 2018 / Published in Uncategorized

A Stovepipe Enterprise is a management anti-pattern that happens when different groups in a single company are responsible for designing their own systems from the ground up, completely independently of the other groups, and with no cross-group direction.  The result is that each team has its own environment, standards, architecture, and data sources, and none of the groups can work effectively with one another’s systems.

What you end up with is similar to a pipe organ: lots of individual pipes, none interacting with one another, each only able to produce a single sound.  Individually they are something, but together they could be much, much more.

Watch out for that guy in a mask!

The Rundown

  • Name: Stovepipe Enterprise
  • AKA: Silos
  • Summary: An organization allows its teams to develop completely independently of one another, resulting in "silos" or "pipes" of groups whose systems cannot work together well, if at all.
  • Type: Management
  • Common?: DUNNNNN…!  dun dun dun dun dunnnnnnnnnn!

Tell Me a Story!

I have a sneaking suspicion that any organization of sufficient size eventually becomes a stovepipe enterprise, all other things being equal.  Without an overall guiding vision, or a benevolent dictator at the helm, people tend to organize into like-minded groups, which results in software that fits those preconceptions.  We all want to fit in.

Up until recently, this was precisely what was happening to my company.  

When I got here seven years ago (!), we were fully-independent cowboy coders.  We each maintained our own little apps and our own little standards, and didn’t really work in teams.

Well, sir, I was just fidlin’ with that there vahriable, and suddenly…

Then, our growing stage began.  Our director of IT (we’ll call him Bill) started organizing us developers into groups.  At first, it was really just "put these guys over here so we can manage them more easily" type of stuff, but later each group got a dedicated manager, and then started subdividing into programmer-led teams.  This process took a few years, but at the end of it, programmers were working as units, getting software done much more quickly, and generally producing better results.

On the one hand, we were now developing better software much more quickly than we had been.  On the other, the teams had developed their own infrastructures, and so each team was, for example, getting common data from disparate sources.  None of the teams’ systems could talk to one another easily, resulting in the company having to maintain a great many different data sources for different teams.

So now we find ourselves in a difficult situation.  We are, in many ways, a stovepipe enterprise, though not to an irretrievable extent.  Obviously this whole "get the same data from different sources" thing cannot continue.  But how do we get the teams to use the same source?  And who decides what that source is?

The solution to the stovepipe enterprise is a counter-intuitive and incredibly rare one.  What we need, and what my company thankfully already has, is a benevolent dictator.

Do you suppose he knew he was going to be a salad?

Of course, "benevolence" is in the eye of the beholder.

Strategies to Solve

The benefit of a benevolent dictator, and why it works in business as opposed to government, is that there’s a single point of responsibility for all company-wide decisions.  That person can now direct each team as they see fit, and decide the "canonical" way to do things across teams, such as accessing common data.

In our particular case, our benevolent dictator is our IT director Bill.  Around eighteen months ago, he split off a few developers from several teams into a single "infrastructure" team, which is charged with making and maintaining the services the entire company IT division needs (and if you think this sounds like exacerbating the problem instead of solving it, you weren’t the only one).  

This infrastructure team is only just now coming out with new company-wide platforms, systems through which we can access common company data.  And boy, let me tell you, they have been a joy to use.  For one, we now have a single source of truth for this data, so if it’s ever wrong, we know where to fix it.

For another, we have a solitary point of contact for any of these services if they accidentally fail or don’t yet do what we need.  It’s been refreshing to know that we only need to consume one service, use one API, instead of wading through the muck of disused software only to find that our princess is in another castle.

Our benevolent dictator is slowly, surely, pulling us out of our stovepipes. I’ve always had faith that we were going to get there, and now, we’ve started seeing the results.

Summary

A stovepipe enterprise occurs when an organization’s teams are independent to the point of not being able to work with each other, and their software systems cannot interact.  The solution is usually guidance from above, in the form of a benevolent dictator willing to make difficult decisions and fall on their sword should the need arise.  If you’ve got one of these people at your company, consider yourself lucky.

Why yes, my ears DO wobble to and fro, thanks for asking.

Does your company suffer through being a stovepipe enterprise?  Did you find that elusive benevolent dictator, or did you solve this problem another way?  Share in the comments!  Or else head on back to the series index page to see some more anti-patterns.

Happy Coding!

Thursday, 30 August 2018 / Published in Uncategorized


One of the most common software design patterns you’ll find when developing C# applications is the Simple Factory Pattern: a pattern which returns an instance of one of several possible classes depending on the type of data provided. The returned classes typically share a common parent class and common methods, but each may perform its task differently or be optimised for different data or behaviours.

When software developers discuss typical software design patterns, they generally talk about the 23 GoF patterns from the book Design Patterns: Elements of Reusable Object-Oriented Software, which describes various development techniques and pitfalls in addition to providing twenty-three object-oriented design patterns. However, the Simple Factory Pattern is not discussed within the book!

In this post we will explore the Simple Factory Pattern further to understand when and why to use this pattern in a software development project. The code is available on GitHub.

In software development, a software design pattern is a reusable solution to commonly recurring problems: a description or template for solving a problem that can be applied in many different situations.

In 1994, the so-called Gang of Four (GoF) published their book, Design Patterns: Elements of Reusable Object-Oriented Software, in which they presented a catalog of simple and succinct solutions to commonly occurring design problems.

The book captured 23 patterns that enabled software architects to create flexible, elegant and ultimately reusable designs without having to rediscover or reinvent the solutions for themselves. It’s a great resource for developers who want to learn more about common software design patterns and how to implement them. However, the book doesn’t cover all software design patterns, and it definitely doesn’t cover some of the most frequently encountered ones.

The Simple Factory Pattern is probably one of the most widely used patterns and, at the same time, one of the most underused. I frequently come across scenarios in code bases where developers have encountered a problem and, instead of handling it elegantly, have polluted methods with additional lines of cruft logic, which can be the source of additional bugs or lead to scalability and adaptability issues later on.

The Simple Factory Pattern is a factory class in its simplest form (compared to the Factory Method Pattern or Abstract Factory Pattern): a factory object for creating other objects. In the simplest terms, a factory helps to keep all object creation in one place and avoids spreading "new" statements across the codebase.

The classes a Simple Factory Pattern returns will have the same parent class and methods, but will perform the task differently depending on the type of data supplied.

How the Simple Factory Pattern Works

Let’s take a look at the diagram to get a high-level view of how the Simple Factory Pattern works in C# and .NET Core. The simple factory design pattern is a form of abstraction: it hides the actual logic of an implementation of an object so the initialization code can focus on usage rather than the inner workings.

In the diagram, Manufacture is a base class, and the classes Chocolate and MotorVehicle are derived from it; the Manufacture class decides which of the subclasses to return, depending on the arguments provided.

The developer using the Manufacture class doesn’t really care which of the subclasses is returned, because each of them has the same methods but a different implementation. How the factory decides which one to return is entirely up to the data supplied. It could be very complex logic, but most often it is very simple.
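Since the diagram itself doesn’t reproduce well here, a minimal code sketch of the same idea follows; the class names come from the diagram, but everything else (the Produce method, the "kind" argument) is assumed for illustration:

using System;

// Base class from the diagram; it doubles as the simple factory.
public abstract class Manufacture
{
    public abstract string Produce();

    // Decides which subclass to return based on the argument provided.
    public static Manufacture Create(string kind)
    {
        return kind.Equals("chocolate", StringComparison.OrdinalIgnoreCase)
            ? (Manufacture)new Chocolate()
            : new MotorVehicle();
    }
}

public class Chocolate : Manufacture
{
    public override string Produce() => "Producing a chocolate bar";
}

public class MotorVehicle : Manufacture
{
    public override string Produce() => "Producing a motor vehicle";
}

Calling code only ever sees Manufacture and its Produce method; which subclass does the work is the factory’s business.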

Code Example

To build a hypothetical sample of the Simple Factory Pattern, consider a typical first-name/last-name scenario. Say we’re going to build an application that takes a username string input from various applications, but there is some inconsistency in how the string is supplied: one application passes it through as "FirstName LastName" and the other as "LastName, FirstName".

We wouldn’t want to pollute our code base with different string manipulation strategies when we could make use of different classes to handle this scenario based on the input received.

Deciding which version of the name we’ve been given can come down to a simple if statement.

We’ll start by defining a simple class that takes the username as a string argument in the constructor and allows you to fetch the split names back.

First, we’ll define a simple base class for the Username, consisting of the basic properties we need.

public class UserName
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
}

Derived classes

We can develop two very simple derived classes that extend the base class and split the username into two parts in the constructor. In this example we’ll base the class on the assumption that the username is separated by a space, for the FirstName-first scenario.

public class FirstNameFirst : UserName
{
    public FirstNameFirst(string username)
    {
        var index = username.Trim().IndexOf(" ", StringComparison.Ordinal);

        if (index <= 0) return;

        FirstName = username.Substring(0, index).Trim();
        LastName = username.Substring(index + 1).Trim();
    }
}

In the second class, we’ll base it on the assumption that the name is split by a comma.

public class LastNameFirst : UserName
{
    public LastNameFirst(string username)
    {
        var index = username.Trim().IndexOf(",", StringComparison.Ordinal);

        if (index <= 0) return;

        LastName = username.Substring(0, index).Trim();
        FirstName = username.Substring(index + 1).Trim();
    }
}

Build a Simple Factory Pattern

We’ll build a simple factory which tests for the existence of a comma and then returns an instance of one class or the other.

public class UsernameFactory
{
    public UserName GetUserName(string name)
    {
        if (name.Contains(",")) return new LastNameFirst(name);

        return new FirstNameFirst(name);
    }
}

Testing Our Factory Pattern

We can now test our factory using the xUnit unit testing framework as follows.


public class UsernameFactoryTests
{
    private readonly UsernameFactory _factory;

    public UsernameFactoryTests()
    {
        _factory = new UsernameFactory();
    }

    [Fact]
    public void ShouldGetFirstNameFirst()
    {
        // arrange
        var user = "Gary Woodfine";

        // act
        var username = _factory.GetUserName(user);

        // assert
        Assert.Equal("Gary", username.FirstName);
        Assert.Equal("Woodfine", username.LastName);
    }

    [Fact]
    public void ShouldGetLastNameFirst()
    {
        // arrange
        var user = "Woodfine, Gary";

        // act
        var username = _factory.GetUserName(user);

        // assert
        Assert.Equal("Gary", username.FirstName);
        Assert.Equal("Woodfine", username.LastName);
    }
}

As you can see, when we make a call to the GetUserName method we simply pass in the string, and the factory decides which version of the UserName instance to pass back to the application. The developer doesn’t need to concern themselves with any of the string manipulation required: the UsernameFactory hides the actual logic of the implementation so the calling code can focus on usage rather than the inner workings. All we need to do is pass the name string in, and the factory determines which class to use to format the data.

Summary

That is the fundamental principle of the Simple Factory Pattern: to create an abstraction that decides which of several possible classes to return. A developer simply calls a method of the class without knowing the implementation detail or which subclass actually implements the logic.

This approach helps to keep issues of data dependence separated from the classes’ useful methods.
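To see that separation pay off, imagine a third input format appears, say "LastName;FirstName" from another system (a hypothetical extension, not part of the original example). Supporting it takes one new class and one new branch in the factory, and no calling code changes:

// Hypothetical third format: "LastName;FirstName".
public class SemicolonSeparated : UserName
{
    public SemicolonSeparated(string username)
    {
        var index = username.Trim().IndexOf(";", StringComparison.Ordinal);

        if (index <= 0) return;

        LastName = username.Substring(0, index).Trim();
        FirstName = username.Substring(index + 1).Trim();
    }
}

// The factory grows by a single branch.
public class UsernameFactory
{
    public UserName GetUserName(string name)
    {
        if (name.Contains(";")) return new SemicolonSeparated(name);
        if (name.Contains(",")) return new LastNameFirst(name);

        return new FirstNameFirst(name);
    }
}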


Thursday, 30 August 2018 / Published in Uncategorized

In this demo, I will show how to utilize the AutoMapper library efficiently. AutoMapper makes our lives easier with minimal setup. In a nutshell, AutoMapper is an object-object mapper: it transforms an input object of one type into an output object of another type.

Requirements

  • Visual Studio 2017

  • AutoMapper NuGet package

  • AutoMapper dependency injection package

In this example, I’ve taken two classes, Employee and EmployeeModel.

namespace ASPNETCORE_AUTOMAPPER.Models
{
    public class Employee
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Designation { get; set; }
        public string City { get; set; }
        public string State { get; set; }
    }
}

namespace ASPNETCORE_AUTOMAPPER.Models
{
    public class EmployeeModel
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string Designation { get; set; }
        public Address Address { get; set; }
    }

    public class Address
    {
        public string City { get; set; }
        public string State { get; set; }
    }
}

We get an Employee object from the end user and assign it to an EmployeeModel property by property, as below. This is a tedious job, and in real-world scenarios we may have plenty of properties as well as complex types, so this is a real problem.

[HttpPost]
public EmployeeModel Post([FromBody] Employee employee)
{
    EmployeeModel empmodel = new EmployeeModel();
    empmodel.Id = employee.Id;
    empmodel.Name = employee.Name;
    empmodel.Designation = employee.Designation;
    empmodel.Address = new Address
    {
        City = employee.City,
        State = employee.State
    };
    return empmodel;
}

To overcome this situation, we have a library called AutoMapper.

Incorporate this library into your application by following the below steps.

Open Visual Studio, click File, then New Project, and select ASP.NET Core Web Application.

Click OK and you’ll get the window where you have to select Web Application (MVC).

As soon as you click the OK button, your application is ready.

Now the actual AutoMapper setup takes place. For that, we need to add NuGet references to the solution. Make sure to add these two references:

  • Add the main AutoMapper package to the solution.

  • Add the AutoMapper dependency injection package.

  • Call AddAutoMapper from the Startup.cs file as shown below:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddAutoMapper();
}

  • Now, create a MappingProfile.cs file under the root of the project and add the below snippet:

public class MappingProfile : Profile
{
    public MappingProfile()
    {
        CreateMap<Employee, EmployeeModel>();
    }
}

  • Here the CreateMap method sets up the mapping between Employee and EmployeeModel.

Further below, we will also call the ForMember method, which is used when the source and destination classes have members with different datatypes or shapes.

The Employee and EmployeeModel classes remain exactly as defined at the start of the post, so there is nothing further to change there.

The Employee.cs file has City and State properties, but in EmployeeModel.cs we have an Address type. If we try to map these two models as-is, we may end up with a missing type map configuration error. To overcome that issue, we use the ForMember method, which tells the mapper what it should map to that particular Address field. So we have to tweak the MappingProfile.cs file like below:

public class MappingProfile : Profile
{
    public MappingProfile()
    {
        CreateMap<Employee, EmployeeModel>()
            .ForMember(dest => dest.Address, opts => opts.MapFrom(src => new Address
            {
                City = src.City,
                State = src.State
            }));
    }
}
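As an optional extra (not part of the original steps), AutoMapper can verify a profile up front, which catches unmapped members like the Address mismatch at startup rather than at request time. A minimal sketch, assuming a using AutoMapper; directive:

// Validate the profile eagerly; AssertConfigurationIsValid throws an
// AutoMapperConfigurationException if any destination member has no mapping.
var config = new MapperConfiguration(cfg => cfg.AddProfile<MappingProfile>());
config.AssertConfigurationIsValid();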

The next step is to hook this up from our controller; just follow the below snippet:

namespace ASPNETCORE_AUTOMAPPER.Controllers
{
    public class EmployeeController : Controller
    {
        private readonly IMapper _mapper;

        public EmployeeController(IMapper mapper)
        {
            _mapper = mapper;
        }

        public IActionResult Index()
        {
            return View();
        }

        [HttpPost]
        public EmployeeModel Post([FromBody] Employee employee)
        {
            var empmodel = _mapper.Map<Employee, EmployeeModel>(employee);
            return empmodel;
        }
    }
}

Here we have injected IMapper into the EmployeeController and performed the mapping operation between Employee and EmployeeModel.

If you pass data in as an Employee object, it will be mapped directly to an EmployeeModel object with the help of the mapper.

Now, if you observe, the EmployeeModel is filled in with all the property data with the help of the mapper. This is the real beauty of AutoMapper.

So if you come across a requirement to map data from your DTOs to domain objects, choose AutoMapper and it will do all of the work with less code.
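If you also need the opposite direction, mapping the model back onto the domain object, the same profile can carry a second CreateMap that unpacks Address again. A sketch (illustrative only, placed inside the MappingProfile constructor alongside the mapping above):

// Reverse direction: EmployeeModel -> Employee, flattening Address back out.
CreateMap<EmployeeModel, Employee>()
    .ForMember(dest => dest.City, opts => opts.MapFrom(src => src.Address.City))
    .ForMember(dest => dest.State, opts => opts.MapFrom(src => src.Address.State));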