Tuesday, 13 November 2018 / Published in Uncategorized

Introduction

In this post, I would like to talk about extending your application and your DbContext to run arbitrary code when a save occurs.

The Backstory

While working with quite a few applications that work with databases, especially using Entity Framework, I noticed the pattern of saving changes to the database and then doing something else based on those changes. A few examples of that are as follows:

  • When the user state changes, reflect that in the UI.
  • When adding or updating a product, update the stock.
  • When deleting an entity, do another action like check for validity.
  • When an entity changes in any way (add, update, delete), send that out to an external service.

These are mostly akin to database triggers: when the data changes, some action needs to be performed. Those actions are not always database related, though; they are more a response to the change in the database, and sometimes just business logic.

As such, in one of these applications, I found a way to incorporate that behavior and clean up the repetitive code that would otherwise follow, while also keeping it maintainable by simply registering the triggers into the IoC container of ASP.NET Core.

In this post, we will be having a look at the following:

  • How to extend the DbContext to allow for the triggers
  • How to register multiple instances into the container using the same interface or base class
  • How to create entity instances from tracked changes so we can work with concrete items
  • How to limit our triggers to only fire under certain data conditions
  • Injecting dependencies into our triggers
  • Avoiding infinite loops in our triggers

We have a long enough road ahead so let’s get started.

Creating the Triggers Framework

ITrigger Interface

We will start off with the root of our triggers and that is the ITrigger interface.

using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore.ChangeTracking;

public interface ITrigger
{
    void RegisterChangedEntities(ChangeTracker changeTracker);
    Task TriggerAsync();
}
  • The RegisterChangedEntities method accepts a ChangeTracker so that, if need be, we can store the changes that happened for later use.
  • The TriggerAsync method actually runs our logic; we will see why these two are separate when we make changes to the DbContext.

TriggerBase Base Class

Next up, we will be looking at a base class that is not mandatory, though it does exist for two main reasons:

  1. To house the common logic of the triggers, including the state of the tracked entities
  2. To be able to filter triggers based on the entity they are meant for

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore.ChangeTracking;

public abstract class TriggerBase<T> : ITrigger
{
    protected IEnumerable<TriggerEntityVersion<T>> TrackedEntities;

    protected abstract IEnumerable<TriggerEntityVersion<T>> 
              RegisterChangedEntitiesInternal(ChangeTracker changeTracker);

    protected abstract Task TriggerAsyncInternal(TriggerEntityVersion<T> trackedTriggerEntity);

    public void RegisterChangedEntities(ChangeTracker changeTracker)
    {
        TrackedEntities = RegisterChangedEntitiesInternal(changeTracker).ToArray();
    }

    public async Task TriggerAsync()
    {
        foreach (TriggerEntityVersion<T> triggerEntityVersion in TrackedEntities)
        {
            await TriggerAsyncInternal(triggerEntityVersion);
        }
    }
}

Let’s break it down member by member and understand what’s with this base class:

  1. The class is generic in T; this ensures that the logic running in any of its descendants will only apply to the specific entity we want to run our trigger against.
  2. The protected TrackedEntities field holds on to the changed entities, both before and after the change so we can run our trigger logic against them.
  3. The abstract method RegisterChangedEntitiesInternal will be overridden in concrete implementations of this class and ensures that, given a ChangeTracker, it returns the set of entities we want to work against. This is not to say that it cannot return an empty collection; it's just that if we opt to implement a trigger via the TriggerBase class, it's highly likely we want to hold onto those instances for later use.
  4. The abstract method TriggerAsyncInternal runs our trigger logic against an entity we saved from the collection.
  5. The public method RegisterChangedEntities calls the abstract method RegisterChangedEntitiesInternal and then .ToArray(), to ensure that a deferred IEnumerable query actually executes, so that we don't end up with a collection that is re-evaluated later in the process in an invalid state. This is mostly a judgment call on my end, because it is easy to forget that IEnumerable queries have deferred execution.
  6. The public method TriggerAsync just enumerates over all of the entities calling TriggerAsyncInternal on each one.

Now that we discussed the base class, it’s time we move on to the definition of a TriggerEntityVersion.

The TriggerEntityVersion Class

The TriggerEntityVersion class is a helper class that serves the purpose of housing the old and the new instance of a given entity.

using System.Linq;
using System.Reflection;
using Microsoft.EntityFrameworkCore.ChangeTracking;

public class TriggerEntityVersion<T>
{
    public T Old { get; set; }
    public T New { get; set; }

    public static TriggerEntityVersion<TResult> 
     CreateFromEntityProperty<TResult>(EntityEntry<TResult> entry) where TResult : class, new()
    {
        TriggerEntityVersion<TResult> returnedResult = new TriggerEntityVersion<TResult>
        {
            New = new TResult(),
            Old = new TResult()
        };

        foreach (PropertyInfo propertyInfo in typeof(TResult)
             .GetProperties(BindingFlags.Instance | BindingFlags.Public)
             .Where(pi => entry.OriginalValues.Properties.Any(property => property.Name == pi.Name)))
        {
            // Copy the tracked original values onto the Old instance.
            if (propertyInfo.CanWrite && (propertyInfo.PropertyType == typeof(string) || 
                                          propertyInfo.PropertyType.IsValueType))
            {
                propertyInfo.SetValue(returnedResult.Old, entry.OriginalValues[propertyInfo.Name]);
            }
        }

        foreach (PropertyInfo propertyInfo in typeof(TResult)
             .GetProperties(BindingFlags.Instance | BindingFlags.Public)
             .Where(pi => entry.CurrentValues.Properties.Any(property => property.Name == pi.Name)))
        {
            // Copy the tracked current values onto the New instance.
            if (propertyInfo.CanWrite && (propertyInfo.PropertyType == typeof(string) || 
                                          propertyInfo.PropertyType.IsValueType))
            {
                propertyInfo.SetValue(returnedResult.New, entry.CurrentValues[propertyInfo.Name]);
            }
        }

        return returnedResult;
    }
}

The breakdown for this class is as follows:

  1. We have two properties of the same type, one representing the Old instance before any modifications were made and the other representing the New state after the modifications have been made.
  2. The factory method CreateFromEntityProperty uses reflection to turn an EntityEntry into an instance of our own entity type, since an EntityEntry is not easy to interrogate and work with directly. It creates the Old and New instances of our entity and copies over the original and current values that are being tracked, but only for properties that can be written to and are strings or value types (classes would most of the time represent other entities, owned properties excluded).

We will see an example of how this is used in the following section where we see how to implement concrete triggers.

Concrete Triggers

We will be creating two triggers to show off how they can differ and also how to register multiple triggers later on when we do the integration into the ServiceProvider.

Attendance Trigger

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using DbBroadcast.Models; // this is just to point to the `TriggerEntityVersion`, 
                          // will differ in your system
using Microsoft.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore.ChangeTracking;
using Microsoft.Extensions.Logging;

public class AttendanceTrigger : TriggerBase<ApplicationUser>
{
    private readonly ILogger<AttendanceTrigger> _logger;

    public AttendanceTrigger(ILogger<AttendanceTrigger> logger)
    {
        _logger = logger;
    }

    protected override IEnumerable<TriggerEntityVersion<ApplicationUser>> 
              RegisterChangedEntitiesInternal(ChangeTracker changeTracker)
    {
        return changeTracker
                .Entries<ApplicationUser>()
                .Where(entry => entry.State == EntityState.Modified)
                .Select(TriggerEntityVersion<ApplicationUser>.CreateFromEntityProperty);
    }

    protected override Task TriggerAsyncInternal(TriggerEntityVersion<ApplicationUser> trackedTriggerEntity)
    {
        _logger.LogInformation($"Update attendance for user {trackedTriggerEntity.New.Id}");
        return Task.CompletedTask;
    }
}

From the definition of this trigger, we can see the following:

  1. This trigger will apply to the ApplicationUser entity.
  2. Since the instance of the trigger is created via the ServiceProvider, we can inject dependencies via its constructor, as we did with the ILogger.
  3. The RegisterChangedEntitiesInternal method implements a query over the tracked entities of type ApplicationUser, taking only the ones that have been modified. We could check for additional conditions, but I would suggest doing that after the .Select call so that you can work with actual instances of your entity (see the sketch after this list).
  4. The TriggerAsyncInternal implementation will just log the Id of the user (or any other field we might want).
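For illustration, here is a sketch of such a post-.Select filter; the Email property on ApplicationUser is an assumption made for this example:

protected override IEnumerable<TriggerEntityVersion<ApplicationUser>> 
          RegisterChangedEntitiesInternal(ChangeTracker changeTracker)
{
    return changeTracker
            .Entries<ApplicationUser>()
            .Where(entry => entry.State == EntityState.Modified)
            .Select(TriggerEntityVersion<ApplicationUser>.CreateFromEntityProperty)
            // Filter on concrete instances: only react when the e-mail actually
            // changed (Email is an assumed property, for illustration only).
            .Where(version => version.Old.Email != version.New.Email);
}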

Ui Trigger

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.EntityFrameworkCore.ChangeTracking;
using Microsoft.Extensions.Logging;

using DbBroadcast.Models;

public class UiTrigger : TriggerBase<ApplicationUser>
{
    private readonly ILogger<UiTrigger> _logger;

    public UiTrigger(ILogger<UiTrigger> logger)
    {
        _logger = logger;
    }

    protected override IEnumerable<TriggerEntityVersion<ApplicationUser>> 
                          RegisterChangedEntitiesInternal(ChangeTracker changeTracker)
    {
        return changeTracker.Entries<ApplicationUser>().Select
                   (TriggerEntityVersion<ApplicationUser>.CreateFromEntityProperty);
    }

    protected override Task TriggerAsyncInternal
                  (TriggerEntityVersion<ApplicationUser> trackedTriggerEntity)
    {
        _logger.LogInformation($"Update UI for user {trackedTriggerEntity.New.Id}");;
        return Task.CompletedTask;
    }
}

This class is almost the same as the previous one (it exists mostly for example purposes), except that it logs a different message and tracks all changes to ApplicationUser entities regardless of their state.

Registering the Triggers

Now that we have written up our triggers, it’s time to register them. To register multiple implementations of the same interface or base class, all we need to do is make a change in the Startup.ConfigureServices method (or wherever you’re registering your services) as follows:

services.TryAddEnumerable(new []
{
    ServiceDescriptor.Transient<ITrigger, AttendanceTrigger>(), 
    ServiceDescriptor.Transient<ITrigger, UiTrigger>(), 
});

This way, you can have triggers of differing lifetimes, as many as you want (though they should be in line with the lifetime of your context, or you will get an error), and they stay easy to maintain. You could even use a configuration file to enable certain triggers at will :D.
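As a sketch of that configuration idea: assuming a made-up "Triggers:AttendanceEnabled" key in appsettings.json and access to the Configuration property in Startup (with Microsoft.Extensions.Configuration imported for GetValue), the registration could become conditional like so:

// Hypothetical: enable or disable individual triggers from configuration.
// The "Triggers:AttendanceEnabled" key is an assumption for this example.
List<ServiceDescriptor> triggerDescriptors = new List<ServiceDescriptor>
{
    ServiceDescriptor.Transient<ITrigger, UiTrigger>()
};

if (Configuration.GetValue<bool>("Triggers:AttendanceEnabled"))
{
    triggerDescriptors.Add(ServiceDescriptor.Transient<ITrigger, AttendanceTrigger>());
}

services.TryAddEnumerable(triggerDescriptors);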

Modifying the DbContext

Here, I will show two cases which can be useful depending on your requirements. You will also see that the implementation is the same; the difference is one of convenience, since for simple cases all you need to do is inherit, while for complex cases you need to make the changes manually.

Use a Base Class

If your context only inherits from DbContext, then you could use the following base class:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using DbBroadcast.Data.Triggers;
using Microsoft.EntityFrameworkCore;
using Microsoft.Extensions.DependencyInjection;

public abstract class TriggerDbContext : DbContext
{
    private readonly IServiceProvider _serviceProvider;

    public TriggerDbContext(DbContextOptions options, 
                            IServiceProvider serviceProvider)
        : base(options)
    {
        _serviceProvider = serviceProvider;
    }

    public override async Task<int> SaveChangesAsync
               (CancellationToken cancellationToken = new CancellationToken())
    {
        IEnumerable<ITrigger> triggers =
            _serviceProvider?.GetServices<ITrigger>()?.ToArray() ?? Enumerable.Empty<ITrigger>();

        foreach (ITrigger userTrigger in triggers)
        {
            userTrigger.RegisterChangedEntities(ChangeTracker);
        }

        int saveResult = await base.SaveChangesAsync(cancellationToken);

        foreach (ITrigger userTrigger in triggers)
        {
            await userTrigger.TriggerAsync();
        }

        return saveResult;
    }
}

Things to point out here are as follows:

  1. We inject the IServiceProvider so that we can reach out to our triggers.
  2. We override SaveChangesAsync (the same goes for all the other save methods of the context, though this one is the most used nowadays) and implement the changes.
    1. We get the triggers from the ServiceProvider (we could even filter them for a specific trigger type, but it's better to have them as-is because it keeps things simple).
    2. We run through each trigger and save the entities that have changes according to our trigger registration logic.
    3. We run the actual save against the database to ensure that everything worked properly (if there is a database error, the triggers won't run because the exception bubbles up).
    4. We then run each trigger.
    5. We return the result as if nothing happened :D.

Keep in mind that, given this implementation, you wouldn't want a trigger that updates the same entity it reacts to, or you might end up in a loop; either have firm rules for your triggers or simply don't change the same entity inside a trigger. One way to guard against re-entrancy is sketched below.
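If a trigger does need to save again, one possible guard (a sketch, not part of the original implementation) is a simple re-entrancy flag; since a DbContext instance is not used concurrently anyway, a plain bool field is enough:

private bool _triggersRunning;

public override async Task<int> SaveChangesAsync(CancellationToken cancellationToken = default(CancellationToken))
{
    if (_triggersRunning)
    {
        // This save was issued from inside a trigger; skip the trigger pass to avoid a loop.
        return await base.SaveChangesAsync(cancellationToken);
    }

    ITrigger[] triggers =
        _serviceProvider?.GetServices<ITrigger>()?.ToArray() ?? new ITrigger[0];

    foreach (ITrigger trigger in triggers)
    {
        trigger.RegisterChangedEntities(ChangeTracker);
    }

    int saveResult = await base.SaveChangesAsync(cancellationToken);

    _triggersRunning = true;
    try
    {
        foreach (ITrigger trigger in triggers)
        {
            await trigger.TriggerAsync();
        }
    }
    finally
    {
        _triggersRunning = false;
    }

    return saveResult;
}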

Using Your Existing Context

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using DbBroadcast.Data.Triggers;
using Microsoft.AspNetCore.Identity.EntityFrameworkCore;
using Microsoft.EntityFrameworkCore;
using DbBroadcast.Models;
using Microsoft.Extensions.DependencyInjection;

public class ApplicationDbContext : IdentityDbContext<ApplicationUser>
{
    private readonly IServiceProvider _serviceProvider;

    public ApplicationDbContext(DbContextOptions<ApplicationDbContext> options, 
                                IServiceProvider serviceProvider)
        : base(options)
    {
        _serviceProvider = serviceProvider;
    }

    public override async Task<int> SaveChangesAsync
                (CancellationToken cancellationToken = new CancellationToken())
    {
        IEnumerable<ITrigger> triggers =
            _serviceProvider?.GetServices<ITrigger>()?.ToArray() ?? Enumerable.Empty<ITrigger>();

        foreach (ITrigger userTrigger in triggers)
        {
            userTrigger.RegisterChangedEntities(ChangeTracker);
        }

        int saveResult = await base.SaveChangesAsync(cancellationToken);

        foreach (ITrigger userTrigger in triggers)
        {
            await userTrigger.TriggerAsync();
        }

        return saveResult;
    }
}

As you can see, this is nearly identical to the base class, but since this context already inherits from IdentityDbContext, you have to implement the behavior yourself.

To implement your own, you need to both update your constructor to accept a ServiceProvider and override the appropriate save methods. A sketch of the synchronous counterpart follows.
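For completeness, here is a sketch of how the synchronous SaveChanges could be overridden with the same pattern; blocking on TriggerAsync with GetAwaiter().GetResult() is a trade-off, so prefer the async version if your triggers do real asynchronous work:

public override int SaveChanges()
{
    ITrigger[] triggers =
        _serviceProvider?.GetServices<ITrigger>()?.ToArray() ?? new ITrigger[0];

    foreach (ITrigger trigger in triggers)
    {
        trigger.RegisterChangedEntities(ChangeTracker);
    }

    int saveResult = base.SaveChanges();

    foreach (ITrigger trigger in triggers)
    {
        // Synchronously wait on the trigger logic; acceptable here because the
        // triggers in this post complete synchronously.
        trigger.TriggerAsync().GetAwaiter().GetResult();
    }

    return saveResult;
}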

Conclusion

To make this work, we've taken advantage of inheritance, the strategy pattern for the triggers, and the ServiceProvider with its support for multiple registrations.

I hope you enjoyed this as much as I did tinkering with it, and I'm curious to find out what kind of triggers you might come up with.

Thank you and happy coding.

Tuesday, 13 November 2018 / Published in Uncategorized

Hello and Welcome

As some of you might have heard, ASP.NET Core 2.1.0 went live at the end of last month. One of the features that comes with this release is SignalR which, for those who do not know about it, is a library that allows for two-way communication between a web server and a browser over HTTP or WebSockets.

I won’t be going into details as to how SignalR works because there’s quite a bit of documentation on it provided in the link above, including a tutorial, so in this post we will be having a look at how we can unit test a SignalR hub so that we can make sure that our server is sending out the right signals.

The code for this exercise can be found here.

The Setup

Creating the web project

For this post, we're going to create a new ASP.NET Core 2.1 application (this should work with all of the ASP.NET Core web application templates), with no authentication or any other extras, because we have no interest in those.

Creating the test project

Then we will create a .NET Core test project which will have a reference to the following NuGet packages (these are the ones the test code below relies on):

  • Microsoft.NET.Test.Sdk
  • NUnit
  • NUnit3TestAdapter
  • Moq

I find that these are the bare minimum packages I work best with when unit testing.

Then we reference our own web application from the test project.

Creating the hub

Now that we have the projects out of the way, let’s register a simple hub in our web application.

Let’s create a file in the web application called SimpleHub and it will look like this:

namespace SignalRWebApp
{
    using System.Threading.Tasks;

    using Microsoft.AspNetCore.SignalR;

    public class SimpleHub : Hub
    {
        public override async Task OnConnectedAsync()
        {
            await Welcome();

            await base.OnConnectedAsync();
        }

        public async Task Welcome()
        {
            await Clients.All.SendAsync("welcome", new[] { new HubMessage(), new HubMessage(), new HubMessage() });
        }
    }
}

For this we will also create the class HubMessage, which is just a placeholder so that we don't use anonymous objects; it looks like this:

namespace SignalRWebApp
{
    public class HubMessage
    {

    }
}

This will just send a series of 3 messages to anyone who connects to the hub. I chose the arbitrary number 3 so that I can also test the content and length of the messages sent by the server.

In Startup.cs we add the following lines:

  • In ConfigureServices we add the line services.AddSignalR();
  • In Configure, before the app.UseMvc line, we add the line app.UseSignalR(builder => builder.MapHub<SimpleHub>("/hub"));

With this, we now have a working hub that clients can connect to.
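For reference, here is roughly how those registrations sit in Startup.cs; this is a sketch based on the default 2.1 MVC template, with the MVC lines being the template's own code, shown only for placement:

public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddSignalR(); // registers the SignalR services
}

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    // Register the hub route before MVC, as noted above.
    app.UseSignalR(builder => builder.MapHub<SimpleHub>("/hub"));
    app.UseMvc();
}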

Creating the client connection

To test this out, we will be using the install steps found here to install the JavaScript SignalR client so that we can use it in the browser and write our own script as follows:

@section Scripts
{
    <script>
        $(document).ready(() => {
            const connection = new signalR.HubConnectionBuilder().withUrl("/hub").build();

            connection.on("welcome", (messages) => {
                alert(messages);
            });

            connection.start().catch(err => console.error(err.toString()));
        });
    </script>
}

And now all we need to do is run the website and see that we get an alert with 3 objects.

The test

Now we get to the interesting part, I will paste the test here and then break it down.

namespace SignalRWebApp.Tests
{
    using System.Threading;
    using System.Threading.Tasks;

    using Microsoft.AspNetCore.SignalR;

    using Moq;

    using NUnit.Framework;

    [TestFixture]
    public class Test
    {
        [Test]
        public async Task SignalR_OnConnect_ShouldReturn3Messages()
        {
            // arrange
            Mock<IHubCallerClients> mockClients = new Mock<IHubCallerClients>();
            Mock<IClientProxy> mockClientProxy = new Mock<IClientProxy>();

            mockClients.Setup(clients => clients.All).Returns(mockClientProxy.Object);


            SimpleHub simpleHub = new SimpleHub()
            {
                Clients = mockClients.Object
            };

            // act
            await simpleHub.Welcome();


            // assert
            mockClients.Verify(clients => clients.All, Times.Once);

            mockClientProxy.Verify(
                clientProxy => clientProxy.SendCoreAsync(
                    "welcome",
                    It.Is<object[]>(o => o != null && o.Length == 1 && ((object[])o[0]).Length == 3),
                    default(CancellationToken)),
                Times.Once);
        }
    }
}

Now let’s break it down:

  1. In true testing fashion, the test is split up into 3 sections: arrange, which handles the setup for the test; act, which does the actual logic we want to test; and assert, which verifies that our logic actually behaved as we intended.
  2. SignalR hubs don't really contain a lot of logic; all they do is delegate the work to IHubCallerClients which, in turn, when sending a message, delegates the call to an IClientProxy.
  3. We then create a mock for both IHubCallerClients and IClientProxy
  4. We set up the IHubCallerClients mock so that when the All property is accessed, the instance of the IClientProxy mock is returned.
  5. We then create a SimpleHub and tell it to use our mock for its Clients delegation. Now we have full control over the flow.
  6. We do the call to SimpleHub.Welcome which starts the whole process of sending a message to the connected clients.
  7. In the first verification, we check that our mock of IHubCallerClients was indeed used and that it was only called once.
  8. The second verification is a bit more specific:
    • Firstly, we’re checking for a call to SendCoreAsync, this is because the method we used in the hub called SendAsync is actually an extension method that just wraps the parameters into an array and sends it to SendCoreAsync.
    • We check that indeed the method that is to be called on the client side is actually named welcome.
    • We then check that the message that was sent is not null (being an "and" clause, it would short circuit if it were null), that it has a Length of 1 (remember from earlier that the messages get wrapped into an additional array), and that the first element in that collection is indeed an object array with 3 items (our messages).
    • We also have to provide the default for CancellationToken, since Moq can't validate calls with optional parameters.
    • And lastly, we check that the message was sent only once.

And with that, we have now tested that our SignalR hub is indeed working as intended. Using this approach, in a separate project, I could also test in fine detail everything that was being passed in, including when the message is meant for only one specific client.

And that concludes our post for testing SignalR for ASP.NET Core 2.1.0. Hope you enjoyed it and see you next time,

Vlad V.

Tuesday, 13 November 2018 / Published in Uncategorized

Hello and Welcome :),

Today we are going to discuss how we can add functionality to an ASP.NET Core application outside of a request.

The code for this post can be found here.

The story

As some, if not all, of you know, web servers usually only work in the context of a request. So when we deploy an ASP.NET Core application (or any other web server) and it doesn't receive a request to serve a response to, it will just sit idle on the server waiting for one, be it from a browser or an API endpoint.

But there might be occasions when, depending on the application being built, we need to do some work outside of the context of a request. A list of such possible scenarios goes as follows:

  • Serving notifications to users
  • Scraping currency exchange rates
  • Doing data maintenance and archival
  • Communicating with a non-deterministic external system
  • Processing an approval workflow

Though there are not a whole lot of scenarios in which a web server would do more than just serve responses to requests (otherwise this would be common knowledge), it is useful to know how to embed such behavior in our applications without creating separate worker applications.

The Setup

The Project

First, let's create an ASP.NET Core application; in my example, I created a 2.1 MVC application.

We’re going to use this project to create a background worker as an example.

The Injectable Worker

Though this step is not mandatory for our work, we will create a worker class that will be instantiated via dependency injection, so we can test the worker class and keep it decoupled from the main application.

namespace AspNetBackgroundWorker
{
    using Microsoft.Extensions.Logging;

    public class BackgroundWorker
    {
        private readonly ILogger<BackgroundWorker> _logger;

        private int _counter;

        public BackgroundWorker(ILogger<BackgroundWorker> logger)
        {
            _counter = 0;
            _logger = logger;
        }

        public void Execute()
        {
            _logger.LogDebug(_counter.ToString());
            _counter++;
        }
    }
}

Notice that for this example this class doesn't do much except log out a counter; the reason we're using an ILogger is so that we can see the class in action, being created and having its dependencies injected.

Registering the worker in the inversion of control container

Inside the ConfigureServices method from the Startup.cs file we will introduce the following line:

services.AddSingleton<BackgroundWorker>();

It doesn’t need to be a singleton, but it will serve well for our purpose.

The implementation

Now that we have a testable and injectable worker class created and registered, we will move on to making it run in the background.

For this, we will be going into the Program.cs file and change it to the following:

using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;

namespace AspNetBackgroundWorker
{
    using System;
    using System.Threading;

    using Microsoft.Extensions.DependencyInjection;

    public class Program
    {
        public static void Main(string[] args)
        {
            // We split up the building of the webHost with running it so that we can do some additional work before the server actually starts
            var webHost = CreateWebHostBuilder(args).Build(); 

            // We create a dedicated background thread that will be running alongside the web server.
            Thread counterBackgroundWorkerThread = new Thread(CounterHandlerAsync) 
            {
                IsBackground = true
            };

            // We start the background thread, providing it with webHost.Services so that we can benefit from dependency injection.
            counterBackgroundWorkerThread.Start(webHost.Services); 

            webHost.Run(); // At this point, we're running the server as normal.
        }

        private static void CounterHandlerAsync(object obj)
        {
            // Here we check that the provided parameter is, in fact, an IServiceProvider
            IServiceProvider provider = obj as IServiceProvider 
                                        ?? throw new ArgumentException($"Passed in thread parameter was not of type {nameof(IServiceProvider)}", nameof(obj));

            // Using an infinite loop for this demonstration but it all depends on the work you want to do.
            while (true)
            {
                // Here we create a new scope for the IServiceProvider so that we can get already built objects from the Inversion Of Control Container.
                using (IServiceScope scope = provider.CreateScope())
                {
                    // Here we retrieve the singleton instance of the BackgroundWorker.
                    BackgroundWorker backgroundWorker = scope.ServiceProvider.GetRequiredService<BackgroundWorker>();

                    // And we execute it, which will log out a number to the console
                    backgroundWorker.Execute();
                }

                // This is only placed here so that the console doesn't get spammed with too many log lines
                Thread.Sleep(TimeSpan.FromSeconds(1));
            }
        }

        public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .UseStartup<Startup>();
    }
}

I have provided some inline comments so that it’s easier to follow along.

To test out this code, we need to run the application in console/project mode so that we can follow along on the console window.

Conclusion

Although this example doesn’t do much in the sense of a real-life scenario, it does show us how to make a background thread and run it alongside the web server.

Also, it is not mandatory to run the thread from the Program.cs file, but since this is a background worker that will do its thing forever, I thought it would be a nice spot. Some other places this could be done from are:

  • From a middleware
  • From a controller
  • From a class that can receive methods and delegates, to run ad-hoc, arbitrary work.

And since we are making use of the IServiceProvider, we can use all the registered services at our disposal, not only the ones we registered but also the ones the web server registered, for example ILogger, IOptions, or a DbContext.

I personally used this in a scenario where a SignalR hub would send out periodic notifications to specific users, and that needed to run outside the context of a web request.

I hope you enjoyed this post and found it useful. See you next time, Vlad V.

Tuesday, 13 November 2018 / Published in Uncategorized

Hello and welcome,

In this post, we are going to have a look at how we can write automated tests using entity framework core.

As in the recent previous posts, the code can be found here.

The story

A while back when working on a project, I wanted to do an integration test with a mock database so that I can validate that the database constraints are actually working and are configured properly.

So, in my search to find a viable solution that would do the job but also wouldn't carry a lot of overhead in both processing power and cleanup, I ended up on this documentation page.

If you take a look at the documentation page, you will see that it actually specifies two approaches. After having a look over both of them, I thought the InMemory approach would be better for me, since I didn't want to include an additional dependency on SQLite. Soon after I followed the example and implemented it, though, I found out that it wasn't in fact what I needed, mostly because of two major reasons:

  • The InMemory database requires a name. The reason for this is so that you can use the same database across tests. This was mostly an annoyance for me, since I didn't want to recreate the database in every test, and if I can avoid repeating myself, all the better.
  • The InMemory database is not a relational database; it's not really much of a database at all, since I found out the hard way that the constraints I configured in my database context weren't even validated. If I hadn't been following the Test Driven Development approach, I would have found out about a bug in my code when manually testing or, worse, in production.

This brings me to this post's topic: doing in-memory testing using SQLite, reusing the functionality, and even inspecting the generated queries.

The setup

First off, we will create a new ASP.NET Core MVC project with individual authentication (this is mostly because it comes already configured with a database context; of course, you can roll your own).

The next step, and where we will do most of the work, is the test project. For that, we will need a .NET Core console project.

Once we have the test project and referenced our WebApplication project (so we can reach the DbContext), we will need to install some Nuget packages:

  • Microsoft.NET.Test.Sdk (15.7.2)
    • This is needed to actually run the unit tests.
  • NUnit (3.10.1)
    • My testing framework of choice, though you can use any framework that suits your needs
  • NUnit3TestAdapter (3.10)
    • This is so that ReSharper and Visual Studio can find and run the tests
  • Microsoft.AspNetCore.App
    • Installing this package since we’re making use of the IdentityDbContext, though this is only because of the template I opted to use for this example.
  • Microsoft.EntityFrameworkCore.Sqlite (2.1.0)

The implementation

Now that we have all that we need, let’s move on to the implementation.

Out of habit, inside the test project, I usually create a folder called TestUtilities and inside that a static class called TestDatabaseContextFactory.

I’ve annotated the class line by line to explain what everything does.

namespace Test.TestUtilities
{
    using Microsoft.Data.Sqlite;
    using Microsoft.EntityFrameworkCore;
    using Microsoft.Extensions.Logging;

    using WebApplication.Data;

    public static class TestDatabaseContextFactory
    {
        public static ApplicationDbContext CreateDbContext()
        {
            DbContextOptionsBuilder<ApplicationDbContext> contextOptionsBuilder = // declaring the options we are going to use for the context we will be using for tests
                new DbContextOptionsBuilder<ApplicationDbContext>();

            LoggerFactory loggerFactory = new LoggerFactory(); // this will allow us to add loggers so we can actually inspect what code and queries EntityFramework produces.
            loggerFactory
                .AddDebug() // this logger will log to the Debug output
                .AddConsole(); // this logger will output to the console

            SqliteConnectionStringBuilder connectionStringBuilder = new SqliteConnectionStringBuilder { DataSource = ":memory:" }; // a more syntax-friendly approach to defining an in-memory connection string for our database; the alternative is to write "Data Source=:memory:" as a plain string.
            SqliteConnection connection = new SqliteConnection(connectionStringBuilder.ConnectionString); // create a connection to the InMemory database.
            connection.Open(); // open the connection

            contextOptionsBuilder.UseLoggerFactory(loggerFactory); // register the loggers inside the context options builder, this way, entity framework logs the queries
            contextOptionsBuilder.UseSqlite(connection); // we're telling entity framework to use the SQLite connection we created.
            contextOptionsBuilder.EnableSensitiveDataLogging(); // this will give us more insight when something does go wrong. It's ok to use it here since it's a testing project, but be careful about enabling this in production.

            ApplicationDbContext context = new ApplicationDbContext(contextOptionsBuilder.Options); // creating the actual DbContext

            context.Database.EnsureCreated(); // this command will create the schema and apply configurations we have made in the context, like relations and constraints

            return context; // return the context to be further used in tests.
        }
    }
}

With this in place we can now actually use it in our tests like so:

namespace Test
{
    using NUnit.Framework;

    using Test.TestUtilities;

    using WebApplication.Data;

    [TestFixture]
    public class TestingDatabaseCreation
    {
        [Test]
        public void TestCreation()
        {
            ApplicationDbContext context = TestDatabaseContextFactory.CreateDbContext();
        }
    }
}

Even though this test doesn’t assert anything, it will still pass, and with the added bonus that if we look in the output of the test run we will see all the queries that have been run against the database.
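To make a test actually assert something, here is a sketch that exercises a real constraint; it assumes the template's ApplicationUser and the default Identity schema, which has a unique index on NormalizedUserName (with Microsoft.EntityFrameworkCore imported for DbUpdateException):

[Test]
public void SavingUsersWithDuplicateNormalizedUserName_ShouldThrow()
{
    ApplicationDbContext context = TestDatabaseContextFactory.CreateDbContext();

    context.Users.Add(new ApplicationUser { UserName = "vlad", NormalizedUserName = "VLAD" });
    context.SaveChanges();

    context.Users.Add(new ApplicationUser { UserName = "vlad2", NormalizedUserName = "VLAD" });

    // SQLite enforces the unique index, unlike the InMemory provider.
    Assert.Throws<DbUpdateException>(() => context.SaveChanges());
}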

Conclusion

This might not seem like much, but it has become a standard in my projects: even though the idea of unit tests is nice, there are times when our tests need to be small and fast but still exercise actual application logic, and especially when using SQL, that functionality doesn't live only inside the application.

Keep in mind that this approach uses SQLite, so if you're using some other database or a NoSQL database, look into their in-memory variants if they have them. That being said, SQLite and SQL Server differ as well, but 9 out of 10 features that we use Entity Framework for are present in both.

I hope you enjoyed this post, and there are a few others related to Entity Framework Core on the horizon.

Cheers, Vlad V.

Tuesday, 13 November 2018 / Published in Uncategorized

Enhancing the ASP.NET Core logging pipeline with Serilog

Hello and welcome :),

Today I want to show how we can switch out the default logging pipeline in favor of Serilog, which has a lot more providers implemented by the community and also provides a way to log structured data.

The backstory

For those of you who have created projects in ASP.NET Core 1.1 or earlier, you might remember the Program.cs file looking like this:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Hosting;

namespace WebApplication1
{
    public class Program
    {
        public static void Main(string[] args)
        {
            var host = new WebHostBuilder()
                .UseKestrel()
                .UseContentRoot(Directory.GetCurrentDirectory())
                .UseIISIntegration()
                .UseStartup<Startup>()
                .UseApplicationInsights()
                .Build();

            host.Run();
        }
    }
}

As you can see, in previous versions of ASP.NET Core the setup for the entry point of the application used to be more explicit. Starting with ASP.NET Core 2.0, the default Program.cs file looks like this:

using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore;
using Microsoft.AspNetCore.Hosting;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Logging;

namespace WebApplication1
{
    public class Program
    {
        public static void Main(string[] args)
        {
            CreateWebHostBuilder(args).Build().Run();
        }

        public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .UseStartup<Startup>();
    }
}

Though the default builder cleans up the code nicely, it does add some default (as the name implies) configurations that aren't all that obvious.

If we take a look at what WebHost.CreateDefaultBuilder actually does, we will see the following:

public static IWebHostBuilder CreateDefaultBuilder(string[] args)
{
    var builder = new WebHostBuilder();

    if (string.IsNullOrEmpty(builder.GetSetting(WebHostDefaults.ContentRootKey)))
    {
        builder.UseContentRoot(Directory.GetCurrentDirectory());
    }

    if (args != null)
    {
        builder.UseConfiguration(new ConfigurationBuilder().AddCommandLine(args).Build());
    }

    builder.UseKestrel((builderContext, options) =>
        {
            options.Configure(builderContext.Configuration.GetSection("Kestrel"));
        })
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            var env = hostingContext.HostingEnvironment;

            config.AddJsonFile("appsettings.json", optional: true, reloadOnChange: true)
                  .AddJsonFile($"appsettings.{env.EnvironmentName}.json", optional: true, reloadOnChange: true);

            if (env.IsDevelopment())
            {
                var appAssembly = Assembly.Load(new AssemblyName(env.ApplicationName));
                if (appAssembly != null)
                {
                    config.AddUserSecrets(appAssembly, optional: true);
                }
            }

            config.AddEnvironmentVariables();

            if (args != null)
            {
                config.AddCommandLine(args);
            }
        })
        // THIS IS THE PART WE'RE INTERESTED IN. (INTEREST!!!)
        .ConfigureLogging((hostingContext, logging) =>
        {
            logging.AddConfiguration(hostingContext.Configuration.GetSection("Logging"));
            logging.AddConsole();
            logging.AddDebug();
        })
        .ConfigureServices((hostingContext, services) =>
        {
            // Fallback
            services.PostConfigure<HostFilteringOptions>(options =>
            {
                if (options.AllowedHosts == null || options.AllowedHosts.Count == 0)
                {
                    // "AllowedHosts": "localhost;127.0.0.1;[::1]"
                    var hosts = hostingContext.Configuration["AllowedHosts"]?.Split(new[] { ';' }, StringSplitOptions.RemoveEmptyEntries);
                    // Fall back to "*" to disable.
                    options.AllowedHosts = (hosts?.Length > 0 ? hosts : new[] { "*" });
                }
            });
            // Change notification
            services.AddSingleton<IOptionsChangeTokenSource<HostFilteringOptions>>(
                new ConfigurationChangeTokenSource<HostFilteringOptions>(hostingContext.Configuration));

            services.AddTransient<IStartupFilter, HostFilteringStartupFilter>();
        })
        .UseIISIntegration()
        .UseDefaultServiceProvider((context, options) =>
        {
            options.ValidateScopes = context.HostingEnvironment.IsDevelopment();
        });

    return builder;
}

Well, that is sure a whole lot of configuration for the start, good thing it’s hidden behind such an easy call like CreateDefaultBuilder.

Now, if we look in the snippet of code above (I marked it with INTEREST!!! so it's easy to find), you will see that by default the configuration is set up so that logging is sent to the console and to the debug channel. We won't be needing this, since we'll be using a different console sink and there's no use in having two providers write to the same console at the same time.

The changes

So the first change we will do is the following:

public class Program
{
    public static void Main(string[] args)
    {
        CreateWebHostBuilder(args).Build().Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .ConfigureLogging(
                (webHostBuilderContext, loggingBuilder) =>
                    {
                        loggingBuilder.ClearProviders();
                    })
            .UseStartup<Startup>();
}

With this change, we’re clearing out both the console and the debug providers, so essentially now we don’t have any logging set up.

Now we’re gonna add the following Nuget packages (note that only two of them are required for this to work, all the other sinks are up to your own choice):

  • Serilog (this is the main package and is required)
  • Serilog.Extensions.Logging (this is used to integrate with the asp.net core pipeline, it will also install Serilog as a dependency)
  • Serilog.Sinks.ColoredConsole (this package adds a colored console output that makes it easier to distinguish between logging levels and messages; this will also install Serilog as a dependency)
  • Serilog.Enrichers.Demystify (this package is in pre-release but it makes it so that long stack traces from exceptions that cover async methods turn into a stack trace that is more developer friendly)

With these packages installed, we’re gonna change the Program.cs file again and it will end up looking like this:

namespace WebApplication1
{
    using System;

    using Microsoft.AspNetCore;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.Extensions.Logging;

    using Serilog;
    using Serilog.Extensions.Logging;

    public class Program
    {
        public static void Main(string[] args)
        {
            CreateWebHostBuilder(args).Build().Run();
        }

        public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
            WebHost.CreateDefaultBuilder(args)
                .ConfigureLogging(
                    (webHostBuilderContext, loggingBuilder) =>
                        {
                            loggingBuilder.ClearProviders();

                            Serilog.Debugging.SelfLog.Enable(Console.Error); // this outputs internal Serilog errors to the console in case something breaks with one of the Serilog extensions or the framework itself

                            Serilog.ILogger logger = new LoggerConfiguration()
                                .Enrich.FromLogContext() // this adds more information to the output of the log, like when receiving http requests, it will provide information about the request
                                .Enrich.WithDemystifiedStackTraces() // this will change the stack trace of an exception into a more readable form if it involves async
                                .MinimumLevel.Verbose() // this gives the minimum level to log; in production the level would be higher
                                .WriteTo.ColoredConsole() // one of the logger pipeline elements for writing out the log message
                                .CreateLogger();

                            loggingBuilder.AddProvider(new SerilogLoggerProvider(logger)); // this adds the serilog provider from the start
                        })
                .UseStartup<Startup>();
    }
}

Now we have integrated Serilog into the main logging pipeline used by all the components of ASP.NET Core. Notice that we also have access to the webHostBuilderContext, which has a Configuration property that allows us to read from the application configuration so that we can set up a more complex pipeline; there is also a NuGet package that allows Serilog to read its configuration from an appsettings.json file, as sketched below.
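For example, a sketch of the configuration-driven variant, assuming the Serilog.Settings.Configuration package is installed and a "Serilog" section exists in appsettings.json:

Serilog.ILogger logger = new LoggerConfiguration()
    .ReadFrom.Configuration(webHostBuilderContext.Configuration) // reads sinks, minimum levels and enrichers from the "Serilog" configuration section
    .CreateLogger();

loggingBuilder.AddProvider(new SerilogLoggerProvider(logger));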

Optionally, Serilog also allows a log message to carry some additional properties. For that, we would need to change the default outputTemplate from "{Timestamp:yyyy-MM-dd HH:mm:ss} {Level:u3} {Message}{NewLine}{Exception}" to "{Timestamp:yyyy-MM-dd HH:mm:ss} {Level} {Properties} {Message}{NewLine}{Exception}". Notice the Properties template placeholder; this is where Serilog will place all additional information that is not in the actual message, like data from the HTTP request. To see how this change would look, see the following:

Serilog.ILogger logger = new LoggerConfiguration()
    .Enrich.FromLogContext() // this adds more information to the output of the log, like when receiving http requests, it will provide information about the request
    .Enrich.WithDemystifiedStackTraces() // this will change the stack trace of an exception into a more readable form if it involves async
    .MinimumLevel.Verbose() // this gives the minimum level to log; in production the level would be higher
    .WriteTo.ColoredConsole(outputTemplate: "{Timestamp:yyyy-MM-dd HH:mm:ss} {Level} {Properties} {Message}{NewLine}{Exception}") // one of the logger pipeline elements for writing out the log message
    .CreateLogger();

Conclusion

Note that there are as many ways to set up a logging pipeline as there are applications, this is just my personal preference.

Also, in case you were wondering why I opted to make the changes inside the Program.cs file instead of the Startup.Configure() method, as some online examples show: I believe that if the default logging is set up in its own dedicated function, this should be as well. It also introduces Serilog earlier in the process than the Startup method would, which in turn provides more information.

I hope you enjoyed this post and that it will help you better debug and maintain your applications.

Thank you and see you next time. Cheers, Vlad V.

Monday, 12 November 2018 / Published in Uncategorized

Dynamically Loading Middleware in ASP.NET Core

Introduction

The concept of middleware has been around since ASP.NET MVC (pre-Core) and OWIN. Essentially, a middleware component lives in a pipeline and handles requests, acting as a chain of responsibility that delegates to any subsequent middleware components registered in the pipeline after itself. The following image (taken from the Microsoft site) shows this.

MVC itself is implemented as a middleware component, as is redirection, exception handling, buffering, etc.

A middleware component can be added in several ways, but in ASP.NET Core, it all goes down to the Use method in IApplicationBuilder. Lots of API-specific methods rely on it to add their own middleware.

For the time being, we’ll make use of the IMiddleware interface that comes with ASP.NET Core. It provides a simple contract that has no dependencies other than the common HTTP abstractions.

One common request is the ability to load and inject middleware components dynamically into the pipeline. Let's see how we can do that.

Managed Extensibility Framework

.NET Core has Managed Extensibility Framework (MEF), and I previously blogged about it. MEF offers an API that can be used to find and instantiate plugins from assemblies, which makes it an interesting candidate for the discovery and instantiation of such middleware components.

We'll use the System.Composition NuGet package. As in my previous post, we'll iterate through all the assemblies in a given path (normally, the ASP.NET Core application's bin folder) and try to find all implementations of our target interface. After that, we'll register them all in the MEF configuration.

Implementation

Our target interface will be called IPlugin and it actually inherits from IMiddleware. If we so wish, we can add more members to it, for now, it really doesn’t matter:

public interface IPlugin : IMiddleware
{
}

The IMiddleware offers an InvokeAsync method that can be called asynchronously and takes the current context and a pointer to the next delegate (or middleware component).

I wrote the following extension method for IApplicationBuilder:

public static class ApplicationBuilderExtensions
{
    public static IApplicationBuilder UsePlugins(this IApplicationBuilder app, string path = null)
    {
        var conventions = new ConventionBuilder();

        conventions
            .ForTypesDerivedFrom<IPlugin>()
            .Export<IPlugin>()
            .Shared();

        path = path ?? AppContext.BaseDirectory;

        var configuration = new ContainerConfiguration()
            .WithAssembliesInPath(path, conventions);

        using (var container = configuration.CreateContainer())
        {
            var plugins = container
                .GetExports<IPlugin>()
                .OrderBy(p => p.GetType().GetCustomAttributes<ExportMetadataAttribute>(true)
                    .SingleOrDefault(x => x.Name == "Order")?.Value as IComparable ?? int.MaxValue);

            foreach (var plugin in plugins)
            {
                app.Use(async (ctx, next) =>
                {
                    await plugin.InvokeAsync(ctx, null);
                    await next();
                });
            }
        }

        return app;
    }
}

We define a convention so that each type found that implements IPlugin is registered as shared, meaning as a singleton.

As you can see, if the path parameter is not supplied, it will default to AppContext.BaseDirectory.

We can add to the plugin/middleware implementation an ExportMetadataAttribute with an Order value to specify the order by which our plugins will be loaded, more on this in a moment.

The WithAssembliesInPath extension method comes from my previous post but I’ll add it here for your convenience:

public static class ContainerConfigurationExtensions
{
    public static ContainerConfiguration WithAssembliesInPath(this ContainerConfiguration configuration, string path, SearchOption searchOption = SearchOption.TopDirectoryOnly)
    {
        return WithAssembliesInPath(configuration, path, null, searchOption);
    }

    public static ContainerConfiguration WithAssembliesInPath(this ContainerConfiguration configuration, string path, AttributedModelProvider conventions, SearchOption searchOption = SearchOption.TopDirectoryOnly)
    {
        var assemblyFiles = Directory
            .GetFiles(path, "*.dll", searchOption);

        var assemblies = assemblyFiles
            .Select(AssemblyLoadContext.Default.LoadFromAssemblyPath);

        configuration = configuration.WithAssemblies(assemblies, conventions);

        return configuration;
    }
}

If you want to search all assemblies in nested directories, you need to pass SearchOption.AllDirectories as the searchOption parameter, but this, of course, will have a performance penalty if you have a deep directory structure.

Putting it All Together

So, let’s write a few classes that implement the IPlugin interface and therefore are suitable to be used as middleware components:

[Export(typeof(IPlugin))]
[ExportMetadata("Order", 1)]
public class MyPlugin1 : IPlugin
{
    public async Task InvokeAsync(HttpContext context, RequestDelegate next)
    {
        //do something here

        //this is needed because this can be the last middleware in the pipeline (next = null)
        if (next != null)
        {
            await next(context);
        }

        //do something here
    }
}

Notice how we applied an ExportMetadataAttribute to the class with an Order value; this is not needed, and if not supplied, it will default to the highest integer (int.MaxValue), which means the plugin will load after all other plugins. These classes need to be public and have a public parameterless constructor. You can retrieve any registered services from the HttpContext's RequestServices property, as the sketch below shows.
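For example, here is a sketch of a hypothetical plugin, LoggingPlugin, that resolves a registered service through RequestServices (assuming the usual Microsoft.Extensions.DependencyInjection and Microsoft.Extensions.Logging namespaces are imported):

[Export(typeof(IPlugin))]
public class LoggingPlugin : IPlugin
{
    public async Task InvokeAsync(HttpContext context, RequestDelegate next)
    {
        // Constructor injection isn't available to MEF-created instances,
        // so resolve services from the request's scope instead.
        var logger = context.RequestServices.GetRequiredService<ILogger<LoggingPlugin>>();
        logger.LogInformation("LoggingPlugin handling {Path}", context.Request.Path);

        if (next != null)
        {
            await next(context);
        }
    }
}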

Now, all we need to do is add a couple of assemblies to the web application’s bin path (or some other path that is passed to UsePlugins) and call this extension method inside Configure:

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    //rest goes here

    app.UsePlugins(/*path: "some path"*/);

    //rest goes here
}

And here you have it: ASP.NET Core will find middleware from any assemblies that it can find on the given path.

Hope you find this useful!

Monday, 12 November 2018 / Published in Uncategorized

White Paper – Modernizing Your .NET Apps: A Path to Pivotal Cloud Foundry

July 18, 2018 // By Robert Sirchia

For companies with a large portfolio of .NET applications, modernizing and moving to the cloud can be a daunting task. Pivotal Cloud Foundry (PCF) enables companies to continuously deliver any app to every major private and public cloud with a single platform.

If you are leading an effort to effectively modernize .NET applications on to PCF, this paper authored by Magenic Practice Lead Robert Sirchia will help navigate the process.

Read the white paper


Monday, 12 November 2018 / Published in Uncategorized

November 8, 2018 9:00 am

XAML Islands – A deep dive – Part 2

Welcome to the 2nd post of our Xaml Islands deep dive adventure! On the first blog post, we talked a little bit about the history of this amazing feature, how the Xaml Islands infrastructure works and how to use it, and also a little bit of how you can leverage binding in your Island controls.

On this second blog post, we'll take a quick look at how to use the wrapper NuGet packages and how to host your custom controls inside Win32 apps.

Wrappers

Creating custom wrappers around UWP controls can be a cumbersome task, and you probably don't want to do that. For simple things such as Buttons, it should be fine, but the moment you want to wrap complex controls, it can take a long time. To make things a little bit less complicated, some of our most requested controls are already wrapped for you! The current iteration brings you the InkCanvas, the InkToolbar, the MapControl, and the MediaPlayerElement. So now, if your WPF (or WinForms) app is running on a Windows 10 machine, you can have the amazing and easy-to-use UWP InkCanvas with an InkToolbar inside it! You could even use the InkRecognizer to detect shapes, letters, and numbers based on the strokes of that InkCanvas.

How much code does it take to integrate with the InkCanvas? Not much at all!


<Window
...
xmlns:uwpControls="clr-namespace:Microsoft.Toolkit.Wpf.UI.Controls;assembly=Microsoft.Toolkit.Wpf.UI.Controls">

<Grid>
    <Grid.RowDefinitions>
        <RowDefinition Height="Auto"/>
        <RowDefinition Height="*"/>
    </Grid.RowDefinitions>
    <uwpControls:InkToolbar TargetInkCanvas="{x:Reference Name=inkCanvas}"/>
    <uwpControls:InkCanvas Grid.Row="1" x:Name="inkCanvas" />
</Grid>
</Window>

Most of it is just the Grid definition; in fact, we added only two lines of code. And that gives your users an amazing experience, enabled by XAML Islands and the new UWP controls.

Custom Control – Managed Code

Everything I explained so far is for platform controls, but what if you want to wrap your own custom UWP UserControl and load it using WindowsXamlHost? Would it work? Yes! XAML controls, when instantiated in the context of an Island, handle resources in a very smart way, meaning that the ms-appx protocol just works, even if you are not running your Win32 process inside a packaged APPX. The root of the ms-appx protocol will map to your executable's path.
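
As a quick illustration (the Assets folder and file name are just examples), a resource URI resolved inside an Island while running unpackaged ends up relative to the executable:

// Illustrative only: when the Win32 process runs outside a packaged APPX,
// ms-appx:/// resolves relative to the host executable's folder.
var logoUri = new Uri("ms-appx:///Assets/Logo.png");
// e.g. <folder containing your .exe>\Assets\Logo.png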

As of right now, you can't just create a UWP library and reference it from your WPF or WinForms project, so the whole process of using a custom control is manual. When you develop a UWP app (in C#, for example), you are compiling against a UWP flavor of .NET Core, not the full .NET Framework. In order for your custom control to work in a WPF or WinForms app that is based on the full .NET Framework, you must recompile the artifacts of the UWP library using the full .NET Framework toolset, by copying them to your WPF/WinForms project. There is very good documentation right here that describes all the necessary steps. Remember that your WPF/WinForms project does not target any specific Windows 10 version by default, so you need to manually add references to some WinMD and DLL files. Again, this is all covered in Enhance your desktop application for Windows 10, which describes how to use Windows 10 APIs in your Desktop Bridge Win32 app. By referencing the WinMDs and DLLs, you will also be able to build these compilation artifacts from the UWP library in the WPF/WinForms project (full .NET Framework).

NOTE: There is a whole different process for native code (C++/WinRT), which I’m not going to get into the details in this blog post.

You also can't build these artifacts as-is. You need to tell the build system to disable type-information reflection and x:Bind diagnostics, because the generated code otherwise won't be compatible with the .NET Framework. You can make it work by adding these properties to your UWP library project:


<PropertyGroup>
  <EnableTypeInfoReflection>false</EnableTypeInfoReflection>
  <EnableXBindDiagnostics>false</EnableXBindDiagnostics>
</PropertyGroup>

Now, you could just manually copy the required files to the WPF/WinForms project, but then you would have multiple copies of them. You can automate that process with a post-build step, just like the documentation does. If you do it that way, though, it will not work if you try to pack your app inside an APPX, because the files will not get copied. To improve on that, I created a custom MSBuild snippet that does it for you. The advantage of the MSBuild snippet is that it adds the C# files as well as the compilation outputs from the library, all in the right place. All you must do is copy this script and it will just work.

NOTE: Keep in mind that this will be handled by Visual Studio in the future, so you'll have to remove whichever solution you used once that happens.

This is the snippet:


  <PropertyGroup>
    <IslandPath Condition="$(IslandPath) == ''">..\$(IslandLibrary)</IslandPath>
    <IslandDirectoryName>$([System.IO.Path]::GetFileName($(IslandPath.TrimEnd('\'))))</IslandDirectoryName>
  </PropertyGroup>
  <ItemGroup>
    <IslandLibraryCompile Include="$(IslandPath)\**\*.xaml.cs;$(IslandPath)\obj\$(Configuration)\**\*.g.cs;$(IslandPath)\obj\$(Configuration)\**\*.g.i.cs"/>
  </ItemGroup>
  <ItemGroup>
    <Compile Include="@(IslandLibraryCompile)">
      <LinkBase>$(IslandDirectoryName)\%(RecursiveDir)</LinkBase>
    </Compile>
  </ItemGroup>
  <ItemGroup>
    <IslandLibraryContent Include="$(IslandPath)\**\*.*" Exclude="$(IslandPath)\**\*.user;$(IslandPath)\**\*.csproj;$(IslandPath)\**\*.cs;$(IslandPath)\**\*.xaml;$(IslandPath)\**\obj\**;$(IslandPath)\**\bin\**"/>
    <IslandLibraryContent Include="$(IslandPath)\obj\$(Configuration)\**\*.xbf"/>
  </ItemGroup>
  <ItemGroup>
    <None Include="@(IslandLibraryContent)">
      <Link>$(IslandDirectoryName)\%(RecursiveDir)\%(Filename)%(Extension)</Link>
      <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
    </None>
  </ItemGroup>

This MSBuild snippet copies files, based on the IslandLibrary property path, into the project where it resides. The IslandLibraryCompile item includes:

  • All the .xaml.cs files. That enables you to reuse the code-behind of your custom controls.
  • All the generated .g.cs and .g.i.cs files. Everything you do under the "x:" prefix is actually generated code, and these files are where that generated code ends up. They contain the partial classes that hold the fields for all the x:Names in their corresponding XAML files, along with the code that connects those fields to their actual instances. They also reference the .xaml file that is loaded whenever the InitializeComponent method is called, usually at the beginning of the control's constructor. You can treat this as a black box; it is interesting to understand what is inside these files, but not necessarily how they work.

The IslandLibraryContent item includes:

  • All the content files of your project. This copies the files required for your project to run, like PNGs, JPGs, and so on. It already puts them in the right folders so ms-appx:/// will "just work"™. There are better ways of doing this, but this covers the most common scenarios.
  • All the generated .xbf files. XBF stands for XAML Binary Format and is a compiled version of your .xaml files; XBFs load much faster than XAML files (no XML parsing, for example). Even though the .g.i.cs files might look like they load the .xaml files, the XAML infrastructure always tries to load the .xbf files first, for performance; only if it can't find them does it fall back to the .xaml files. This script does not copy the .xaml files, since they bring no advantage compared to the .xbf files.

To make sure that your developer experience is optimal, you also have to add a solution-level project dependency from the WPF/WinForms project to the UWP library project. That way, whenever you change any of the UWP library's files, you can just build the WPF/WinForms project and the newest artifacts will already be in place, in the correct order of project compilation. All these steps are going away in a future version of Visual Studio, when the tooling gets updated. These steps are described here in the documentation.

With these files included in the project's build infrastructure and with the build dependency added, your WindowsXamlHost should work just fine if you set its InitialTypeName to your custom control's fully qualified name, as sketched below. You can check out the sample project here.
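
As a minimal sketch (MyUwpLibrary.MyUserControl is a placeholder name, and this assumes the Microsoft.Toolkit.Wpf.UI.XamlHost package), creating the host from WPF code-behind could look like this:

using Microsoft.Toolkit.Wpf.UI.XamlHost;

// point the host at the custom control's fully qualified name
var host = new WindowsXamlHost
{
    InitialTypeName = "MyUwpLibrary.MyUserControl" // placeholder type name
};

// rootGrid is assumed to be a Grid already defined in your WPF window
rootGrid.Children.Add(host);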

With this MSBuild snippet, even your apps packaged with the “Windows Application Packaging Project” template should work. If you want to know more, check out this blog post.

October 2018 Limitations

Again, this release is a preview, so nothing you see here is production-ready. Just to name a few of the current limitations:

  • Wrapped Controls properly responding to changes in DPI and scale.
  • Accessibility tools that work seamlessly across the application and hosted controls.
  • Inline inking, @Places, and @People for input controls.

For a complete list, check the docs.

What’s next?

The version just released is a preview, not the final stable version, and we're still actively working on improving Xaml Islands. We would love for you to test out the product and provide feedback on User Voice or at XamlIslandsFeedback@microsoft.com, but we are not currently recommending it for production use.


Monday, 12 November 2018 / Published in Uncategorized

Challenge: Error Handling in C#

Challenges are for testing your skill and understanding of a specific topic. The purpose is not to write complete applications but instead to write just enough to test out your knowledge of one particular piece. This process of testing out what you learn is critical for learning a topic well. You have to practice what you learn. This is that practice. For more information on what this particular challenge is about, check out this video:

VIDEO

Instructions

Error-Handling-Challenge-Instructions.pdf

Video Help

Sometimes you need to refresh your mind on a topic. Here is a link to help you do that for this challenge: Handling Exceptions in C#
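
If you just want a bare-bones reminder of the mechanics before diving in, here is an illustrative sketch (not part of the challenge or its solution):

using System;

class Demo
{
    static void Main()
    {
        string input = "not-a-number";

        try
        {
            // Parse throws a FormatException for non-numeric input
            int value = int.Parse(input);
            Console.WriteLine(value);
        }
        catch (FormatException ex)
        {
            // handle the specific exception type you expect
            Console.WriteLine($"Not a number: {ex.Message}");
        }
        finally
        {
            // runs whether or not an exception was thrown
            Console.WriteLine("Done.");
        }
    }
}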

Starter Code

ErrorHandlingChallengeStarterCode.zip

Solution

You don’t have to purchase the solution. You would only need to purchase something if you get stuck or want to see how I would do it.

Tim’s Solution (Source Code): Buy Now: $5.00

Tim’s Take (Video of Tim completing the challenge): Buy Now: $10.00

Challenge Bundle (Source code and Video): Buy Now: $15.00

Let Us Know You Completed It

If you have Twitter (a fairly good social network for developers to be on), click the following link to post that you completed the challenge. Who knows, you might even get retweeted by Tim: Click to Prepare Tweet

Monday, 12 November 2018 / Published in Uncategorized



Introduction

XML still has a place in .NET, quite a big one, actually. With the rise of JSON, it is easy to forget about XML and how powerful it can be. In this article, you will learn how to serialize and deserialize XML in .NET quickly and easily.

Practical

You will create a Console Application, so there is no need to design any buttons and so on. You can create the Console Application in either C# or Visual Basic.NET. After the project has been created, open any text editor and enter the following XML.


XML

<Student>
   <StudentNumber>12345</StudentNumber>
   <StudentName>Hannes</StudentName>
   <StudentSurname>du Preez</StudentSurname>
   <StudentAge>40</StudentAge>
   <Course>Introduction to Computers</Course>
</Student>

The preceding XML describes a student object with StudentNumber, StudentName, StudentSurname, StudentAge, and Course elements, along with their respective values. You could store multiple students, but you would then need a common root element and a collection type to deserialize into; I prefer to keep this exercise as simple as possible. Save the file as Student.xml.

Add the Student.xml file to your project by selecting Project, Add Existing item…, and then browsing for it. After the file has been added, your Solution Explorer would look like Figure 1.

Figure 1: Solution Explorer

Right-click now on your Student.xml File and select Properties. This will produce the Properties Window, shown in Figure 2. Ensure that the Build Action property is set to Content, and the Copy to Output Directory Property is set to Copy Always. This property ensures that the file will always be copied to your Bin folder.

Figure 2: Student.xml Properties
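
If you prefer to edit the project file directly, the equivalent entry in a classic (non-SDK-style) .csproj looks roughly like this; treat it as a sketch of what the IDE generates rather than something to paste blindly:

<ItemGroup>
  <!-- Build Action = Content, Copy to Output Directory = Copy Always -->
  <Content Include="Student.xml">
    <CopyToOutputDirectory>Always</CopyToOutputDirectory>
  </Content>
</ItemGroup>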

Create a Student class and enter the following code:

C#

   public class Student
   {

      public string StudentNumber { get; set; }

      public string StudentName { get; set; }

      public string StudentSurname { get; set; }

      public string StudentAge { get; set; }

      public string Course { get; set; }

   }

VB.NET

Public Class Student

   Public Property StudentNumber As String
   Public Property StudentName As String
   Public Property StudentSurname As String
   Public Property StudentAge As String
   Public Property Course As String

End Class

The Student class contains the same elements as the Student.xml file. This is important when serializing and deserializing: every element in the XML file has a matching property it can be mapped to, so each value can be stored and retrieved.
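
As an aside, if a property name cannot match its element name exactly, you can still map it explicitly with the attributes in System.Xml.Serialization; a small illustrative sketch (the renamed property is hypothetical):

C#

using System.Xml.Serialization;

public class Student
{
   // maps the <StudentNumber> element to a property with a different name
   [XmlElement("StudentNumber")]
   public string Number { get; set; }

   // ...remaining properties as before
}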

Add a Serializer class and enter the following code:

C#

using System.IO;
using System.Xml.Serialization;
   public class Serializer
   {
      public T Deserialize<T>(string input) where T : class
      {
         XmlSerializer ser = new XmlSerializer(typeof(T));

         using (StringReader sr = new StringReader(input))
         {
            return (T)ser.Deserialize(sr);
         }
      }

      public string Serialize<T>(T ObjectToSerialize)
      {
         XmlSerializer xmlSerializer = new
            XmlSerializer(ObjectToSerialize.GetType());

         using (StringWriter textWriter = new StringWriter())
         {
            xmlSerializer.Serialize(textWriter, ObjectToSerialize);
            return textWriter.ToString();
         }
      }

   }

VB.NET

Imports System.IO
Imports System.Xml.Serialization

Public Class Serializer
   Public Function Deserialize(Of T As Class) _
         (ByVal input As String) As T

      Dim ser As XmlSerializer = New XmlSerializer(GetType(T))

      Using sr As StringReader = New StringReader(input)

         Return CType(ser.Deserialize(sr), T)

      End Using

   End Function

   Public Function Serialize(Of T)(ByVal ObjectToSerialize As T) _
         As String

      Dim xmlSerializer As XmlSerializer = _
         New XmlSerializer(ObjectToSerialize.[GetType]())

      Using textWriter As StringWriter = New StringWriter()

         xmlSerializer.Serialize(textWriter, ObjectToSerialize)

         Return textWriter.ToString()

      End Using

   End Function

End Class

In the Deserialize function, you use a StringReader to read the XML input and populate the Student object. The Serialize method uses a StringWriter to copy the contents of the Student object into an XML string.

Add the code for the program’s Main procedure.

C#

using System;
using System.IO;
   class Program
   {
      static void Main(string[] args)
      {
         Serializer sSerialize = new Serializer();

         string strPath = string.Empty;
         string strInput = string.Empty;
         string strOutput = string.Empty;

         strPath = Directory.GetCurrentDirectory() +
            @"\Student.xml";
         strInput = File.ReadAllText(strPath);

         Student student = sSerialize.Deserialize<Student>
            (strInput);
         strOutput = sSerialize.Serialize<Student>(student);

         Console.WriteLine(student.StudentName);

         Console.WriteLine(strOutput);

         Console.ReadKey();

      }

   }

VB.NET

Imports System.IO

Module Module1

   Sub Main()

      Dim sSerialize As Serializer = New Serializer()

      Dim strPath As String = String.Empty
      Dim strInput As String = String.Empty
      Dim strOutput As String = String.Empty

      strPath = Directory.GetCurrentDirectory() & "\Student.xml"

      strInput = File.ReadAllText(strPath)

      Dim student As Student = _
         sSerialize.Deserialize(Of Student)(strInput)

      strOutput = sSerialize.Serialize(Of Student)(student)

      Console.WriteLine(student.StudentName)
      Console.WriteLine(strOutput)

      Console.ReadKey()

   End Sub

End Module

You create a new Serializer object and then specify where to find the XML file you want to read. You deserialize the file's contents into a Student object, serialize that object back into an XML string, and display the results inside the Command Prompt window (see Figure 3).
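
If you also wanted the round-tripped XML back on disk rather than just on screen, one extra line at the end of Main would do it (the output file name is just an example):

// write the re-serialized student out; "StudentCopy.xml" is an example name
File.WriteAllText("StudentCopy.xml", strOutput);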

Figure 3: Running

Conclusion

XML is still very relevant. Because it is such an easy format to use, it remains quite popular. Knowing how to work with XML files properly is a vital skill to have, and I hope I have helped you learn a thing or two.