
The article shows how to implement user management for an ASP.NET Core application using ASP.NET Core Identity. The application uses custom claims, which need to be added to the user identity after a successful login, and then an ASP.NET Core policy is used to authorize the identity.

Code: https://github.com/damienbod/AspNetCoreAngularSignalRSecurity

Setting up the Project

The demo application is implemented using ASP.NET Core MVC and uses the IdentityServer4 and IdentityServer4.AspNetIdentity NuGet packages.

ASP.NET Core Identity is then added in the Startup class ConfigureServices method, with SQLite used as the database. A scoped service for IUserClaimsPrincipalFactory is registered so that the additional claims can be added to the HttpContext.User identity.

An IAuthorizationHandler service is added so that the IsAdminHandler can be used for the IsAdmin policy. This policy can then be used to check whether the identity has the custom claims which were added in the AdditionalUserClaimsPrincipalFactory implementation.

public void ConfigureServices(IServiceCollection services)
{
	...
	
	services.AddDbContext<ApplicationDbContext>(options =>
	 options.UseSqlite(Configuration.GetConnectionString("DefaultConnection")));

	services.AddIdentity<ApplicationUser, IdentityRole>()
	 .AddEntityFrameworkStores<ApplicationDbContext>()
	 .AddDefaultTokenProviders();

	services.AddScoped<IUserClaimsPrincipalFactory<ApplicationUser>, 
	 AdditionalUserClaimsPrincipalFactory>();

	services.AddSingleton<IAuthorizationHandler, IsAdminHandler>();
	services.AddAuthorization(options =>
	{
		options.AddPolicy("IsAdmin", policyIsAdminRequirement =>
		{
			policyIsAdminRequirement.Requirements.Add(new IsAdminRequirement());
		});
	});

	...
}

The application uses IdentityServer4. The UseIdentityServer extension method is used instead of UseAuthentication to add the authentication middleware.

public void Configure(IApplicationBuilder app, 
  IHostingEnvironment env, 
  ILoggerFactory loggerFactory)
{
	...
	
	app.UseStaticFiles();

	app.UseIdentityServer();

	app.UseMvc(routes =>
	{
		routes.MapRoute(
			name: "default",
			template: "{controller=Home}/{action=Index}/{id?}");
	});
}

The ApplicationUser class extends the IdentityUser class. Additional database fields can be added here, which are then used to create the claims for the logged-in user.

using Microsoft.AspNetCore.Identity;

namespace StsServer.Models
{
    public class ApplicationUser : IdentityUser
    {
        public bool IsAdmin { get; set; }
        public string DataEventRecordsRole { get; set; }
        public string SecuredFilesRole { get; set; }
    }
}

The AdditionalUserClaimsPrincipalFactory class extends the UserClaimsPrincipalFactory class and is used to add the additional claims to the user object in the HTTP context. It was registered as a scoped service in the Startup class. The ApplicationUser properties are then used to create the custom claims, which are added to the identity.

using IdentityModel;
using Microsoft.AspNetCore.Identity;
using Microsoft.Extensions.Options;
using StsServer.Models;
using System.Collections.Generic;
using System.Security.Claims;
using System.Threading.Tasks;

namespace StsServer
{
    public class AdditionalUserClaimsPrincipalFactory 
          : UserClaimsPrincipalFactory<ApplicationUser, IdentityRole>
    {
        public AdditionalUserClaimsPrincipalFactory( 
            UserManager<ApplicationUser> userManager,
            RoleManager<IdentityRole> roleManager, 
            IOptions<IdentityOptions> optionsAccessor) 
            : base(userManager, roleManager, optionsAccessor)
        {
        }

        public async override Task<ClaimsPrincipal> CreateAsync(ApplicationUser user)
        {
            var principal = await base.CreateAsync(user);
            var identity = (ClaimsIdentity)principal.Identity;

            var claims = new List<Claim>
            {
                new Claim(JwtClaimTypes.Role, "dataEventRecords"),
                new Claim(JwtClaimTypes.Role, "dataEventRecords.user")
            };

            if (user.DataEventRecordsRole == "dataEventRecords.admin")
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "dataEventRecords.admin"));
            }

            if (user.IsAdmin)
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "admin"));
            }
            else
            {
                claims.Add(new Claim(JwtClaimTypes.Role, "user"));
            }

            identity.AddClaims(claims);
            return principal;
        }
    }
}

Now the IsAdmin policy can check for this claim. First, a requirement is defined. This is done by implementing the IAuthorizationRequirement interface.

using Microsoft.AspNetCore.Authorization;
 
namespace StsServer
{
    public class IsAdminRequirement : IAuthorizationRequirement{}
}

The IsAdminHandler AuthorizationHandler uses the IsAdminRequirement requirement. If the user has the role claim with value admin, then the handler will succeed.

using Microsoft.AspNetCore.Authorization;
using System;
using System.Linq;
using System.Threading.Tasks;

namespace StsServer
{
    public class IsAdminHandler : AuthorizationHandler<IsAdminRequirement>
    {
        protected override Task HandleRequirementAsync(
          AuthorizationHandlerContext context, IsAdminRequirement requirement)
        {
            if (context == null)
                throw new ArgumentNullException(nameof(context));
            if (requirement == null)
                throw new ArgumentNullException(nameof(requirement));

            var adminClaim = context.User.Claims.FirstOrDefault(t => t.Value == "admin" && t.Type == "role"); 
            if (adminClaim != null)
            {
                context.Succeed(requirement);
            }

            return Task.CompletedTask;
        }
    }
}

The AdminController provides the CRUD operations for the Identity users. It uses the Authorize attribute with the IsAdmin policy. The AuthenticationSchemes property needs to be set to “Identity.Application”, because ASP.NET Core Identity is being used. Admins can now create, edit, or delete Identity users.

using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Identity;
using Microsoft.AspNetCore.Mvc;
using Microsoft.EntityFrameworkCore;
using StsServer.Data;
using StsServer.Models;

namespace StsServer.Controllers
{
    [Authorize(AuthenticationSchemes = "Identity.Application", Policy = "IsAdmin")]
    public class AdminController : Controller
    {
        private readonly ApplicationDbContext _context;
        private readonly UserManager<ApplicationUser> _userManager;

        public AdminController(ApplicationDbContext context, UserManager<ApplicationUser> userManager)
        {
            _context = context;
            _userManager = userManager;
        }

        public async Task<IActionResult> Index()
        {
            return View(await _context.Users.Select(user => 
                new AdminViewModel {
                    Email = user.Email,
                    IsAdmin = user.IsAdmin,
                    DataEventRecordsRole = user.DataEventRecordsRole,
                    SecuredFilesRole = user.SecuredFilesRole
                }).ToListAsync());
        }

        public async Task<IActionResult> Details(string id)
        {
            if (id == null)
            {
                return NotFound();
            }

            var user = await _context.Users
                .FirstOrDefaultAsync(m => m.Email == id);
            if (user == null)
            {
                return NotFound();
            }

            return View(new AdminViewModel
            {
                Email = user.Email,
                IsAdmin = user.IsAdmin,
                DataEventRecordsRole = user.DataEventRecordsRole,
                SecuredFilesRole = user.SecuredFilesRole
            });
        }

        public IActionResult Create()
        {
            return View();
        }

        [HttpPost]
        [ValidateAntiForgeryToken]
        public async Task<IActionResult> Create(
         [Bind("Email,IsAdmin,DataEventRecordsRole,SecuredFilesRole")] AdminViewModel adminViewModel)
        {
            if (ModelState.IsValid)
            {
                await _userManager.CreateAsync(new ApplicationUser
                {
                    Email = adminViewModel.Email,
                    IsAdmin = adminViewModel.IsAdmin,
                    DataEventRecordsRole = adminViewModel.DataEventRecordsRole,
                    SecuredFilesRole = adminViewModel.SecuredFilesRole,
                    UserName = adminViewModel.Email
                });
                return RedirectToAction(nameof(Index));
            }
            return View(adminViewModel);
        }

        public async Task<IActionResult> Edit(string id)
        {
            if (id == null)
            {
                return NotFound();
            }

            var user = await _userManager.FindByEmailAsync(id);
            if (user == null)
            {
                return NotFound();
            }

            return View(new AdminViewModel
            {
                Email = user.Email,
                IsAdmin = user.IsAdmin,
                DataEventRecordsRole = user.DataEventRecordsRole,
                SecuredFilesRole = user.SecuredFilesRole
            });
        }

        [HttpPost]
        [ValidateAntiForgeryToken]
        public async Task<IActionResult> Edit(string id, [Bind("Email,IsAdmin,DataEventRecordsRole,SecuredFilesRole")] AdminViewModel adminViewModel)
        {
            if (id != adminViewModel.Email)
            {
                return NotFound();
            }

            if (ModelState.IsValid)
            {
                try
                {
                    var user = await _userManager.FindByEmailAsync(id);
                    user.IsAdmin = adminViewModel.IsAdmin;
                    user.DataEventRecordsRole = adminViewModel.DataEventRecordsRole;
                    user.SecuredFilesRole = adminViewModel.SecuredFilesRole;

                    await _userManager.UpdateAsync(user);
                }
                catch (DbUpdateConcurrencyException)
                {
                    if (!AdminViewModelExists(adminViewModel.Email))
                    {
                        return NotFound();
                    }
                    else
                    {
                        throw;
                    }
                }
                return RedirectToAction(nameof(Index));
            }
            return View(adminViewModel);
        }

        public async Task<IActionResult> Delete(string id)
        {
            if (id == null)
            {
                return NotFound();
            }

            var user = await _userManager.FindByEmailAsync(id);
            if (user == null)
            {
                return NotFound();
            }

            return View(new AdminViewModel
            {
                Email = user.Email,
                IsAdmin = user.IsAdmin,
                DataEventRecordsRole = user.DataEventRecordsRole,
                SecuredFilesRole = user.SecuredFilesRole
            });
        }

        [HttpPost, ActionName("Delete")]
        [ValidateAntiForgeryToken]
        public async Task<IActionResult> DeleteConfirmed(string id)
        {
            var user = await _userManager.FindByEmailAsync(id);
            await _userManager.DeleteAsync(user);
            return RedirectToAction(nameof(Index));
        }

        private bool AdminViewModelExists(string id)
        {
            return _context.Users.Any(e => e.Email == id);
        }
    }
}

Running the application

When the application is started, the ADMIN menu can be clicked, and the users can be managed by administrators.
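
The demo's UI code isn't shown here, but the IsAdmin policy can also be evaluated imperatively using the IAuthorizationService, for example to decide whether the ADMIN menu should be rendered. The following is a rough sketch (the HomeController and the ViewData key are placeholders, not taken from the demo code):

using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

public class HomeController : Controller
{
    private readonly IAuthorizationService _authorizationService;

    public HomeController(IAuthorizationService authorizationService)
    {
        _authorizationService = authorizationService;
    }

    public async Task<IActionResult> Index()
    {
        // Evaluate the "IsAdmin" policy for the current user
        var authorizationResult = await _authorizationService.AuthorizeAsync(User, "IsAdmin");

        // The view can use this flag to decide whether to render the ADMIN menu item
        ViewData["ShowAdminMenu"] = authorizationResult.Succeeded;
        return View();
    }
}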

Links

http://benfoster.io/blog/customising-claims-transformation-in-aspnet-core-identity

https://adrientorris.github.io/aspnet-core/identity/extend-user-model.html

https://docs.microsoft.com/en-us/aspnet/core/security/authentication/identity?view=aspnetcore-2.1&tabs=visual-studio


This article shows how to send Ajax requests in an ASP.NET Core MVC application using jquery-ajax-unobtrusive. This can be tricky to set up, for example when using a list of data items with forms that are submitted using the onchange or oninput JavaScript events.

Code: https://github.com/damienbod/AspNetCoreBootstrap4Validation

Setting up the Project

The project uses the npm package.json file to add the required front-end packages. jquery-ajax-unobtrusive is added, as well as the other required dependencies.

{
  "version": "1.0.0",
  "name": "asp.net",
  "private": true,
  "devDependencies": {
    "bootstrap": "4.1.3",
    "jquery": "3.3.1",
    "jquery-validation": "1.17.0",
    "jquery-validation-unobtrusive": "3.2.10",
    "jquery-ajax-unobtrusive": "3.2.4"
  }
}

bundleconfig.json is used to package and build the JavaScript and CSS files into bundles. The BuildBundlerMinifier NuGet package needs to be added to the project for this to work.

The JavaScript libraries are packaged into two different bundles, vendor.min.js and vendor-validation.min.js.

// Vendor JS
{
    "outputFileName": "wwwroot/js/vendor.min.js",
    "inputFiles": [
      "node_modules/jquery/dist/jquery.min.js",
      "node_modules/bootstrap/dist/js/bootstrap.bundle.min.js"
    ],
    "minify": {
      "enabled": true,
      "renameLocals": true
    },
    "sourceMap": false
},
// Vendor Validation JS
{
    "outputFileName": "wwwroot/js/vendor-validation.min.js",
    "inputFiles": [
      "node_modules/jquery-validation/dist/jquery.validate.min.js",
      "node_modules/jquery-validation/dist/additional-methods.js",
      "node_modules/jquery-validation-unobtrusive/dist/jquery.validate.unobtrusive.min.js",
      "node_modules//jquery-ajax-unobtrusive/jquery.unobtrusive-ajax.min.js"
    ],
    "minify": {
      "enabled": true,
      "renameLocals": true
    },
    "sourceMap": false
}

The global bundles can be added at the end of the _Layout.cshtml file in the ASP.NET Core MVC project.


... 
    <script src="~/js/vendor.min.js"></script>
    <script src="~/js/site.min.js"></script>
    @RenderSection("scripts", required: false)
</body>
</html>

And the validation bundle is added to the _ValidationScriptsPartial.cshtml.

<script src="~/js/vendor-validation.min.js"></script>

This is then added in the views as required.

@section Scripts  {
    @await Html.PartialAsync("_ValidationScriptsPartial")
}

Simple AJAX Form request

A form request can be sent as an Ajax request by adding the data-ajax HTML attributes to the form element. When the request is finished, the div element with the id defined in the data-ajax-update parameter will be replaced with the partial view response. The Html.PartialAsync method renders the partial view for the initial page load.

@{
    ViewData["Title"] = "Ajax Test Page";
}

<h4>Ajax Test</h4>

<form asp-action="Index" asp-controller="AjaxTest" 
      data-ajax="true" 
      data-ajax-method="POST"
      data-ajax-mode="replace" 
      data-ajax-update="#ajaxresult" >

    
    @* target element for data-ajax-update *@
    <div id="ajaxresult">
        @await Html.PartialAsync("_partialAjaxForm")
    </div>
</form>

@section Scripts {
    @await Html.PartialAsync("_ValidationScriptsPartial")
}

The _partialAjaxForm.cshtml view implements the form contents. The submit button is required to send the request as an Ajax request.

@model AspNetCoreBootstrap4Validation.ViewModels.AjaxValidationModel 

We'll never share your name ...
<button type="submit" class="btn btn-primary">Submit</button>

The ASP.NET Core MVC controller handles the requests from the view. The first Index method in the example below just responds to a plain HTTP GET.

The second Index method accepts a POST request with the Anti-Forgery token which is sent with each request. When the result is successful, a partial view is returned. The model state must also be cleared, otherwise the validation messages will not be reset.

If the page returns the incorrect result, i.e. just the content of the partial view, then the request was not sent asynchronously but as a full page request. In that case, check that the front-end packages are included correctly.

public class AjaxTestController : Controller
{
  public IActionResult Index()
  {
    return View(new AjaxValidationModel());
  }

  [HttpPost]
  [ValidateAntiForgeryToken]
  public IActionResult Index(AjaxValidationModel model)
  {
    if (!ModelState.IsValid)
    {
      return PartialView("_partialAjaxForm", model);
    }

    // the client could validate this, but allowed for testing server errors
    if(model.Name.Length < 3)
    {
      ModelState.AddModelError("name", "Name should be longer than 2 chars");
      return PartialView("_partialAjaxForm", model);
    }

    ModelState.Clear();
    return PartialView("_partialAjaxForm");
  }
}

Complex AJAX Form request

In this example, a list of data items is returned to the view. Each item in the list has its own form to update its data, and the data is updated using the checkbox onchange event or the input text oninput event rather than the submit button.

Because a list is used, each div element to be updated must have a unique id. This can be implemented by creating a new GUID for each item, which is then used both in the id of the div to be updated and in the data-ajax-update parameter.

@using AspNetCoreBootstrap4Validation.ViewModels
@model AjaxValidationListModel
@{
    ViewData["Title"] = "Ajax Test Page";
}

<h4>Ajax Test</h4>

@foreach (var item in Model.Items)
{
    string guid = Guid.NewGuid().ToString();

    <form asp-action="Index" asp-controller="AjaxComplexList" 
          data-ajax="true" 
          data-ajax-method="POST"
          data-ajax-mode="replace" 
          data-ajax-update="#complex-ajax-@guid">

        
        <div id="complex-ajax-@guid">
            @await Html.PartialAsync("_partialComplexAjaxForm", item)
        </div>
    </form>
}

@section Scripts {
    @await Html.PartialAsync("_ValidationScriptsPartial")
}

The form sends the update with an onchange JavaScript event from the checkbox. This could be required, for example, when the UX designer wants instant updates instead of an extra button click. To achieve this, the submit button is hidden. A unique id identifies each button, and the onchange event from the checkbox triggers the submit event using this id. The form request is then sent using Ajax as before.

@model AspNetCoreBootstrap4Validation.ViewModels.AjaxValidationModel
@{
    string guid = Guid.NewGuid().ToString();
}

We'll never share your name ...
@Html.CheckBox("IsCool", Model.IsCool, new { onchange = "$('#submit-" + @guid + "').trigger('submit');", @class = "big_checkbox" })
<button style="display: none" id="submit-@guid" type="submit">Submit</button>

The ASP.NET Core controller returns the HTTP GET and POST like before.

using System.Collections.Generic;
using Microsoft.AspNetCore.Mvc;
using AspNetCoreBootstrap4Validation.ViewModels;

namespace AspNetCoreBootstrap4Validation.Controllers
{
    public class AjaxComplexListController : Controller
    {
        public IActionResult Index()
        {
            return View(new AjaxValidationListModel {
                Items = new List<AjaxValidationModel> {
                    new AjaxValidationModel(),
                    new AjaxValidationModel()
                }
            });
        }

        [HttpPost]
        [ValidateAntiForgeryToken]
        public IActionResult Index(AjaxValidationModel model)
        {
            if (!ModelState.IsValid)
            {
                return PartialView("_partialComplexAjaxForm", model);
            }

            // the client could validate this, but allowed for testing server errors
            if(model.Name.Length < 3)
            {
                ModelState.AddModelError("name", "Name should be longer than 2 chars");
                return PartialView("_partialComplexAjaxForm", model);
            }

            ModelState.Clear();
            return PartialView("_partialComplexAjaxForm", model);
        }
    }
}

When the requests are sent, you can verify this in the browser using the F12 developer tools in the Network tab. The request type should be xhr.

Links

https://dotnetthoughts.net/jquery-unobtrusive-ajax-helpers-in-aspnet-core/

https://www.mikesdotnetting.com/article/326/using-unobtrusive-ajax-in-razor-pages

https://www.learnrazorpages.com/razor-pages/ajax/unobtrusive-ajax

https://damienbod.com/2018/07/08/updating-part-of-an-asp-net-core-mvc-view-which-uses-forms/

https://ml-software.ch/blog/extending-client-side-validation-with-fluentvalidation-and-jquery-unobtrusive-in-an-asp-net-core-application

https://ml-software.ch/blog/extending-client-side-validation-with-dataannotations-and-jquery-unobtrusive-in-an-asp-net-core-application


I’ve been doing workshops showing teams how to properly architect ASP.NET Core applications using Clean Architecture for the last couple of years. The most recent one was a 4-day on-site workshop I did a couple of weeks ago. This is just a quick recap of what we covered. Each team is different and has different needs, so the precise agenda varies to suit the needs of the team.

Day One:

  • Unit testing overview
  • Unit testing hands-on labs
  • Introduction to Domain-Driven Design
  • Introduction to ASP.NET Core

Day Two:

  • Introducing Domain-Driven Design and ASP.NET Core (continued)

This material was begun on Day One and includes 2 full days of lecture and hands-on labs covering DDD topics, design patterns, and unit and integration testing, as well as ASP.NET Core. Labs cover Entities, Repositories, DI and Domain Services, Domain Events, Testing, Specifications, and Aggregates.

Day Three:

  • Wrapped up Domain-Driven Design and ASP.NET Core
  • Drilled down into Clean Architecture principles and structure
  • Hands-on labs covering several design patterns (including Builder, Null Object, and Strategy)

Day Four:

  • Advanced ASP.NET Core Topics
  • More Design Patterns
  • Architectural and Code Review of client’s systems

Overall the workshop went extremely well. Some comments from students:

  • “Good mix of theory and hands on.”
  • “This workshop is a great learning opportunity and has lots of up to date information.”
  • “The depth of the topics covered and their relationships set this workshop apart from other learning opportunities”
  • “Instructor experience, expertise in clean architecture set this workshop apart”
  • “Worthwhile class with immediate takeaways.”

If you’d like to learn more about Clean Architecture and ASP.NET Core, you can start with my ASP.NET Core Quick Start course for just $49. You can also check out my Clean Architecture Solution Template for ASP.NET Core 2.x available for free on GitHub. Next, check out my eShopOnWeb reference application I wrote for Microsoft along with its companion eBook. And if you’d like me to help make sure your team gets off to a good start with their next ASP.NET Core project, contact me and let’s see if I can come on-site or help you via remote webinar-style workshops.


ASP.NET SignalR 2.4.0

We’ve just shipped the final 2.4.0 version of ASP.NET SignalR, the version of SignalR for System.Web and/or OWIN-based applications. As we mentioned in a previous post on the future of ASP.NET SignalR, 2.4.0 is a minor release which contains some small bug fixes and updates. The majority of the features and fixes we implemented for ASP.NET SignalR were outlined in the 2.4.0 Preview 2 post.

Support for StackExchange Redis 2.0

In 2.4.0 we’re adding support for the new 2.0 release of the StackExchange.Redis package. If you’re using StackExchange’s Redis package in your SignalR apps and you want to update to the StackExchange Redis 2.0 version, you’ll need to remove your existing package reference to Microsoft.AspNet.SignalR.Redis, then add a reference to the new Microsoft.AspNet.SignalR.StackExchangeRedis package.

Once you make the package reference changes, you’ll also want to replace calls to the UseRedis method with UseStackExchangeRedis.
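
The post doesn't show the code change itself. As a rough sketch, assuming a typical Redis scaleout configuration in an OWIN Startup class and that the new method mirrors the old UseRedis signature, the change looks something like this:

using Microsoft.AspNet.SignalR;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Previously, with the Microsoft.AspNet.SignalR.Redis package:
        // GlobalHost.DependencyResolver.UseRedis("redis-server", 6379, "password", "MyApp");

        // Now, with the Microsoft.AspNet.SignalR.StackExchangeRedis package:
        GlobalHost.DependencyResolver.UseStackExchangeRedis("redis-server", 6379, "password", "MyApp");

        app.MapSignalR();
    }
}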

Your Feedback is Welcome and Appreciated

We recommend you try upgrading to 2.4.0. Your feedback is critical to making sure we produce a stable and compatible update, and has contributed to the continued success of our real-time libraries! You can find details about the release on the releases page of the ASP.NET SignalR GitHub repository.


Introduction 

An indexer allows the members of a class to be accessed as if the object were an array. We define an indexer using the "this" keyword, which indicates that we are creating the indexer for the current class. Once we have defined the indexer for a class, we can access the members of that class with the help of the array access operator []. So, we can say that an indexer is similar to a property, but it allows us to access a member of the class using the [] operator.

Why do we need to use indexer?

Let us consider a simple example.

Program 1

namespace StudentClass
{
    public class student
    {
        int StudId;
        string StudName;

        public student(int StudId, string StudName)
        {
            this.StudId = StudId;
            this.StudName = StudName;
        }
    }
}

In this example, we have created a simple student class with two members and a parameterized constructor. Once we have created the class, and if it is public, we can consume it from any other class or project.

Program 2

namespace consumestudent
{
    public class Program
    {
        public static void Main()
        {
            student stu = new student(1, "Hitanshi");
            // stu[0]; // not accessible - the student class does not define an indexer yet
        }
    }
}

In our second example, we have created a consumestudent class and we create one object of the student class. By default, the scope for all members of a class is private, so we cannot access StudId and StudName outside the class. If we want to access a member outside the class, there are three different mechanisms available.

  1. Declare member of a class as public

    If we declare a member of a class as public, then anyone outside the class can get or set the value of that member. As a developer, we lose control over the member.

    Therefore, we should never declare a member as public. For example, if you declare a price member as public, then anyone outside the class can set the price directly.

  2. Use properties

    A property allows restricted access: read-only, write-only, or read/write. A property also allows us to set validation on the member. For example, if we want to store the age only when it is greater than 18, we can add that validation in the property before the value is stored (see the sketch after this list).

  3. Use Indexer

    An indexer is like a property, but it provides access to the members of a class using an index value.
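
As a small sketch of the second option (the Student class and Age property below are illustrative, not part of this article's example), a property can validate a value before storing it:

public class Student
{
    private int age;

    public int Age
    {
        get { return age; }
        set
        {
            // store the value only if it is greater than 18
            if (value > 18)
            {
                age = value;
            }
        }
    }
}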

Syntax of Indexer

[<Modifiers>] <Type> this [<parameter list>]

Here, the modifier is public and the type is object, because the indexer may return an integer, a string, or any other valid data type. We use the "this" keyword to specify that we are declaring an indexer on the current class, so an instance of the class gives access to its members. The parameter list can be of any valid data type; mostly, int and string are used, which means we can access the members of the class using an int or a string index.

Where are indexers used?

We can use indexers in many ways.

Indexers can be defined in our own classes.

 

Example

Inside the class, we can declare an indexer as below.

Program 1

namespace StudentClass
{
    public class student
    {
        int StudId;
        string StudName;

        public student(int StudId, string StudName)
        {
            this.StudId = StudId;
            this.StudName = StudName;
        }

        public object this[int index]
        {
            get
            {
                if (index == 0)
                    return StudId;
                else if (index == 1)
                    return StudName;
                return null;
            }
            set
            {
                if (index == 0)
                    StudId = (int)value;
                else if (index == 1)
                    StudName = (string)value;
            }
        }

        public object this[string name]
        {
            get
            {
                if (name.ToUpper() == "STUDID")
                    return StudId;
                else if (name.ToUpper() == "STUDNAME")
                    return StudName;
                return null;
            }
            set
            {
                if (name.ToUpper() == "STUDID")
                    StudId = (int)value;
                else if (name.ToUpper() == "STUDNAME")
                    StudName = (string)value;
            }
        }
    }
}

Here, the ‘value’ is an implicit variable that provides access to the value assigned by the caller. Its data type is the same as the indexer type, so in our case value is of type object, and we need to unbox it by casting it to int or string. In an integral indexer, it is not compulsory to start the index at 0; we could start at 1, but because the indexer behaves like a virtual array, we start the first index at 0, which points to StudId. In this example, we have overloaded the indexer using int and string. One of the biggest problems with a string indexer is that C# is case sensitive. To solve this problem, we have used ToUpper(): the code works even if the user passes the StudId key in lower case, because we convert the name passed to the indexer to upper case before comparing it.

using System;

namespace consumestudent
{
    public class Program
    {
        public static void Main()
        {
            student stu = new student(1, "Hitanshi");
            stu[1] = "HITANSHI";
            Console.WriteLine("Student Id " + stu["Studid"]);
            Console.WriteLine("Student name " + stu["Studname"]);
        }
    }
}

Output

1 HITANSHI

To store or retrieve the data from session state or application state, we use an indexer.

namespace program
{
    // assumed here: an ASP.NET Web Forms page, since Page_Load and Session are used
    public partial class Default : System.Web.UI.Page
    {
        protected void Page_Load(object sender, EventArgs a)
        {
            Session["data1"] = "Show data1";
            Session["data2"] = "Show data2";
            Response.Write("Session1 stores " + Session[0].ToString());
            Response.Write("Session2 stores " + Session["data2"].ToString());
        }
    }
}

Here, we have stored the first session value using the “data1” string indexer, but when printing the data we use the integral indexer. We have used 0 as the index, so it pulls the data stored under the first session key. We call ToString() because the Session indexer returns an object data type (we can store anything in session state), and since we know the value is a string, we convert it back to a string.

 

To retrieve the data from a specific column when looping through a SqlDataReader object, we can use either the integral indexer or the string indexer.

// cs holds the connection string
using (SqlConnection con = new SqlConnection(cs))
{
    SqlCommand cmd = new SqlCommand("select * from employee", con);
    con.Open();
    SqlDataReader rd = cmd.ExecuteReader();
    while (rd.Read())
    {
        Response.Write("Id = " + rd[0].ToString());
        Response.Write("Name = " + rd["name"].ToString());
    }
}

As illustrated in the example, we can get data from the database with the help of a string or an integer indexer.

Therefore, many classes in the .NET Framework already use indexers in this way.
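
For example, List<T> exposes an integral indexer and Dictionary<TKey, TValue> exposes a key-based indexer; a small illustrative sketch:

using System;
using System.Collections.Generic;

public class FrameworkIndexerDemo
{
    public static void Main()
    {
        var names = new List<string> { "Hitanshi", "Ravi" };
        Console.WriteLine(names[0]);          // integral indexer defined by List<T>

        var ages = new Dictionary<string, int> { { "Hitanshi", 21 } };
        Console.WriteLine(ages["Hitanshi"]);  // key-based indexer defined by Dictionary<TKey, TValue>
    }
}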


In recent posts I’ve been discussing the Options pattern for strongly-typed settings configuration in some depth. One of the patterns that has come up several times is using IConfigureOptions<T> or IConfigureNamedOptions<T> when you need to use a service from DI to configure your Options. In this post I show a convenient way for registering your IConfigureOptions with the ASP.NET Core DI container using the ConfigureOptions() extension method.

tl;dr; ConfigureOptions<T>() is a helper extension method that looks for all the IConfigureOptions<> and IPostConfigureOptions<> interfaces implemented by the type T, and registers them in the DI container for you, so you don’t have to do it manually using AddTransient<,>.

Using services to configure strongly-typed options

Whenever you need to use a service that’s registered with the DI container as part of your strongly-typed setting configuration, you need to use IConfigureOptions<T> or IConfigureNamedOptions<T>. By implementing these interfaces in a class, you can configure an options object T using any required services from the DI container.

For example, the following class implements IConfigureOptions<MySettings>. It is used to configure the default MySettings options instance, using the CalculatorService service obtained from the DI container.

public class ConfigureMySettingsOptions : IConfigureOptions<MySettings>
{
    private readonly CalculatorService _calculator;
    public ConfigureMySettingsOptions(CalculatorService calculator)
    {
        _calculator = calculator;
    }

    public void Configure(MySettings options)
    {
        options.MyValue = _calculator.DoComplexCalculation();
    }
}

To register this class with the DI container you would use something like:

services.AddTransient<IConfigureOptions<MySettings>, ConfigureMySettingsOptions>();

A similar class for configuring named options might be:

public class ConfigurePublicMySettingsOptions : IConfigureNamedOptions<MySettings>
{
    private readonly CalculatorService _calculator;
    public ConfigurePublicMySettingsOptions(CalculatorService calculator)
    {
        _calculator = calculator;
    }

    public void Configure(string name, MySettings options)
    {
        if(name == "Public")
        {
            options.MyValue = _calculator.GetPublicValue();
        }
    }

    public void Configure(MySettings options) => Configure(Options.DefaultName, options);
}

Even though this class implements IConfigureNamedOptions<T>, you still have to register it in the DI container using the non-named interface, IConfigureOptions<MySettings>:

public void ConfigureServices(IServiceCollection services)
{
    services.AddTransient<IConfigureOptions<MySettings>, ConfigurePublicMySettingsOptions>();
}

In my last post, I showed another interface IPostConfigureOptions<T> which can be used in a similar manner, but only runs its configuration actions after all other configure actions for an options type have been executed. This one also needs to be registered in the DI container:

services.AddTransient<IPostConfigureOptions<MySettings>, PostConfigureMySettings>();

Remember, there is no named-options-specific IPostConfigureOptions<T> – IPostConfigureOptions<T> is used to configure both default and named options.

Automatically registering the correct interfaces with ConfigureOptions()

Having to remember which version of the interface to use when registering your class in the DI container is a bit cumbersome. This is especially true if your configuration class implements multiple configuration interfaces! This class:

public class ConfigureInternalCookieOptions :
    IConfigureNamedOptions<CookieAuthenticationOptions>,
    IPostConfigureOptions<CookieAuthenticationOptions>,
    IPostConfigureOptions<OpenIdConnectOptions>,
    IConfigureOptions<CorsOptions>,
    IConfigureOptions<CachingOptions>
{}

would need all of these registrations:

public void ConfigureServices(IServiceCollection services)
{
    services.AddTransient<IConfigureOptions<CookieAuthenticationOptions>, ConfigureInternalCookieOptions>();
    services.AddTransient<IPostConfigureOptions<CookieAuthenticationOptions>, ConfigureInternalCookieOptions>();
    services.AddTransient<IPostConfigureOptions<OpenIdConnectOptions>, ConfigureInternalCookieOptions>();
    services.AddTransient<IConfigureOptions<CorsOptions>, ConfigureInternalCookieOptions>();
    services.AddTransient<IConfigureOptions<CachingOptions>, ConfigureInternalCookieOptions>();
}

Luckily, there’s a convenient extension method that can dramatically simplify the registration process, called ConfigureOptions(). With this method, your registrations slim down to the following:

public void ConfigureServices(IServiceCollection services)
{
    services.ConfigureOptions<ConfigureInternalCookieOptions>();
}

Much better!

Behind the scenes, ConfigureOptions<> finds all of the IConfigureOptions<> (including IConfigureNamedOptions<>) and IPostConfigureOptions<> interfaces implemented by the provided type, and registers them in the DI container:

public static IServiceCollection ConfigureOptions<T>(this IServiceCollection services)
{
    var configureType = typeof(T);
    services.AddOptions(); // Adds the infrastructure classes if not already added
    var serviceTypes = FindIConfigureOptions(configureType); // Finds all the IConfigureOptions and IPostConfigure options
    foreach (var serviceType in serviceTypes)
    {
        services.AddTransient(serviceType, configureType); // Adds each registration for you
    }
    return services;
}

Even if your classes only implement one of the configuration interfaces, I suggest always using this extension method instead of manually registering them yourself. Sure, there will be the tiniest startup performance impact in doing so, as it uses reflection to do the registration. But the registration code is so much easier to read, and harder to get wrong, that I suspect it’s probably worth it!

Summary

When you need to use DI services to configure your strongly-typed settings, you have to implement IConfigureOptions<>, IConfigureNamedOptions<>, or IPostConfigureOptions<>, and register your class appropriately in the DI container. The ConfigureOptions() extension method can take care of the registration for you, by reflecting over the type, finding the implemented interfaces, and registering them in the DI container with the appropriate service. If you find your registration code hard to grok it might be worth considering switching to ConfigureOptions() in your own apps.


In this post I describe a scenario in which a library author wants to use the Options pattern to configure their library, and enforce some default values / constraints. I describe the difficulties with trying to achieve this using the standard Configure<T>() method for strongly-typed options, and introduce the concept of "post" configuration actions such as PostConfigure<T>() and PostConfigureAll<T>().

tl;dr; If you need to ensure a configuration action for a strongly-typed settings instance runs after all other configuration actions, you can use the PostConfigure<T>() method. Actions registered using this method are executed in the order they are registered, but after all other Configure<T>() actions have been applied.

Using the Options pattern as a library author

The Options pattern is the standard way to add strongly-typed settings to ASP.NET Core applications, by binding POCO objects to a configuration object consisting of key-value pairs. If you’re building a library designed for ASP.NET Core, using the Options pattern to configure your library is a standard approach to take.

ASP.NET Core strongly-typed settings are configured for your application in Startup.ConfigureServices(), typically by calling services.Configure<MySettings>() to configure a strongly-typed settings object MySettings. You can use multiple configuration "steps" to configure a single MySettings instance, where each step corresponds to a Configure<MySettings>() invocation. The order that the Configure<MySettings>() calls are made controls the order in which configuration is "applied" to MySettings, and hence its final properties.

public void ConfigureServices(IServiceCollection services)
{
    services.Configure<MySettings>(Configuration.GetSection("MySettings")); 
    services.Configure<MySettings>(opts => opts.SomeValue = "Overriden"); // Overrides SomeValue property (which may have been set in the previous Configure call)
} 

As a library author, this can be both useful and a challenge. On the positive side, if you use the Options pattern then users of your library will have a familiar and extensible mechanism for configuring your library. On the other hand, you lose some control over when and how your library is configured.

An example library using options

Lets explore this a little. Imagine you have a service that sends WhatsApp messages to users. You have a hosted API service that users can register with, which you’ve also open sourced. Users can send requests to the hosted API service to send a message to a user via WhatsApp. Alternatively, as it’s open source, users could also host their own instance of the API to send messages, instead of using your hosted service.

You’ve also created a simple .NET Standard library that users can use to call the API. At the heart of the library is the IWhatsAppService:

public interface IWhatsAppService
{
    Task<bool> SendMessage(string fromNumber, string toNumber, string message);
}

which is implemented as WhatsAppService (not shown). There are a number of configuration settings required, for which you’ve created a strongly-typed settings object, and provided default values:

public class WhatsAppSettings
{
    public string ApiUrl { get; set; } = Constants.DefaultHostedUrl;
    public string ApiKey { get; set; }
    public string Region { get; set; } = Constants.DefaultHostedRegion;
}

public class Constants
{
    public const string DefaultHostedUrl = "https://example.com/api/whatsapp";
    public const string DefaultHostedRegion = "eu-west1";
}

The details of this aren’t really important, what’s more important is that there is a specific rule you need to enforce: when the ApiUrl is set to Constants.DefaultHostedUrl, the Region must be set to DefaultHostedRegion.

By default, the WhatsAppSettings instance will have the correct values for the hosted service, but the user is free to update the values to point to another API if they wish. What we don’t want to happen (and which we’ll come to shortly) is for the user to change the Region, while still using the default ApiUrl.

To help the user add your library to their application, you’ve created a couple of extension methods they can call in ConfigureServices():

public static class WhatsAppServiceCollectionExtensions
{
    public static IServiceCollection AddWhatsApp(this IServiceCollection services)
    {
        // Add ASP.NET Core Options libraries - needed so we can use IOptions<WhatsAppSettings>
        services.AddOptions();

        // Add our required services
        services.AddSingleton<IWhatsAppService, WhatsAppService>();
        return services;
    }
}

This extension method registers all the required services with the DI container. To add the library to an ASP.NET Core application in Startup.ConfigureServices(), and to keep the defaults, you would use

public void ConfigureServices(IServiceCollection services)
{
    services.AddWhatsApp();
}

So that’s our library. The question is, how do we ensure that if the user uses the default hosted URL DefaultHostedUrl, the region is always set to DefaultHostedRegion.

Enforcing constraints on strongly-typed settings

I’m going to leave aside the whole question of whether strongly-typed options are the right place to enforce these sorts of constraints, as well as the fact that API URLs definitely shouldn’t be hard-coded as constants! This whole scenario is just for me to introduce a feature, so just go with it!

As the library author, we know that we need to add a configuration action for WhatsAppSettings to enforce the constraint on the hosted region. As an initial attempt we simply add the configuration action to the AddWhatsApp extension method:

public static class WhatsAppServiceCollectionExtensions
{
    public static IServiceCollection AddWhatsApp(this IServiceCollection services)
    {
        services.AddOptions();
        services.AddSingleton<IWhatsAppService, WhatsAppService>();

        // Add configuration action for WhatsAppSettings
        services.Configure<WhatsAppSettings>(options => 
        {
            if(options.ApiUrl == Constants.DefaultHostedUrl)
            {
                // if we're using the hosted service URL, use the correct region
                options.Region = Constants.DefaultHostedRegion;
            }
        });
        return services;
    }
}

Unfortunately, this approach isn’t very robust. As I’ve discussed in several previous posts, configuration actions are applied to a strongly-typed settings instance in the same order that they are added to the DI container. That means if the user calls Configure<WhatsAppSettings>() after they call AddWhatsApp(), they will overwrite any changes enforced by the extension method:

public void ConfigureServices(IServiceCollection services)
{
    // Add the necessary settings and also enforce the hosted service region constraint
    services.AddWhatsApp(); 

    // Add another configuration action, overwriting previous configuration
    services.Configure<WhatsAppSettings>(options => 
    {
        options.ApiUrl = Constants.DefaultHostedUrl;
        options.ApiKey = "MY-KEY-123456";
        options.Region = "us-east3"; // "Oh noes, wrong one!"
    });
}

This highlights one of the fundamental difficulties of working with a DI container where the order things are added to the container matters, and you (the library author) are fundamentally not in control of that process. Luckily there’s a solution to this by way of the PostConfigure<T>() family of extension methods.

Configuring strongly-typed options last with PostConfigure()

PostConfigure<T>() is an extension method that works very similarly to the Configure<T>() method, with one exception – PostConfigure<T>() configuration actions run after all Configure<T>() actions have executed. So when configuring a strongly typed settings object, the Options framework will run all "standard" configuration actions (in the order they were added to the DI container), followed by all "post" configuration actions (in the order they were added to the DI container).

This provides a nice, simple solution to our scenario. We can update the AddWhatsApp() extension method to use PostConfigure<T>(), and then we can be sure that it will run after any standard configuration actions:

public static class WhatsAppServiceCollectionExtensions
{
    public static IServiceCollection AddWhatsApp(this IServiceCollection services)
    {
        services.AddOptions();
        services.AddSingleton<IWhatsAppService, WhatsAppService>();

        // Use PostConfigure to ensure it runs after normal configuration
        services.PostConfigure<WhatsAppSettings>(options => 
        {
            if(options.ApiUrl == Constants.DefaultHostedUrl)
            {
                options.Region = Constants.DefaultHostedRegion;
            }
        });
        return services;
    }
}

Now users can place their call to Configure<WhatsAppSettings>() anywhere in ConfigureServices() and it’s fine. That’s a lot easier than relying on users to read your documentation that says “You must call Configure<WhatsAppSettings>() before calling AddWhatsApp()”. It’s more usable for the user, and hopefully means fewer issues raised on GitHub for you!

One of the first thoughts I had when discovering this method was "what if the end user uses PostConfigure<T>() too?" Well, in that situation, there’s not a lot you can do about it. But also, that’s fine – the idea here was to try and enforce certain constraints in normal circumstances. Fundamentally, if an application author wants to misuse your library they’ll always find a way…

Post configuration options for named options

The PostConfigure<T>() extension method doesn’t just support the “default” options instance, it also works with named instances (which I’ve discussed in previous posts). If you’re familiar with named options and their use, then the PostConfigure methods won’t hold any surprises – there’s a “post” configuration version for most of the “standard” configuration methods and interfaces (a short sketch follows this list):

  • PostConfigure<T>(options) – Configure the default options instance T
  • PostConfigure<T>(name, options) – Configure the named options instance T with name name
  • PostConfigureAll<T>(options) – Configure all the options instances T (both the default and named instances)
  • IPostConfigureOptions<T> – This is the "post" version of the IConfigureNamedOptions<T> instance, which allows you to use injected services when configuring your options. There is no "post" version of the IConfigureOptions<T> interface.
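
As a rough sketch of the first three methods, reusing the WhatsAppSettings type from earlier (the name "Internal" and the fallback ApiKey value are just examples):

public void ConfigureServices(IServiceCollection services)
{
    // Post-configure only the named instance "Internal"
    services.PostConfigure<WhatsAppSettings>("Internal", options =>
    {
        options.Region = Constants.DefaultHostedRegion;
    });

    // Post-configure the default instance and every named instance
    services.PostConfigureAll<WhatsAppSettings>(options =>
    {
        if (string.IsNullOrEmpty(options.ApiKey))
        {
            options.ApiKey = "not-configured"; // example fallback value
        }
    });
}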

Only the last point is worth watching out for, so I’ll reiterate: there is no "post" version of the IConfigureOptions<T> interface. The IPostConfigureOptions<T> interface (shown below) uses named instances:

public interface IPostConfigureOptions<in TOptions> where TOptions : class
{
    void PostConfigure(string name, TOptions options);
}

This means if you aren’t using named options, and you just want to implement an interface that’s equivalent to the IConfigureOptions<T> interface, you should look for the special Options.DefaultName value, which is string.Empty. So for example, imagine you have the following implementation of IConfigureOptions<T> which configures the default options instance:

public class ConfigureMySettingsOptions : IConfigureOptions<MySettings>
{
    private readonly CalculatorService _calculator;
    public ConfigureMySettingsOptions(CalculatorService calculator)
    {
        _calculator = calculator;
    }

    public void Configure(MySettings options)
    {
        options.MyValue = _calculator.DoComplexCalculation();
    }
}

If you want to create a "post" configuration version that only configures the default options, you should use:

public class ConfigureMySettingsOptions : IPostConfigureOptions<MySettings>
{
    private readonly CalculatorService _calculator;
    public ConfigureMySettingsOptions(CalculatorService calculator)
    {
        _calculator = calculator;
    }

    public void PostConfigure(string name, MySettings options)
    {
        // Only run when name == string.Empty
        if(name == Options.Options.DefaultName)
        {
            options.MyValue = _calculator.DoComplexCalculation();
        }
    }
}

You can add this post configuration class to your DI container using:

public void ConfigureServices(IServiceCollection services)
{
    services.AddSingleton<IPostConfigureOptions<MySettings>, ConfigureMySettingsOptions>();
}

Summary

In this post I showed how you could use PostConfigure<T>() to configure strongly-typed options after all the standard Configure<T>() actions have run. This is useful as a library author, as it allows you to do things like configure default values only if the user hasn’t configured options, or to enforce constraints. As an application user you generally shouldn’t need to use PostConfigure<T>() as you can already control the order in which configuration occurs, based on the order you call methods in ConfigureServices().


As mentioned in the announcement of the .NET Core 2.1 roadmap earlier today, at this point we know the overall shape of our next release and we have decided on a general schedule for it. As we approach the release of our first preview later this month, we also wanted to expand on what we have planned for Entity Framework Core 2.1.

New features

Although EF Core 2.1 is a minor release that builds over the foundational 2.0, it introduces significant new capabilities:

  • Lazy loading: EF Core now contains the necessary building blocks for anyone to write entity classes that can load their navigation properties on demand. We have also created a new package, Microsoft.EntityFrameworkCore.Proxies, that leverages those building blocks to produce lazy loading proxy classes based on minimally modified entity classes. In order to use these lazy loading proxies, you only need navigation properties in your entities to be virtual.
  • Parameters in entity constructors: as one of the required building blocks for lazy loading, we enabled the creation of entities that take parameters in their constructor. You can use parameters to inject property values, lazy loading delegates, and services.
  • Value conversions: Until now, EF Core could only map properties of types natively supported by the underlying database provider. Values were copied back and forth between columns and properties without any transformation. Starting with EF Core 2.1, value conversions can be applied to transform the values obtained from columns before they are applied to properties, and vice versa. We have a number of conversions that can be applied by convention as necessary, as well as an explicit configuration API that allows registering delegates for the conversions between columns and properties. Some of the applications of this feature (a short sketch follows this list) are:
    • Storing enums as strings
    • Mapping unsigned integers with SQL Server
    • Transparent encryption and decryption of property values
  • LINQ GroupBy translation: Before EF Core 2.1, the GroupBy LINQ operator would always be evaluated in memory. We now support translating it to the SQL GROUP BY clause in most common cases.
  • Data Seeding: With the new release it will be possible to provide initial data to populate a database. Unlike in EF6, in EF Core, seeding data is associated to an entity type as part of the model configuration. Then EF Core migrations can automatically compute what insert, update or delete operations need to be applied when upgrading the database to a new version of the model.
  • Query types: An EF Core model can now include query types. Unlike entity types, query types do not have keys defined on them and cannot be inserted, deleted or updated (i.e. they are read-only), but they can be returned directly by queries. Some of the usage scenarios for query types are:
    • Mapping to views without primary keys
    • Mapping to tables without primary keys
    • Mapping to queries defined in the model
    • Serving as the return type for FromSql() queries
  • Include for derived types: It will be now possible to specify navigation properties only defined in derived types when writing expressions for the Include() methods. The syntax looks like this:
    var query = context.People.Include(p => ((Student)p).School);
  • System.Transactions support: We have added the ability to work with System.Transactions features such as TransactionScope. This will work on both .NET Framework and .NET Core when using database providers that support it.
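
As a brief sketch of two of these features, value conversions and data seeding (the Order and OrderStatus types below are hypothetical, not taken from the announcement):

using Microsoft.EntityFrameworkCore;

public enum OrderStatus { Pending, Shipped }

public class Order
{
    public int Id { get; set; }
    public OrderStatus Status { get; set; }
}

public class OrderingContext : DbContext
{
    public DbSet<Order> Orders { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // Value conversion: store the OrderStatus enum as a string column
        modelBuilder.Entity<Order>()
            .Property(o => o.Status)
            .HasConversion<string>();

        // Data seeding: initial data that migrations will insert into the database
        modelBuilder.Entity<Order>().HasData(
            new Order { Id = 1, Status = OrderStatus.Pending });
    }
}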

Other improvements and new initiatives

Besides the major new features included in 2.1, we have made numerous smaller improvements and we have fixed more than a hundred product bugs. We also made progress on the following areas:

  • Optimization of correlated subqueries: We have improved our query translation to avoid executing N + 1 SQL queries in many common scenarios in which a root query is joined with a correlated subquery.
  • Column ordering in migrations: Based on customer feedback, we have updated migrations to initially generate columns for tables in the same order as properties are declared in classes.
  • Cosmos DB provider preview: We have been developing an EF Core provider for the DocumentDB API in Cosmos DB. This is the first document database provider we have produced, and the learnings from this exercise are going to inform improvements in the design of the subsequent release after 2.1. The current plan is to publish an early preview of the Cosmos DB provider in the 2.1 timeframe.
  • Sample Oracle provider for EF Core: We have produced a sample EF Core provider for Oracle databases. The purpose of the project is not to produce an EF Core provider owned by Microsoft, but to:
    1. Help us identify gaps in EF Core’s relational and base functionality which we need to address in order to better support Oracle.
    2. Help jumpstart the development of other Oracle providers for EF Core either by Oracle or third parties.

    Note that currently our sample is based on the latest available ADO.NET provider from Oracle, which only supports .NET Framework. As soon as an ADO.NET provider for .NET Core is made available, we will consider updating the sample to use it.

What’s next

We will be releasing the first preview of EF Core 2.1, including all the features mentioned above, later this month. After that, we intend to release additional previews monthly, and a final release in the first half of 2018.

A big thank you to everyone that uses EF Core, and to everyone who has helped make the 2.1 release better by providing feedback, reporting bugs, and contributing code.

Thursday, 15 November 2018 / Published in Uncategorized

Download Portability Analyzer (2.37 MB)

At Build 2018 we announced that we are enabling Windows desktop applications (Windows Forms and Windows Presentation Foundation (WPF)) with .NET Core 3.0. You will be able to run new and existing Windows desktop applications on .NET Core and enjoy all the benefits that .NET Core has to offer, such as application-local deployment and improved performance.

We want to make sure that .NET Core 3.0 includes all the APIs that your applications depend on. So, in order to learn which APIs are being used, we are releasing the Portability Analyzer, which will report the set of APIs referenced in your apps that are not yet available in .NET Core 3.0. This API list will be sent to Microsoft and will help us prioritize which APIs we should incorporate into the product before it ships.

Please download and run the tool (PortabilityAnalyzer.exe) on your Windows Forms and WPF apps to see how ready your apps are for .NET Core 3.0 and to help us shape the .NET Core 3.0 API set.

The Portability Analyzer

The Portability Analyzer is an open source tool that simplifies your porting experience by identifying APIs that are not portable among the various .NET Platforms. This tool has existed for a few years as a console application and a Visual Studio extension. We recently updated it with a Windows Forms UI, which you can find here. You can see what it looks like in the following image.

Running the tool will do two things:

  1. Generate an Excel spreadsheet that will report the level of compatibility that your project has with .NET Core 3.0, including the specific APIs that are currently unsupported.
  2. Send this same data to the .NET team at Microsoft so that we can determine which APIs are needed by the most people.

The data we are collecting is the same as what is in the spreadsheet. None of your source code or binaries will be sent from your machine to Microsoft.

To learn which APIs our users need, we are asking you to run the tool; the results will help us provide the best possible experience for porting your apps. At the same time, you will see how portable your apps are right now, since the tool generates a list of the APIs referenced in your assemblies that might not be supported in .NET Core 3.0.

We will prioritize adding new APIs in .NET Core 3.0 based on information we collect. Please help us help you by making sure your application’s API requirements are represented in the data that we use for prioritization. Please run the Portability Analyzer to ensure that your application is counted.

Using Portability Analyzer

Use the following instructions to run Portability Analyzer.

  1. Extract the archive anywhere on your local disk.
  2. Run PortabilityAnalyzer.exe.
  3. In the Path to application text box, enter the directory path of your Windows Forms or WPF app (either by typing the path or by clicking the Browse button and navigating to the folder).
  4. Click the Analyze button.
  5. After the analysis is complete, a report of how portable your app currently is to .NET Core 3.0 will be saved to disk. You can open it in Excel by clicking the Open Report button.

Below is an example of the report you get after running the tool on the popular Paint.NET application:

Note: Your report might include a “Missing Assemblies” tab. This means the analyzer could not find some of the assemblies referenced by your application. Make sure to locate those assemblies and add them to the folder you’re analyzing; otherwise you won’t get the full picture of your application.

Troubleshooting

Currently, Portability Analyzer cannot analyze resource assemblies (*.resources.dll). This results in the following error message:

Unable to analyze. Details: Detecting assembly references [Failed]

Cannot locate assembly information for System.Object. Microsoft assemblies found are:

We are working on providing a fix. To unblock analysis until the fix is available, you can remove resource files from the folder you are analyzing.

Using the console-based version

If you would like to run the analysis for many applications, you can either run them one by one as described above or you can use the console version of the Portability Analyzer.

Note: You will get one report per invocation of the ApiPort.exe tool. If you prefer to get one report per application, you can automate the invocations using a for loop in a batch script or ForEach in PowerShell.

To run the console app:

  1. Download and unzip Console Portability Analyzer.
  2. From the command prompt, run the following command, specifying multiple directories, DLLs, or executables:

    For example:

You can find the portability report saved as an Excel file (.xlsx) in your current directory.

Summary

Please download and use Portability Analyzer on your desktop applications. It will help you determine how compatible your apps are with .NET Core 3.0. This information will help us plan the 3.0 release with the goal of making it easy for you to adopt .NET Core 3.0 for desktop apps.

Download Portability Analyzer (2.37 MB)

Thank you in advance for your help!

Thursday, 15 November 2018 / Published in Uncategorized

In May, we announced .NET Core 3.0, the next major version of .NET Core that adds support for building desktop applications using WinForms, WPF, and Entity Framework 6. We also announced some exciting updates to .NET Framework which enable you to use the new modern controls from UWP in existing WinForms and WPF applications.

Today, Microsoft is sharing a bit more detail on what we’re building and the future of .NET Core and .NET Framework.

.NET Core 3.0 addresses three scenarios our .NET Framework developer community has asked for, including:

  • Side-by-side versions of .NET that support WinForms and WPF: Today there can only be one version of .NET Framework on a machine. This means that when we update .NET Framework on Patch Tuesday or via updates to Windows, there is a risk that a security fix, bug fix, or new API can break applications on the machine. With .NET Core, we solve this problem by allowing multiple versions of .NET Core on the same machine. Applications can be locked to one of the versions and can be moved to use a different version when ready and tested.

  • Embed .NET directly into an application: Today, since there can only be one version of .NET Framework on a machine, if you want to take advantage of the latest framework or language feature you need to install or have IT install a newer version on the machine. With .NET Core, you can ship the framework as part of your application. This enables you to take advantage of the latest version, features, and APIs without having to wait for the framework to be installed.

  • Take advantage of .NET Core features: .NET Core is the fast-moving, open source version of .NET. Its side-by-side nature enables us to quickly introduce new innovative APIs and BCL (Base Class Library) improvements without the risk of breaking compatibility. Now WinForms and WPF applications on Windows can take advantage of the latest .NET Core features, which also include more fundamental fixes for even better high-DPI support.

.NET Framework 4.8 addresses three scenarios our .NET Framework developer community has asked for, including:

  • Modern browser and modern media controls: Today, .NET desktop applications use Internet Explorer and Windows Media Player for showing HTML and playing media files. Since these legacy controls don’t show the latest HTML or play the latest media files, we are adding new controls that take advantage of Microsoft Edge and newer media players to support the latest standards.

  • Access to touch and UWP Controls: UWP (Universal Windows Platform) contains new controls that take advantage of the latest Windows features and touch displays. You won’t have to rewrite your applications to use these new features and controls. We are going to make them available to WinForms and WPF so that you can take advantage of these new features in your existing code.

  • High DPI improvements: The resolution of displays is steadily increasing to 4K and now even 8K resolutions. We want to make sure your existing WinForms and WPF applications can look great on these displays.

Given these updates, we’re hearing a few common questions, such as “What does this mean for the future of .NET Framework?” and “Do I have to move off .NET Framework to remain supported?” While we’ll provide detailed answers below, the key takeaway is that we will continue to move forward and support the .NET Framework, albeit at a slower pace.

How Do We Think of .NET Framework and .NET Core Moving Forward?

.NET Framework is the implementation of .NET that’s installed on over one billion machines and thus needs to remain as compatible as possible. Because of this, it moves at a slower pace than .NET Core. I mentioned above that even security and bug fixes can cause breaks in applications because applications depend on the previous behavior. We will make sure that .NET Framework always supports the latest networking protocols, security standards, and Windows features.

.NET Core is the open source, cross-platform, and fast-moving version of .NET. Because of its side-by-side nature it can take changes that we can’t risk applying back to .NET Framework. This means that .NET Core will get new APIs and language features over time that .NET Framework cannot. At Build I did a demo showing how the file APIs were faster on .NET Core. If we put those same changes into .NET Framework we could break existing applications, and we don’t want to do that.

We will continue to make it easier to move applications to .NET Core. .NET Core 3.0 takes a huge step by adding WPF, WinForms, and Entity Framework 6 support, and we will keep porting APIs and features to help close the gap and make migration easier for those who choose to do so.

If you have existing .NET Framework applications, you should not feel pressured to move to .NET Core. Both .NET Framework and .NET Core will move forward, and both will be fully supported; .NET Framework will always be a part of Windows. But moving forward they will contain somewhat different features. Even inside Microsoft we have many large product lines that are based on .NET Framework and will remain on .NET Framework.

In conclusion, this is an amazing time to be a .NET developer. We are continuing to advance the .NET Framework with some exciting new features in 4.8 to make your desktop applications more modern. .NET Core is expanding into new areas like Desktop, IoT, and Machine Learning. And we are making it easier and easier to share code across all the .NETs with .NET Standard.

Scott Hunter, Director of Program Management for .NET

Scott Hunter works for Microsoft as a Director of Program Management for .NET. This includes .NET Framework, .NET Core, Managed Languages, ASP.NET, Entity Framework, and .NET tooling. Before this, Scott was the CTO of several startups, including Mustang Software and Starbase, where he focused on a variety of technologies, but programming the Web has always been his real passion.