Tuesday, May 29, 2018 / Published in Uncategorized

At Microsoft Build Live today, we are sharing a first look at our plans for .NET Core 3. The highlight of .NET Core 3 is support for Windows desktop applications, specifically Windows Forms, Windows Presentation Foundation (WPF), and UWP XAML. You will be able to run new and existing Windows desktop applications on .NET Core and enjoy all the benefits that .NET Core has to offer.

We are planning on releasing a first preview of .NET Core 3 later this year and the final version in 2019. We will be looking for developers to partner with us, to give us feedback, and to release versions of your applications in the same timeframe as our releases. We think that .NET Core 3.0 will be one of the most exciting .NET releases we’ve ever shipped.

ASP.NET Core will continue to move forward in parallel and will have a release with .NET Core 3.0. Our commitment to web and cloud applications remains unchanged. At the same time, it’s time to add Windows desktop applications as another supported workload for .NET Core. We have heard many requests for desktop applications with .NET Core and are now sharing our plan to deliver on that. Let’s take a look at that.

Benefits of .NET Core for Desktop

.NET Core has many benefits that are great for desktop apps. A few are worth calling out explicitly:

  • Performance improvements and other runtime updates that will delight your users
  • Super easy to use or test a new version of .NET Core for just one app on a machine
  • Enables both machine-global and application-local deployment
  • Support for the .NET Core CLI tools and SDK-style projects in Visual Studio

We’re also announcing a set of improvements that we’ll be adding to both .NET Core 3.0 and .NET Framework 4.8:

  • Access to the full Windows 10 (AKA “WinRT”) API.
  • Ability to host UWP XAML controls in WPF and Windows Forms applications.
  • Ability to host UWP browser and media controls, enabling modern browser and media content and standards.

.NET Framework 4.8

We’re also announcing our plans for .NET Framework 4.8, after shipping .NET Framework 4.7.2 only a week ago. We expect the next version to be 4.8 and for it to ship in about 12 months. Like the past few releases, the new release will include a set of targeted improvements, including the features you see listed above.

Visualizing .NET Core 3

Let’s take a look at the overall shape of .NET Core 3.

Support for Windows desktop will be added as a set of “Windows Desktop Packs”, which will only work on Windows. .NET Core isn’t changing architecturally with this new version. We’ll continue to offer a great cross-platform product, focused on the cloud. We have lots of improvements planned for those scenarios that we’ll share later.

From a 1000-meter view, you can think of WPF as a rich layer over DirectX and Windows Forms as a thinner layer over GDI+. WPF and Windows Forms do a great job of exposing and exercising much of the desktop application functionality in Windows. It’s the C# code in Windows Forms and WPF that we’ll include as a set of libraries with .NET Core 3. Windows functionality, like GDI+ and DirectX, will remain in Windows.

We’ll also be releasing a new version of .NET Standard at the same time. Naturally, all new .NET Standard APIs will be part of .NET Core 3.0. We have not yet added Span<T>, for example, to the standard. We’ll be doing that in the next version.

C#, F#, and VB already work with .NET Core 2.0. You will be able to build desktop applications in any of those three languages on .NET Core 3.

Side-by-side and App-local Deployment

The .NET Core deployment model is one of the biggest benefits that Windows desktop developers will experience with .NET Core 3. In short, you can install .NET Core in pretty much any way you want. It comes with a lot of deployment flexibility.

The ability to globally install .NET Core provides much of the same central installation and servicing benefits of .NET Framework, while not requiring in-place updates.

When a new .NET Core version is released, you can update one app on a machine at a time without any concern for affecting other applications. New .NET Core versions are installed in new directories and are not used by existing applications.

For cases where maximum isolation is required, you can deploy .NET Core with your application. We’re working on new build tools that will bundle your app and .NET Core together in a single executable, as a new option.
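With today’s .NET Core tooling, an application-local (self-contained) deployment is produced with a publish command along these lines; the single-executable bundling described above will be new work layered on top of this:

dotnet publish -c Release -r win-x64 --self-contained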

We’ve had requests for deployment options like this for many years, but were never able to deliver those with the .NET Framework. The much more modular architecture used by .NET Core makes these flexible deployment options possible.

Using .NET Core 3 for an Existing Desktop Application

For new desktop applications, we’ll guide everyone to start with .NET Core 3. The more interesting question is what the experience will be like to move existing applications, particularly big ones, to .NET Core 3. We want the experience to be straightforward enough that moving to .NET Core 3 is an easy choice for you, for any application that is in active development. Applications that are not getting much investment and don’t require much change should stay on .NET Framework 4.8.

Quick explanation of our plan:

  • Desktop applications will need to target .NET Core 3 and recompile.
  • Project files will need to be updated to target .NET Core 3.
  • Dependencies will not need to be retargeted or recompiled. There will be additional benefits if you update dependencies.

We intend to provide compatible APIs for desktop applications. We plan to make WPF and Windows Forms side-by-side capable, but otherwise as-is, and make them work on .NET Core. In fact, we have already done this with a number of our own apps and others we have access to.

We have a version of Paint.NET running in our lab. In fact, we didn’t have access to Paint.NET source code. We got the existing Paint.NET binaries working on .NET Core. We didn’t have a special build of WPF available, so we just used the WPF binaries in the .NET Framework directory on our lab machine. As an aside, this exercise uncovered an otherwise unknown bug in threading in .NET Core, which was fixed for .NET Core 2.1. Nice work, Paint.NET!

We haven’t done any optimization yet, but we found that Paint.NET has faster startup on .NET Core. This was a nice surprise.

Similarly, EF6 will be updated to work on .NET Core 3.0, to provide a simple path forward for existing applications using EF6. But we don’t plan to add any major new features to EF6. EF Core will be extended with new features and will remain the recommended data stack for all types of new applications. We will advise that you port to EF Core if you want to take advantage of the new features and improved performance.

There are many design decisions ahead, but the early signs are very good. We know that compatibility will be very important to everyone moving existing desktop applications to .NET Core 3. We will continue to test applications and add more functionality to .NET Core to support them. We will post about any APIs that are hard to support, so that we can get your feedback.

Updating Project Files

With .NET Core projects, we adopted SDK-style projects. One of the key aspects of SDK-style projects is PackageReference, which is a newer way of referencing NuGet packages. PackageReference replaces packages.config. PackageReference also makes it possible to reference a whole component area at once, not just a single assembly at a time.
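For reference, a PackageReference is just an item in the project file (the package name and version here are only illustrative):

<ItemGroup>
  <PackageReference Include="Newtonsoft.Json" Version="11.0.2" />
</ItemGroup>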

The biggest experience improvements with SDK-style projects are:

  • Much smaller and cleaner project files
  • Much friendlier to source control (fewer changes and smaller diffs)
  • Edit project files in Visual Studio without unloading
  • NuGet is part of the build and responsive to changes like target framework update
  • Supports multi-targeting

The first part of adopting .NET Core 3 for desktop projects will be migrating to SDK-style projects. There will be a migration experience in Visual Studio and one available at the command line.

An example of an SDK-style project, for ASP.NET Core 2.1, follows. .NET Core 3 project files will look similar.
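A minimal sketch of such a project file (the exact content generated by the tooling may differ slightly):

<Project Sdk="Microsoft.NET.Sdk.Web">

  <PropertyGroup>
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Microsoft.AspNetCore.App" />
  </ItemGroup>

</Project>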

Controls, NuGet Packages, and Existing Assembly References

Desktop applications often have many dependencies: maybe from a control vendor, from NuGet, or binaries that no longer have source available. Not all of that can be updated to .NET Core 3 quickly, and some of it may never be.

As stated above, we intend to support dependencies as-is. If you are at the Build conference, you will see Scott Hunter demo a .NET Core 3 desktop application that uses an existing 3rd-party control. We will continue testing scenarios like that to validate .NET Core 3 compatibility.

Next Steps

We will start doing the following, largely in parallel:

  • Test .NET Framework desktop applications on .NET Core to determine what prevents them from working easily. We will often do this without access to source code.
  • Enable you to easily and anonymously share dependency data with us so that we can collect broad aggregate data about applications, basically “crowd voting” on the shape of what .NET Core 3 should be.
  • Publish early designs so that we can get early feedback from you.

We hope that you will work with us along the way to help us make .NET Core 3 a great release.

Closing

We have been asking for feedback on surveys recently. Thanks so much for filling those out. The response has been incredible, resulting in thousands of responses within the first day. With this last survey, we asked a subset of respondents for feedback over Skype, walking them through (unknown to them at the time) our Build conference slides. The response has been very positive. Based on everything we have read and heard, we believe that the .NET Core 3 feature set and its characteristics are on the right track.

Today’s news demonstrates a large investment in and commitment to Windows desktop applications. You can expect two releases from us in 2019, .NET Core 3 and .NET Framework 4.8. A number of the features are shared between the two releases and some others are only available in .NET Core 3. We think the commonality and the differences provide a great set of choices for moving forward and modernizing your desktop applications.

It is an exciting time to be a .NET developer.

Tuesday, May 29, 2018 / Published in Uncategorized

Today, we are excited to announce that the first release candidate of EF Core 2.1 is available, alongside .NET Core 2.1 RC 1 and ASP.NET Core 2.1 RC 1, for broad testing, and now also for production use!

Go live support

EF Core 2.1 RC1 is a “go live” release, which means once you test that your application works correctly with RC1, you can use it in production and obtain support from Microsoft, but you should still update to the final stable release once it’s available.

Go live support for EF Core RC1 extends to the base functionality in EF Core and to the providers that are developed as part of the Entity Framework Core project, like the SQL Server, SQLite, and in-memory database providers. If you are using any other providers, we recommend you verify what level of support you can get from their corresponding developers.

Changes since Preview 2

For the full details on what has changed since EF Core 2.0, look at the What’s New section of our documentation. The main new features are:

  • GroupBy translation
  • Lazy loading
  • Parameters in entity constructors
  • Value conversion (see the sketch after this list)
  • Query types
  • Data seeding
  • System.Transactions support
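As one example of these, value conversions let a property be transformed to and from its database representation; here is a minimal sketch (the entity, enum, and column choices are hypothetical, not from the announcement):

public enum OrderStatus { Pending, Shipped }

public class Order
{
    public int Id { get; set; }
    public OrderStatus Status { get; set; }
}

// Inside your DbContext:
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    // Store the enum as a string column and convert it back when materializing
    modelBuilder.Entity<Order>()
        .Property(o => o.Status)
        .HasConversion(
            v => v.ToString(),
            v => (OrderStatus)Enum.Parse(typeof(OrderStatus), v));
}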

We have been stabilizing the product since Preview 2; therefore, there are no new features in RC1. In fact, all changes in RC1 are either bug fixes or very small functional or performance improvements on existing features.

You can get an up-to-date list of bug fixes and small enhancements using this issue tracker query. Some of the small RC1 enhancements worth mentioning are:

  • #9295 List.Exists is now translated to SQL. Thank you Massimiliano Donini for your contribution!
  • #9347 Several performance improvements in model building
  • #11570 DbContext scaffolding now generates a constructor that allows injection of DbContextOptions<TContext>

Obtaining the bits

The new bits are available in NuGet as individual packages, as part of the ASP.NET Core 2.1 RC1 metapackage and in the .NET Core 2.1 RC1 SDK, also released today.

The recommended way to obtain the new packages for ASP.NET Core applications is through the installation of the new SDK, rather than updating the packages. For other applications, either the SDK can be installed or the packages can be updated using the dotnet command line tool or NuGet.

EF Core 2.1 RC1 and the corresponding versions of the SQL Server and in-memory database providers are included in the ASP.NET Core metapackage. Therefore, if your application is an ASP.NET Core application and you are using one of these providers, you don’t need additional upgrade steps.

If you’re using one of the database providers developed as part of the Entity Framework Core project (for example, SQL Server, SQLite or In-Memory), you can install EF Core 2.1 RC1 bits by simply installing the latest version of a provider. For example, using dotnet on the command-line:

$ dotnet add package Microsoft.EntityFrameworkCore.Sqlite --version 2.1.0-rc1-final

If you’re using another EF Core 2.0-compatible relational database provider, it’s recommended that in order to obtain all the newest EF Core bits, you add a direct reference to the base relational provider in your application, for example:

$ dotnet add package Microsoft.EntityFrameworkCore.Relational --version 2.1.0-rc1-final

When updating packages, make sure that all EF Core packages are updated to the RC1 version. Mixing EF Core or infrastructure packages from older .NET Core versions (including previous 2.1 preview bits) will likely cause errors.

Provider compatibility

As we mentioned in previous announcements, some of the new features in 2.1, such as value conversions, require an updated database provider. However, it was our original goal that existing providers developed for EF Core 2.0 would be compatible with EF Core 2.1 as long as you didn’t try to use the new features.

In practice, testing has shown that some of the EF Core 2.0 providers are not going to be compatible with 2.1. Also, there have been changes in the code necessary to support the new features since Preview 1 and Preview 2. Therefore, we recommend that you use a provider that has been updated for EF Core 2.1 RC1.

We have heard that some of these updated providers will be available within days of this announcement. Others may take longer.

We have been working and will continue to work with provider writers to make sure we identify and address any issues with the upgrade. In the particular case of Pomelo.EntityFrameworkCore.MySql, we are actively working with the developers to help them get it ready for 2.1.

If you experience any new incompatibility, please report it by creating an issue in our GitHub repository.

What’s next

We expect to ship the final version of EF Core 2.1 in the first half of 2018, as planned. We’re now getting very close to the finish line!

In the meantime, planning for the next versions of EF Core after 2.1 is ongoing. Stay tuned for upcoming announcements in this blog.

Thank you!

As always, the entire Entity Framework team wants to express our deep gratitude to everyone who has helped in making this release better by trying early builds, providing feedback, reporting bugs, and contributing code.

Please try EF Core 2.1 RC1, and keep posting any new feedback to our issue tracker!

Tuesday, May 29, 2018 / Published in Uncategorized

Today at //Build 2018, we are excited to announce the preview of ML.NET, a cross-platform, open source machine learning framework. ML.NET will allow .NET developers to develop their own models and infuse custom ML into their applications without prior expertise in developing or tuning machine learning models.

ML.NET was originally developed in Microsoft Research and evolved into a significant framework over the last decade; it is used across many product groups in Microsoft like Windows, Bing, Azure, and more.

With this first preview release, ML.NET enables ML tasks like classification (e.g. text categorization and sentiment analysis) and regression (e.g. forecasting and price prediction). Along with these ML capabilities, this first release of ML.NET also brings the first draft of .NET APIs for training models, using models for predictions, as well as the core components of this framework, such as learning algorithms, transforms, and core ML data structures.

ML.NET is first and foremost a framework, which means that it can be extended to add popular ML Libraries like TensorFlow, Accord.NET, and CNTK. We are committed to bringing the full experience of ML.NET’s internal capabilities to ML.NET in open source.

To sum it all up, ML.NET is our commitment to make ML great in .NET.

Please come and join us over on GitHub and help shape the future of ML in .NET:

https://github.com/dotnet/machinelearning

Over time, ML.NET will enable other ML scenarios like recommendation systems, anomaly detection, and other approaches, like deep learning, by leveraging popular deep learning libraries like TensorFlow, Caffe2, and CNTK, and general machine learning libraries like Accord.NET.

ML.NET also complements the experience that Azure Machine Learning and Cognitive Services provide by allowing for a code-first approach, supporting app-local deployment, and enabling you to build your own models.

The rest of this blog post provides more details about ML.NET; feel free to jump to the one that interests you the most.

ML.NET Core Components

ML.NET is being launched as a part of the .NET Foundation and the repo today contains the .NET C# API(s) for both model training and consumption, along with a variety of transforms and learners required for many popular ML tasks like regression and classification.

ML.NET is aimed at providing the E2E workflow for infusing ML into .NET apps across pre-processing, feature engineering, modeling, evaluation, and operationalization.

ML.NET comes with support for the types and runtime needed for all aspects of machine learning, including core data types, extensible pipelines, high performance math, data structures for heterogeneous data, tooling support, and more.

The components being released as part of ML.NET 0.1 include the core data structures, transforms, and learners described above.

We aim to make ML.NET’s APIs generic, such that other frameworks like CNTK, Accord.NET, TensorFlow and other libraries can become usable through one shared API.

Getting Started: Installation

To get started with ML.NET, install the ML.NET NuGet from the CLI using:

dotnet add package Microsoft.ML

From package manager:

Install-Package Microsoft.ML

You can build the framework directly from https://github.com/dotnet/machinelearning.

Sentiment Classification with ML.NET

Train your own model

Here is a simple snippet to train a model for sentiment classification (full snippet of code can be found here).


var pipeline = new LearningPipeline();

pipeline.Add(new TextLoader<SentimentData>(dataPath, separator: ","));

pipeline.Add(new TextFeaturizer("Features", "SentimentText"));

pipeline.Add(new FastTreeBinaryClassifier());

pipeline.Add(new PredictedLabelColumnOriginalValueConverter() { PredictedLabelColumn = "PredictedLabel" });

var model = pipeline.Train<SentimentData, SentimentPrediction>();

Let’s go through this in a bit more detail. We create a LearningPipeline which will encapsulate the data loading, data processing/featurization, and learning algorithm. These are the steps required to train a machine learning model which allows us to take the input data and output a prediction.

The first part of the pipeline is the TextLoader, which loads the data from our training file into our pipeline. We then apply a TextFeaturizer to convert the SentimentText column into a numeric vector called Features which can be used by the machine learning algorithm (as it cannot take text input). This is our preprocessing/featurization step.

FastTreeBinaryClassifier is a decision tree learner we will use in this pipeline. As with the featurization step, trying out different learners available in ML.NET and changing their parameters may lead to better results. PredictedLabelColumnOriginalValueConverter converts the model’s predicted labels back to their original value/format.

pipeline.Train<SentimentData, SentimentPrediction>() trains the pipeline (loads the data, trains the featurizer and learner). The experiment is not executed until this happens.
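For context, the SentimentData and SentimentPrediction classes used above look roughly like this in the early walkthroughs (a sketch; the attribute names changed across preview releases):

using Microsoft.ML.Runtime.Api;

public class SentimentData
{
    // Column 0 of the training file is the label, column 1 is the text
    [Column(ordinal: "0", name: "Label")]
    public float Sentiment;

    [Column(ordinal: "1")]
    public string SentimentText;
}

public class SentimentPrediction
{
    // Maps the model's PredictedLabel output column to a .NET field
    [ColumnName("PredictedLabel")]
    public bool Sentiment;
}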

Use the trained model for predictions


SentimentData data = new SentimentData
{
    SentimentText = "Today is a great day!"
};

SentimentPrediction prediction = model.Predict(data);

Console.WriteLine("prediction: " + prediction.Sentiment);

To get a prediction, we use model.Predict() on new data. Note that the input data is a string and the model includes the featurization, so our pipeline stays in sync during both training and prediction. We didn’t have to write preprocessing/featurization code specifically for predictions.

For more getting-started scenarios, please refer to the documentation walkthroughs, which go over sentiment analysis and taxi fare prediction in more detail.

The Road Ahead

There are many capabilities we aspire to add to ML.NET, but we would love to understand what will best fit your needs. The current areas we are exploring are:

  • Additional ML Tasks and Scenarios
  • Deep Learning with TensorFlow & CNTK
  • ONNX support
  • Scale-out on Azure
  • Better GUI to simplify ML tasks
  • Integration with VS Tools for AI
  • Language Innovation for .NET

Help shape ML.NET for your needs

Take it for a spin, build something with it, and tell us what ML.NET should be better at. File a couple of issues and suggestions on GitHub and help shape ML.NET for your needs.

https://github.com/dotnet/machinelearning

If you prefer reaching out to us directly, you can do so by providing your details through this very short survey.

Tuesday, May 29, 2018 / Published in Uncategorized

7 Things Worth Knowing About ASP.NET Core Logging

Logging is an important aspect of any professional web application. You often need to log data, information, errors, system events, and similar things. ASP.NET Core comes with a set of inbuilt logging components that you can use for this purpose. To that end, this article walks through seven things that are worth knowing about ASP.NET Core logging.

Let’s begin!

1. NuGet packages involved

The inbuilt logging functionality of ASP.NET Core is bundled in the Microsoft.Extensions.Logging namespace. When you create a new ASP.NET Core 2 project, the relevant NuGet packages are already included via the Microsoft.AspNetCore.All metapackage.

If they are not added to your project for some reason, you should add them yourself, for example from the command line as shown below.
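Which packages you add depends on the providers you plan to use; a typical pair might be:

dotnet add package Microsoft.Extensions.Logging
dotnet add package Microsoft.Extensions.Logging.Console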

2. Interfaces

The logging features of ASP.NET Core are built on the foundation of the following interfaces:

  • ILogger
  • ILoggerProvider
  • ILoggerFactory

The ILogger interface contains methods that write the specified information to a log store. An ILoggerProvider creates ILogger objects. An ILoggerFactory creates an ILogger using the registered ILoggerProviders.

The LoggerFactory class is a concrete implementation of ILoggerFactory.

3. Inbuilt loggers

ASP.NET Core comes with the following inbuilt ILogger implementations:

  • Console
  • Debug
  • EventSource
  • EventLog
  • TraceSource
  • Azure App Service

The ConsoleLogger writes the log messages to the system console. The DebugLogger outputs the log information to the Visual Studio debug window. The other logger classes log messages to their respective stores (for example, the Windows event log).

4. Enable logging in a web application

In order to use any of the loggers mentioned earlier, you need to enable logging in your web application. For ASP.NET Core 2 applications, the Program.cs file contains a piece of code along these lines:
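(The snippet below is a sketch of what the default project template generates; your file may differ slightly.)

public class Program
{
    public static void Main(string[] args)
    {
        BuildWebHost(args).Run();
    }

    // CreateDefaultBuilder sets up configuration and registers the Console and Debug loggers
    public static IWebHost BuildWebHost(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>()
            .Build();
}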

The CreateDefaultBuilder() call registers Console and Debug loggers for you. The other loggers can be registered through the Configure() method as shown below:

public void Configure(IApplicationBuilder app,
ILoggerFactory loggerFactory)
{
    loggerFactory.AddEventSourceLogger();

    loggerFactory.AddEventLog();

    app.UseMvcWithDefaultRoute();

}

As you can see, the code uses the AddEventSourceLogger() and AddEventLog() methods to register the event source logger and the event log logger with the system. Your log messages will now be sent to Event Tracing for Windows (ETW) and the Windows event log, respectively.

5. Injecting ILogger in MVC and Razor Pages

Before you start logging anything from your controllers or Razor Pages, you will want to inject an appropriate ILogger object into them. This is how you do that in an MVC controller:

public class HomeController : Controller
{
    private ILogger logger = null;

    public HomeController(ILogger<HomeController> logger)
    {
        this.logger = logger;
    }
}

As you can see, the above code receives an ILogger<HomeController> in the constructor; the type parameter sets the log category for messages written from this controller. You can then use this ILogger implementation to log messages.

Razor Pages use a similar approach. The following code shows an ILogger being injected into a page model.

public class IndexModel : PageModel
{
    private ILogger logger = null;

    public IndexModel(ILogger<IndexModel> logger)
    {
        this.logger = logger;
    }
}

6. Write information to a log

Once you grab an ILogger object, you can log messages as shown below:

public IActionResult Index()
{
    logger.LogInformation("This is a log message!");

    return View();
}

The code uses LogInformation() to log an informational message to the registered logging providers. The logger implementations provide methods such as:

  • LogTrace()
  • LogDebug()
  • LogInformation()
  • LogWarning()
  • LogError()
  • LogCritical()

These methods log a message with a certain "log level". You can use various overloads of these logging methods to log exception details. For example, you can log exception details as shown below:

logger.LogInformation(ex, "This is a log message!");

You can also use the Log() method and explicitly specify details such as the log level and event ID, as shown in the sketch below.
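For example (the event ID, its name, and the userName variable are purely illustrative):

logger.Log(LogLevel.Warning, new EventId(1001, "LoginFailed"),
    "Login failed for user {UserName}", userName);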

A sample run of the above code writes the message to the registered providers, such as the console and the Visual Studio Output window.

The usage of these logging methods in Razor Pages is identical to that in MVC applications.

7. Log levels

Log levels indicate the importance of the message being logged. A log level is indicated by a number from 0 to 5, with each level intended for the purpose outlined below:

  • 0 – Trace: Messages for tracing purposes.
  • 1 – Debug: Short-term debugging information.
  • 2 – Information: Tracks the general flow of the application.
  • 3 – Warning: Abnormal or unexpected program flow.
  • 4 – Error: Errors and exceptions that can’t be handled by your code.
  • 5 – Critical: Serious system failures or errors.

As you might have guessed, the logging methods discussed earlier are meant for the corresponding log levels. For example, the LogInformation() method is intended to log a message with a log level of 2, LogError() is intended to log a message with a log level of 4, and so on.

To read more about error logging in ASP.NET Core go here.

That’s it for now! Keep coding!!

Tuesday, May 29, 2018 / Published in Uncategorized

I’ve been teaching ASP.NET Core for a while now, and some of the things I’ve been saying I’ve taken on faith. One of these was that building a Configuration Source (a provider that can read configuration and feed it into the configuration system) is fairly easy.

With a break in building my Vue course, I was curious so I decided to build a configuration source that I hope no one uses. It is a configuration source that reads the AppSettings in a web.config file. This is a thought exercise, not a recommendation.

Building the Configuration Source

In order to be a Configuration Source, you must implement the IConfigurationSource interface. You could implement this interface directly, or, if your configuration is file based (like the one I built), you can simply derive from FileConfigurationSource. This class deals with most of the work of opening and reading a file. Since my Configuration Source is to read the web.config, that’s perfect.

In my constructor, I just set some options that are really about how FileConfigurationSource handles a file:

  public class AppSettingsConfigurationSource : FileConfigurationSource
  {
    public AppSettingsConfigurationSource()
    {
      Path = "web.config";
      ReloadOnChange = true;
      Optional = true;
      FileProvider = null;
    }

    public override IConfigurationProvider Build(IConfigurationBuilder builder)
    {
      EnsureDefaults(builder);
      return new AppSettingsConfigurationProvider(this);
    }
  }

The only thing I really need to do with the Source itself is to override the Build method and just return a new provider that does the actual work.

Building the Provider

The provider is where all the real work happens. Like the Configuration Source, there is an interface you can implement: IConfigurationProvider. For my needs, I am just going to use the FileConfigurationProvider as the base class as it goes well with the FileConfigurationSource:

  public class AppSettingsConfigurationProvider : FileConfigurationProvider
  {
    public AppSettingsConfigurationProvider(AppSettingsConfigurationSource source) 
      : base(source)
    {

    }

The work is done, in the case of FileConfigurationProvider, by overriding the Load method:

    public override void Load(Stream stream)
    {
      try
      {
        Data = ReadAppSettings(stream);
      }
      catch
      {
        throw new FormatException("Failed to read from web.config");
      }
    }

It passes a stream in that contains the contents of the file specified in the Configuration Source. At this point I just need to read the file and set the Data to a Dictionary<string,string>.

Finally, I implement this new method (ReadAppSettings) just by using old, boring System.Xml code:

    private IDictionary<string, string> ReadAppSettings(Stream stream)
    {
      var data = 
        new SortedDictionary<string, string>(StringComparer.OrdinalIgnoreCase);

      var doc = new XmlDocument();
      doc.Load(stream);

      var appSettings = doc.SelectNodes("/configuration/appSettings/add");

      foreach (XmlNode child in appSettings)
      {
        data[child.Attributes["key"].Value] = child.Attributes["value"].Value;
      }

      return data;
    }
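For reference, the file this code reads is the classic appSettings section of a web.config (the key and value here are just placeholders):

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <appSettings>
    <add key="SiteTitle" value="My Legacy App" />
  </appSettings>
</configuration>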

With the implementation complete, we need to have a way to register it when we set up the configuration.

Adding Sources to Configuration

If you’re just reading this to figure out how to use my provider, you can get it from NuGet, though I’d prefer you migrated your AppSettings to another configuration format. But if I can’t talk you out of it, just add it to the project via the dotnet command line:

> dotnet add package WilderMinds.Configuration.AppSettings

I could have required that you just add the source via the Add method:

public static IWebHost BuildWebHost(string[] args) =>
  WebHost.CreateDefaultBuilder(args)
         .ConfigureAppConfiguration(cfg =>
         {
           cfg.Sources.Clear();
           cfg.Add<AppSettingsConfigurationSource>(s => s.Path = "web.config");
         })
         .UseStartup<Startup>()
         .Build();

This is awkward, so I’d prefer to create a convenience method so you can just call "AddAppSettings()". To do this, I just write an extension method for IConfigurationBuilder so we can add it easily:

  public static class AppSettingConfigurationExtensions
  {
    public static IConfigurationBuilder AddAppSettings(this IConfigurationBuilder bldr)
    {
      return bldr.Add(new AppSettingsConfigurationSource());
    }
  }

This allows the configuration setup code to be simply:

public static IWebHost BuildWebHost(string[] args) =>
    WebHost.CreateDefaultBuilder(args)
            .ConfigureAppConfiguration(cfg =>
            {
              cfg.Sources.Clear();
              cfg.AddAppSettings();
            })
            .UseStartup<Startup>()
            .Build();
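Once the source is registered, its values show up through IConfiguration like any other configuration source. For example, in a controller (the "SiteTitle" key is hypothetical):

public class HomeController : Controller
{
    private readonly IConfiguration _config;

    public HomeController(IConfiguration config)
    {
        _config = config;
    }

    public IActionResult Index()
    {
        // Reads <add key="SiteTitle" value="..." /> from web.config's appSettings
        ViewBag.SiteTitle = _config["SiteTitle"];
        return View();
    }
}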

If you want to look at the source code, you can get it on GitHub:

https://github.com/shawnwildermuth/ConfigurationAppSettings

Let me know if you have a better way to do any of this!

Ready to Learn Vue with ASP.NET Core?

My new Wilder Minds’ course is available as an Early Access for only $79. It will be released on a weekly basis. The first module is now available:

Enroll Today

Tuesday, May 29, 2018 / Published in Uncategorized

Using JwtBearer Authentication in an API-only ASP.NET Core Project

Shawn Wildermuth, Apr 10, 2018 (ASP.NET Core, Security, JWT Tokens)

In my Pluralsight courses1 on ASP.NET Core, I show how to use JWT tokens to secure your API. In building a new example for my upcoming Vue.js course, I decided to only use JWT (not cookies and JWT like many of my examples do).

But I kept getting redirects on failure when calling an API, which made me realize that I wasn’t sure how to make JWT the only provider. After some fiddling I figured it out. This blog post is mostly to remind me of how to do it.

UPDATED!

After help from @khellang, I found the real culprit. See the new section at the bottom.

Prior Blogpost

If you haven’t seen how to handle Cookies & JwtBearer tokens, see my other post:

https://wildermuth.com/2017/08/19/Two-AuthorizationSchemes-in-ASP-NET-Core-2

Using JwtBearer

I knew I had to add the JwtBearer when I set up the AddAuthentication call:

public void ConfigureServices(IServiceCollection services)
{
  services.AddIdentity<IdentityUser, IdentityRole>(cfg =>
  {
    cfg.User.RequireUniqueEmail = true;
  })
    .AddEntityFrameworkStores<StoreContext>();

  services.AddAuthentication()
    .AddJwtBearer(cfg =>
    {
      cfg.TokenValidationParameters = new TokenValidationParameters()
      {
        ValidateIssuer = true,
        ValidIssuer = _config["Security:Tokens:Issuer"],
        ValidateAudience = true,
        ValidAudience = _config["Security:Tokens:Audience"],
        ValidateIssuerSigningKey = true,
        IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(_config["Security:Tokens:Key"])),

      };
    });

  services.AddDbContext<StoreContext>();
  services.AddScoped<IStoreRepository, ProductRepository>();
  services.AddScoped<StoreDbInitializer>();

  services.AddMvc();
}

But just adding a single provider didn’t make it the default. Originally, I decided to use the same method as the dual-authentication approach, specifying the scheme on the attribute:

[Route("api/[controller]")]
[Authorize(JwtBearerDefaults.AuthenticationScheme)]
public class OrdersController : Controller
  

But my goal was not to have to specify it; I want it to be the default. The trick seemed to be that I needed to tell Authentication that the DefaultAuthenticateScheme and the DefaultChallengeScheme needed to use the JwtBearer:

  services.AddAuthentication(cfg =>
  {
    cfg.DefaultAuthenticateScheme = JwtBearerDefaults.AuthenticationScheme;
    cfg.DefaultChallengeScheme = JwtBearerDefaults.AuthenticationScheme;
  })
    .AddJwtBearer(cfg =>
    {
      cfg.TokenValidationParameters = new TokenValidationParameters()
      {
        ValidateIssuer = true,
        ValidIssuer = _config["Security:Tokens:Issuer"],
        ValidateAudience = true,
        ValidAudience = _config["Security:Tokens:Audience"],
        ValidateIssuerSigningKey = true,
        IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(_config["Security:Tokens:Key"])),

      };
    });

Then I could just use Authorize attribute and it would work:

[Route("api/[controller]")]
[Authorize]
public class OrdersController : Controller
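For completeness, a token that satisfies those validation parameters might be created in a login action along these lines. This is a sketch, not code from the original post; it assumes the same Security:Tokens configuration values and a user object coming from Identity:

var claims = new[]
{
    new Claim(JwtRegisteredClaimNames.Sub, user.UserName),
    new Claim(JwtRegisteredClaimNames.Jti, Guid.NewGuid().ToString())
};

var key = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(_config["Security:Tokens:Key"]));
var creds = new SigningCredentials(key, SecurityAlgorithms.HmacSha256);

var token = new JwtSecurityToken(
    _config["Security:Tokens:Issuer"],
    _config["Security:Tokens:Audience"],
    claims,
    expires: DateTime.UtcNow.AddMinutes(30),
    signingCredentials: creds);

return Ok(new { token = new JwtSecurityTokenHandler().WriteToken(token) });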

UPDATE

While this worked, it wasn’t quite right. One approach I didn’t mention was just setting the default authentication when configuring it:

services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
  ... 

This also works, except the redirection from cookie authentication was still there. The problem was that I was calling AddIdentity beforehand. When pointed at the source code, it was clear that AddIdentity was registering a lot of cookie-based authentication as the defaults.

So I could just move the AddIdentity call to after AddAuthentication, but I hate it when order matters. So I was told that what I should have done was use AddIdentityCore instead:

  services.AddIdentityCore<IdentityUser>(cfg =>
  {
    cfg.User.RequireUniqueEmail = true;
  })
    .AddEntityFrameworkStores<StoreContext>();

  services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
    .AddJwtBearer(cfg =>
    {
      cfg.TokenValidationParameters = new TokenValidationParameters()
      {
        ValidateIssuer = true,
        ValidIssuer = _config["Security:Tokens:Issuer"],
        ValidateAudience = true,
        ValidAudience = _config["Security:Tokens:Audience"],
        ValidateIssuerSigningKey = true,
        IssuerSigningKey = new SymmetricSecurityKey(Encoding.UTF8.GetBytes(_config["Security:Tokens:Key"])),

      };
    });

I like that it works and it’s more correct. Thanks Kristian!

1 http://shawnw.me/learnaspnetcore2 & http://shawnw.me/corewebapi

Ready to Learn Vue with ASP.NET Core?

My new Wilder Minds’ course is available as an Early Access for only $79. It will be released on a weekly basis. The first module is now available:

Enroll Today

Tuesday, May 29, 2018 / Published in Uncategorized

I’m continuing to update my podcast site. I’ve upgraded it from ASP.NET "Web Pages" (10-year-old code written in WebMatrix) to ASP.NET Core 2.1 developed with VS Code.

I was talking with Ire Aderinokun today for an upcoming podcast episode and she mentioned I should use Lighthouse (it’s built into Chrome, can be run as an extension, or run from the command line) to optimize my podcast site. I, frankly, had not looked at that part of Chrome in a long time and was shocked at how powerful it was!
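For example, the command-line version can be installed via npm and pointed at a site roughly like this (flags and report output vary by Lighthouse version):

npm install -g lighthouse
lighthouse https://www.hanselminutes.com --view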

Lighthouse also told me that I was using an old version of jQuery (I know that) that had known security issues (I didn’t know that!)

It told me about Accessibility issues as well, pointing out that some of my links were not discernable to a screen reader.

Some of these issues were/are easily fixed in minutes. I think I spent about 20 minutes fixing up some links, compressing a few images, and generally "tidying up" in ways that I knew wouldn’t/shouldn’t break my site. Those few minutes took my Accessibility and Best Practices score up measurably, but I clearly have some work to do around Performance. I never even considered my Podcast Site as a potential Progressive Web App (PWA) but now that I have a new podcast host and a nice embedded player, that may be a possibility for the future!

My largest issue is with my (aging) CSS. I’d like to convert the site to use Flexbox or CSS Grid, as well as fix up my Time to First Meaningful Paint.

I went and updated my Archives page a while back with Lazy Image loading, but it was using jQuery and some older (4+ year old) techniques. I’ll revisit those with modern techniques AND apply them to the grid of 16 shows on the site’s home page as well.

I have only just begun but I’ll report back as I speed things up!

What tools do YOU use to audit your websites?

Sponsor: Get the latest JetBrains Rider for debugging third-party .NET code, Smart Step Into, more debugger improvements, C# Interactive, new project wizard, and formatting code in columns.

© 2018 Scott Hanselman. All rights reserved.





Tuesday, May 29, 2018 / Published in Uncategorized

Five years ago I implemented "lazy loading" of the 600+ images on my podcast’s archives page (I don’t like paging, as a rule) over here https://www.hanselminutes.com/episodes. I did it with jQuery and a jQuery Plugin. It was kind of messy and gross from a purist’s perspective, but it totally worked and has easily saved me (and you) hundreds of dollars in bandwidth over the years. The page is like 9 or 10 megs if you load 600 images, not to mention you’re loading 600 freaking images.

Fast-forward to 2018, and there’s the "Intersection Observer API" that’s supported everywhere but Safari and IE, well, because, Safari and IE, sigh. We will return to that issue in a moment.

Following Dean Hume’s blog post on the topic, I start with my images like this. I don’t populate src="", but instead hold the Image URL in the HTML5 data- bucket of data-src. For src, I can use the nothing grey.gif or just style and color the image grey.

<a href="http://feeds.hanselman.com/~/t/0/0/scotthanselman/~https://www.hanselman.com/626/christine-spangs-open-source-journey-from-teen-oss-contributor-to-cto-of-nylas" class="showCard">
    <img data-src="https://images.hanselminutes.com/images/626.jpg" 
         class="lazy" src="https://www.hanselman.com/images/grey.gif" width="212" height="212" alt="Christine Spang's Open Source Journey from Teen OSS Contributor to CTO of Nylas" />
    <span class="shownumber">626</span>                
    
Christine Spang's Open Source Journey from Teen OSS Contributor to CTO of Nylas
</a> <a href="http://feeds.hanselman.com/~/t/0/0/scotthanselman/~https://www.hanselman.com/625/a-new-sega-megadrivegenesis-game-in-2018-with-1995-tools-with-tanglewoods-matt-phillips" class="showCard"> <img data-src="https://images.hanselminutes.com/images/625.jpg" class="lazy" src="https://www.hanselman.com/images/grey.gif" width="212" height="212" alt="A new Sega Megadrive/Genesis Game in 2018 with 1995 Tools with Tanglewood's Matt Phillips" /> <span class="shownumber">625</span>
A new Sega Megadrive/Genesis Game in 2018 with 1995 Tools with Tanglewood's Matt Phillips
</a>

Then, if an image gets within 50px of intersecting the viewport (as I’m scrolling down), I load it:

// Get images of class lazy
const images = document.querySelectorAll('.lazy');
const config = {
  // If image gets within 50px go get it
  rootMargin: '50px 0px',
  threshold: 0.01
};
let observer = new IntersectionObserver(onIntersection, config);
images.forEach(image => {
  observer.observe(image);
});

Now that we are watching it, we need to do something when it’s observed.

function onIntersection(entries) {
  // Loop through the entries
  entries.forEach(entry => {
    // Are we in viewport?
    if (entry.intersectionRatio > 0) {
      // Stop watching and load the image
      observer.unobserve(entry.target);
      preloadImage(entry.target);
    }
  });
}
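The preloadImage function referenced above isn’t shown in the snippet; following Dean’s approach, it is roughly this (a sketch):

function preloadImage(img) {
  const src = img.dataset.src;
  if (!src) { return; }
  // Swapping in the real URL triggers the download and replaces the grey placeholder
  img.src = src;
}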

If the browser (IE, Safari, Mobile Safari) doesn’t support IntersectionObserver, we can do a few things. I *could* fall back to my old jQuery technique, although it would involve loading a bunch of extra scripts for those browsers, or I could just load all the images in a loop, regardless, like:

if (!('IntersectionObserver' in window)) {
    loadImagesImmediately(images);
} else {...}

Dean’s examples are all "Vanilla JS" and require no jQuery, no plugins, no polyfills WITH browser support. There are also some IntersectionObserver helper libraries out there like Cory Dowdy’s IOLazy. Cory’s is a nice simple wrapper and is super easy to implement. Given I want to support iOS Safari as well, I am using a polyfill to get the support I want from browsers that don’t have it natively.

<!-- intersection observer polyfill -->
<script src="https://cdn.polyfill.io/v2/polyfill.min.js?features=IntersectionObserver"></script>

Polyfill.io is a lovely site that gives you just the fills you need (or those you need AND request) tailored to your browser. Try GETting the URL above in Chrome. You’ll see it’s basically empty as you don’t need it. Then hit it in IE, and you’ll get the polyfill. The official IntersectionObserver polyfill is at the w3c.

At this point I’ve removed jQuery entirely from my site and I’m just using an optional polyfill plus browser support that didn’t exist when I started my podcast site. Fewer moving parts means a cleaner, leaner, simpler site!

Go subscribe to the Hanselminutes Podcast today! We’re on iTunes, Spotify, Google Play, and even Twitter!

Sponsor: Announcing Raygun APM! Now you can monitor your entire application stack, with your whole team, all in one place. Learn more!

© 2018 Scott Hanselman. All rights reserved.





Tuesday, May 29, 2018 / Published in Uncategorized

This week in obscure blog titles, I bring you the nightmare that is setting up Signed Git Commits with a YubiKey NEO and GPG and Keybase on Windows. This is one of those "it’s good for you" things like diet and exercise and setting up 2 Factor Authentication. I just want to be able to sign my code commits to GitHub so I might avoid people impersonating my Git Commits (happens more than you’d think and has happened recently.) However, I also was hoping to make it more secure by using a YubiKey 4 or Yubikey NEO security key. They’re happy to tell you that it supports a BUNCH of stuff that you have never heard of like Yubico OTP, OATH-TOTP, OATH-HOTP, FIDO U2F, OpenPGP, Challenge-Response. I am most concerned with it acting like a Smart Card that holds a PGP (Pretty Good Privacy) key since the YubiKey can look like a "PIV (Personal Identity Verification) Smart Card."

NOTE: I am not a security expert. Let me know if something here is wrong (be nice) and I’ll update it. Note also that there are a LOT of guides out there. Some are complete and encyclopedic, some include recommendations and details that are "too much," but this one was my experience. This isn’t The Bible On The Topic but rather  what happened with me and what I ran into and how I got past it. Until this is Super Easy (TM) on Windows, there’s gonna be guides like this.

As with all things security, there is a balance between Capital-S Secure with offline air-gapped what-nots, and Ease Of Use with tools like Keybase. It depends on your tolerance, patience, technical ability, and if you trust any online services. I like Keybase and trust them so I’m starting there with a Private Key. You can feel free to get/generate your key from wherever makes you happy and secure.

I use Windows and I like it, so if you want to use a Mac or Linux this blog post likely isn’t for you. I love and support you and your choice though. 😉

Make sure you have a private PGP key that has your Git Commit Email Address associated with it

I downloaded and installed (and optionally donated) a copy of Gpg4Win here.

Take your private key – either the one you got from Keybase or one you generated locally – and make sure that your UID (your email address that you use on GitHub) is a part of it. Here you can see mine is not, yet. That could be the main email or might be an alias or "uid" that you’ll add.

If not – as in my case since I’m using a key from keybase – you’ll need to add a new uid to your private key. You will know you got it right when you run this command and see your email address inside it.

> gpg --list-secret-keys --keyid-format LONG

------------------------------------------------
sec#  rsa4096/MAINKEY 2015-02-09 [SCEA]

uid                 [ultimate] keybase.io/shanselman <shanselman@keybase.io>

You can adduid in the gpg command line or you can add it in the Kleopatra GUI.

List them again and you’ll see the added uid.

> gpg --list-secret-keys --keyid-format LONG

------------------------------------------------
sec#  rsa4096/MAINKEY 2015-02-09 [SCEA]
uid                 [ultimate] keybase.io/shanselman <shanselman@keybase.io>
uid                 [ unknown] Scott Hanselman <scott@hanselman.com>

When you make changes like this, you can export your public key and update it in Keybase.io (again, if you’re using Keybase).

Plug in your YubiKey

When you plug your YubiKey in (assuming it’s newer than 2015) it should get auto-detected and show up like this: "Yubikey NEO OTP+U2F+CCID." You want it to show up as this kind of "combo" or composite device. If it’s older or not in this combo mode, you may need to download the YubiKey NEO Manager and switch modes.

Test that your YubiKey can be seen as a Smart Card

Go to the command line and run this to confirm that your YubiKey can be seen as a smart card by the GPG command line.

> gpg --card-status
Reader ...........: Yubico Yubikey NEO OTP U2F CCID 0
Version ..........: 2.0
....

IMPORTANT: Sometimes Windows machines and Corporate Laptops have multiple smart card readers, especially if they have Windows Hello installed like my SurfaceBook2! If you hit this, you’ll want to create a text file at %appdata%\gnupg\scdaemon.conf and include a reader-port that points to your YubiKey. Mine is a NEO, yours might be a 4, etc, so be aware. You may need to reboot or at least restart/kill the GPG services/background apps for it to notice you made a change. If you want to know what string should go in that file, go to Device Manager, then View | Show Hidden Devices and look under Software Devices. THAT is the string you want. Put this in scdaemon.conf:

reader-port "Yubico Yubikey NEO OTP+U2F+CCID 0"

Yubikey NEO can hold keys up to 2048 bits and the Yubikey 4 can hold up to 4096 bits – that’s MOAR bits! However, you might find yourself with a 4096 bit key that is too big for the Yubikey NEO. Lots of folks believe this is a limitation of the NEO that sucks and is unacceptable. Since I’m using Keybase and starting with a 4096 bit key, one solution is to make separate 2048 bit subkeys for Authentication and Signing, etc.

From the command line, edit your keys then "addkey"

> gpg --edit-key <scott@hanselman.com>

You’ll make a 2048 bit Signing key and you’ll want to decide if it ever expires. If it never does, also make a revocation certificate so you can revoke it at some future point.

gpg> addkey
Please select what kind of key you want:
   (3) DSA (sign only)
   (4) RSA (sign only)
   (5) Elgamal (encrypt only)
   (6) RSA (encrypt only)
Your selection? 4
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048)
Requested keysize is 2048 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0)
Key does not expire at all

Save your changes, and then export the keys. You can do that with Kleopatra or with the command line:

--export-secret-keys --armor KEYID

Here’s a GUI view. I have my main 4096 bit key and some 2048 bit subkeys for Signing or Encryption, etc. Make as many as you like

LEVEL SET – It will be the public version of the 2048 bit Signing Key that we’ll tell GitHub about and we’ll put the private part on the YubiKey, acting as a Smart Card.

Move the signing subkey over to the YubiKey

Now I’m going to take my keychain here, select the signing one (note the ASTERISK after I type "key 1"), then "keytocard" to move/store it in the YubiKey’s SmartCard Signature slot. I’m using my email as a way to get to my key, but if your email is used in multiple keys you’ll want to use the unique Key Id/Signature. BACK UP YOUR KEYS.

> gpg --edit-key scott@hanselman.com

gpg (GnuPG) 2.2.6; Copyright (C) 2018 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

sec  rsa4096/MAINKEY
     created: 2015-02-09  expires: never       usage: SCEA
     trust: ultimate      validity: ultimate
ssb  rsa2048/THEKEYIDFORTHE2048BITSIGNINGKEY
     created: 2015-02-09  expires: 2023-02-07  usage: S
     card-no: 0006 
ssb  rsa2048/KEY2
     created: 2015-02-09  expires: 2023-02-07  usage: E
[ultimate] (1). keybase.io/shanselman <shanselman@keybase.io>
[ultimate] (2)  Scott Hanselman <scott@hanselman.com>
gpg> toggle
gpg> key 1

sec  rsa4096/MAINKEY
     created: 2015-02-09  expires: never       usage: SCEA
     trust: ultimate      validity: ultimate
ssb* rsa2048/THEKEYIDFORTHE2048BITSIGNINGKEY
     created: 2015-02-09  expires: 2023-02-07  usage: S
     card-no: 0006 
ssb  rsa2048/KEY2
     created: 2015-02-09  expires: 2023-02-07  usage: E
[ultimate] (1). keybase.io/shanselman <shanselman@keybase.io>
[ultimate] (2)  Scott Hanselman <scott@hanselman.com>

gpg> keytocard
Please select where to store the key:
   (1) Signature key
   (3) Authentication key
Your selection? 1
gpg> save

If you’re storing things on your Smart Card, it should have a PIN to protect it. Also, make sure you have a backup of your primary key (if you like) because keytocard is a destructive action.

Have you set up PIN numbers for your Smart Card?

There’s a PIN and an Admin PIN. The Admin PIN is the longer one. The default admin PIN is usually ‘12345678’ and the default PIN is usually ‘123456’. You’ll want to set these up with either the Kleopatra GUI "Tools | Manage Smart Cards" or the gpg command line:

>gpg --card-edit
gpg/card> admin
Admin commands are allowed
gpg/card> passwd
*FOLLOW THE PROMPTS TO SET PINS, BOTH ADMIN AND STANDARD*

Tell Git about your Signing Key Globally

Be sure to tell Git on your machine some important configuration info like your signing key, but also WHERE the gpg.exe is. This is important because git ships its own older local copy of gpg.exe and you installed a newer one!

git config --global gpg.program "c:\Program Files (x86)\GnuPG\bin\gpg.exe"
git config --global commit.gpgsign true
git config --global user.signingkey THEKEYIDFORTHE2048BITSIGNINGKEY

If you don’t want to set ALL commits to signed, you can skip the commit.gpgsign=true and just include -S as you commit your code:

git commit -S -m "your commit message"

Test that you can sign things

If you are running Kleopatra (the noob Windows GUI), when you run gpg --card-status you’ll notice the cert will turn boldface and get marked as certified.

The goal here is for you to make sure GPG for Windows knows that there’s a private key on the smart card, and associates a signing Key ID with that private key so when Git wants to sign a commit, you’ll get a Smart Card PIN Prompt.

Advanced: You can make SubKeys for individual things so that they might also be later revoked without torching your main private key. Using the Kleopatra tool from GPG for Windows you can explore the keys and get their IDs. You’ll use those Subkey IDs in your git config to refer to your signingkey.

At this point things should look kinda like this in the Kleopatra GUI:

Make sure to prove you can sign something by making a text file and signing it. If you get a Smart Card prompt (assuming a YubiKey) and a larger .gpg file appears, you’re cool.

> gpg --sign .\quicktest.txt
> dir quic*

Mode                LastWriteTime         Length Name
----                -------------         ------ ----
-a----        4/18/2018   3:29 PM              9 quicktest.txt
-a----        4/18/2018   3:38 PM            360 quicktest.txt.gpg

Now, go into GitHub to https://github.com/settings/keys and scroll to the bottom. Remember that’s GPG Keys, not SSH Keys. Make a new one and paste in your public signing key or subkey.

Note the KeyID (or the SubKey ID) and remember that one of them (either the signing one or the primary one) should be the ID you used when you set up user.signingkey in git above.

The most important thing is that:

  • the email address associated with the GPG Key
  • is the same as the email address GitHub has verified for you
  • is the same as the email in the Git Commit
    • git config --global user.email "email@example.com"

If not, double check your email addresses and make sure they are the same everywhere.

Try a signed commit

If pressing enter pops a PIN Dialog then you’re getting somewhere!

Commit and push and go over to GitHub and see if your commit is Verified or Unverified. Unverified means that the commit was signed but either had an email GitHub had never seen OR that you forgot to tell GitHub about your signing public key.
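You can also verify locally that the signature made it onto the commit, for example:

> git log --show-signature -1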

Yay!

Setting up to a second (or third) machine

Once you’ve told Git about your signing key and you’ve got your signing key stored in your YubiKey, you’ll likely want to set up on another machine.

  • Install GPG for Windows
    • gpg --card-status
    • Import your public key. If I’m setting up signing on another machine, I can import my PUBLIC certificates like this or graphically in Kleopatra.
      >gpg --import "keybase public key.asc"
      gpg: key *KEYID*: "keybase.io/shanselman <shanselman@keybase.io>" not changed
      gpg: Total number processed: 1
      gpg:              unchanged: 1

      You may also want to run gpg --expert --edit-key *KEYID* and type "trust" to certify your key as someone (yourself) that you trust.

  • Install Git (I assume you did this) and configure GPG
    • git config --global gpg.program "c:\Program Files (x86)\GnuPG\bin\gpg.exe"
    • git config --global commit.gpgsign true
    • git config --global user.signingkey THEKEYIDFORTHE2048BITSIGNINGKEY
  • Sign something with "gpg --sign" to test
  • Do a test commit.

Finally, feel superior for 8 minutes, then realize you’re really just lucky because you just followed the blog post of someone who ALSO has no clue, then go help a co-worker because this is TOO HARD.

Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!

© 2018 Scott Hanselman. All rights reserved.





Tuesday, 29 May 2018 / Published in Uncategorized

A couple of days ago I blogged about adding resilience and transient fault handling to your .NET Core HttpClient with Polly. Polly provides a way to pre-configure instances of HttpClient which apply Polly policies to every outgoing call. That means I can say things like "Call this API and try 3 times if there are any issues before you panic," as an example. It lets me move the cross-cutting concerns, the policies, out of the business part of the code and over to a central location or module, or even into configuration if I wanted.
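
To make that concrete, here’s a rough sketch of what I mean. This is illustrative only, not my actual Startup code, and the retry count and back-off values are made up:

// Illustrative only: a typed HttpClient that retries transient HTTP failures
// (HttpRequestException, 5xx, and 408 responses) before giving up.
public void ConfigureServices(IServiceCollection services)
{
    services.AddHttpClient<SimpleCastClient>()
        .AddTransientHttpErrorPolicy(policyBuilder =>
            policyBuilder.WaitAndRetryAsync(3, attempt => TimeSpan.FromMilliseconds(400 * attempt)));
}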

I’ve been upgrading my podcast site at https://www.hanselminutes.com to ASP.NET Core 2.1 and Razor Pages, with SimpleCast for the back end. Since I’ve removed SQL Server as my previous data store and I’m now sitting entirely on top of a third-party API, I want to think about how often I call this API. As a rule, there’s no reason to call it any more often than a change might occur.

I publish a new show every Thursday at 5pm PST, so I suppose I could cache the feed for 7 days, but sometimes I make description changes, add links, update titles, etc. The show gets many tens of thousands of hits per episode and I definitely don’t want to abuse the SimpleCast API, so I decided that caching for 4 hours seemed reasonable.

I went and wrote a bunch of caching code on my own. This is fine and it works and has been in production for a few months without any issues.

A few random notes:

  • Stuff is being passed into the Constructor by the IoC system built into ASP.NET Core
    • That means the HttpClient, Logger, and MemoryCache are handed to this little abstraction. I don’t new them up myself
  • All my "Show Database" is, is a GetShows()
    • That means I have a TestDatabase that implements IShowDatabase that I use for some Unit Tests. And I could have multiple implementations if I liked.
  • Caching here is interesting.
    • Sure, I could do the caching in just a line or two, but a caching double-check is more needed than one often realizes.
    • I check the cache, and if I hit it, I am done and I bail. Yay!
    • If not, let’s wait on a SemaphoreSlim. This is a great, simple way to manage waiting on a limited resource. I don’t want to accidentally have two threads call out to the SimpleCast API if I’m literally in the middle of doing it already.
      • "The SemaphoreSlim class represents a lightweight, fast semaphore that can be used for waiting within a single process when wait times are expected to be very short."
    • So I check again inside that block to see if it showed up in the cache in the space between there and the previous check. Doesn’t hurt to be paranoid.
    • Got it? Cool. Store it away and release the semaphore in the finally of the try.

Don’t copy paste this. My GOAL is to NOT have to do any of this, even though it’s valid.

public class ShowDatabase : IShowDatabase
{
    private readonly IMemoryCache _cache;
    private readonly ILogger _logger;
    private readonly SimpleCastClient _client;

    public ShowDatabase(IMemoryCache memoryCache,
            ILogger<ShowDatabase> logger,
            SimpleCastClient client)
    {
        _client = client;
        _logger = logger;
        _cache = memoryCache;
    }

    static SemaphoreSlim semaphoreSlim = new SemaphoreSlim(1);

    public async Task<List<Show>> GetShows()
    {
        Func<Show, bool> whereClause = c => c.PublishedAt < DateTime.UtcNow;

        var cacheKey = "showsList";
        List<Show> shows = null;

        //CHECK and BAIL - optimistic
        if (_cache.TryGetValue(cacheKey, out shows))
        {
            _logger.LogDebug($"Cache HIT: Found {cacheKey}");
            return shows.Where(whereClause).ToList();
        }

        await semaphoreSlim.WaitAsync();
        try
        {
            //RARE BUT NEEDED DOUBLE PARANOID CHECK - pessimistic
            if (_cache.TryGetValue(cacheKey, out shows))
            {
                _logger.LogDebug($"Amazing Speed Cache HIT: Found {cacheKey}");
                return shows.Where(whereClause).ToList();
            }

            _logger.LogWarning($"Cache MISS: Loading new shows");
            shows = await _client.GetShows();
            _logger.LogWarning($"Cache MISS: Loaded {shows.Count} shows");
            _logger.LogWarning($"Cache MISS: Loaded {shows.Where(whereClause).ToList().Count} PUBLISHED shows");

            var cacheExpirationOptions = new MemoryCacheEntryOptions();
            cacheExpirationOptions.AbsoluteExpiration = DateTime.Now.AddHours(4);
            cacheExpirationOptions.Priority = CacheItemPriority.Normal;

            _cache.Set(cacheKey, shows, cacheExpirationOptions);
            return shows.Where(whereClause).ToList();
        }
        catch (Exception e)
        {
            _logger.LogCritical("Error getting episodes!");
            _logger.LogCritical(e.ToString());
            _logger.LogCritical(e?.InnerException?.ToString());
            throw;
        }
        finally
        {
            semaphoreSlim.Release();
        }
    }
}

public interface IShowDatabase
{
    Task<List<Show>> GetShows();
}

Again, this is great and it works fine. But the BUSINESS is in _client.GetShows() and the rest is all CEREMONY. Can this be broken up? Sure, I could put stuff in a base class, or make an extension method and bury it in there, or use Akavache or make a GetOrFetch and start passing around Funcs of "do this but check here first":

IObservable<T> GetOrFetchObject<T>(string key, Func<Task<T>> fetchFunc, DateTimeOffset? absoluteExpiration = null);
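
Here’s a rough sketch of what such a GetOrFetch extension over IMemoryCache could look like. To be clear, this is hypothetical; the names are mine, it’s not Akavache and not a built-in API, it just buries the same optimistic check / lock / double-check shape from above in one place:

// Hypothetical helper (not a real library API). Needs Microsoft.Extensions.Caching.Memory,
// System.Threading, and System.Threading.Tasks.
public static class MemoryCacheFetchExtensions
{
    private static readonly SemaphoreSlim _lock = new SemaphoreSlim(1, 1);

    public static async Task<T> GetOrFetchAsync<T>(this IMemoryCache cache,
        string key, Func<Task<T>> fetchFunc, TimeSpan absoluteExpiration)
    {
        // CHECK and BAIL - optimistic
        if (cache.TryGetValue(key, out T value)) return value;

        await _lock.WaitAsync();
        try
        {
            // Double check inside the lock in case another caller just filled it
            if (cache.TryGetValue(key, out value)) return value;

            value = await fetchFunc();
            cache.Set(key, value, new MemoryCacheEntryOptions
            {
                AbsoluteExpirationRelativeToNow = absoluteExpiration
            });
            return value;
        }
        finally
        {
            _lock.Release();
        }
    }
}

With something like that, GetShows() would collapse to a single call: var shows = await _cache.GetOrFetchAsync("showsList", () => _client.GetShows(), TimeSpan.FromHours(4));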

Could I use Polly and refactor via subtraction?

Per the Polly docs:

The Polly CachePolicy is an implementation of read-through cache, also known as the cache-aside pattern. Providing results from cache where possible reduces overall call duration and can reduce network traffic.
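
In code, that read-through shape looks roughly like this. This is a sketch using the MemoryCacheProvider from Polly.Caching.MemoryCache that I new up in Configure() further down; the cache entry is keyed by the Context’s key, "showsList" here:

// Sketch only: execute the SimpleCast call through a Polly CachePolicy.
// If a result is already cached under "showsList", the fetch is skipped entirely.
var memoryCacheProvider = new MemoryCacheProvider(memoryCache);
var cachePolicy = Policy.CacheAsync(memoryCacheProvider, TimeSpan.FromHours(4));

List<Show> shows = await cachePolicy.ExecuteAsync(
    context => _client.GetShows(),
    new Context("showsList"));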

First, I’ll remove all my own caching code and just make the call on every page view. Yes, I could write the LINQ a few ways. Yes, this could all be one line. Yes, I like Logging.

public async Task<List<Show>> GetShows()
{
    _logger.LogInformation($"Loading new shows");
    List<Show> shows = await _client.GetShows();
    _logger.LogInformation($"Loaded {shows.Count} shows");
    return shows.Where(c => c.PublishedAt < DateTime.UtcNow).ToList();
}

No caching, I’m doing The Least.

Polly supports both the .NET MemoryCache that is per process/per node, and also .NET Core’s IDistributedCache for having one cache that lives somewhere shared like Redis or SQL Server. Since my podcast is just one node, one web server, and it’s low-CPU, I’m not super worried about it. If Azure WebSites does end up auto-scaling it, sure, this cache strategy will happen n times. I’ll switch to Distributed if that becomes a problem.
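
If that day ever comes, switching is mostly a registration change; something along these lines (hypothetical: the connection string and instance name are placeholders, and it assumes the Microsoft.Extensions.Caching.Redis package):

// Hypothetical: a Redis-backed IDistributedCache instead of the per-node MemoryCache.
services.AddDistributedRedisCache(options =>
{
    options.Configuration = "localhost:6379"; // placeholder connection string
    options.InstanceName = "shows:";          // placeholder key prefix
});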

I’ll add a reference to Polly.Caching.MemoryCache in my project.

I ensure I have the .NET Memory Cache in my list of services in ConfigureServices in Startup.cs:

services.AddMemoryCache();

STUCK…for now!

AND…here is where I’m stuck. I got this far into the process and now I’m either confused OR I’m in a Chicken and the Egg Situation.

Forgive me, friends, and Polly authors, as this Blog Post will temporarily turn into a GitHub Issue. Once I work through it, I’ll update this so others can benefit. And I still love you; no disrespect intended.

The Polly.Caching.MemoryCache stuff is several months old, and existed (and worked) well before the new HttpClientFactory stuff I blogged about earlier.

I would LIKE to add my Polly Caching Policy chained after my Transient Error Policy:

services.AddHttpClient<SimpleCastClient>().
    AddTransientHttpErrorPolicy(policyBuilder => policyBuilder.CircuitBreakerAsync(
            handledEventsAllowedBeforeBreaking: 2,
            durationOfBreak: TimeSpan.FromMinutes(1)
    )).
    AddPolicyHandlerFromRegistry("myCachingPolicy"); //WHAT I WANT?

However, that policy hasn’t been added to the Policy Registry yet. It doesn’t exist! This makes me feel like some of the work that is happening in ConfigureServices() is a little premature. ConfigureServices() is READY, AIM and Configure() is FIRE/DO-IT, in my mind.

If I set up a Memory Cache in Configure, I need to use the Dependency System to get the stuff I want, like the .NET Core IMemoryCache that I put in services just now.

public void Configure(IApplicationBuilder app, IPolicyRegistry<string> policyRegistry, IMemoryCache memoryCache)
{
    MemoryCacheProvider memoryCacheProvider = new MemoryCacheProvider(memoryCache);
    var cachePolicy = Policy.CacheAsync(memoryCacheProvider, TimeSpan.FromMinutes(5));
    policyRegistry.Add("cachePolicy", cachePolicy);
    ...

But at this point, it’s too late! I need this policy up earlier…or I need to figure out a way to tell the HttpClientFactory to use this policy…but I’ve been using extension methods in ConfigureServices to do that so far. Perhaps some additional extension methods are needed, like AddCachingPolicy(). However, from what I can see:

  • This code can’t work with ASP.NET Core 2.1’s HttpClientFactory pattern…yet. https://github.com/App-vNext/Polly.Caching.MemoryCache
  • I could manually new things up, but I’m already deep into Dependency Injection…I don’t want to start newing things and get into scoping issues.
  • There appear to be changes between v5.4.0 and v5.8.0. Still looking at this.
  • Bringing in the Microsoft.Extensions.Http.Polly package brings in Polly-Signed 5.8.0…

I’m likely bumping into a point-in-time thing. I will head to bed and return to this post in a day and see if I (or you, Dear Reader) have solved the problem in my sleep.

"code making and code breaking" by earthlightbooks – Licensed under CC-BY 2.0Original source via Flickr

Sponsor: Check out JetBrains Rider: a cross-platform .NET IDE. Edit, refactor, test and debug ASP.NET, .NET Framework, .NET Core, Xamarin or Unity applications. Learn more and download a 30-day trial!

© 2018 Scott Hanselman. All rights reserved.




