Wednesday, 29 August 2018 / Published in Uncategorized

Back in March of 2017 I blogged about Zeit and their cool deployment system "now." Zeit will take any folder and deploy it to the web easily. Better yet, if you have a Dockerfile in that folder, Zeit will just use it for the deployment.

Zeit’s free Open Source account has a limit of 100 megs for the resulting image, and with the right Dockerfile a basic ASP.NET Core app comes in at less than 77 megs. You just need to be smart about a few things. Additionally, it’s running in a somewhat constrained environment, so ASP.NET Core’s assumptions around FileWatchers can occasionally cause you to see errors like:

Unhandled Exception: System.IO.IOException: The configured user limit (8192) on the number of inotify instances has been reached.
   at System.IO.FileSystemWatcher.StartRaisingEventsIfNotDisposed()
   at System.IO.FileSystemWatcher.StartRaisingEvents()

While the DOTNET_USE_POLLING_FILE_WATCHER environment variable (more on it below) is set by default in the "FROM microsoft/dotnet:2.1-sdk" image, it’s not set at runtime. That’s dependent on your environment.

Here’s my Dockerfile for a simple project called SuperZeit. Note that the project is structured with an SLN file, which I recommend.

Let me call out a few things.

  • First, we’re doing a Multi-stage build here.
    • The SDK is large. You don’t want to deploy the compiler to your runtime image!
  • Second, the first copy commands just copy the sln and the csproj.
    • You don’t need the source code to do a dotnet restore! (Did you know that?)
    • Copying just the solution and project files first means that your docker builds will be MUCH faster, as Docker will cache those steps and only regenerate things that change. Docker will only run dotnet restore again if the solution or project files change, not the source.
  • Third, we are using the aspnetcore-runtime image here. Not the dotnetcore one.
    • That means this image includes the binaries for .NET Core and ASP.NET Core. We don’t need or want to include them again.
    • If you were doing a publish with the -r switch, you’d be doing a self-contained build/publish. You’d end up copying TWO .NET Core runtimes into a container! That’ll cost you another 50-60 megs and it’s just wasteful.
    • If you want to do that, go explore the very good examples in the .NET Docker repo on GitHub https://github.com/dotnet/dotnet-docker/tree/master/samples, particularly "Optimizing Container Size".
  • Finally, since some container systems like Zeit have modest settings for inotify instances (to avoid abuse, plus most folks don’t use them as often as .NET Core does) you’ll want to set ENV DOTNET_USE_POLLING_FILE_WATCHER=true which I do in the runtime image.

So starting from this Dockerfile:

FROM microsoft/dotnet:2.1-sdk-alpine AS build
WORKDIR /app

# copy csproj and restore as distinct layers
COPY *.sln .
COPY superzeit/*.csproj ./superzeit/
RUN dotnet restore

# copy everything else and build app
COPY . .
WORKDIR /app/superzeit
RUN dotnet build

FROM build AS publish
WORKDIR /app/superzeit
RUN dotnet publish -c Release -o out

FROM microsoft/dotnet:2.1-aspnetcore-runtime-alpine AS runtime
ENV DOTNET_USE_POLLING_FILE_WATCHER=true
WORKDIR /app
COPY --from=publish /app/superzeit/out ./
ENTRYPOINT ["dotnet", "superzeit.dll"]

Remember the layers of the Docker image, as if they were a call stack:

  • Your app’s files
  • ASP.NET Core Runtime
  • .NET Core Runtime
  • .NET Core native dependencies (OS specific)
  • OS image (Alpine, Ubuntu, etc)

For my little app I end up with a 76.8 meg image. If I want, I can add the experimental .NET IL Trimmer. It won’t make a difference with this app as it’s already pretty simple, but it could with a larger one.

BUT! What if we changed the layering to this?

  • Your app’s files along with a self-contained copy of ASP.NET Core and .NET Core
  • .NET Core native dependencies (OS specific)
  • OS image (Alpine, Ubuntu, etc)

Then we could do a self-contained deployment and then trim the result! Richard Lander has a great Dockerfile example.

See how he’s doing the package addition with the dotnet CLI with "dotnet add package" and subsequent trim within the Dockerfile (as opposed to you adding it to your local development copy’s csproj).

FROM microsoft/dotnet:2.1-sdk-alpine AS build
WORKDIR /app
# copy csproj and restore as distinct layers
COPY *.sln .
COPY nuget.config . 
COPY superzeit/*.csproj ./superzeit/
RUN dotnet restore
# copy everything else and build app
COPY . .
WORKDIR /app/superzeit
RUN dotnet build
FROM build AS publish
WORKDIR /app/superzeit
# add IL Linker package
RUN dotnet add package ILLink.Tasks -v 0.1.5-preview-1841731 -s https://dotnet.myget.org/F/dotnet-core/api/v3/index.json
RUN dotnet publish -c Release -o out -r linux-musl-x64 /p:ShowLinkerSizeComparison=true 
FROM microsoft/dotnet:2.1-runtime-deps-alpine AS runtime
ENV DOTNET_USE_POLLING_FILE_WATCHER=true
WORKDIR /app
COPY --from=publish /app/superzeit/out ./
ENTRYPOINT ["dotnet", "superzeit.dll"]

Now at this point, I’d want to see how small the IL Linker made my ultimate project. The goal is to be less than 75 megs. However, I think I’ve hit this bug so I will have to head to bed and check on it in the morning.

The project is at https://github.com/shanselman/superzeit and you can just clone and "docker build" and see the bug.

However, if you check the comments in the Dockerfile and just use "FROM microsoft/dotnet:2.1-aspnetcore-runtime-alpine AS runtime" it works fine. I just think I can get it even smaller than 75 megs.

Talk to you soon, Dear Reader! (I’ll update this post when I find out about that bug…or perhaps my bug!)

Sponsor: Preview the latest JetBrains Rider with its built-in spell checking, initial Blazor support, partial C# 7.3 support, enhanced debugger, C# Interactive, and a redesigned Solution Explorer.

© 2018 Scott Hanselman. All rights reserved.

Tuesday, 28 August 2018 / Published in Uncategorized

Introduction

In this article, I’m going to be building a React App with CRUD (create, read, update, and delete) operations without writing any code whatsoever using Visual Studio 2017 and ASP.NET Core 2.0. To accomplish this, I’ll be using React App Generator, which you can download for free from http://bssdev.biz/React-App-Generator (look for the free download link at the bottom). Note: You will need to create a free account in order to download the app.

Background

React App Generator is an ASP.NET Core tool that allows .NET developers to quickly build a 100% React application. React App Generator creates everything you need for a 100% React application with full CRUD capabilities from either your Database Schema or your Application Models.

Note: This version of React App Generator requires ASP.NET Core 2.0. It is incompatible with ASP.NET Core 1.x and 2.1, which create React projects with a different structure.

Let’s start by building our base app…

Building a New React App

  1. Open Visual Studio and select New Project.
  2. From the New Project dialog box, select .NET Core and then ASP.NET Core Web Application (Figure 1):

    Figure 1.

  3. From the ASP.NET Core Web Application dialog box, select React.js (Figure 2):

    Figure 2.

Creating Our Models

We need to create a model to use with our app. For this article, we’ll be creating a model called Actor as shown below. We’ll name the file models.cs since we’ll be keeping all of our models in this file.

public class Actor
{
[DatabaseGenerated(DatabaseGeneratedOption.Identity)]
[Key]
[Required]
[DisplayName("Id")]
public int Id {get; set;}

[Required]
[MaxLength(50)]
[DisplayName("Name")]
public string Name {get; set;}

[MaxLength(10)]
[DisplayName("Gender")]
public string Gender {get; set;}

[DisplayName("Age")]
public int? Age {get; set;}

[MaxLength(255)]
[DisplayName("Picture")]
public string Picture {get; set;}
}

Note: If you’re new to Visual Studio, you may need to hover over any text with squiggly lines and add any necessary “using” statements.

We’ll also need a DbContext for our model since our data will be coming from a SQL database, but it just as easily could have come from MySQL or a NOSQL DocumentDb store. We’ll name it AppDbContext.cs.

public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options)
        : base(options)
    {
    }

    public DbSet<Actor> Actors {get; set;}
}

That’s all we need. We can now use React App Generator to create our entire app with CRUD operations, and even built-in paging and sorting…

Running React App Generator

  1. Run the React App Generator application.
  2. In the Source section, specify the path to your models.cs file by clicking the browse button (…) and navigating to it.
  3. In the Table/Model box, specify the name of the model to process using the browse button (…) and selecting it. For this example, it’s Actor.
  4. In the DbContext box, specify AppDbContext as the Database Context.
  5. And in the Namespace box, enter the name of the application that you created above in “Building a New React Application”.
  6. Then, click the Let’s Go button to make the magic happen.

If all went well, you should see a Result message box that says everything succeeded and tells you what the next steps are. If you’ve gotten that message, make a mental note of the Next Steps and click the OK button.

To apply the content to the base application we created above, we simply copy the content to our project. To do that, just click the magnifying glass button to the right of the Let’s Go button. Windows Explorer will open showing the output. Simply select all of the files and paste them to the root of your project (the folder that contains the bin/ and ClientApp/ folders as well as a few others). We just need to configure a few things and we can run our app…

Configuring Your React App

We need to ensure that all of the npm packages required by the app have been downloaded. In Solution Explorer, we need to expand Dependencies, then right-click on the npm folder, and select Restore Packages from the popup menu. It may take a while so you’ll need to be patient.

tsconfig.json

We also need to make sure ‘strict’ mode is turned off in tsconfig.json (located at the root of your application).

"compilerOptions": { "strict": false

AppSettings.json

We also need to make sure we have a connection string for our AppDbContext in appsettings.json. It should look something like this:

"ConnectionStrings": {
 "AppDbContext": "Server=localhost;Database=Movies;Integrated Security=true;"
 },
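
React App Generator should wire the context up in the code it emits, but for reference, a typical registration of AppDbContext against this connection string in Startup.ConfigureServices looks something like the sketch below. This is an illustrative example using EF Core's SQL Server provider, not output copied from the generator:

public void ConfigureServices(IServiceCollection services)
{
    // Register AppDbContext using the "AppDbContext" connection string from appsettings.json.
    services.AddDbContext<AppDbContext>(options =>
        options.UseSqlServer(Configuration.GetConnectionString("AppDbContext")));

    services.AddMvc();
}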

We can now compile our app by right-clicking on the project in Solution Explorer and selecting Rebuild. If the npm packages have not finished downloading, you’ll get a message indicating the build is being delayed.

And that’s it! Once the app compiles, you can run it…

Figure 3.

…you’ll notice we have links for our CRUD operations: Create, Details, Edit and X (Delete), as well as built-in paging and sorting without writing any code (except for creating our model). And React App Generator also includes support for OpenIDConnect authentication and authorization, which I’ll leave for another article. Ciao!

Tuesday, 28 August 2018 / Published in Uncategorized

There is a major shift in the industry away from monoliths towards smaller services. A key reason why organizations are investing in this shift is because smaller services built around business capabilities increase developer productivity. Teams that can own these smaller service/s can be “masters of their own destiny” which means they can evolve their service/s independently of other services in the system.

When breaking monoliths into smaller services, the hardest part is actually breaking up the data that lives in the database of the monolith. It is relatively easy to chop up the logic in the monolith into smaller pieces while still connecting to the same database. In this case, the database is essentially an IntegrationDatabase which gives the semblance of a distributed system that can evolve independently but in fact is a single tightly coupled system at the database level. For services to be truly independent and thus teams to be “master of their own destiny”, they also need to have an independent database – the schema and the corresponding data for the service.

In this article, I will be talking about a pattern, which is a series of steps, for extracting a data-rich service from a monolith while causing minimum disruption to the service consumers.

Service extraction steps

Now, let us dive into the actual service extraction pattern. To make it easy to follow the steps, we will take an example to understand how the service extraction works.

Let’s say we have a monolithic Catalog system which provides our eCommerce platform with product information. Over time the Catalog system has grown into a monolith, which means that along with the core product information such as product name, category name and associated logic, it has also gobbled up product pricing logic and data. There are no clear boundaries between the core product part of the system and the pricing part of the system.

Moreover, the rate of change (rate at which changes are introduced in the system) in the pricing part of the system is much higher than the core product. The data access patterns are also different for the two parts of the system. Pricing for a product changes a lot more dynamically than the core product attributes. Thus, it makes a lot of sense to pull out the pricing part of the system out of the monolith into a separate service that can be evolved independently.

What makes pulling out pricing compelling as opposed to the core product is that pricing is a “leaf” dependency in the catalog monolith. The core product functionality is also a dependency for other functionality in the monolith such as product inventory, product marketing, et al which are not shown here for simplicity. If you were to pull out the core product out as a service it would mean severing too many "connections" in the monolith at the same time which can make the migration process quite risky. To start with, you want to pull apart a valuable business capability that is a leaf dependency in the monolith dependency graph such as the pricing functionality.

Figure 1: Catalog monolith consists of the application logic and database for core product as well as product pricing. The Catalog monolith has two clients – the web application and iOS app.

Initial state of the code

Below is the initial state of the code for the Catalog system. Obviously, the code lacks the real world "messiness" aka complexity of such a system. However, it is sufficiently complex to demonstrate the spirit of a refactoring that involves pulling a data-rich service out of a monolith. We will see how the code below is refactored over the course of the steps.

The code consists of a CatalogService which is representative of the interface that the monolith provides to its clients. It uses a productRepository class to fetch and persist state from the database. Product class is a dumb data class (indicative of an AnemicDomainModel) that contains product information. Dumb data classes are clearly an anti-pattern but they are not the primary focus of this article so as far as this example is concerned we will make do with it. Sku, Price and CategoryPriceRange are "tiny types".

class CatalogService…

  public Sku searchProduct(String searchString) {
      return productRepository.searchProduct(searchString);
  }

  public Price getPriceFor(Sku sku) {
      Product product = productRepository.queryProduct(sku);
      return calculatePriceFor(product);
  }

  private Price calculatePriceFor(Product product) {
      if(product.isOnSale()) return product.getSalePrice();
      return product.getOriginalPrice();
  }

  public CategoryPriceRange getPriceRangeFor(Category category) {
      List<Product> products = productRepository.findProductsFor(category);
      Price maxPrice = null;
      Price minPrice = null;
      for (Product product : products) {
          if (product.isActive()) {
              Price productPrice = calculatePriceFor(product);
              if (maxPrice == null || productPrice.isGreaterThan(maxPrice)) {
                  maxPrice = productPrice;
              }
              if (minPrice == null || productPrice.isLesserThan(minPrice)) {
                  minPrice = productPrice;
              }
          }
      }
      return new CategoryPriceRange(category, minPrice, maxPrice);
  }

  public void updateIsOnSaleFor(Sku sku) {
      final Product product = productRepository.queryProduct(sku);
      product.setOnSale(true);
      productRepository.save(product);
  }

Let’s take our first step towards pulling the "Product pricing" service out of the Catalog monolith.

Step 2. Create a logical separation for the logic of the new service in the monolith

Step 2 and 3 are about creating a logical separation for the logic and data for the product pricing service while still working in the monolith. You essentially isolate the product pricing data and logic from the larger monolith before you actually pull it out into a new service. The advantage of doing this is that, if you get your product pricing service boundary wrong (logic or data) then it is going to be much easier to refactor your code while you are in the same monolith codebase as opposed to pulling it out and refactoring “over the wire”.

As part of Step 2, we will be creating service classes for wrapping the logic for product pricing and core product called ProductPricingService and CoreProductService respectively. These service classes would map one-to-one with our “physical” services – Product pricing and Core product as you will see in the later steps. We would also be creating separate repository classes – ProductPriceRepository and CoreProductRepository. These will be used to access the product pricing data and core product data from the Products table respectively.

The key point to keep in mind during this step is that the ProductPricingService or ProductPriceRepository should not access the Products table for core product information. Instead for any core product related information, product pricing code should go strictly through the CoreProductService. You will see an example of this in the refactored getPriceRangeFor method below.

No table joins are permitted from tables that belong to the core product part of the system to the tables that belong to product pricing. Similarly, there should be no "hard" constraints in the database between the core product data and the product pricing data such as foreign keys or database triggers. All joins as well as constraints have to be moved up to the logic layer from the database layer. This is unfortunately easier said than done and is one of the hardest things to do but absolutely necessary to break apart the database.

Having said that, core product and product pricing do have a shared identifier – the product SKU to uniquely identify the product across the two parts of the system down to the database level. This "cross system identifier" will be used for cross service communication (as demonstrated in later steps) and hence it is important to select this identifier wisely. It should be one service that owns the cross system identifier. All other services should use the identifier as a reference but not change it. It is immutable from their point of view. The service that is best suited to manage the life cycle of the entity for which the identifier exists, should own the identifier. For example, in our case, core product owns the product lifecycle and hence owns the SKU identifier.

Figure 3: Logical separation between core product logic and product pricing logic while connecting to the same Products table.

Below is the refactored code. You will see the newly created ProductPricingService which holds pricing specific logic. We also have the productPriceRepository to talk to the pricing specific data in Products table. Instead of the Product data class, we now have data classes ProductPrice and CoreProduct for holding the respective product pricing and core product data.

The getPriceFor and calculatePriceFor functions are fairly straightforward to convert over to point at the new productPriceRepository class.

class ProductPricingService…

  public Price getPriceFor(Sku sku) {
      ProductPrice productPrice = productPriceRepository.getPriceFor(sku);
      return calculatePriceFor(productPrice);
  }

  private Price calculatePriceFor(ProductPrice productPrice) {
      if(productPrice.isOnSale()) return productPrice.getSalePrice();
      return productPrice.getOriginalPrice();
  }

The logic for getting the price range for a category is more involved, since it needs to know which products belong to the category, and that information lives in the core product part of the application. The getPriceRangeFor method makes a call to the getActiveProductsFor method in coreProductService to get the list of active products for a given category. The thing to note here is that since is_active is an attribute of the core product, we have moved the isActive check over into the coreProductService.

class ProductPricingService…

  public CategoryPriceRange getPriceRangeFor(Category category) {
      List<CoreProduct> products = coreProductService.getActiveProductsFor(category);

      List<ProductPrice> productPrices = productPriceRepository.getProductPricesFor(mapCoreProductToSku(products));

      Price maxPrice = null;
      Price minPrice = null;
      for (ProductPrice productPrice : productPrices) {
              Price currentProductPrice = calculatePriceFor(productPrice);
              if (maxPrice == null || currentProductPrice.isGreaterThan(maxPrice)) {
                  maxPrice = currentProductPrice;
              }
              if (minPrice == null || currentProductPrice.isLesserThan(minPrice)) {
                  minPrice = currentProductPrice;
              }
      }
      return new CategoryPriceRange(category, minPrice, maxPrice);
  }

  private List<Sku> mapCoreProductToSku(List<CoreProduct> coreProducts) {
      return coreProducts.stream().map(p -> p.getSku()).collect(Collectors.toList());
  }

Here is what the new getActiveProductsFor method for getting active products for a given category looks like.

class CoreProductService…

  public List<CoreProduct> getActiveProductsFor(Category category) {
      List<CoreProduct> productsForCategory = coreProductRepository.getProductsFor(category);
      return filterActiveProducts(productsForCategory);
  }

  private List<CoreProduct> filterActiveProducts(List<CoreProduct> products) {
      return products.stream().filter(p -> p.isActive()).collect(Collectors.toList());
  }

In this case, we have kept the isActive check in the service class but this can be easily moved down into the database query. In fact, this type of refactoring, splitting functionality into multiple services, often makes it easy to spot opportunities to move logic into the database query and thus make the code more performant.

The updateIsOnSale logic is also fairly straightforward and would have to be refactored as below.

class ProductPricingService…

  public void updateIsOnSaleFor(Sku sku) {
      final ProductPrice productPrice = productPriceRepository.getPriceFor(sku);
      productPrice.setOnSale(true);
      productPriceRepository.save(productPrice);
  }

The searchProduct method points to the newly created coreProductRepository for searching the product.

class CoreProductService…

  public Sku searchProduct(String searchString) {
      return coreProductRepository.searchProduct(searchString);
  }

The CatalogService (top level interface to the monolith) will be refactored to delegate the service method calls to the appropriate service – CoreProductService or ProductPricingService. This is important, so that we do not break existing contracts with the clients of the monolith.

The searchProduct method gets delegated to coreProductService.

class CatalogService…

  public Sku searchProduct(String searchString) {
      return coreProductService.searchProduct(searchString);
  }

The pricing related methods get delegated to productPricingService.

class CatalogService…

  public Price getPriceFor(Sku sku) {
      return productPricingService.getPriceFor(sku);
  }

  public CategoryPriceRange getPriceRangeFor(Category category) {
      return productPricingService.getPriceRangeFor(category);
  }

  public void updateIsOnSaleFor(Sku sku) {
      productPricingService.updateIsOnSaleFor(sku);
  }

Step 3. Create new table/s to support the logic of the new service in the monolith

As part of this step, you would split the pricing related data into a new table – ProductPrices. At the end of this step, the product pricing logic should access the ProductPrices table and not the Products table directly. For any information that it needs from the Products table related to core product information, it should go through the core product logic layer. This step should result in code changes only in the productPricingRepository class and not in any of the other classes, especially the service classes.

It is important to note that this step involves data migration from the Products table to the ProductPrices table. Make sure you design the columns in the new table to look exactly the same as the product pricing related columns in the Products table. This will keep the repository code simple and make the data migration simple. If you notice bugs after you have pointed the productPricingRepository to the new table, you can point the productPricingRepository code back to the Products table. You can choose to delete the product pricing related fields from the Products table once this step has been successfully completed.

Essentially what we are doing here is a database migration which involves splitting a table into two tables and moving data from the original table into the newly created table. My colleague Pramod Sadalage wrote a whole book on Refactoring Databases which you should check out if you are curious to know more about this topic. As a quick reference, you can refer to the Evolutionary Database Design article by Pramod and Martin Fowler.

At the end of this step, you should be able to get indications of the possible impact the new service would have on the overall system in terms of functional as well as cross-functional requirements, especially performance. You should be able to see the performance impact of "in memory data joins" in the logic layer. In our case getPriceRangeFor makes an in memory data join between core product and product pricing information. In memory data joins in the logic layer will always be more expensive than making those joins at the database layer, but that is the cost of having decoupled data systems. If the performance hurts at this stage, it is going to get worse when the data goes back and forth across the physical services over the wire. If the performance requirements (or any other requirements for that matter) are not being met, then it is likely you will have to rethink the service boundary. At least this change is largely transparent to the clients (Web application and iOS app), since we have not changed any of the client interactions yet. This allows for quick and cheap experimentation with service boundaries, which is the beauty of this step.

Figure 4: Logical separation between core product logic and data and product pricing logic and data.

Step 4. Build new service pointing to tables in monolithic database

In this step, you build a brand new “physical” service for product pricing with logic from ProductPricingService while still pointing to the ProductPrices table in the monolith database. Note that at this point, calling the CoreProductService from ProductPricingService will be a network call, so it will incur a performance penalty and bring the usual concerns of remote calls, such as timeouts, which should be handled accordingly.

This might be a good opportunity to create a “business truthful” abstraction for the product pricing service so that you are modeling the service to represent the business intention rather than the mechanics of the solution. For example, when the business user is updating the updateIsOnSale flag they are really creating a “promotion” in the system for a given product. Below is what updateIsOnSaleFor looks like after the refactoring. We have also added the ability to specify the promotion price as part of this change which was not available before. This might also be a good time to simplify the interface by pushing some of the service-related complexity back into the service that might have leaked out into the clients. This would be a welcome change from a service consumer’s point of view.

class ProductPricingService…

  public void createPromotion(Promotion promotion) {
      final ProductPrice productPrice = productPriceRepository.getPriceFor(promotion.getSku());
      productPrice.setOnSale(true);
      productPrice.setSalePrice(promotion.getPrice());
      productPriceRepository.save(productPrice);
  }

However, the limitation around this is that the changes should not require changing the table structure or the data semantics in any way as that will break the existing functionality in the monolith. Once the service has been fully extracted (in Step 9), then you can change the database happily to your heart’s content as that would be just as good as making a code change in the logic layer.

You might want to make these changes before you move over the clients because changing a service interface can be an expensive and time consuming process, especially in a large organization as it involves buy in from different service consumers to move to the new interface in a timely fashion. This will be discussed in further detail in the next step. You can safely release this new pricing service to production and test it. There are no clients for this service yet. Also, there is no change to the clients of the monolith – Web application and iOS app, in this step.

Figure 5: New physical product pricing service that points to the ProductPrices table in the monolith while depending on the monolith for the core product functionality.

Step 5. Point clients to the new service

In this step, the clients of the monolith that are interested in the product pricing functionality need to move over to the new service. The work in this step will depend on two things. First, it will depend on how much of the interface has changed between the monolith and the new service. Second, and arguably more complex from an organizational standpoint, it will depend on the bandwidth (capacity) the client teams have to complete this step in a timely fashion.

If this step drags on, it is quite likely that the architecture will be left in a half complete state where some clients point to the new service while some point to the monolith. This arguably leaves the architecture in a worse off state than before you started. This is why the ‘atomic step of architecture evolution’ principle we discussed earlier is important. Make sure you have the organizational alignment from all clients of the new service functionality to move to the new service in a timely fashion before starting on the migration journey. It is very easy to get distracted by other high priority matters while leaving the architecture in a half baked state.

Now the good news is that not all service clients have to migrate at the exact same time or need to coordinate their migration with each other. However, migrating all the clients is important before moving to the next step. If it does not already exist, you can introduce some monitoring at the service level for pricing related methods to identify the "migration laggards" – service consumers that have not migrated over to the new service.

In theory, you could work on some of the next steps before the clients have migrated, especially the next one, which involves creating the pricing database, but for the sake of simplicity, I recommend moving sequentially as much as possible.

Figure 6: The clients of the monolith that are interested in pricing functionality have been migrated to the new product pricing service.

We’re releasing this article in installments. Future installments will continue working through the steps needed to do a good service extraction.

To find out when we publish the next installment, subscribe to the site’s RSS feed, Praful’s twitter stream, or Martin’s twitter stream.

Tuesday, 28 August 2018 / Published in Uncategorized

In Unity 2018.2, the Animation C# Jobs feature extends the animation Playables with the C# Job System released with 2018.1. It gives you the freedom to create original solutions when implementing your animation system, and to improve performance with safe multithreaded code at the same time. Animation C# Jobs is a low-level API that requires a solid understanding of the Playable API. It’s therefore aimed at developers who are interested in extending the Unity animation system beyond its out-of-the-box capabilities. If that sounds like you, read on to find out when it’s a good idea to use it and how to get the most out of it!

With Animation C# Jobs, you can write C# code that will be invoked at user-defined places in the PlayableGraph, and thanks to the C# Job System, you can harness the power of modern multicore hardware. For projects that see a significant cost in C# scripts on the main thread, some of the animation tasks can be parallelized, unlocking valuable performance gains. User-made C# scripts can modify the animation stream that flows through the PlayableGraph.

Features

  • New Playable node: AnimationScriptPlayable
  • Control the animation data stream in the PlayableGraph
  • Multithreaded C# code

Disclaimer

Animation C# Jobs is still an experimental feature (living in UnityEngine.Experimental.Animations). The API might change a bit over time, depending on your feedback. Please join the discussion on our Animation Forum!

Use cases

So, say, you want to have a foot-locking feature for your brand new dragon character. You could code that with a regular MonoBehaviour, but all the code would be run in the main thread, and not until the animation pass is over. With the Animation C# Jobs, you can write your algorithm and use it directly in a custom Playable node in your PlayableGraph, and the code will run during PlayableGraph processing, in a separate thread.

Or, if you didn’t want to animate the tail of your dragon, the Animation C# Jobs would be the perfect tool for setting up the ability to procedurally compute this movement.

Animation C# Jobs also gives you the ability to write a super-specific LookAt algorithm that would allow you to target the 10 bones in your dragon’s neck, for example.

Another great example would be making your own animation mixer. Let’s say you have something very specific that you need – a node that takes positions from one input, rotations from another, scales from a third node, and mixes them all together into a single animation stream – Animation C# Jobs gives you the ability to get creative and build for your specific needs.

Examples

Before getting into the meaty details of how to use the Animation C# Jobs API, let’s take a look at some examples that showcase what is possible to do with this feature.

All the examples are available on our Animation Jobs Samples GitHub page. To install it you can either git clone it or download the latest release. Once installed, the examples have their own scenes which are all located in the “Scenes” directory:

LookAt

The LookAt is a very simple example that orients a bone (also called a joint) toward an effector. In the example below, you can see how it works on a quadruped from our 3D Game Kit package.

[link VIDEO]

TwoBoneIK

The TwoBoneIK implements a simple two-bone IK algorithm that can be applied to three consecutive joints (e.g. a human arm or leg). The character in this demo is made with a generic humanoid avatar.

[link VIDEO]

FullbodyIK

The FullbodyIK example shows how to modify values in a humanoid avatar (e.g. goals, hints, look-at, body rotation, etc.). This example, in particular, uses the human implementation of the animation stream.

[link VIDEO]

Damping

The Damping example implements a damping algorithm that can be applied to an animal tail or a human ponytail. It illustrates how to generate a procedural animation.

[link VIDEO]

SimpleMixer

The SimpleMixer is a sort of “Hello, world!” of animation mixers. It takes two input streams (e.g. animation clips) and mixes them together based on a blending value, exactly like an AnimationMixerPlayable would do.

[link VIDEO]

WeightedMaskMixer

The WeightedMaskMixer example is a bit more advanced animation mixer. It takes two input streams and mixes them together based on a weight mask that defines how to blend each and every joint. For example, you can play a classic idle animation and take just the animation of the arms from another animation clip. Or you can smooth the blend of an upper-body animation by applying successively higher weights on the spine bones.

[link VIDEO]

API

The Animation C# Jobs feature is powered by the Playable API. It comes with three new structs: AnimationScriptPlayable, IAnimationJob, and AnimationStream.

AnimationScriptPlayable and the IAnimationJob

The AnimationScriptPlayable is a new animation Playable which, like any other Playable, can be added anywhere in a PlayableGraph. The interesting thing about it is that it contains an animation job and acts as a proxy between the PlayableGraph and the job. The job is a user-defined struct that implements IAnimationJob.

A regular job processes the Playable input streams and mixes the result into its own stream. The animation process is separated into two passes, and each pass has its own callback in IAnimationJob:

  1. ProcessRootMotion handles the root transform motion; it is always called before ProcessAnimation, and it determines whether ProcessAnimation should be called (this depends on the Animator culling mode);
  2. ProcessAnimation is for everything else that is not the root motion.

The example below is like the “Hello, world!” of Animation C# Jobs. It does nothing at all, but it allows us to see how to create an AnimationScriptPlayable with an animation job:

using UnityEngine;
using UnityEngine.Playables;
using UnityEngine.Animations;
using UnityEngine.Experimental.Animations;

public struct AnimationJob : IAnimationJob
{
    public void ProcessRootMotion(AnimationStream stream)
    {
    }

    public void ProcessAnimation(AnimationStream stream)
    {
    }
}

[RequireComponent(typeof(Animator))]
public class AnimationScriptExample : MonoBehaviour
{
    PlayableGraph m_Graph;
    AnimationScriptPlayable m_ScriptPlayable;

    void OnEnable()
    {
        // Create the graph.
        m_Graph = PlayableGraph.Create("AnimationScriptExample");

        // Create the animation job and its playable.
        var animationJob = new AnimationJob();
        m_ScriptPlayable = AnimationScriptPlayable.Create(m_Graph, animationJob);

        // Create the output and link it to the playable.
        var output = AnimationPlayableOutput.Create(m_Graph, "Output", GetComponent<Animator>());
        output.SetSourcePlayable(m_ScriptPlayable);
    }

    void OnDisable()
    {
        m_Graph.Destroy();
    }
}

The stream passed as a parameter of the IAnimationJob methods is the one you will be working on during each processing pass.

By default, all the AnimationScriptPlayable inputs are processed. In the case of only one input (a.k.a. a post-process job), this stream will contain the result of the processed input. In the case of multiple inputs (a.k.a. a mix job), it’s preferable to process the inputs manually. To do so, the method AnimationScriptPlayable.SetProcessInputs(bool) will enable or disable the processing passes on the inputs. To trigger the processing of an input and acquire the resulting stream in manual mode, call AnimationStream.GetInputStream().
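
To make the manual mode concrete, here is a rough sketch (not one of the official samples) of a two-input mix job that triggers its inputs itself and blends a single bone. It assumes SetProcessInputs(false) was called on the AnimationScriptPlayable and that the handle field was bound beforehand with the binding API described in the next sections; the weight field is just an illustrative blend factor.

using UnityEngine;
using UnityEngine.Experimental.Animations;

public struct SimpleMixJob : IAnimationJob
{
    public TransformStreamHandle handle; // a bone bound via animator.BindStreamTransform(...)
    public float weight;                 // 0 = input 0 only, 1 = input 1 only

    public void ProcessRootMotion(AnimationStream stream) { }

    public void ProcessAnimation(AnimationStream stream)
    {
        // In manual mode, GetInputStream triggers the processing of each input
        // and returns its resulting stream.
        AnimationStream streamA = stream.GetInputStream(0);
        AnimationStream streamB = stream.GetInputStream(1);

        // Blend the bone's local transform from both inputs and write the result
        // into this playable's own stream.
        Vector3 position = Vector3.Lerp(
            handle.GetLocalPosition(streamA), handle.GetLocalPosition(streamB), weight);
        Quaternion rotation = Quaternion.Slerp(
            handle.GetLocalRotation(streamA), handle.GetLocalRotation(streamB), weight);

        handle.SetLocalPosition(stream, position);
        handle.SetLocalRotation(stream, rotation);
    }
}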

AnimationStream and the handles

The AnimationStream gives you access to the data that flows through the graph from one playable to another. It gives access to all the values animated by the Animator component.

public struct AnimationStream
{
    public bool isValid { get; }
    public float deltaTime { get; }

    public Vector3 velocity { get; set; }
    public Vector3 angularVelocity { get; set; }

    public Vector3 rootMotionPosition { get; }
    public Quaternion rootMotionRotation { get; }

    public bool isHumanStream { get; }
    public AnimationHumanStream AsHuman();

    public int inputStreamCount { get; }
    public AnimationStream GetInputStream(int index);
}

It isn’t possible to have direct access to the stream data, since the same data can be at a different offset in the stream from one frame to the next (for example, after adding or removing an AnimationClip in the graph). The data may have moved, or may not exist anymore in the stream. To ensure the safety and validity of those accesses, we’re introducing two sets of handles, the stream handles and the scene handles, each of which comes in a transform flavor and a component-property flavor.

The stream handles

The stream handles manage, in a safe way, all the accesses to the AnimationStream data. If an error occurs the system throws a C# exception. There are two types of stream handles: TransformStreamHandle and PropertyStreamHandle.

The TransformStreamHandle manages Transform and takes care of the transform hierarchy. That means you can change the local or global transform position in the stream, and future position requests will give predictable results.

The PropertyStreamHandle manages all other properties that the system can animate and find on the other components. For instance, it can be used to read, or write, the value of the Light.m_Intensity property.

public struct TransformStreamHandle
{
    public bool IsValid(AnimationStream stream);
    public bool IsResolved(AnimationStream stream);
    public void Resolve(AnimationStream stream);

    public void SetLocalPosition(AnimationStream stream, Vector3 position);
    public Vector3 GetLocalPosition(AnimationStream stream);

    public void SetLocalRotation(AnimationStream stream, Quaternion rotation);
    public Quaternion GetLocalRotation(AnimationStream stream);

    public void SetLocalScale(AnimationStream stream, Vector3 scale);
    public Vector3 GetLocalScale(AnimationStream stream);

    public void SetPosition(AnimationStream stream, Vector3 position);
    public Vector3 GetPosition(AnimationStream stream);

    public void SetRotation(AnimationStream stream, Quaternion rotation);
    public Quaternion GetRotation(AnimationStream stream);
}

public struct PropertyStreamHandle
{
    public bool IsValid(AnimationStream stream);
    public bool IsResolved(AnimationStream stream);
    public void Resolve(AnimationStream stream);

    public void SetFloat(AnimationStream stream, float value);
    public float GetFloat(AnimationStream stream);

    public void SetInt(AnimationStream stream, int value);
    public int GetInt(AnimationStream stream);

    public void SetBool(AnimationStream stream, bool value);
    public bool GetBool(AnimationStream stream);
}

The scene handles

The scene handles are another form of safe access to any values, but from the scene rather than from the AnimationStream. As with the stream handles, there are two types of scene handles: TransformSceneHandle and PropertySceneHandle.

A concrete usage of a scene handle is to implement an effector for a foot IK. The IK effector is usually a GameObject not animated by an Animator, and therefore external to the transforms modified by the animation clips in the PlayableGraph. The job needs to know the global position of the IK effector in order to calculate the desired position of the foot. Thus the IK effector is accessed through a scene handle, while stream handles are used for the leg bones.

1

2

3

4

5

6

7

8

9

10

11

12

13

14

15

16

17

18

19

20

21

22

23

24

25

26

27

28

29

30

31

32

33

34

35

public struct TransformSceneHandle
{
    public bool IsValid(AnimationStream stream);

    public void SetLocalPosition(AnimationStream stream, Vector3 position);
    public Vector3 GetLocalPosition(AnimationStream stream);

    public void SetLocalRotation(AnimationStream stream, Quaternion rotation);
    public Quaternion GetLocalRotation(AnimationStream stream);

    public void SetLocalScale(AnimationStream stream, Vector3 scale);
    public Vector3 GetLocalScale(AnimationStream stream);

    public void SetPosition(AnimationStream stream, Vector3 position);
    public Vector3 GetPosition(AnimationStream stream);

    public void SetRotation(AnimationStream stream, Quaternion rotation);
    public Quaternion GetRotation(AnimationStream stream);
}

public struct PropertySceneHandle
{
    public bool IsValid(AnimationStream stream);
    public bool IsResolved(AnimationStream stream);
    public void Resolve(AnimationStream stream);

    public void SetFloat(AnimationStream stream, float value);
    public float GetFloat(AnimationStream stream);

    public void SetInt(AnimationStream stream, int value);
    public int GetInt(AnimationStream stream);

    public void SetBool(AnimationStream stream, bool value);
    public bool GetBool(AnimationStream stream);
}
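Putting the handles together, here is a rough sketch of the foot-effector job described above. It assumes the experimental UnityEngine.Experimental.Animations namespace from 2018.2 and uses placeholder names; it is an illustration, not code from the original post.

using UnityEngine;
using UnityEngine.Experimental.Animations; // assumed namespace for the 2018.2 experimental API

// Sketch: the IK effector comes from the scene, the foot bone comes from the animation stream.
public struct FootEffectorJob : IAnimationJob
{
    public TransformSceneHandle effector; // GameObject not animated by the Animator
    public TransformStreamHandle foot;    // bone driven by the clips in the PlayableGraph

    public void ProcessRootMotion(AnimationStream stream) { }

    public void ProcessAnimation(AnimationStream stream)
    {
        // Read the effector's world position through the scene handle...
        Vector3 target = effector.GetPosition(stream);

        // ...and write into the stream through the stream handle. A real setup would run
        // an IK solve on the leg bones here instead of snapping the foot to the target.
        foot.SetPosition(stream, target);
    }
}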

AnimatorJobExtensions

The last piece is the AnimatorJobExtensions class. It’s the glue that makes it all work: it extends the Animator to create the four handles seen above, thanks to these four methods: BindStreamTransform, BindStreamProperty, BindSceneTransform, and BindSceneProperty.

public static class AnimatorJobExtensions
{
    public static TransformStreamHandle BindStreamTransform(this Animator animator, Transform transform);
    public static PropertyStreamHandle BindStreamProperty(this Animator animator, Transform transform, Type type, string property);

    public static TransformSceneHandle BindSceneTransform(this Animator animator, Transform transform);
    public static PropertySceneHandle BindSceneProperty(this Animator animator, Transform transform, Type type, string property);
}

The “BindStream” methods can be used to create handles on already animated properties or for newly animated properties in the stream.
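As a quick illustration of the binding step, the handles are created on the Animator and copied into the job struct before the playable is created. This reuses the hypothetical FootEffectorJob sketched earlier; animator, footTransform, effectorTransform, lightTransform, and graph are placeholders, and the AnimationScriptPlayable.Create call is assumed from the same experimental API.

var job = new FootEffectorJob
{
    foot     = animator.BindStreamTransform(footTransform),
    effector = animator.BindSceneTransform(effectorTransform)
};

// A property handle (e.g. Light.m_Intensity from the earlier example) is bound the same way:
// PropertyStreamHandle intensity = animator.BindStreamProperty(lightTransform, typeof(Light), "m_Intensity");

var playable = AnimationScriptPlayable.Create(graph, job);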

See also

API documentation:

If you encounter a bug, please file it using the Bug Reporter built into Unity.

For any feedback on this experimental feature, please go to this forum thread: Animation C# jobs in 2018.2a5

Tuesday, 28 August 2018 / Published in Uncategorized

Introduction

In this article, we’ll learn how to use the MessagePack protocol with SignalR using ASP.NET Core and Angular. MessagePack transmits binary data over the connection, and binary payloads are lighter than text.

“MessagePack is an efficient binary serialization format. It allows you to exchange data among multiple languages like JSON. But it is faster and smaller. Small integers are encoded into a single byte, and typical short strings require only one extra byte in addition to the strings themselves.”

Binary transmission gives a significant performance benefit: it is like JSON, but smaller and faster.

In this article, we’ll see how to enable MessagePack on both the server and the client. It must be enabled on both sides, because the protocol is agreed on while the client and server negotiate the connection.

This article is part of a series of post on SignalR using ASP.NET Core.

  • Overview of the new stack SignalR on ASP.NET Core here

  • Getting Started With SignalR Using ASP.NET Core And Angular 5 here

  • Getting started with SignalR using ASP.NET Core: Dynamic Hub here

  • Getting started with SignalR using ASP.NET Core: Streaming Data using Angular 5 here

  • Getting started with SignalR using ASP.NET Core: Azure SignalR Service here

This article demonstrates the following things: 

  • Why Message Pack?

  • Configuring message pack on the server

  • Configuring message pack on Typescript client

  • Configuring message pack on .NET Client

  • Demo 

The source code is available on GitHub:

  • https://github.com/nemi-chand/ASPNETCore-SignalR-Angular-TypeScript

  • https://github.com/nemi-chand/ASPNETCore-SignalR-AzureService

 

Why MessagePack?

  • It is small, compact, and efficient.

  • What you can do with JSON, you can do with Message Pack.

  • It supports creating application-specific types.

  • It is supported by over 50 programming languages.

(image source https://msgpack.org)

The comparison image (see msgpack.org) shows the difference between JSON and MessagePack: the MessagePack encoding is smaller than JSON, so it transfers faster.

You can read more about MessagePack at the official website.

Configure Message pack on Server

You can switch SignalR to MessagePack (binary) framing with essentially one line of code on the server side. We are going to use the SignalR MessagePack protocol package, so first install “Microsoft.AspNetCore.SignalR.Protocols.MessagePack”.

CLI Command

dotnet add package Microsoft.AspNetCore.SignalR.Protocols.MessagePack --version 1.0.0

Nuget Package

Install-Package Microsoft.AspNetCore.SignalR.Protocols.MessagePack

After installing this package, we have to register it in the startup services to use MessagePack: chain the AddMessagePackProtocol method after AddSignalR in ConfigureServices in the Startup class.

services.AddSignalR().AddMessagePackProtocol();

AddMessagePackProtocol also accepts an options callback where you can set the list of formatter resolvers, so you can supply your own IFormatterResolver to add a custom resolver. Plenty of prebuilt formatter resolvers are available, such as the standard resolver and the unsafe binary resolver.

services.AddSignalR().AddMessagePackProtocol(configure =>
{
    configure.FormatterResolvers = new List<IFormatterResolver>()
    {
        MessagePack.Resolvers.UnsafeBinaryResolver.Instance
    };
});
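For completeness, the clients below connect to a hub mapped at “/chat”. The hub itself is not shown in this article; a minimal sketch of what it might look like follows (the ChatHub name and SendMessage method are assumptions, and enabling MessagePack does not change the hub code at all).

using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Hypothetical hub; MessagePack only changes the wire format.
public class ChatHub : Hub
{
    public Task SendMessage(string user, string message)
        => Clients.All.SendAsync("ReceiveMessage", user, message);
}

// In Startup.Configure, the hub would be mapped at the URL the clients use:
//     app.UseSignalR(routes => routes.MapHub<ChatHub>("/chat"));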

Configure message pack on Client

We have successfully added MessagePack on the server side; now we have to add it on the client side as well. In this article, we will see two types of client implementations.

  • TypeScript/Angular Client

  • .Net Client

Typescript/Angular Client

Install the npm package for the MessagePack protocol.

NPM Command

npm install @aspnet/signalr-protocol-msgpack

After installing this package, we use the MessagePackHubProtocol class. The HubConnectionBuilder class has a method called withHubProtocol that takes an IHubProtocol implementation, so we pass it a MessagePackHubProtocol instance and the builder configures the connection to use the MessagePack protocol.

import { MessagePackHubProtocol } from '@aspnet/signalr-protocol-msgpack';

const hubConnection = new HubConnectionBuilder()
    .withUrl('/chat')
    .withHubProtocol(new MessagePackHubProtocol())
    .build();

We have successfully added the MessagePack protocol to the TypeScript client; it now sends and receives binary data over the connection.

.NET Client

This is a .NET Standard 2.0 client whose purpose is to consume a SignalR hub built on ASP.NET Core 2.1 or above. To enable MessagePack on the .NET client, install the NuGet package “Microsoft.AspNetCore.SignalR.Protocols.MessagePack”.

CLI Command

dotnet add package Microsoft.AspNetCore.SignalR.Protocols.MessagePack --version 1.0.0

Nuget Package

Install-Package Microsoft.AspNetCore.SignalR.Protocols.MessagePack

Then add the AddMessagePackProtocol method to the HubConnectionBuilder while creating the hub connection.

var hubConnection = new HubConnectionBuilder()
    .WithUrl("/chat")
    .AddMessagePackProtocol()
    .Build();
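Once the connection is built, usage is the same as with the JSON protocol. A small hedged sketch follows; the hub method and event names are examples, and note that outside the browser the URL passed to WithUrl must be absolute (for example https://localhost:5001/chat).

// Register a handler, start the connection, and invoke a hub method.
hubConnection.On<string, string>("ReceiveMessage",
    (user, message) => Console.WriteLine($"{user}: {message}"));

await hubConnection.StartAsync();
await hubConnection.InvokeAsync("SendMessage", "nemi", "Hello over MessagePack");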

Demo

In the demo, the hub connection is negotiated with the MessagePack protocol and the messages travel as binary frames rather than text frames.

Summary

In this article, we have seen messagepack protocol implementation with SignalR using ASP.NET Core and Angular 5. We have learned the following things:

  • Why message pack gives better performance.

  • Enabling messagepack on the server side.

  • Enabling messagepack on multiple clients (Typescript/Angular and .NET client).


Tuesday, 28 August 2018 / Published in Uncategorized

In this article, I will explain how to integrate an ASP.NET MVC project with GitHub. Along with that, I will explain some of the below activities often performed by developers on Git.

  • Create Local Branch

  • Push Changes

  • Pull Request

Prerequisites to achieve this demo are the following –

Let us understand the difference between GitHub and Git

GitHub – A hosting service for Git repositories where we can store our projects. We call these projects Repositories (Repos) in GitHub terminology. From here, we can download projects, and it also provides a repository URL so we can clone them from Visual Studio.

Git 

An open source version control system that does the following –

  • It keeps track of the changes made to the code by developers.

  • Version control system keeps these revisions straight, storing modifications in a central repository. This allows developers to easily collaborate as they can download a new version of the software, make changes and upload the newest revision. Every developer can see these new changes, download them, and contribute.

  • Git is the preferred version control system of most developers since it has multiple advantages over other systems available. It stores the file system more efficiently and ensures the file integrity.

Let’s dive into how we integrate an ASP.NET project with GitHub.

Step 1

First, you have to create your personal account on GitHub. Once you create an account successfully, log in and check it.

Step 2

Switch to Visual Studio, open your project, and go to Team Explorer.

Team Explorer option is available under the View menu in Visual Studio, like below.

If you don’t find these options under the Team Explorer window, it means the GitHub extension is not installed in your Visual Studio. So, first, let’s install it: go to the Tools menu, select Extensions and Updates, and type GitHub in the search box, like below.

On my machine, it is already installed. Now, the Git is ready in your Visual Studio to perform operations.

Step 3

Open your project in Visual Studio and go to Team Explorer where you’ll find  the below screen.

At the bottom of this screen, you can find “Add to Source Control” option. Click on this and it will pop the Git option up. Clicking it, the below options will be displayed.

Here, let us go to the second option, the “Push to Remote Repository” section. Under this, click on the “Publish Git Repo” button. It will ask you to enter the repo name. I gave the Git URL along with the project name, like this: https://github.com/MVCApplication, and tried to push the repo, but it was denied because we first have to commit the changes locally.

In order to commit these changes locally, we have to go back to options with the help of home button on top of this window. Just click on the home symbol and it will take you to the below screen.

Now, click on the “Changes” tab and commit your changes. By default, Visual Studio creates a master branch on its own and saves these files under that branch.

Click on “Commit All” button. Then, it will save all the files under the master branch.

Step 4

Go back to the home again where you will find the Sync option. Just click on it.

After clicking on Sync option, it will display the below screen where you have to provide the GitHub URL.

Make sure you have created this URL in GitHub before entering here.

Click on the Publish button. It will ask you for the GitHub credentials; just enter them. On the successful publishing of all the files and folders, this project will appear under GitHub in the name of the MVC App.

Now, go to the GitHub and check your code under MVC APP Repo.

As of now, we have created a project from Visual Studio and successfully pushed to the GitHub.

How to clone an existing project from GitHub in Visual Studio

  • Get URL from GitHub like below.

  • Go to the Visual Studio and go to the Team Explorer where you find the “Clone” option. Click on it and give the GitHub Repo URL.

  • Click on Clone and the project will be downloaded into your local folder. By default, Visual Studio creates a master branch. We don’t work on this branch. We have to create our own local branch out of it and start working on it.

  • To create a local branch, go to Team Explorer under Branches tab, click on “New Branch” and give a branch name you like.

  • Once you click on the “Create Branch” button, you will see the created branch under the branch list.

  • Now, we have to start working on the new branch (Add_Controller) and push changes to the remote. Once you push changes to GitHub, the Add_Controller branch will be created there.

  • Now, we have to create a pull request in order to merge these changes to the master branch.

  • Once you’ve clicked on the pull request, it will show the below screen, where you can add the reviewers you want to review your code changes.

  • Once the Pull Request is raised successfully, you’ll see the below screen.

  • Once you click on the “Merge pull Request” button, the changes which we made in Add_Controller branch will automatically merge to the master.

  • Once this step is over, go to the master branch and check if these changes have taken effect or not.

Monday, 27 August 2018 / Published in Uncategorized

Introduction

In Part 1 and Part 2 of my series, we drew hard conclusions about data, such as “This data instance is in this class!” The next question is “What if we assign a probability to a data instance belonging to a given class?” That question is discussed in this tip.

Background

Before jumping to the Python code section, we need a bit of background on probability theory. You can refer to the links listed in my references section at the end of this tip. Here is a summary of my reading.

In classification tasks, our job is to build a function that takes in a vector of features X (also called "inputs") and predicts a label Y (also called the "class" or "output"). A natural way to define this function is to predict the label with the highest conditional probability, that is, to choose the label y with the largest P(Y = y|X).

To help with computing the probabilities P(Y = y|X), you can use training data consisting of examples of feature–label pairs (X,Y). You are given N of these pairs: (x(1), y(1)), (x(2), y(2)), …, (x(N), y(N)), where x(i) is a vector of m discrete features for the ith training example and y(i) is the discrete label for the ith training example.

We assume that all labels are binary (∀i: y(i) ∈ {0,1}) and all features are binary (∀i,j: xj(i) ∈ {0,1}).

The objective in training is to estimate the probabilities P(Y) and P(Xj |Y) for all 1 ≤ j ≤ m features. Using an MLE estimate:

Using a Laplace MAP estimate:

For an example with x = [x1, x2,…, xm], estimate the value of y as:
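The formula images from the original post are not reproduced here; in standard notation, the estimates referred to above are:

\hat P(Y = y) = \frac{1}{N}\sum_{i=1}^{N} \mathbf{1}\{y^{(i)} = y\}

\hat P(X_j = 1 \mid Y = y) = \frac{\sum_{i} \mathbf{1}\{x_j^{(i)} = 1,\, y^{(i)} = y\}}{\sum_{i} \mathbf{1}\{y^{(i)} = y\}} \quad \text{(MLE)}

\hat P(X_j = 1 \mid Y = y) = \frac{1 + \sum_{i} \mathbf{1}\{x_j^{(i)} = 1,\, y^{(i)} = y\}}{2 + \sum_{i} \mathbf{1}\{y^{(i)} = y\}} \quad \text{(Laplace MAP)}

\hat y = \arg\max_{y \in \{0,1\}} \hat P(Y = y) \prod_{j=1}^{m} \hat P(X_j = x_j \mid Y = y)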

Assume that we are building a filtering function for our website. The purpose of this function is to filter comments submitted by users into two classes: Abusive and NOTAbusive. That is, we are given a set of m comments xi, denoted by X := {x1, …, xm}, and associated labels yi, denoted by Y := {y1, …, ym}. Here the labels satisfy yi ∈ {Abusive, NOTAbusive}.

We also notice that comments are text. One way of converting text into a vector is the so-called bag-of-words representation. In its simplest version it works as follows: assume we have a list of all possible words occurring in X, that is, a dictionary; then we can associate a unique number with each of those words (e.g. its position in the dictionary). Now we simply count, for each document xi, the number of times a given word j occurs. This count is then used as the value of the j-th coordinate of xi.

In the context of our filtering function, the actual text of the comment x corresponds to the test and the label y is equivalent to the diagnosis. Using Bayes rule:

we may now treat the occurrence of each word in a document as a separate test and combine the outcomes in a naive fashion by assuming that:
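The two formulas referenced above did not survive in this copy; they are the usual Bayes rule and the naive independence assumption:

P(y \mid x) = \frac{P(x \mid y)\, P(y)}{P(x)}

P(x \mid y) = P(x_1, \ldots, x_m \mid y) \approx \prod_{j=1}^{m} P(x_j \mid y)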

Now, we are going to use Python code for our filtering function.

Using the code

Our first task is to prepare the data set and vectors.

Assume that we have collected comments submitted by users and have put them into two files: NOTAbusive.txt and Abusive.txt. The NOTAbusive.txt file contains comments that are not abusive, as follows:

My dog has flea problems, help please!

My dalmation is so cute. I love him!

Mr licks ate my steak. How to stop him?

The Abusive.txt file contains comments that are abusive, as follows:

May be not take him to dog park stupid.

Stop posting stupid worthless garbage!

Quit buying worthless dog food stupid.

If we have a text string, we can split it with a regular expression so that the sentence is broken on anything that isn’t a word or number. We can also keep only the tokens whose length is greater than 0 and convert the strings to lowercase using lower(). The following textParse function does all of the tasks above:

import re

def textParse(bigString):
    # Split on runs of non-word characters (\W+ avoids the empty tokens that \W* produces),
    # keep only non-empty tokens, and lowercase them.
    listOfTokens = re.split(r'\W+', bigString)
    return [tok.lower() for tok in listOfTokens if len(tok) > 0]

We create a function that returns two variables. The first variable (docList) is a list of documents in which the text has been broken up into words (also called tokens). The second variable (classList) is a list of class labels. Here we have two classes: Abusive, assigned the label 1, and NOTAbusive, assigned the label 0. The following loadDataSet is our function:

def loadDataSet():
    docList = []; classList = []
    wordList = textParse(open('NOTabusive.txt').read())
    docList.append(wordList)
    classList.append(0)
    wordList = textParse(open('Abusive.txt').read())
    docList.append(wordList)
    classList.append(1)
    return docList, classList

If we implement the following statement:

listOPosts, listClasses = loadDataSet()

print(listOPosts)

print(listClasses)

listOPosts can look like this:

[['my', 'dog', 'has', 'flea', 'problems', 'help', 'please', 'my', 
'dalmation', 'is', 'so', 'cute', 'i', 'love', 'him', 'mr', 'licks', 'ate', 'my', 
'steak', 'how', 'to', 'stop', 'him'], 
['may', 'be', 'not', 'take', 'him', 'to', 'dog', 'park', 'stupid', 'stop', 'posting', 
'stupid', 'worthless', 'garbage', 'quit', 'buying', 'worthless', 'dog', 'food', 'stupid']]

Note that we have two lists: the first list is the NOT Abusive list and the second one is the Abusive list.

listClasses can look like this:

[0, 1]

Next, we create the function createVocabList() that will create a list of all the unique words in all of our documents:

def createVocabList(dataSet):
    # Create an empty set
    vocabSet = set([])
    for document in dataSet:
        # Create the union of two sets
        vocabSet = vocabSet | set(document)
    return list(vocabSet)

If you run the following code:

listOPosts,listClasses = loadDataSet()

myVocabList = createVocabList(listOPosts)

print(myVocabList )

The result will look like this:

['garbage', 'to', 'posting', 'i', 'him', 'mr', 'problems', 'help', 'stop', 'dog', 'has', 'my', 
'food', 'ate', 'dalmation', 'be', 'may', 'so', 'take', 'worthless', 'steak', 
'love', 'how', 'park', 'not', 'cute', 'stupid', 'is', 'licks', 'flea', 'please', 'quit', 'buying']

We also need to create the bagOfWords2Vec function. This function builds a bag-of-words vector, which can record multiple occurrences of each word (whereas a set-of-words vector records at most one occurrence of each word). The Python code for the bagOfWords2Vec function:

def bagOfWords2Vec(vocabList, inputSet):
    returnVec = [0] * len(vocabList)
    for word in inputSet:
        if word in vocabList:
            returnVec[vocabList.index(word)] += 1
    return returnVec

We can test this function by using the following lines of Python code:

listOPosts,listClasses = loadDataSet()

myVocabList = createVocabList(listOPosts)

testEntry = ['help','cute']

thisDoc = bagOfWords2Vec(myVocabList, testEntry)

print(thisDoc)

The result is a vector that will look like this:

[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

We can see that both the word help and the word cute appear once in our vocabulary.

After preparing the data set and vectors, the next task is to create a training function called trainNB. This function calculates the probability of each word wi in the 0 class (NOTAbusive), p(wi|0), and in the 1 class (Abusive), p(wi|1). It is based on the following formula (log version):
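The formula image is missing here; matching the Laplace smoothing used in the code below (word counts start at one, denominators at two), the per-word log-probability is:

\log \hat P(w_i \mid c) = \log \frac{1 + \operatorname{count}(w_i, c)}{2 + \sum_{k} \operatorname{count}(w_k, c)}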

The Python code of the trainNB looks like this:

from numpy import ones, log, array

def trainNB(trainMatrix, trainCategory):
    numTrainDocs = len(trainMatrix)
    numWords = len(trainMatrix[0])
    pAbusive = sum(trainCategory) / float(numTrainDocs)
    # Laplace smoothing: word counts start at 1 and denominators at 2
    p0Num = ones(numWords)
    p1Num = ones(numWords)
    p0Denom = 2.0
    p1Denom = 2.0
    for i in range(numTrainDocs):
        if trainCategory[i] == 1:
            p1Num += trainMatrix[i]
            p1Denom += sum(trainMatrix[i])
        else:
            p0Num += trainMatrix[i]
            p0Denom += sum(trainMatrix[i])
    p1Vect = log(p1Num / p1Denom)  # change to log()
    p0Vect = log(p0Num / p0Denom)  # change to log()
    return p0Vect, p1Vect, pAbusive

As we can see above, the trainNB function takes a matrix of documents (abusive and not abusive), trainMatrix, and a vector with the class labels for each of the documents, trainCategory. trainNB returns p0Vect, the vector of log-probabilities of words in a NOT Abusive document (p(wi|0)); p1Vect, the vector of log-probabilities of words in an Abusive document (p(wi|1)); and pAbusive, or p(1). Because we have two classes, we can get pNOTAbusive, or p(0), as 1 − p(1).

So far, we can test this function by using the following lines of code:

listOPosts,listClasses = loadDataSet()

myVocabList = createVocabList(listOPosts)

trainMat=[]

for postinDoc in listOPosts:

                trainMat.append(bagOfWords2Vec(myVocabList, postinDoc))

p0V,p1V,pAb = trainNB(array(trainMat),array(listClasses))

print("p0V = ", p0V)

print("p1V = ", p1V)

print("pAb = ", pAb)

The result:

p0V =  [-2.56494936 -2.56494936 -3.25809654 -3.25809654 -2.56494936 -3.25809654

 -2.56494936 -2.56494936 -2.56494936 -2.56494936 -2.56494936 -2.56494936

 -2.56494936 -3.25809654 -2.56494936 -2.56494936 -2.56494936 -2.56494936

 -2.15948425 -2.56494936 -2.56494936 -3.25809654 -3.25809654 -2.56494936

 -3.25809654 -3.25809654 -1.87180218 -3.25809654 -3.25809654 -2.56494936

 -3.25809654 -3.25809654 -2.56494936]

p1V =  [-3.09104245 -3.09104245 -2.39789527 -2.39789527 -2.39789527 -2.39789527

 -3.09104245 -3.09104245 -3.09104245 -3.09104245 -2.39789527 -3.09104245

 -1.99243016 -1.70474809 -3.09104245 -3.09104245 -3.09104245 -3.09104245

 -2.39789527 -3.09104245 -3.09104245 -2.39789527 -2.39789527 -3.09104245

 -2.39789527 -2.39789527 -3.09104245 -2.39789527 -1.99243016 -3.09104245

 -2.39789527 -2.39789527 -3.09104245]

pAb =  0.5

We can also make a histogram by using matplotlib:

import matplotlib.pyplot as plt

...

legend = ['p0V','p1V','pAb']

plt.hist(p0V,20,color='red',alpha=0.5)

plt.hist(p1V,20,color='blue',alpha=0.5)

plt.hist(pAb,20,color='green',alpha=0.5)

plt.legend(legend)

plt.show()

The result:

After training, we can easily predict which class a comment belongs to by using the following rule:
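The missing formula is the log-posterior comparison:

\hat c = \arg\max_{c \in \{0,1\}} \Big( \log P(c) + \sum_{i} \log P(w_i \mid c) \Big)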

In short, we have a comment as follows:

Stop him, please!

We can easily calculate p(stop|0), p(him|0), p(please|0), p(stop|1), p(him|1), and p(please|1) by using the trainNB function. And then, we can calculate:

Probability of the comment in a NOT Abusive document (P0):

P0 = log(pNOTAbusive) + log (p(stop|0)) + log (p(him|0)) + log (p(please|0))

Probability of the comment in an Abusive document (P1):

P1 = log(pAbusive) + log (p(stop|1)) + log (p(him|1)) + log (p(please|1))

When we have P1 and P0, we can use the following rules:

  • If P1 > P0 then the comment belongs to 1 class
  • Else the comment belongs to 0 class

From all of the above, we can create a predicting function called classifyNB as follows:

def classifyNB(vec2Classify, p0Vec, p1Vec, pClass1):
    p1 = sum(vec2Classify * p1Vec) + log(pClass1)
    p0 = sum(vec2Classify * p0Vec) + log(1.0 - pClass1)
    if p1 > p0:
        return 1
    else:
        return 0

If you want to know values of p0 and p1, we can create another version of the classifyNB:

def classifyNB1(vec2Classify, p0Vec, p1Vec, pClass1):
    p1 = sum(vec2Classify * p1Vec) + log(pClass1)
    p0 = sum(vec2Classify * p0Vec) + log(1.0 - pClass1)
    return p0, p1

We can test this function:

listOPosts,listClasses = loadDataSet()

myVocabList = createVocabList(listOPosts)

trainMat=[]

for postinDoc in listOPosts:

                trainMat.append(bagOfWords2Vec(myVocabList, postinDoc))

p0V,p1V,pAb = trainNB(array(trainMat),array(listClasses))

testEntry = ['stop', 'him', 'please']

thisDoc = array(bagOfWords2Vec(myVocabList, testEntry))

print('{0} belongs to:{1} class '.format(testEntry,classifyNB(thisDoc,p0V,p1V,pAb)))

The result:

['stop', 'him', 'please'] belongs to:0 class

And we can see histogram of p0 and p1 by using the classifyNB1 function:

p0, p1 = classifyNB1(thisDoc,p0V,p1V,pAb)

legend = ['p0','p1']

plt.hist(p0,10,color='red',alpha=0.5)

plt.hist(p1,10,color='blue',alpha=0.5)

plt.legend(legend)

plt.show()

The result can look like this:

Clearly, we can see p0 > p1, so our comment belongs to the 0 class.

Points of interest

In this tip, we have used probability for our classification tasks. Using probabilities can sometimes be more effective than using hard rules for classification. Bayesian probability and Bayes’ rule give us a way to estimate unknown probabilities from known values. You can view all of the source code for this tip here.

References

My Series

History

  • 30th Jun, 2018: Initial version
  • 26th Aug, 2018: The second version
Monday, 27 August 2018 / Published in Uncategorized

In this article, you will learn how to consume/use Web API in ASP.NET MVC step by step.

This article is the third part of my ASP.NET MVC Web API series.

Consuming Web API From jQuery

In this article, we have used localhost Web API and called for the GET request. The following is the process:

  • Create ASP.NET MVC Project.
  • Add an HTML file called Members.html.
  • Write a GET call with jQuery AJAX to fetch the data from ASP.NET Web API.
  • System or process will throw two different errors.
  • Resolve the errors with the solution given in this article.
  • Run the project and check the output.

Step by Step Implementation

Create a new ASP.NET Empty Website project called “ConsumeWebApiFromJquery”.

By default, the Solution Explorer looks like this.

Now, right-click on the project titled “ConsumeWebApiFromJquery” and select Add –> Add New Item –> HTML Page. Name the page “Members.html”.

Switch to Members.html file and add the following code.

<!DOCTYPE html>
<html>
<head>
    <title></title>
    <meta charset="utf-8" />
    <script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.8.3/jquery.min.js"></script>
    <script type="text/javascript">
        $(document).ready(function () {

            $.ajax({
                type: "GET",
                url: "http://localhost:52044/api/member",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function (response) {

                    // Clear the previous list of members
                    $("#MemberList").empty();

                    // Display the ASP.NET Web API response in the console log.
                    // You can see the console log value in developer tools
                    // by pressing the F12 function key.
                    console.log(response);

                    // Variable created to store the member detail markup
                    var ListValue = "";

                    // Variable created to iterate the JSON array values
                    var i;

                    // Generic loop to iterate the response array
                    for (i = 0; i < response.length; i++) {
                        ListValue += "<li>" + response[i].MemberName + " -- " + response[i].PhoneNumber + "</li>";
                    }

                    // Append the formatted value of the ListValue variable to the element with ID "MemberList"
                    $("#MemberList").append(ListValue);
                },
                failure: function (response) {
                    alert(response.responseText);
                    alert("Failure");
                },
                error: function (response) {
                    alert(response);
                    alert("Error");
                }
            });
        });
    </script>
</head>
<body>

    <h2>C-Sharpcorner Member List</h2>

    <!-- Member list will be appended here -->
    <ul id="MemberList">

    </ul>

</body>
</html>
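For reference, the Web API consumed here comes from the earlier parts of this series and is not shown in this article. A minimal controller with the shape this page expects might look like the following; the class names and sample data are assumptions based on the MemberName and PhoneNumber fields the script reads.

using System.Collections.Generic;
using System.Web.Http;

public class Member
{
    public string MemberName { get; set; }
    public string PhoneNumber { get; set; }
}

public class MemberController : ApiController
{
    // GET api/member
    public IEnumerable<Member> Get()
    {
        return new List<Member>
        {
            new Member { MemberName = "Sample Member", PhoneNumber = "000-000-0000" }
        };
    }
}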

Now, press F5 to execute the project.

You will encounter the following errors in Developer Tools.

ERROR 1

  1. OPTIONS http://localhost:52044/api/member 405 (Method Not Allowed).
  2. Response to the preflight request doesn’t pass access control check: No ‘Access-Control-Allow-Origin’ header is present on the requested resource. Origin ‘http://localhost:50702’ is therefore not allowed access.

To remove the above errors, the Web.Config file needs to be updated.

Update the Web.config file of your Web API project with the following code.

<system.webServer>
  <httpProtocol>
    <customHeaders>
      <add name="Access-Control-Allow-Origin" value="*" />
      <add name="Access-Control-Allow-Headers" value="Content-Type, X-Your-Extra-Header-Key" />
      <add name="Access-Control-Allow-Methods" value="GET,POST,PUT,DELETE,OPTIONS" />
      <add name="Access-Control-Allow-Credentials" value="true" />
    </customHeaders>
  </httpProtocol>
</system.webServer>

Now, press F5 to execute the project again.

ERROR 2

  1. OPTIONS http://localhost:52044/api/member 405 (Method Not Allowed)
  2. Failed to load http://localhost:52044/api/member: Response for preflight does not have HTTP ok status.

To remove the above errors, the Global.asax.cs file needs to be updated. Update the Global.asax.cs file of your Web API project with the following code.

protected void Application_BeginRequest()
{
    // Answer CORS preflight (OPTIONS) requests immediately.
    if (Request.Headers.AllKeys.Contains("Origin") && Request.HttpMethod == "OPTIONS")
    {
        Response.Flush();
        Response.End();
    }
}

Now, open developer tools again; this time the response appears correctly in the console log.

The browser now renders the member list with the following output.

OUTPUT
