Wednesday, 05 September 2018 / Published in Uncategorized

In my previous article, we covered the basics of Web APIs but didn't touch on documentation. In this article, we will document our Web API using Swagger, which helps users discover and consume it.

What is Swagger?

Swagger is a standard for describing an API so that its endpoints can be discovered easily, with concise documentation presented through a user interface. When it is clear what an API does, consuming it becomes straightforward. It is similar to WSDL for web services.

How to get Swagger?

Swagger support comes from an open-source library named Swashbuckle, which can be installed like any other package. Let's get it from NuGet (Install-Package Swashbuckle.AspNetCore).

What code changes are required?

First of all, we have to register the Swagger generator in ConfigureServices:

```csharp
public void ConfigureServices(IServiceCollection services)
{
    services.AddMvc();
    services.AddSwaggerGen(options =>
    {
        options.SwaggerDoc("version1", new Swashbuckle.AspNetCore.Swagger.Info
        {
            Title = "Conference Planner",
            Description = "This holds methods to perform CRUD operations using an in-memory database."
        });
    });
    services.AddDbContext<ApplicationDbContext>(context => { context.UseInMemoryDatabase("ConferencePlanner"); });
}
```

Next, we must enable the Swagger middleware and its user interface in Configure:

```csharp
public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
    }

    app.UseSwagger();
    app.UseSwaggerUI(swag =>
    {
        swag.SwaggerEndpoint("/swagger/version1/swagger.json", "Conference Planner APIs");
    });
    app.UseMvc();
}
```

We are almost there. Now build and run the application and browse to http://localhost:3394/swagger/version1/swagger.json (your port number may differ). Once the page loads, you will see the generated JSON.

This JSON contains all the information a consumer needs.

Last but not least, the UI. To view it, change the URL slightly to http://localhost:3394/swagger/.

And we are done with Swashbuckle, an implementation of Swagger. Happy learning.

Wednesday, 05 September 2018 / Published in Uncategorized

You know what the popular SPA frameworks are; this series explains why.

Modern web development using JavaScript has evolved over the past decade to embrace patterns and good practices that are implemented through various libraries and frameworks. Although it is easy to get caught up in the excitement of frameworks like Angular, it is important to remember the fundamental patterns and repeatable practices that make development easier. In fact, it is difficult to make a qualified decision about your development stack without understanding “how” and “why” a particular tool, library, or framework may benefit the application and, more importantly, your team.

I recently authored a series of articles for the former “Telerik Developer Network” that covers what I believe are three fundamental concepts that have revolutionized modern web app development.

You can read the series here:

  1. Declarative vs. Imperative
  2. Data-Binding
  3. Dependency Injection

The three D’s are just a few of the reasons why JavaScript development drives so many consumer and enterprise experiences today. Although the main point of this series was to demonstrate the maturity of front-end development and the reason why JavaScript development at enterprise scale is both relevant and feasible today, it is also the answer to a question I often receive. “Why use Angular 2 and TypeScript?” My answer is this: together, Angular and TypeScript provide a modern, fast, and practical implementation of the three D’s of modern web development.


Originally published at csharperimage.jeremylikness.com on April 24, 2016.

Wednesday, 05 September 2018 / Published in Uncategorized

One of the best traits of a well-designed system is composability. Large systems are complex and hierarchical, and one of the best ways to fight accidental complexity is to compose a system from smaller components. You write and test each component independently, then glue them together to achieve higher-level behavior.

Programming languages are usually designed to have "composable" features as well. You should be able to use multiple features together, and the entire thing should just work. In C# you can compose different features together, and everything works. Until it doesn't.

Recently I stumbled upon a piece of C# code that shows the complexity of the language and demonstrates how easy it is these days to make mistakes by combining different language features. C# is complex, and it is important to understand its features deeply enough to use them correctly.

Let’s review the following code:

```csharp
public static IEnumerable<Task<int>> ParseFile(string path)
{
    if (string.IsNullOrEmpty(path)) throw new ArgumentNullException(nameof(path));

    // OpenWithRetryPolicyAsync uses RetryPolicy to try to open the file more than once.
    // The method throws FileNotFoundException if the file does not exist.
    using (var file = OpenWithRetryPolicyAsync(path).Result)
    {
        using (var reader = new StreamReader(file))
        {
            // Let's assume that the first line contains the number of entries.
            // Using int.Parse for simplicity's sake.
            var numberOfEntries = int.Parse(reader.ReadLine());

            for (int entry = 0; entry < numberOfEntries; entry++)
            {
                yield return ReadAndParseAsync(reader);
            }
        }
    }
}

private static async Task<int> ReadAndParseAsync(StreamReader reader)
{
    string line = await reader.ReadLineAsync();
    return int.Parse(line);
}

public static IEnumerable<Task<int>> ParseAndLogIfNeeded(string path)
{
    try
    {
        return ParseFile(path);
    }
    catch (Exception e)
    {
        Console.WriteLine($"Failed to parse the file: {e.Message}");
        return Enumerable.Empty<Task<int>>();
    }
}
```

The code of the OpenWithRetryPolicyAsync method is not relevant to our discussion, so I moved it to the end of the blog post.

  1. Exception handling and iterator blocks

Iterator blocks are quite different from regular methods. If a method returns IEnumerable<T> or IEnumerator<T> and has a yield return in it, the compiler completely changes the semantics of the method. The method becomes lazy: its body runs not when the method is called, but when the sequence is consumed.

This behavior may catch you off-guard especially when the result of the iterator block is "transformed" using other lazily evaluated functions, like LINQ operators:

```csharp
var s1 = ParseFile(null); // no exceptions

var s2 = s1.Select(t => t.GetAwaiter().GetResult()).Where(r => r == 42); // no exceptions

if (s2.Any()) // throws an exception, potentially in a completely different subsystem!
{
}
```

This means that the ParseFile method never throws when it is called, so the catch block in ParseAndLogIfNeeded is effectively unreachable and will never handle any errors.
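To see the laziness in isolation, here is a minimal, self-contained sketch (the hypothetical Parse method simply mirrors ParseFile's argument check):

```csharp
using System;
using System.Collections.Generic;

static IEnumerable<int> Parse(string path)
{
    // An iterator block: this check does NOT run when the method is called.
    if (path == null) throw new ArgumentNullException(nameof(path));
    yield return 42;
}

var sequence = Parse(null); // no exception: the body has not started executing

bool threw = false;
try
{
    foreach (var _ in sequence) { } // the ArgumentNullException surfaces only here
}
catch (ArgumentNullException)
{
    threw = true;
}
Console.WriteLine(threw); // True
```

A try/catch wrapped around the call to Parse itself would never fire, just like the catch in ParseAndLogIfNeeded.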

2. Observing ex.Message is not enough

Exceptions are like ogres: they have layers. In many cases, exceptions are nested, with absolutely unactionable information in the top-level Message property.

Let's suppose we resolved the laziness issue by calling .ToList(): try { return ParseFile(path).ToList(); }.

If the given path does not exist, OpenWithRetryPolicyAsync throws FileNotFoundException. But because the method is asynchronous, the actual error will be wrapped in an AggregateException, and the log (the console in this case) will contain a very useful error message: "One or more errors occurred".

And AggregateException is not the only example: TypeLoadException and TargetInvocationException never contain relevant information in the Message property. In other cases, you need the stack trace or an inner exception to understand the root cause of an issue.

In a perfect world, you would never catch System.Exception at all, but in reality this suggestion isn't practical. So if you end up catching a generic exception type without re-throwing it, at least trace the full exception instance. Believe me, you'll save yourself hours of investigation time in the future.
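A small sketch of the layering (the failed task here stands in for the FileNotFoundException case; exact message text varies between runtimes, which is exactly why tracing the full instance matters):

```csharp
using System;
using System.Threading.Tasks;

// A task that has already failed, standing in for the file-open failure.
Task task = Task.FromException(new InvalidOperationException("parse failed"));

Exception caught = null;
try
{
    task.Wait(); // blocking on a faulted task wraps the error in an AggregateException
}
catch (Exception e)
{
    caught = e;
    Console.WriteLine(e.Message);    // only the top layer of the onion
    Console.WriteLine(e.ToString()); // full type, inner exceptions and stack trace
}
```

Logging e.ToString() (or the exception object itself) preserves the InvalidOperationException buried inside; logging e.Message alone may not.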

3. Prefer calling GetAwaiter().GetResult() over .Result

Blocking on asynchronous operations is another anti-pattern that should never be used. But, like the anti-pattern mentioned above, in practice you have to use it from time to time to avoid the viral asynchrony of the code.

There are three ways to synchronously wait for the result of a task-based operation: .Result, .Wait() and GetAwaiter().GetResult(). The first two throw AggregateException if the task fails, whereas GetAwaiter().GetResult() unwraps and throws the first exception from the task's AggregateException.

In some rare cases a task is backed by more than one operation, and AggregateException is exactly what you need. But in the majority of cases, a task is backed by a single asynchronous operation that can produce only one error, and propagating that error directly simplifies exception handling a lot, especially when the caller does not expect an AggregateException at all.
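A minimal demonstration of the difference, using a task that has already failed:

```csharp
using System;
using System.Threading.Tasks;

var failed = Task.FromException<int>(new InvalidOperationException("boom"));

// .Result surfaces the wrapper type.
Exception viaResult = null;
try { _ = failed.Result; }
catch (Exception e) { viaResult = e; }

// GetAwaiter().GetResult() surfaces the original exception.
Exception viaGetResult = null;
try { _ = failed.GetAwaiter().GetResult(); }
catch (Exception e) { viaGetResult = e; }

Console.WriteLine(viaResult.GetType().Name);    // AggregateException
Console.WriteLine(viaGetResult.GetType().Name); // InvalidOperationException
```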

4. Using blocks and the lifetime of asynchronous operations

All the issues we discussed before are important but more or less obvious for experienced developers. The last one is a bit subtler.

Let's take a closer look at the lifetime of the file variable in ParseFile. The compiler generates a state machine and calls a generated "finally block" to close the file once all the items of the sequence have been produced. But the problem is that the method "yields" tasks (not values), and if a task does not complete synchronously, it may try to read the file after the file is already closed!

Here is what may happen if the caller of ParseFile calls ToList() on the result to eagerly obtain all the elements of the sequence and the tasks do not complete synchronously:

Step 1: Opening the file
Step 2: Yielding Task0
Step 3: Yielding Task1
Step 4: Closing the file
Steps 5-6: Tasks 0 and 1 read the file and fail with ObjectDisposedException
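This ordering can be reproduced deterministically with a contrived sketch; the SemaphoreSlim gate exists only to force the yielded task to run after the file is closed:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

var path = Path.GetTempFileName();
File.WriteAllLines(path, new[] { "42" });

// The gate keeps the yielded task from reading until after the file is closed.
var gate = new SemaphoreSlim(0);

IEnumerable<Task<string>> ReadLines(StreamReader reader)
{
    yield return Task.Run(async () =>
    {
        await gate.WaitAsync();
        return reader.ReadLine(); // runs after the caller's using-block has disposed the reader
    });
}

List<Task<string>> tasks;
using (var reader = new StreamReader(path))
{
    tasks = ReadLines(reader).ToList(); // steps 1-3: open the file and yield the task
} // step 4: the file is closed here, but the task has not read it yet

gate.Release(); // steps 5-6: the task now reads from a disposed reader

Exception caught = null;
try { tasks[0].GetAwaiter().GetResult(); }
catch (ObjectDisposedException e) { caught = e; }
Console.WriteLine(caught?.GetType().Name); // ObjectDisposedException
```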

The issue may lurk in your code for years and manifest itself in weird ways. For instance, the code may work fine on Windows for small files, when ReadLineAsync() returns synchronously because the data is prefetched, but fail in other cases when ReadLineAsync does actual IO. (This is exactly how we discovered the issue: once we ran the code on macOS, we started getting the errors consistently.)

How to solve the main issue?

To be honest, I'm not a big fan of IEnumerable<Task>. The main issue, from my perspective, is that the semantics of such a method are not clear. Just by looking at a method signature, it is hard to tell whether the sequence is "hot" or "cold". If the sequence is "cold" and lazy, the tasks in the sequence are initiated one by one. If the sequence is "hot" and backed by a collection, all the tasks are already running.

"The right" approach depends on your needs. If clients will use all the items of the result all the time, then just use Task<List<T>>. As the author of a function, you should think about the ease of use and clarity of your functions. Types can be a useful source of information, and you should try to express your intent as clearly as possible.
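For example, here is a hedged sketch of an eager Task<List<T>> alternative to ParseFile (same parsing logic, but no task escapes the using block, so the file stays open until everything is read):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

// Eager alternative: the reader is guaranteed to stay open
// until every line has been read and parsed.
static async Task<List<int>> ParseFileAsync(string path)
{
    if (string.IsNullOrEmpty(path)) throw new ArgumentNullException(nameof(path));

    var result = new List<int>();
    using (var reader = new StreamReader(path))
    {
        // First line holds the number of entries, as in the original example.
        var numberOfEntries = int.Parse(await reader.ReadLineAsync());
        for (int entry = 0; entry < numberOfEntries; entry++)
        {
            result.Add(int.Parse(await reader.ReadLineAsync()));
        }
    }
    return result;
}
```

Because the method is async all the way through, the argument check also throws immediately when the method is called is awaited, and no caller can observe a disposed reader.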

Asynchronous sequences can be useful in some cases. For instance, you may use IEnumerable<Task> if the sequence contains many elements, obtaining the next element is expensive, and clients may need only the first few. But in that case, you should explain your intent in comments or, even better, use something like IAsyncEnumerable<T>.

The idea of async streams is not new, and you may find types similar to IAsyncEnumerable<T> in many projects. The idiom is so widely used that the C# language authors are considering a language feature for consuming async streams with await foreach syntax. (For more information, see the Async Streams proposal.)

The source code of OpenWithRetryPolicyAsync

```csharp
private class RetryIfFileNotFoundPolicy : ITransientErrorDetectionStrategy
{
    public bool IsTransient(Exception ex) => ex is FileNotFoundException;
}

public static async Task<FileStream> OpenWithRetryPolicyAsync(string path)
{
    // Try opening the file more than once.
    return await new RetryPolicy<RetryIfFileNotFoundPolicy>(retryCount: 5)
        .ExecuteAsync(() => Task.FromResult(new FileStream(path, FileMode.Open)));
}
```

Wednesday, 05 September 2018 / Published in Uncategorized

Read full article on the Infragistics blog

At my job, I get the opportunity to talk to many .NET developers who want to learn Angular. Often, I've seen them bring their .NET skills and try to map them directly onto learning Angular. While the effort and drive to learn are there, Angular is not .NET.

Since Angular is a pure JavaScript framework, I'll simplify basic but important Angular concepts for .NET developers in this post series. In this article, we'll learn about data binding in Angular. Luckily, data binding in Angular is much simpler than in .NET.

First, let's review some data binding techniques in .NET. For example, in ASP.NET MVC, you do data binding using a model. A view can be bound:

  1. To an object
  2. To a complex object
  3. To a collection of objects

Essentially, in ASP.NET MVC, you bind to a model class. In WPF, on the other hand, you have several data binding modes available, which you can set in XAML:

  1. One-way data binding
  2. Two-way data binding
  3. One-time data binding
  4. One-way to source data binding

If you are following the MVVM pattern, you might be using the INotifyPropertyChanged interface to achieve two-way data binding. So there are many ways data binding is achieved in the world of .NET.
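For reference, here is a minimal (hypothetical) view model showing the INotifyPropertyChanged plumbing that WPF two-way binding relies on; the binding engine subscribes to PropertyChanged and refreshes the view when the setter raises it:

```csharp
using System.ComponentModel;

// Hypothetical view model for illustration: the property setter raises
// PropertyChanged so a two-way binding can observe updates.
public class PersonViewModel : INotifyPropertyChanged
{
    private string _name;

    public event PropertyChangedEventHandler PropertyChanged;

    public string Name
    {
        get => _name;
        set
        {
            if (_name == value) return;
            _name = value;
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Name)));
        }
    }
}
```

Every bindable property needs this boilerplate (or a base class that hides it), which is part of why Angular's binding feels lighter by comparison.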

Data binding in Angular, however, is much simpler.

If you are brand new to Angular, let me introduce you to components. In Angular applications, what you see in the browser (or elsewhere) is a component. A component consists of the following parts:

  1. A TypeScript class, called the component class
  2. An HTML file, called the template of the component
  3. An optional CSS file for styling the component


Read full article on the Infragistics blog

Published by Dhananjay Kumar

Dhananjay Kumar is a Developer Evangelist for Infragistics. He is an 8-time Microsoft MVP and a well-respected Developer Advocate in India. He is the author of 900+ blog posts and can often be found speaking at conferences around India and hosting free workshops for programmers across the country. So far, he has hosted 60 free workshops on topics like JavaScript, Angular, WCF, ASP.NET MVC, C#, Azure, etc. Follow him on Twitter @debug_mode for updates about his blog posts and workshops. You can send him email at debugmode [at] outlook [dot] com

Monday, 03 September 2018 / Published in Uncategorized

Restoring your beliefs against this technological adoption


Blockchain has become a common term in the tech industry, not only because of all the revolutions and markets it has created in just eight years of existence, but because of all the advances it has yet to deliver. It has three broad versions:

Blockchain 1.0, which has implementations in the form of cryptocurrencies like Bitcoin and Ether;

Blockchain 2.0, which comprises applications in the Fintech industry; and

Blockchain 3.0, which will have more general applications, such as tracking land ownership and settling property disputes via blockchain-based consensus.

Version 1.0 laid the groundwork for the use of the blockchain protocol, with Bitcoin as its very first use case. As the usage of Bitcoin keeps increasing, the underlying protocol keeps improving with time. Bitcoin is a cryptocurrency that runs on the blockchain protocol and is regarded as the proof of concept for it. Bitcoin uses Proof of Work to verify a participant's stake in the network, while Ethereum, another popular implementation of the blockchain protocol, verifies stake using Proof of Stake. These are the leaders of the blockchain ecosystem right now and are more than five years old.

This generation of development has played an integral part in building the blockchain ecosystem and has paved the way for much bigger implementations. Applications of blockchain are already past version 1.0, with hundreds of cryptocurrencies and related ICOs (Initial Coin Offerings) in existence. Now it's time to revolutionize the Fintech industry using Blockchain 2.0-based applications.

Is Conventional Banking at stake?

There have already been some early adopters of this technology, like Santander, which has found more than 25 use cases for blockchain in its banking platform. The banking industry runs on the trust and loyalty of the general public, so banks must use technologies that are thoroughly tested and have a near-zero failure rate: if transactions or other banking processes run on erroneous systems, banks can't be trusted by anyone. This is the main reason banks have remained unchanged for so long and are resistant to changing their technology stack when it comes to adopting new technology.


But this time it is not a question of whether they should move to blockchain, but when. Looking at the promises and applications of blockchain in the banking sector, it is clear they don't have a choice: they have to shift to blockchain if they want to survive this century. Let me tell you why this is the case. But first, let's understand what banks currently use to serve their customers.

Blockchain to Banking Solutions

Banks today generally run on digital platforms built on Java or other such languages. They have an interface for their employees and an organization-wide access control system that defines what an employee can or cannot do. So, for example, a cashier can deduct or add an amount in the system at the request of the respective account holder. But there is a major loophole in this system: the cashier. Think about a situation where a bank cashier goes rogue and decides to deduct a certain amount from a customer's account. Of course, there is a human check on whether the cashier has performed an illegal transaction, but it happens only after the fact. The system, the technology itself, can do nothing to stop or flag such actions.


Here the features of blockchain come to the rescue. If such a transaction had been performed on a blockchain-based platform, it would have been stopped by the consensus of the nodes involved, and the cashier would have been identified without even conducting an inquiry. This is possible because the system is decentralized and trustless.

Decentralized means no individual owns or runs it. Nobody has complete ownership over everyone else; instead, everyone plays a role in making decisions for the whole system. Each individual has equal voting rights on whether to take an action, and any individual who is part of the blockchain can propose a change that everyone can vote to accept or reject. This makes it a perfect system for banks, where people can actually own their money rather than handing it to a central authority that makes all the decisions while the people who actually fund the bank have no say.

Secondly, blockchain provides a trustless system for managing assets and property. This means you don't have to trust anyone else to be sure your transaction is verified. All you have to do is perform a transaction, and the blockchain-based platform itself will ensure that it is valid and will notify you if an invalid transaction occurs from or to your account.

Returning to the cashier example, these two features would render any fraudulent transaction invalid and stop any harm to the victim's account. This reduces the cost of inquiries and human-based checks to almost nothing and helps establish a transparent system for property management. There has never been such a revolutionary system in human history, and that is why people in the sector are both excited and spooked by it.

Another way blockchain helps is that it is inherently open. You don't have to ask anyone's permission to become part of the network: if you have a machine or computing resources, you can easily become a node and contribute to running it. This makes blockchain even more compelling, because it is above discrimination and bias. It treats all nodes the same and acts only on the basis of their proof of work or proof of stake. No other node or centralized actor has the power to make and enforce decisions over you or your assets.



In the current scenario, if you want to open a bank account, you have to go through a tedious process of norms and regulations. You also have to put your faith in the bank to protect your life savings and do the right thing with them. But this system is flawed by its very nature. Banks don't need to hold all this information about the account holder, and the account holder shouldn't have to put faith in the bank. It is your money; you should own it. This is what blockchain offers: if a banking system runs on blockchain, you can just copy the metadata and get started with a bank account. It's that simple, and yet revolutionary.

Just imagine the implications of a platform with the above features in banking. This is what makes blockchain not an option but a necessity for banks. They can, of course, delay the switch, but the switch to blockchain is inevitable, and industry experts agree. That is why the top 10 to 20 banks of the world are investing heavily in research on this technology.

Asking for Blockchain to your daily Banking Solutions was originally published in Hacker Noon on Medium, where people are continuing the conversation by highlighting and responding to this story.

Friday, 31 August 2018 / Published in Uncategorized

Creating a Data Access Layer (C#)

  • 04/05/2010

by Scott Mitchell


In this tutorial we’ll start from the very beginning and create the Data Access Layer (DAL), using typed DataSets, to access the information in a database.


As web developers, our lives revolve around working with data. We create databases to store the data, code to retrieve and modify it, and web pages to collect and summarize it. This is the first tutorial in a lengthy series that will explore techniques for implementing these common patterns in ASP.NET 2.0. We’ll start with creating a software architecture composed of a Data Access Layer (DAL) using Typed DataSets, a Business Logic Layer (BLL) that enforces custom business rules, and a presentation layer composed of ASP.NET pages that share a common page layout. Once this backend groundwork has been laid, we’ll move into reporting, showing how to display, summarize, collect, and validate data from a web application. These tutorials are geared to be concise and provide step-by-step instructions with plenty of screen shots to walk you through the process visually. Each tutorial is available in C# and Visual Basic versions and includes a download of the complete code used. (This first tutorial is quite lengthy, but the rest are presented in much more digestible chunks.)

For these tutorials we’ll be using a Microsoft SQL Server 2005 Express Edition version of the Northwind database placed in the App_Data directory. In addition to the database file, the App_Data folder also contains the SQL scripts for creating the database, in case you want to use a different database version. These scripts can also be downloaded directly from Microsoft, if you’d prefer. If you use a different SQL Server version of the Northwind database, you will need to update the NORTHWNDConnectionString setting in the application’s Web.config file. The web application was built using Visual Studio 2005 Professional Edition as a file system-based Web site project. However, all of the tutorials will work equally well with the free version of Visual Studio 2005, Visual Web Developer.

In this tutorial we’ll start from the very beginning and create the Data Access Layer (DAL), followed by creating the Business Logic Layer (BLL) in the second tutorial, and working on page layout and navigation in the third. The tutorials after the third one will build upon the foundation laid in the first three. We’ve got a lot to cover in this first tutorial, so fire up Visual Studio and let’s get started!

Step 1: Creating a Web Project and Connecting to the Database

Before we can create our Data Access Layer (DAL), we first need to create a web site and set up our database. Start by creating a new file system-based ASP.NET web site. To accomplish this, go to the File menu and choose New Web Site, displaying the New Web Site dialog box. Choose the ASP.NET Web Site template, set the Location drop-down list to File System, choose a folder to place the web site, and set the language to C#.

Figure 1: Create a New File System-Based Web Site

This will create a new web site with a Default.aspx ASP.NET page and an App_Data folder.

With the web site created, the next step is to add a reference to the database in Visual Studio’s Server Explorer. By adding a database to the Server Explorer you can add tables, stored procedures, views, and so on all from within Visual Studio. You can also view table data or create your own queries either by hand or graphically via the Query Builder. Furthermore, when we build the Typed DataSets for the DAL we’ll need to point Visual Studio to the database from which the Typed DataSets should be constructed. While we can provide this connection information at that point in time, Visual Studio automatically populates a drop-down list of the databases already registered in the Server Explorer.

The steps for adding the Northwind database to the Server Explorer depend on whether you want to use the SQL Server 2005 Express Edition database in the App_Data folder or if you have a Microsoft SQL Server 2000 or 2005 database server setup that you want to use instead.

Using a Database in the App_Data Folder

If you do not have a SQL Server 2000 or 2005 database server to connect to, or you simply want to avoid having to add the database to a database server, you can use the SQL Server 2005 Express Edition version of the Northwind database that is located in the downloaded website’s App_Data folder (NORTHWND.MDF).

A database placed in the App_Data folder is automatically added to the Server Explorer. Assuming you have SQL Server 2005 Express Edition installed on your machine, you should see a node named NORTHWND.MDF in the Server Explorer, which you can expand to explore its tables, views, stored procedures, and so on (see Figure 2).

The App_Data folder can also hold Microsoft Access .mdb files, which, like their SQL Server counterparts, are automatically added to the Server Explorer. If you don’t want to use any of the SQL Server options, you can always download a Microsoft Access version of the Northwind database file and drop it into the App_Data directory. Keep in mind, however, that Access databases aren’t as feature-rich as SQL Server, and aren’t designed to be used in web site scenarios. Furthermore, a couple of the 35+ tutorials will utilize certain database-level features that aren’t supported by Access.

Connecting to the Database in a Microsoft SQL Server 2000 or 2005 Database Server

Alternatively, you may connect to a Northwind database installed on a database server. If the database server does not already have the Northwind database installed, you first must add it to the database server by running the installation script included in this tutorial’s download or by downloading the SQL Server 2000 version of Northwind and installation script directly from Microsoft’s web site.

Once you have the database installed, go to the Server Explorer in Visual Studio, right-click on the Data Connections node, and choose Add Connection. If you don’t see the Server Explorer go to the View / Server Explorer, or hit Ctrl+Alt+S. This will bring up the Add Connection dialog box, where you can specify the server to connect to, the authentication information, and the database name. Once you have successfully configured the database connection information and clicked the OK button, the database will be added as a node underneath the Data Connections node. You can expand the database node to explore its tables, views, stored procedures, and so on.

Figure 2: Add a Connection to Your Database Server’s Northwind Database

Step 2: Creating the Data Access Layer

When working with data, one option is to embed the data-specific logic directly into the presentation layer (in a web application, the ASP.NET pages make up the presentation layer). This may take the form of writing ADO.NET code in the ASP.NET page’s code portion or using the SqlDataSource control from the markup portion. In either case, this approach tightly couples the data access logic with the presentation layer. The recommended approach, however, is to separate the data access logic from the presentation layer. This separate layer is referred to as the Data Access Layer, DAL for short, and is typically implemented as a separate Class Library project. The benefits of this layered architecture are well documented (see the "Further Readings" section at the end of this tutorial for information on these advantages), and it is the approach we will take in this series.

All code that is specific to the underlying data source such as creating a connection to the database, issuing SELECT, INSERT, UPDATE, and DELETE commands, and so on should be located in the DAL. The presentation layer should not contain any references to such data access code, but should instead make calls into the DAL for any and all data requests. Data Access Layers typically contain methods for accessing the underlying database data. The Northwind database, for example, has Products and Categories tables that record the products for sale and the categories to which they belong. In our DAL we will have methods like:

  • GetCategories(), which will return information about all of the categories
  • GetProducts(), which will return information about all of the products
  • GetProductsByCategoryID(categoryID), which will return all products that belong to a specified category
  • GetProductByProductID(productID), which will return information about a particular product

These methods, when invoked, will connect to the database, issue the appropriate query, and return the results. How we return these results is important. These methods could simply return a DataSet or DataReader populated by the database query, but ideally these results should be returned using strongly-typed objects. A strongly-typed object is one whose schema is rigidly defined at compile time, whereas the opposite, a loosely-typed object, is one whose schema is not known until runtime.

For example, the DataReader and the DataSet (by default) are loosely-typed objects since their schema is defined by the columns returned by the database query used to populate them. To access a particular column from a loosely-typed DataTable we need to use syntax like: DataTable.Rows[index]["columnName"]. The DataTable’s loose typing in this example is exhibited by the fact that we need to access the column name using a string or ordinal index. A strongly-typed DataTable, on the other hand, will have each of its columns implemented as properties, resulting in code that looks like: DataTable.Rows[index].columnName.
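The contrast can be demonstrated with plain ADO.NET types. The following sketch builds a small DataTable by hand and pairs the loosely-typed lookup with a minimal hand-written strongly-typed wrapper; the generated Typed DataSet produces equivalent column properties automatically, so the `ProductRow` class here is purely illustrative:

```csharp
using System;
using System.Data;

class LooseVsStrongDemo
{
    // A minimal hand-written strongly-typed wrapper; the real Typed DataSet
    // generates an equivalent property for every column automatically.
    class ProductRow
    {
        private readonly DataRow _row;
        public ProductRow(DataRow row) { _row = row; }
        public string ProductName { get { return (string)_row["ProductName"]; } }
    }

    static void Main()
    {
        DataTable products = new DataTable();
        products.Columns.Add("ProductName", typeof(string));
        products.Rows.Add("Chai");

        // Loosely-typed access: the column is looked up by name at runtime,
        // so a typo in "ProductName" fails only when the code executes.
        string looseName = (string)products.Rows[0]["ProductName"];

        // Strongly-typed access: the property name is checked at compile time.
        string strongName = new ProductRow(products.Rows[0]).ProductName;

        Console.WriteLine(looseName == strongName); // both read the same value
    }
}
```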

To return strongly-typed objects, developers can either create their own custom business objects or use Typed DataSets. A business object is implemented by the developer as a class whose properties typically reflect the columns of the underlying database table the business object represents. A Typed DataSet is a class generated for you by Visual Studio based on a database schema and whose members are strongly-typed according to this schema. The Typed DataSet itself consists of classes that extend the ADO.NET DataSet, DataTable, and DataRow classes. In addition to strongly-typed DataTables, Typed DataSets now also include TableAdapters, which are classes with methods for populating the DataSet’s DataTables and propagating modifications within the DataTables back to the database.

We’ll use strongly-typed DataSets for these tutorials’ architecture. Figure 3 illustrates the workflow between the different layers of an application that uses Typed DataSets.

Figure 3: All Data Access Code is Relegated to the DAL (Click to view full-size image)

Creating a Typed DataSet and Table Adapter

To begin creating our DAL, we start by adding a Typed DataSet to our project. To accomplish this, right-click on the project node in the Solution Explorer and choose Add New Item. Select the DataSet option from the list of templates and name it Northwind.xsd.

Figure 4: Choose to Add a New DataSet to Your Project (Click to view full-size image)

After clicking Add, when prompted to add the DataSet to the App_Code folder, choose Yes. The Designer for the Typed DataSet will then be displayed, and the TableAdapter Configuration Wizard will start, allowing you to add your first TableAdapter to the Typed DataSet.

A Typed DataSet serves as a strongly-typed collection of data; it is composed of strongly-typed DataTable instances, each of which is in turn composed of strongly-typed DataRow instances. We will create a strongly-typed DataTable for each of the underlying database tables that we need to work with in this tutorial series. Let’s start with creating a DataTable for the Products table.

Keep in mind that strongly-typed DataTables do not include any information on how to access data from their underlying database table. In order to retrieve the data to populate the DataTable, we use a TableAdapter class, which functions as our Data Access Layer. For our Products DataTable, the TableAdapter will contain the methods GetProducts(), GetProductsByCategoryID(categoryID), and so on that we’ll invoke from the presentation layer. The DataTable’s role is to serve as the strongly-typed object used to pass data between the layers.

The TableAdapter Configuration Wizard begins by prompting you to select which database to work with. The drop-down list shows those databases in the Server Explorer. If you did not add the Northwind database to the Server Explorer, you can click the New Connection button at this time to do so.

Figure 5: Choose the Northwind Database from the Drop-Down List (Click to view full-size image)

After selecting the database and clicking Next, you’ll be asked if you want to save the connection string in the Web.config file. By saving the connection string you’ll avoid having it hard coded in the TableAdapter classes, which simplifies things if the connection string information changes in the future. If you opt to save the connection string in the configuration file it’s placed in the <connectionStrings> section, which can be optionally encrypted for improved security or modified later through the new ASP.NET 2.0 Property Page within the IIS GUI Admin Tool, which is convenient for administrators.
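Saved this way, the connection string ends up in a Web.config section along the following lines (the connection name and database path are placeholders here; the wizard chooses its own based on your database):

```xml
<configuration>
  <connectionStrings>
    <add name="NORTHWNDConnectionString"
         connectionString="Data Source=.\SQLEXPRESS;AttachDbFilename=|DataDirectory|\NORTHWND.MDF;Integrated Security=True;User Instance=True"
         providerName="System.Data.SqlClient" />
  </connectionStrings>
</configuration>
```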

Figure 6: Save the Connection String to Web.config (Click to view full-size image)

Next, we need to define the schema for the first strongly-typed DataTable and provide the first method for our TableAdapter to use when populating the strongly-typed DataSet. These two steps are accomplished simultaneously by creating a query that returns the columns from the table that we want reflected in our DataTable. At the end of the wizard we’ll give a method name to this query. Once that’s been accomplished, this method can be invoked from our presentation layer. The method will execute the defined query and populate a strongly-typed DataTable.

To get started defining the SQL query we must first indicate how we want the TableAdapter to issue the query. We can use an ad-hoc SQL statement, create a new stored procedure, or use an existing stored procedure. For these tutorials we’ll use ad-hoc SQL statements. Refer to Brian Noyes's article, Build a Data Access Layer with the Visual Studio 2005 DataSet Designer, for an example of using stored procedures.

Figure 7: Query the Data Using an Ad-Hoc SQL Statement (Click to view full-size image)

At this point we can type in the SQL query by hand. When creating the first method in the TableAdapter you typically want to have the query return those columns that need to be expressed in the corresponding DataTable. We can accomplish this by creating a query that returns all columns and all rows from the Products table:

Figure 8: Enter the SQL Query Into the Textbox (Click to view full-size image)

Alternatively, use the Query Builder and graphically construct the query, as shown in Figure 9.

Figure 9: Create the Query Graphically, through the Query Editor (Click to view full-size image)

After creating the query, but before moving onto the next screen, click the Advanced Options button. In Web Site Projects, "Generate Insert, Update, and Delete statements" is the only advanced option selected by default; if you run this wizard from a Class Library or a Windows Project the "Use optimistic concurrency" option will also be selected. Leave the "Use optimistic concurrency" option unchecked for now. We’ll examine optimistic concurrency in future tutorials.

Figure 10: Select Only the Generate Insert, Update, and Delete statements Option (Click to view full-size image)

After verifying the advanced options, click Next to proceed to the final screen. Here we are asked to select which methods to add to the TableAdapter. There are two patterns for populating data:

  • Fill a DataTable: with this approach a method is created that takes in a DataTable as a parameter and populates it based on the results of the query. The ADO.NET DataAdapter class, for example, implements this pattern with its Fill() method.
  • Return a DataTable: with this approach the method creates and fills the DataTable for you and returns it as the method's return value.

You can have the TableAdapter implement one or both of these patterns. You can also rename the methods provided here. Let’s leave both checkboxes checked, even though we’ll only be using the latter pattern throughout these tutorials. Also, let’s rename the rather generic GetData method to GetProducts.

If checked, the final checkbox, "GenerateDBDirectMethods," creates Insert(), Update(), and Delete() methods for the TableAdapter. If you leave this option unchecked, all updates will need to be done through the TableAdapter’s sole Update() method, which takes in the Typed DataSet, a DataTable, a single DataRow, or an array of DataRows. (If you’ve unchecked the "Generate Insert, Update, and Delete statements" option from the advanced properties in Figure 10 this checkbox’s setting will have no effect.) Let’s leave this checkbox selected.

Figure 11: Change the Method Name from GetData to GetProducts (Click to view full-size image)

Complete the wizard by clicking Finish. After the wizard closes we are returned to the DataSet Designer which shows the DataTable we just created. You can see the list of columns in the Products DataTable (ProductID, ProductName, and so on), as well as the methods of the ProductsTableAdapter (Fill() and GetProducts()).

Figure 12: The Products DataTable and ProductsTableAdapter have been Added to the Typed DataSet (Click to view full-size image)

At this point we have a Typed DataSet with a single DataTable (Northwind.Products) and a strongly-typed TableAdapter class (NorthwindTableAdapters.ProductsTableAdapter) with a GetProducts() method. These objects can be used to access a list of all products from code like:

NorthwindTableAdapters.ProductsTableAdapter productsAdapter =
    new NorthwindTableAdapters.ProductsTableAdapter();
Northwind.ProductsDataTable products;
products = productsAdapter.GetProducts();
foreach (Northwind.ProductsRow productRow in products)
    Response.Write("Product: " + productRow.ProductName + "<br />");

This code did not require us to write one bit of data access-specific code. We did not have to instantiate any ADO.NET classes, we didn’t have to refer to any connection strings, SQL queries, or stored procedures. Instead, the TableAdapter provides the low-level data access code for us.

Each object used in this example is also strongly-typed, allowing Visual Studio to provide IntelliSense and compile-time type checking. And best of all, the DataTables returned by the TableAdapter can be bound to ASP.NET data Web controls, such as the GridView, DetailsView, DropDownList, CheckBoxList, and several others. The following example illustrates binding the DataTable returned by the GetProducts() method to a GridView in just three lines of code within the Page_Load event handler.


<%@ Page Language="C#" AutoEventWireup="true" CodeFile="AllProducts.aspx.cs"
    Inherits="AllProducts" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<head runat="server">
    <title>View All Products in a GridView</title>
    <link href="Styles.css" rel="stylesheet" type="text/css" />
</head>
<body>
    <form id="form1" runat="server">
        <h1>All Products</h1>
        <asp:GridView ID="GridView1" runat="server" />
    </form>
</body>
</html>


using System;
using System.Data;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using NorthwindTableAdapters;
public partial class AllProducts : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        ProductsTableAdapter productsAdapter = new ProductsTableAdapter();
        GridView1.DataSource = productsAdapter.GetProducts();
        GridView1.DataBind();
    }
}

Figure 13: The List of Products is Displayed in a GridView (Click to view full-size image)

While this example required that we write three lines of code in our ASP.NET page’s Page_Load event handler, in future tutorials we’ll examine how to use the ObjectDataSource to declaratively retrieve the data from the DAL. With the ObjectDataSource we’ll not have to write any code and will get paging and sorting support as well!

Step 3: Adding Parameterized Methods to the Data Access Layer

At this point our ProductsTableAdapter class has but one method, GetProducts(), which returns all of the products in the database. While being able to work with all products is definitely useful, there are times when we’ll want to retrieve information about a specific product, or all products that belong to a particular category. To add such functionality to our Data Access Layer we can add parameterized methods to the TableAdapter.

Let’s add the GetProductsByCategoryID(categoryID) method. To add a new method to the DAL, return to the DataSet Designer, right-click in the ProductsTableAdapter section, and choose Add Query.

Figure 14: Right-Click on the TableAdapter and Choose Add Query

We are first prompted about whether we want to access the database using an ad-hoc SQL statement or a new or existing stored procedure. Let’s choose to use an ad-hoc SQL statement again. Next, we are asked what type of SQL query we’d like to use. Since we want to return all products that belong to a specified category, we want to write a SELECT statement which returns rows.

Figure 15: Choose to Create a SELECT Statement Which Returns Rows (Click to view full-size image)

The next step is to define the SQL query used to access the data. Since we want to return only those products that belong to a particular category, we use the same SELECT statement from GetProducts(), but add the following WHERE clause: WHERE CategoryID = @CategoryID. The @CategoryID parameter indicates to the TableAdapter wizard that the method we’re creating will require an input parameter of the corresponding type (namely, a nullable integer).
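The generated method applies this filter in SQL on the server. Purely to illustrate the effect of the WHERE clause, the following standalone sketch applies the equivalent filter to an in-memory, hand-built DataTable (this is not the generated DAL, just a demonstration of the filtering semantics):

```csharp
using System;
using System.Data;

class CategoryFilterDemo
{
    static void Main()
    {
        DataTable products = new DataTable();
        products.Columns.Add("ProductName", typeof(string));
        products.Columns.Add("CategoryID", typeof(int));
        products.Rows.Add("Chai", 1);
        products.Rows.Add("Chang", 1);
        products.Rows.Add("Aniseed Syrup", 2);

        // Equivalent in spirit to WHERE CategoryID = @CategoryID, with @CategoryID = 1
        DataRow[] beverages = products.Select("CategoryID = 1");
        Console.WriteLine(beverages.Length); // 2
    }
}
```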

Figure 16: Enter a Query to Only Return Products in a Specified Category (Click to view full-size image)

In the final step we can choose which data access patterns to use, as well as customize the names of the methods generated. For the Fill pattern, let’s change the name to FillByCategoryID, and for the return-a-DataTable pattern (the GetX methods), let’s use GetProductsByCategoryID.

Figure 17: Choose the Names for the TableAdapter Methods (Click to view full-size image)

After completing the wizard, the DataSet Designer includes the new TableAdapter methods.

Figure 18: The Products Can Now be Queried by Category

Take a moment to add a GetProductByProductID(productID) method using the same technique.

These parameterized queries can be tested directly from the DataSet Designer. Right-click on the method in the TableAdapter and choose Preview Data. Next, enter the values to use for the parameters and click Preview.

Figure 19: Those Products Belonging to the Beverages Category are Shown (Click to view full-size image)

With the GetProductsByCategoryID(categoryID) method in our DAL, we can now create an ASP.NET page that displays only those products in a specified category. The following example shows all products that are in the Beverages category, which has a CategoryID of 1.


<%@ Page Language="C#" AutoEventWireup="true" CodeFile="Beverages.aspx.cs"
    Inherits="Beverages" %>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" >
<head runat="server">
    <title>Untitled Page</title>
    <link href="Styles.css" rel="stylesheet" type="text/css" />
</head>
<body>
    <form id="form1" runat="server">
        <asp:GridView ID="GridView1" runat="server" />
    </form>
</body>
</html>


using System;
using System.Data;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using NorthwindTableAdapters;
public partial class Beverages : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        ProductsTableAdapter productsAdapter = new ProductsTableAdapter();
        GridView1.DataSource = productsAdapter.GetProductsByCategoryID(1);
        GridView1.DataBind();
    }
}

Figure 20: Those Products in the Beverages Category are Displayed (Click to view full-size image)

Step 4: Inserting, Updating, and Deleting Data

There are two patterns commonly used for inserting, updating, and deleting data. The first pattern, which I’ll call the database direct pattern, involves creating methods that, when invoked, issue an INSERT, UPDATE, or DELETE command to the database that operates on a single database record. Such methods are typically passed a series of scalar values (integers, strings, Booleans, DateTimes, and so on) that correspond to the values to insert, update, or delete. For example, with this pattern for the Products table the delete method would take in an integer parameter, indicating the ProductID of the record to delete, while the insert method would take in a string for the ProductName, a decimal for the UnitPrice, an integer for the UnitsInStock, and so on.

Figure 21: Each Insert, Update, and Delete Request is Sent to the Database Immediately (Click to view full-size image)

The other pattern, which I’ll refer to as the batch update pattern, is to update an entire DataSet, DataTable, or collection of DataRows in one method call. With this pattern a developer deletes, inserts, and modifies the DataRows in a DataTable and then passes those DataRows or DataTable into an update method. This method then enumerates the DataRows passed in, determines whether or not they’ve been modified, added, or deleted (via the DataRow’s RowState property value), and issues the appropriate database request for each record.

Figure 22: All Changes are Synchronized with the Database When the Update Method is Invoked (Click to view full-size image)

The TableAdapter uses the batch update pattern by default, but also supports the DB direct pattern. Since we selected the "Generate Insert, Update, and Delete statements" option from the Advanced Properties when creating our TableAdapter, the ProductsTableAdapter contains an Update() method, which implements the batch update pattern. Specifically, the TableAdapter contains an Update() method that can be passed the Typed DataSet, a strongly-typed DataTable, or one or more DataRows. If you left the "GenerateDBDirectMethods" checkbox checked when first creating the TableAdapter the DB direct pattern will also be implemented via Insert(), Update(), and Delete() methods.

Both data modification patterns use the TableAdapter’s InsertCommand, UpdateCommand, and DeleteCommand properties to issue their INSERT, UPDATE, and DELETE commands to the database. You can inspect and modify the InsertCommand, UpdateCommand, and DeleteCommand properties by clicking on the TableAdapter in the DataSet Designer and then going to the Properties window. (Make sure you have selected the TableAdapter, and that the ProductsTableAdapter object is the one selected in the drop-down list in the Properties window.)

Figure 23: The TableAdapter has InsertCommand, UpdateCommand, and DeleteCommand Properties (Click to view full-size image)

To examine or modify any of these database command properties, click on the CommandText subproperty, which will bring up the Query Builder.

Figure 24: Configure the INSERT, UPDATE, and DELETE Statements in the Query Builder (Click to view full-size image)

The following code example shows how to use the batch update pattern to double the price of all products that are not discontinued and that have 25 units in stock or less:

NorthwindTableAdapters.ProductsTableAdapter productsAdapter =
  new NorthwindTableAdapters.ProductsTableAdapter();
// For each product, double its price if it is not discontinued and
// there are 25 items in stock or less
Northwind.ProductsDataTable products = productsAdapter.GetProducts();
foreach (Northwind.ProductsRow product in products)
   if (!product.Discontinued && product.UnitsInStock <= 25)
      product.UnitPrice *= 2;
// Update the products
productsAdapter.Update(products);

The code below illustrates how to use the DB direct pattern to programmatically delete a particular product, then update one, and then add a new one:

NorthwindTableAdapters.ProductsTableAdapter productsAdapter =
    new NorthwindTableAdapters.ProductsTableAdapter();
// Delete the product with ProductID 3
productsAdapter.Delete(3);
// Update Chai (ProductID of 1), setting the UnitsOnOrder to 15
productsAdapter.Update("Chai", 1, 1, "10 boxes x 20 bags",
  18.0m, 39, 15, 10, false, 1);
// Add a new product
productsAdapter.Insert("New Product", 1, 1,
  "12 tins per carton", 14.95m, 15, 0, 10, false);

Creating Custom Insert, Update, and Delete Methods

The Insert(), Update(), and Delete() methods created by the DB direct pattern can be a bit cumbersome, especially for tables with many columns. Looking at the previous code example, without IntelliSense’s help it’s not particularly clear which Products table column maps to each input parameter of the Update() and Insert() methods. There may be times when we only want to update a single column or two, or want a customized Insert() method that will, perhaps, return the value of the newly inserted record’s IDENTITY (auto-increment) field.

To create such a custom method, return to the DataSet Designer. Right-click on the TableAdapter and choose Add Query, returning to the TableAdapter wizard. On the second screen we can indicate the type of query to create. Let’s create a method that adds a new product and then returns the value of the newly added record’s ProductID. Therefore, opt to create an INSERT query.

Figure 25: Create a Method to Add a New Row to the Products Table (Click to view full-size image)

On the next screen the InsertCommand‘s CommandText appears. Augment this query by adding SELECT SCOPE_IDENTITY() at the end of the query, which will return the last identity value inserted into an IDENTITY column in the same scope. (See the technical documentation for more information about SCOPE_IDENTITY() and why you probably want to use SCOPE_IDENTITY() in lieu of @@IDENTITY.) Make sure that you end the INSERT statement with a semi-colon before adding the SELECT statement.
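The augmented CommandText follows this general shape (the exact column and parameter list is whatever the wizard generated for your table; this is a sketch, not the verbatim output):

```sql
INSERT INTO [Products] ([ProductName], [SupplierID], [CategoryID], [QuantityPerUnit],
    [UnitPrice], [UnitsInStock], [UnitsOnOrder], [ReorderLevel], [Discontinued])
VALUES (@ProductName, @SupplierID, @CategoryID, @QuantityPerUnit,
    @UnitPrice, @UnitsInStock, @UnitsOnOrder, @ReorderLevel, @Discontinued);
SELECT SCOPE_IDENTITY()
```

Note the semi-colon after the INSERT statement: without it the wizard treats the two statements as one malformed query.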

Figure 26: Augment the Query to Return the SCOPE_IDENTITY() Value (Click to view full-size image)

Finally, name the new method InsertProduct.

Figure 27: Set the New Method Name to InsertProduct (Click to view full-size image)

When you return to the DataSet Designer you’ll see that the ProductsTableAdapter contains a new method, InsertProduct. If this new method doesn’t have a parameter for each column in the Products table, chances are you forgot to terminate the INSERT statement with a semi-colon. Configure the InsertProduct method and ensure you have a semi-colon delimiting the INSERT and SELECT statements.

By default, insert methods are executed as non-query methods, meaning that they return the number of affected rows. However, we want the InsertProduct method to return the value returned by the query, not the number of rows affected. To accomplish this, adjust the InsertProduct method’s ExecuteMode property to Scalar.

Figure 28: Change the ExecuteMode Property to Scalar (Click to view full-size image)

The following code shows this new InsertProduct method in action:

NorthwindTableAdapters.ProductsTableAdapter productsAdapter =
    new NorthwindTableAdapters.ProductsTableAdapter();
// Add a new product
int new_productID = Convert.ToInt32(productsAdapter.InsertProduct
    ("New Product", 1, 1, "12 tins per carton", 14.95m, 10, 0, 10, false));
// On second thought, delete the product
productsAdapter.Delete(new_productID);

Step 5: Completing the Data Access Layer

Note that the ProductsTableAdapter class returns the CategoryID and SupplierID values from the Products table, but doesn’t include the CategoryName column from the Categories table or the CompanyName column from the Suppliers table, although these are likely the columns we want to display when showing product information. We can augment the TableAdapter’s initial method, GetProducts(), to include both the CategoryName and CompanyName column values, which will update the strongly-typed DataTable to include these new columns as well.

This can present a problem, however, as the TableAdapter’s methods for inserting, updating, and deleting data are based off of this initial method. Fortunately, the auto-generated methods for inserting, updating, and deleting are not affected by subqueries in the SELECT clause. By taking care to add our queries to Categories and Suppliers as subqueries, rather than JOINs, we’ll avoid having to rework those methods for modifying data. Right-click on the GetProducts() method in the ProductsTableAdapter and choose Configure. Then, adjust the SELECT clause so that it looks like:

SELECT     ProductID, ProductName, SupplierID, CategoryID,
QuantityPerUnit, UnitPrice, UnitsInStock, UnitsOnOrder, ReorderLevel, Discontinued,
(SELECT CategoryName FROM Categories
WHERE Categories.CategoryID = Products.CategoryID) as CategoryName,
(SELECT CompanyName FROM Suppliers
WHERE Suppliers.SupplierID = Products.SupplierID) as SupplierName
FROM         Products

Figure 29: Update the SELECT Statement for the GetProducts() Method (Click to view full-size image)

After updating the GetProducts() method to use this new query the DataTable will include two new columns: CategoryName and SupplierName.

Figure 30: The Products DataTable has Two New Columns

Take a moment to update the SELECT clause in the GetProductsByCategoryID(categoryID) method as well.

If you update the GetProducts() SELECT using JOIN syntax the DataSet Designer won’t be able to auto-generate the methods for inserting, updating, and deleting database data using the DB direct pattern. Instead, you’ll have to create them manually, much like we did with the InsertProduct method earlier in this tutorial. Furthermore, you’ll have to provide the InsertCommand, UpdateCommand, and DeleteCommand property values manually if you want to use the batch update pattern.

Adding the Remaining TableAdapters

Up until now, we’ve only looked at working with a single TableAdapter for a single database table. However, the Northwind database contains several related tables that we’ll need to work with in our web application. A Typed DataSet can contain multiple, related DataTables. Therefore, to complete our DAL we need to add DataTables for the other tables we’ll be using in these tutorials. To add a new TableAdapter to a Typed DataSet, open the DataSet Designer, right-click in the Designer, and choose Add / TableAdapter. This will create a new DataTable and TableAdapter and walk you through the wizard we examined earlier in this tutorial.

Take a few minutes to create the following TableAdapters and methods using the following queries. Note that the queries in the ProductsTableAdapter include the subqueries to grab each product’s category and supplier names. Additionally, if you’ve been following along, you’ve already added the ProductsTableAdapter class’s GetProducts() and GetProductsByCategoryID(categoryID) methods.

  • ProductsTableAdapter

    • GetProducts:

      SELECT     ProductID, ProductName, SupplierID, 
      CategoryID, QuantityPerUnit, UnitPrice, UnitsInStock, 
      UnitsOnOrder, ReorderLevel, Discontinued, 
      (SELECT CategoryName FROM Categories WHERE
      Categories.CategoryID = Products.CategoryID) as 
      CategoryName, (SELECT CompanyName FROM Suppliers
      WHERE Suppliers.SupplierID = Products.SupplierID) 
      as SupplierName
      FROM         Products
    • GetProductsByCategoryID:

      SELECT     ProductID, ProductName, SupplierID, CategoryID,
      QuantityPerUnit, UnitPrice, UnitsInStock, UnitsOnOrder,
      ReorderLevel, Discontinued, (SELECT CategoryName
      FROM Categories WHERE Categories.CategoryID = 
      Products.CategoryID) as CategoryName,
      (SELECT CompanyName FROM Suppliers WHERE
      Suppliers.SupplierID = Products.SupplierID)
      as SupplierName
      FROM         Products
      WHERE      CategoryID = @CategoryID
    • GetProductsBySupplierID:

      SELECT     ProductID, ProductName, SupplierID, CategoryID,
      QuantityPerUnit, UnitPrice, UnitsInStock, UnitsOnOrder,
      ReorderLevel, Discontinued, (SELECT CategoryName
      FROM Categories WHERE Categories.CategoryID = 
      Products.CategoryID) as CategoryName, 
      (SELECT CompanyName FROM Suppliers WHERE 
      Suppliers.SupplierID = Products.SupplierID) as SupplierName
      FROM         Products
      WHERE SupplierID = @SupplierID
    • GetProductByProductID:

      SELECT     ProductID, ProductName, SupplierID, CategoryID,
      QuantityPerUnit, UnitPrice, UnitsInStock, UnitsOnOrder,
      ReorderLevel, Discontinued, (SELECT CategoryName 
      FROM Categories WHERE Categories.CategoryID = 
      Products.CategoryID) as CategoryName, 
      (SELECT CompanyName FROM Suppliers WHERE Suppliers.SupplierID = Products.SupplierID) 
      as SupplierName
      FROM         Products
      WHERE ProductID = @ProductID
  • CategoriesTableAdapter

    • GetCategories:

      SELECT     CategoryID, CategoryName, Description
      FROM         Categories
    • GetCategoryByCategoryID:

      SELECT     CategoryID, CategoryName, Description
      FROM         Categories
      WHERE CategoryID = @CategoryID
  • SuppliersTableAdapter

    • GetSuppliers:

      SELECT     SupplierID, CompanyName, Address,
      City, Country, Phone
      FROM         Suppliers
    • GetSuppliersByCountry:

      SELECT     SupplierID, CompanyName, Address,
      City, Country, Phone
      FROM         Suppliers
      WHERE Country = @Country
    • GetSupplierBySupplierID:

      SELECT     SupplierID, CompanyName, Address,
      City, Country, Phone
      FROM         Suppliers
      WHERE SupplierID = @SupplierID
  • EmployeesTableAdapter

    • GetEmployees:

      SELECT     EmployeeID, LastName, FirstName, Title,
      HireDate, ReportsTo, Country
      FROM         Employees
    • GetEmployeesByManager:

      SELECT     EmployeeID, LastName, FirstName, Title, 
      HireDate, ReportsTo, Country
      FROM         Employees
      WHERE ReportsTo = @ManagerID
    • GetEmployeeByEmployeeID:

      SELECT     EmployeeID, LastName, FirstName, Title,
      HireDate, ReportsTo, Country
      FROM         Employees
      WHERE EmployeeID = @EmployeeID

Figure 31: The DataSet Designer After the Four TableAdapters Have Been Added (Click to view full-size image)

Adding Custom Code to the DAL

The TableAdapters and DataTables added to the Typed DataSet are expressed as an XML Schema Definition file (Northwind.xsd). You can view this schema information by right-clicking on the Northwind.xsd file in the Solution Explorer and choosing View Code.

Figure 32: The XML Schema Definition (XSD) File for the Northwinds Typed DataSet (Click to view full-size image)

This schema information is translated into C# or Visual Basic code at design time when compiled or at runtime (if needed), at which point you can step through it with the debugger. To view this auto-generated code go to the Class View and drill down to the TableAdapter or Typed DataSet classes. If you don’t see the Class View on your screen, go to the View menu and select it from there, or hit Ctrl+Shift+C. From the Class View you can see the properties, methods, and events of the Typed DataSet and TableAdapter classes. To view the code for a particular method, double-click the method name in the Class View or right-click on it and choose Go To Definition.

Figure 33: Inspect the Auto-Generated Code by Selecting Go To Definition from the Class View

While auto-generated code can be a great time saver, the code is often very generic and needs to be customized to meet the unique needs of an application. The risk of extending auto-generated code, though, is that the tool that generated the code might decide it’s time to "regenerate" and overwrite your customizations. With .NET 2.0’s new partial class concept, it’s easy to split a class across multiple files. This enables us to add our own methods, properties, and events to the auto-generated classes without having to worry about Visual Studio overwriting our customizations.
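Partial classes are a plain C# language feature, so the mechanism can be shown in isolation. In this standalone sketch the two halves of Supplier stand in for the designer-generated class and our hand-written extension (they would normally live in separate files; the names here are hypothetical):

```csharp
using System;

// Half one: stands in for the designer-generated class.
public partial class Supplier
{
    public string CompanyName = "Exotic Liquids";
}

// Half two: our hand-written extension. The compiler merges both
// declarations into a single Supplier class at build time, so the
// method below can use members defined in the other half.
public partial class Supplier
{
    public string Describe()
    {
        return "Supplier: " + CompanyName;
    }
}

class PartialClassDemo
{
    static void Main()
    {
        Console.WriteLine(new Supplier().Describe());
    }
}
```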

To demonstrate how to customize the DAL, let’s add a GetProducts() method to the SuppliersRow class. The SuppliersRow class represents a single record in the Suppliers table; each supplier can provide zero to many products, so GetProducts() will return those products of the specified supplier. To accomplish this create a new class file in the App_Code folder named SuppliersRow.cs and add the following code:

using System;
using System.Data;
using NorthwindTableAdapters;
public partial class Northwind
{
    public partial class SuppliersRow
    {
        public Northwind.ProductsDataTable GetProducts()
        {
            // Use the ProductsTableAdapter to get this supplier's products
            ProductsTableAdapter productsAdapter = new ProductsTableAdapter();
            return productsAdapter.GetProductsBySupplierID(this.SupplierID);
        }
    }
}

This partial class instructs the compiler to include the GetProducts() method we just defined when building the Northwind.SuppliersRow class. If you build your project and then return to the Class View you’ll see GetProducts() now listed as a method of Northwind.SuppliersRow.

Figure 34: The GetProducts() Method is Now Part of the Northwind.SuppliersRow Class

The GetProducts() method can now be used to enumerate the set of products for a particular supplier, as the following code shows:

NorthwindTableAdapters.SuppliersTableAdapter suppliersAdapter =
    new NorthwindTableAdapters.SuppliersTableAdapter();

// Get all of the suppliers
Northwind.SuppliersDataTable suppliers =
    suppliersAdapter.GetSuppliers();

// Enumerate the suppliers
foreach (Northwind.SuppliersRow supplier in suppliers)
{
    Response.Write("Supplier: " + supplier.CompanyName);
    Response.Write("<ul>");

    // List the products for this supplier
    Northwind.ProductsDataTable products = supplier.GetProducts();
    foreach (Northwind.ProductsRow product in products)
        Response.Write("<li>" + product.ProductName + "</li>");

    Response.Write("</ul><p>&nbsp;</p>");
}

This data can also be displayed in any of ASP.NET’s data Web controls. The following page uses a GridView control with two fields:

  • A BoundField that displays the name of each supplier, and
  • A TemplateField that contains a BulletedList control that is bound to the results returned by the GetProducts() method for each supplier.

We’ll examine how to display such master-detail reports in future tutorials. For now, this example is designed to illustrate using the custom method added to the Northwind.SuppliersRow class.


<%@ Page Language="C#" CodeFile="SuppliersAndProducts.aspx.cs"
    AutoEventWireup="true" Inherits="SuppliersAndProducts" %>

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN"
    "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">

<html xmlns="http://www.w3.org/1999/xhtml" >
<head runat="server">
    <title>Untitled Page</title>
    <link href="Styles.css" rel="stylesheet" type="text/css" />
</head>
<body>
    <form id="form1" runat="server">
        <h2>Suppliers and Their Products</h2>
        <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False">
            <Columns>
                <asp:BoundField DataField="CompanyName" HeaderText="Supplier" />
                <asp:TemplateField HeaderText="Products">
                    <ItemTemplate>
                        <asp:BulletedList ID="BulletedList1" runat="server"
                            DataSource='<%# ((Northwind.SuppliersRow)((System.Data.DataRowView)Container.DataItem).Row).GetProducts() %>'
                            DataTextField="ProductName">
                        </asp:BulletedList>
                    </ItemTemplate>
                </asp:TemplateField>
            </Columns>
        </asp:GridView>
    </form>
</body>
</html>


using System;
using System.Data;
using System.Configuration;
using System.Collections;
using System.Web;
using System.Web.Security;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using System.Web.UI.HtmlControls;
using NorthwindTableAdapters;
public partial class SuppliersAndProducts : System.Web.UI.Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        SuppliersTableAdapter suppliersAdapter =
            new SuppliersTableAdapter();
        GridView1.DataSource = suppliersAdapter.GetSuppliers();
        GridView1.DataBind();
    }
}

Figure 35: The Supplier’s Company Name is Listed in the Left Column, Their Products in the Right (Click to view full-size image)


When building a web application creating the DAL should be one of your first steps, occurring before you start creating your presentation layer. With Visual Studio, creating a DAL based on Typed DataSets is a task that can be accomplished in 10-15 minutes without writing a line of code. The tutorials moving forward will build upon this DAL. In the next tutorial we’ll define a number of business rules and see how to implement them in a separate Business Logic Layer.

Happy Programming!

Further Reading

For more information on the topics discussed in this tutorial, refer to the following resources:

Video Training on Topics Contained in this Tutorial

About the Author

Scott Mitchell, author of seven ASP/ASP.NET books and founder of 4GuysFromRolla.com, has been working with Microsoft Web technologies since 1998. Scott works as an independent consultant, trainer, and writer. His latest book is Sams Teach Yourself ASP.NET 2.0 in 24 Hours. He can be reached at mitchell@4GuysFromRolla.com or via his blog, which can be found at http://ScottOnWriting.NET.

Special Thanks To

This tutorial series was reviewed by many helpful reviewers. Lead reviewers for this tutorial were Ron Green, Hilton Giesenow, Dennis Patterson, Liz Shulok, Abel Gomez, and Carlos Santos. Interested in reviewing my upcoming MSDN articles? If so, drop me a line at mitchell@4GuysFromRolla.com.

Friday, 31 August 2018 / Published in Uncategorized

Use LibMan with ASP.NET Core in Visual Studio

  • 08/20/2018
  • 7 minutes to read

By Scott Addie

Visual Studio has built-in support for LibMan in ASP.NET Core projects, including:

  • Support for configuring and running LibMan restore operations on build.
  • Menu items for triggering LibMan restore and clean operations.
  • Search dialog for finding libraries and adding the files to a project.
  • Editing support for libman.json—the LibMan manifest file.

View or download sample code (how to download)


Prerequisites

  • Visual Studio 2017 version 15.8 or later with the ASP.NET and web development workload

Add library files

Library files can be added to an ASP.NET Core project in two different ways:

  1. Use the Add Client-Side Library dialog
  2. Manually configure LibMan manifest file entries

Use the Add Client-Side Library dialog

Follow these steps to install a client-side library:

  • In Solution Explorer, right-click the project folder in which the files should be added. Choose Add > Client-Side Library. The Add Client-Side Library dialog appears:

  • Select the library provider from the Provider drop down. CDNJS is the default provider.

  • Type the library name to fetch in the Library text box. IntelliSense provides a list of libraries beginning with the provided text.

  • Select the library from the IntelliSense list. Notice the library name is suffixed with the @ symbol and the latest stable version known to the selected provider.

  • Decide which files to include:

    • Select the Include all library files radio button to include all of the library’s files.
    • Select the Choose specific files radio button to include a subset of the library’s files. When the radio button is selected, the file selector tree is enabled. Check the boxes to the left of the file names to download.
  • Specify the project folder for storing the files in the Target Location text box. As a recommendation, store each library in a separate folder.

    The suggested Target Location folder is based on the location from which the dialog launched:

    • If launched from the project root:
      • wwwroot/lib is used if wwwroot exists.
      • lib is used if wwwroot doesn’t exist.
    • If launched from a project folder, the corresponding folder name is used.

    The folder suggestion is suffixed with the library name. The following table illustrates folder suggestions when installing jQuery in a Razor Pages project.

    Launch location Suggested folder
    project root (if wwwroot exists) wwwroot/lib/jquery/
    project root (if wwwroot doesn’t exist) lib/jquery/
    Pages folder in project Pages/jquery/
  • Click the Install button to download the files, per the configuration in libman.json.

  • Review the Library Manager feed of the Output window for installation details. For example:

    Restore operation started...
    Restoring libraries for project LibManSample
    Restoring library jquery@3.3.1... (LibManSample)
    wwwroot/lib/jquery/jquery.min.js written to destination (LibManSample)
    wwwroot/lib/jquery/jquery.js written to destination (LibManSample)
    wwwroot/lib/jquery/jquery.min.map written to destination (LibManSample)
    Restore operation completed
    1 libraries restored in 2.32 seconds

Manually configure LibMan manifest file entries

All LibMan operations in Visual Studio are based on the content of the project root’s LibMan manifest (libman.json). You can manually edit libman.json to configure library files for the project. Visual Studio restores all library files once libman.json is saved.

To open libman.json for editing, the following options exist:

  • Double-click the libman.json file in Solution Explorer.
  • Right-click the project in Solution Explorer and select Manage Client-Side Libraries.
  • Select Manage Client-Side Libraries from the Visual Studio Project menu.

If the libman.json file doesn’t already exist in the project root, it will be created with the default item template content.
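A sketch of that default template's content (as produced by current versions of the tooling; check the generated file, since the exact content may vary):

```json
{
  "version": "1.0",
  "defaultProvider": "cdnjs",
  "libraries": []
}
```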

Visual Studio offers rich JSON editing support such as colorization, formatting, IntelliSense, and schema validation. The LibMan manifest’s JSON schema is found at http://json.schemastore.org/libman.

With the following manifest file, LibMan retrieves files per the configuration defined in the libraries property. An explanation of the object literals defined within libraries follows:

  • A subset of jQuery version 3.3.1 is retrieved from the CDNJS provider. The subset is defined in the files property—jquery.min.js, jquery.js, and jquery.min.map. The files are placed in the project’s wwwroot/lib/jquery folder.
  • The entirety of Bootstrap version 4.1.3 is retrieved and placed in a wwwroot/lib/bootstrap folder. The object literal’s provider property overrides the defaultProvider property value. LibMan retrieves the Bootstrap files from the unpkg provider.
  • A subset of Lodash was approved by a governing body within the organization. The lodash.js and lodash.min.js files are retrieved from the local file system at C:\temp\lodash\. The files are copied to the project’s wwwroot/lib/lodash folder.
  "version": "1.0",
  "defaultProvider": "cdnjs",
  "libraries": [
      "library": "jquery@3.3.1",
      "files": [
      "destination": "wwwroot/lib/jquery/"
      "provider": "unpkg",
      "library": "bootstrap@4.1.3",
      "destination": "wwwroot/lib/bootstrap/"
      "provider": "filesystem",
      "library": "C:\\temp\\lodash\\",
      "files": [
      "destination": "wwwroot/lib/lodash/"


LibMan only supports one version of each library from each provider. The libman.json file fails schema validation if it contains two libraries with the same library name for a given provider.
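For illustration, a manifest along these lines fails schema validation, because jquery appears twice for the cdnjs provider (the version numbers and destinations here are hypothetical):

```json
{
  "version": "1.0",
  "defaultProvider": "cdnjs",
  "libraries": [
    {
      "library": "jquery@3.3.1",
      "destination": "wwwroot/lib/jquery/"
    },
    {
      "library": "jquery@3.2.1",
      "destination": "wwwroot/lib/jquery-old/"
    }
  ]
}
```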

Restore library files

To restore library files from within Visual Studio, there must be a valid libman.json file in the project root. Restored files are placed in the project at the location specified for each library.

Library files can be restored in an ASP.NET Core project in two ways:

  1. Restore files during build
  2. Restore files manually

Restore files during build

LibMan can restore the defined library files as part of the build process. By default, the restore-on-build behavior is disabled.

To enable and test the restore-on-build behavior:

  • Right-click libman.json in Solution Explorer and select Enable Restore Client-Side Libraries on Build from the context menu.

  • Click the Yes button when prompted to install a NuGet package. The Microsoft.Web.LibraryManager.Build NuGet package is added to the project:

    <PackageReference Include="Microsoft.Web.LibraryManager.Build" Version="1.0.113" />
  • Build the project to confirm LibMan file restoration occurs. The Microsoft.Web.LibraryManager.Build package injects an MSBuild target that runs LibMan during the project’s build operation.

  • Review the Build feed of the Output window for a LibMan activity log:

    1>------ Build started: Project: LibManSample, Configuration: Debug Any CPU ------
    1>Restore operation started...
    1>Restoring library jquery@3.3.1...
    1>Restoring library bootstrap@4.1.3...
    1>2 libraries restored in 10.66 seconds
    1>LibManSample -> C:\LibManSample\bin\Debug\netcoreapp2.1\LibManSample.dll
    ========== Build: 1 succeeded, 0 failed, 0 up-to-date, 0 skipped ==========

When the restore-on-build behavior is enabled, the libman.json context menu displays a Disable Restore Client-Side Libraries on Build option. Selecting this option removes the Microsoft.Web.LibraryManager.Build package reference from the project file. Consequently, the client-side libraries are no longer restored on each build.

Regardless of the restore-on-build setting, you can manually restore at any time from the libman.json context menu. For more information, see Restore files manually.

Restore files manually

To manually restore library files:

  • For all projects in the solution:
    • Right-click the solution name in Solution Explorer.
    • Select the Restore Client-Side Libraries option.
  • For a specific project:
    • Right-click the libman.json file in Solution Explorer.
    • Select the Restore Client-Side Libraries option.

While the restore operation is running:

  • The Task Status Center (TSC) icon on the Visual Studio status bar will be animated and will read Restore operation started. Clicking the icon opens a tooltip listing the known background tasks.

  • Messages will be sent to the status bar and the Library Manager feed of the Output window. For example:

    Restore operation started...
    Restoring libraries for project LibManSample
    Restoring library jquery@3.3.1... (LibManSample)
    wwwroot/lib/jquery/jquery.min.js written to destination (LibManSample)
    wwwroot/lib/jquery/jquery.js written to destination (LibManSample)
    wwwroot/lib/jquery/jquery.min.map written to destination (LibManSample)
    Restore operation completed
    1 libraries restored in 2.32 seconds

Delete library files

To perform the clean operation, which deletes library files previously restored in Visual Studio:

  • Right-click the libman.json file in Solution Explorer.
  • Select the Clean Client-Side Libraries option.

To prevent unintentional removal of non-library files, the clean operation doesn’t delete whole directories. It only removes files that were included in the previous restore.

While the clean operation is running:

  • The TSC icon on the Visual Studio status bar will be animated and will read Client libraries operation started. Clicking the icon opens a tooltip listing the known background tasks.
  • Messages are sent to the status bar and the Library Manager feed of the Output window. For example:
Clean libraries operation started...
Clean libraries operation completed
2 libraries were successfully deleted in 1.91 secs

The clean operation only deletes files from the project. Library files stay in the cache for faster retrieval on future restore operations. To manage library files stored in the local machine’s cache, use the LibMan CLI.

Uninstall library files

To uninstall library files:

  • Open libman.json.

  • Position the caret inside the corresponding libraries object literal.

  • Click the light bulb icon that appears in the left margin, and select Uninstall <library_name>@<library_version>:

Alternatively, you can manually edit and save the LibMan manifest (libman.json). The restore operation runs when the file is saved. Library files that are no longer defined in libman.json are removed from the project.

Update library version

To check for an updated library version:

  • Open libman.json.
  • Position the caret inside the corresponding libraries object literal.
  • Click the light bulb icon that appears in the left margin. Hover over Check for updates.

LibMan checks for a library version newer than the version installed. The following outcomes can occur:

  • A No updates found message is displayed if the latest version is already installed.

  • The latest stable version is displayed if not already installed.

  • If a pre-release newer than the installed version is available, the pre-release is displayed.

To downgrade to an older library version, manually edit the libman.json file. When the file is saved, the LibMan restore operation:

  • Removes redundant files from the previous version.
  • Adds new and updated files from the new version.
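For example, downgrading the jQuery entry from the earlier manifest is just a matter of editing the version inside the library value and saving the file (3.2.1 here is only an illustrative older version):

```json
{
  "library": "jquery@3.2.1",
  "files": [ "jquery.min.js", "jquery.js", "jquery.min.map" ],
  "destination": "wwwroot/lib/jquery/"
}
```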

Additional resources

Friday, 31 August 2018 / Published in Uncategorized

Client-side library acquisition in ASP.NET Core with LibMan

  • 08/14/2018
  • 2 minutes to read

By Scott Addie

Library Manager (LibMan) is a lightweight, client-side library acquisition tool. LibMan downloads popular libraries and frameworks from the file system or from a content delivery network (CDN). The supported CDNs include CDNJS and unpkg. The selected library files are fetched and placed in the appropriate location within the ASP.NET Core project.

LibMan use cases

LibMan offers the following benefits:

  • Only the library files you need are downloaded.
  • Additional tooling, such as Node.js, npm, and WebPack, isn’t necessary to acquire a subset of files in a library.
  • Files can be placed in a specific location without resorting to build tasks or manual file copying.

For more information about LibMan’s benefits, watch Modern front-end web development in Visual Studio 2017: LibMan segment.

LibMan isn’t a package management system. If you’re already using a package manager, such as npm or yarn, continue doing so. LibMan wasn’t developed to replace those tools.

Additional resources

Friday, 31 August 2018 / Published in Uncategorized


The tricky part when running a web solution with a web API in docker containers is to map the URLs and ports so that the code running inside the docker container can be accessed from outside. This is a question of docker configuration and minor code changes.


This article is a contribution to the Docker Contest described in this article.


  • Visual Studio 2017, community version. Latest release.
  • You have installed “Docker for Windows” on your computer: https://download.docker.com/win/stable/Docker%20for%20Windows%20Installer.exe.
  • You have an existing solution with a web API project and a web MVC (model-view-controller) project, and the MVC project is able to communicate with the web API via a REST HTTP interface. If not, you may use the CarApi and CarClient projects (see below) to implement your own solution.

The code belonging to this article is the containerized versions of CarClient and CarApi from this article.

In this article, docker support has been added and the docker configuration files have been updated to make it possible to access the API from CarClient, both frontend and backend.

How to Containerize an Existing Project

To add docker support to an existing web project, e.g., CarApi, open the project in Visual Studio, right-click the project and choose Add -> Docker Support:

A docker configuration file, “Dockerfile”, is created and it looks like this:

# For more info see: http://aka.ms/VSContainerToolingDockerfiles
FROM microsoft/aspnetcore:2.0 AS base
WORKDIR /app
EXPOSE 80

FROM microsoft/aspnetcore-build:2.0 AS builder
WORKDIR /src
COPY *.sln ./
COPY CarApi/CarApi.csproj CarApi/
RUN dotnet restore
COPY . .
WORKDIR /src/CarApi
RUN dotnet build -c Release -o /app

FROM builder AS publish
RUN dotnet publish -c Release -o /app

FROM base AS production
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "CarApi.dll"]

Do this for both projects in your existing solution, i.e., for the web API and the web MVC project. When this is done, you need to add a docker-compose project to your solution.

Add a docker-compose Project

To add a docker-compose project to the solution, right click one of the projects and select Add -> Container Orchestrator Support -> Docker Compose -> Target OS:Linux.

The added project is of type “.dcproj” and the following files are created:


The next step is to right click the other project and in the same way, select Add -> Container Orchestrator Support ->Docker Compose -> Target OS:Linux.

Suppose that your two projects are called “CarClient” and “CarApi”, then the resulting docker-compose.yml looks like this:

version: '3.4'

services:
  carclient:
    image: ${DOCKER_REGISTRY}carclient
    build:
      context: .
      dockerfile: CarClient/Dockerfile

  carapi:
    image: ${DOCKER_REGISTRY}carapi
    build:
      context: .
      dockerfile: CarApi/Dockerfile

The Containerized Solution with docker-compose

After having added Dockerfiles to each project and the docker-compose project to the solution, the solution consists of three projects: A web MVC project, a web API project and a docker-compose project.

Code and Config Changes

To make the containerized version function, we need to make some code and configuration changes.


In the original CarClient project, the web API was reached via the following URL:

private static readonly Uri Endpoint = new Uri("http://localhost:54411/");

For the containerized solution, we use DNS discovery; Docker networking (and Kubernetes, if you deploy there) handles the name resolution. Instead of localhost, the service name defined in docker-compose.yml is used. To call the CarApi, use http://carapi. You don’t need to set a port number, because the port mapping is an external attribute of the container. Like this:

private static readonly Uri Endpoint = new Uri("http://carapi/");


The JavaScript running in the browser uses port 54411. We must expose port 54411 by changing the docker configuration file for CarApi like this:

In the web API Dockerfile, write EXPOSE 54411:

# For more info see: http://aka.ms/VSContainerToolingDockerfiles
FROM microsoft/aspnetcore:2.0 AS base
EXPOSE 54411

In docker-compose.yml, map external port 54411 to the container’s internal port 80:

version: '3.4'

services:
  carapi:
    image: ${DOCKER_REGISTRY}carapi
    ports:
      - "54411:80"

The original JavaScript code is kept:

xmlhttp.open("GET", "http://localhost:54411/api/car", true);

That’s all that is needed. You can now run your containerized solution in Visual Studio.

The code is available at:

Cuma, 31 Ağustos 2018 / Published in Uncategorized

Disclaimer: I’m not a Git guru, so the information in this post might not be the most accurate; it just works on my machine and I wanted to share my experience.

I take for granted that you are a Visual Studio user, that you use Git through the Visual Studio plugin and, like me, need to work on projects where you share code, hosted in its own separate repository, among different solutions. I know there are several solutions for this, for example using NuGet packages for shared code, but none of them reaches the flexibility that sharing source code offers, both in terms of debugging and real-time bug fixing. Ok, not ‘technically correct’ I know, but it works, and I love things that work and make my life easier.

Since I’m not the only one with this need, I checked online and found that, among the different alternatives, the most voted one (with several opponents too, indeed) is to use Git submodules which, to put it simply, are nothing more than repositories embedded inside a main repository. In this case, the submodule repository is copied into a subfolder of the main one, and information about the original module is also added to the main repository. This means that when you clone the main project, all submodules are cloned as well.

Submodules in action

Let’s create our first project that uses a Git submodule; fasten your seat belt and enjoy the journey.

I’ve created two UWP projects (but you can use any kind of project, of course…): a main UWP application, SubModulesApp, and a UWP library named SubModulesLib, each with its own repository hosted on github.com. SubModulesApp now needs to use some services contained inside SubModulesLib, and once I start using the lib, it is evident that both repos will have a strong relationship. So, even if I could keep them separated and just reference the local SubModulesLib project from the main app, the best solution is to create a submodule; this also gives us the benefit of keeping the submodule on a different commit compared to the ‘master’ one in case we need it.

Let’s start and open our empty app in Visual Studio:

Now open a Visual Studio command prompt at the solution folder; if you use Visual Studio PowerCommands, just right-click the solution node and select Power Commands –> Open Command Prompt. Now type: git submodule add <path to your git repository> <foldername> and the repository will be cloned into <foldername>. Here’s an example of what I got on my machine
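The submodule add step can be sketched end to end with throwaway local repositories (all paths and names below are illustrative, not the repositories from this post; with a real project you would pass your GitHub URL instead of a local path):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"

# A stand-in for the library repository (SubModulesLib on GitHub in the post)
git init -q lib
git -C lib -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# The main application repository (SubModulesApp in the post)
git init -q app
git -C app -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# Add the library as a submodule in folder MyAwesomeLib
# (protocol.file.allow is needed for local-path submodules on recent git)
cd app
git -c protocol.file.allow=always submodule add ../lib MyAwesomeLib

# The submodule's path and URL are recorded in .gitmodules
cat .gitmodules
```

Note that the add operation stages both `.gitmodules` and the submodule entry in the main repository, which is why the main repo now "knows" about the lib.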

and here’s project folder structure

Now you can add the library project inside the MyAwesomeLib folder to the SubModulesApp solution, consume the lib’s services, and push everything back to GitHub.

Let’s now make an important test: what if the lib author updates the code in SubModulesLib? Will I get the changes when I pull SubModulesApp? To test it, I added a comment to the MyCalculator.cs class and pushed the change back to the original repository. I then pulled the SubModulesApp that uses the lib as a submodule and, unfortunately, the change is not there. It looks like what we get here is a copy or, to put it better, something not pointing to the latest commit. To see the changes, we need to open the solution from inside our nested folder (in this case MyAwesomeLib) and pull the changes from there: totally annoying stuff that could be avoided if the Git plugin for Visual Studio supported multiple repositories (please vote for this feature here: https://visualstudio.uservoice.com/forums/121579-visual-studio-ide/suggestions/8960629-allow-multiple-git-repositories-to-be-active-at-on).

What about the opposite? If I modify code inside a submodule (in our case, from inside the SubModulesApp solution), will it be pushed back to the original repository? Unfortunately not; as before, you need to push the changes from the instance of Visual Studio that hosts the SubModulesLib residing inside the MyAwesomeLib folder. Doing that properly aligns the original source repository.

All this works because we are working on the project where the submodule was created. If someone else needs to clone and work on the same project, the following steps must be done:

1. Clone the project from Visual Studio (or manually, if you’re a hipster…).
2. Open a VS command prompt at the solution level and issue this command: git submodule update --init --recursive
3. Open each submodule under the main solution and check out the associated branch using the Visual Studio plugin (you will see that it starts in a detached state).
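The clone-then-initialize flow can be sketched with local repositories (again, all names and paths are illustrative; the script builds its own main repo with a submodule first, so it is self-contained):

```shell
set -e
dir=$(mktemp -d)
cd "$dir"

# Build a main repo ("app") that already contains a submodule ("lib")
git init -q lib
git -C lib -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git init -q app
git -C app -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"
git -C app -c protocol.file.allow=always submodule add "$dir/lib" MyAwesomeLib
git -C app -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "add submodule"

# Step 1: clone the main project; the submodule folder arrives empty
git clone -q app clone
test -z "$(ls -A clone/MyAwesomeLib)" && echo "submodule not yet initialized"

# Step 2: initialize and fetch all submodules
cd clone
git -c protocol.file.allow=always submodule update --init --recursive
git submodule status
```

After the update, the submodule working tree is populated and checked out at the commit recorded by the main repository (the detached state mentioned in step 3).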

Now your cloned solution’s submodules are attached to the original repositories and everything works as previously described.

A bit tricky for sure, at least until the Visual Studio Git plugin supports multiple repositories, but once the project is properly initialized, it’s just a matter of remembering to open the submodule project each time you need to interact with Git.