Monday, 12 November 2018 / Published in Uncategorized

In a recent project, I was asked to produce a tool for importing a fairly large amount of data at once; this data then needed to be processed and exported. After much refactoring, I arrived at a solution I was satisfied with, which used TPL dataflow to execute the processing in parallel. Before I talk specifically about TPL dataflow (that will now be the subject of a follow-up blog, as this one grew slightly unwieldy as time went on), I thought I'd give a brief overview of dataflow in general.

Please excuse the coming awful puns; I couldn't stop them once they started flowing.

Since I started this blog my understanding of dataflow has developed, and exactly how I picture it working has changed. However, my initial interest in the subject has only grown, and it's definitely taking a place of pride in my growing list of exciting things I've learnt this year. I think it's a highly expressive and understandable way of representing data processing.

The crucial thing to understand when using dataflow is that the data is in control. In most conventional programming languages, the programmer determines how and when the code will run. In dataflow, it is the data that drives how the program executes. The movement of data controls the flow of the program (as Matt Carkci put it in his book on dataflow and reactive programming systems, "data is king").

The main components in a dataflow model are blocks (which are activated by the arrival of data) and links. Here I am using the TPL dataflow terminology; these fundamental concepts go by many names, most broadly nodes and arcs. These concepts are easily represented with the aid of a diagram.

My name’s Sher-block Holmes, it is my business to process your data

Blocks contain the code which does the processing of the data. When data arrives at a block, it causes the code it contains (or internal method) to execute. Blocks generally have input and output ports. It is data arriving on an input that causes the execution, and then any results are placed onto the output port. This output port can then be connected to other blocks via links. The overall conditions for a block to execute are such that there needs to be data waiting on the input, and there must be space on the output for the result.
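To make this concrete, here is a minimal sketch of blocks and links in TPL dataflow (a small preview of the follow-up post; it assumes a project that references the System.Threading.Tasks.Dataflow package, and the block names are made up for the example):

```csharp
using System;
using System.Threading.Tasks.Dataflow;

class BlockDemo
{
    static void Main()
    {
        // A TransformBlock runs its internal method when data arrives on its
        // input and there is room on its output for the result.
        var doubler = new TransformBlock<int, int>(n => n * 2);
        var printer = new ActionBlock<int>(n => Console.WriteLine(n));

        // Link the output port of one block to the input port of another.
        doubler.LinkTo(printer, new DataflowLinkOptions { PropagateCompletion = true });

        doubler.Post(21);          // the arrival of data triggers execution
        doubler.Complete();        // signal that no more data is coming
        printer.Completion.Wait(); // prints 42
    }
}
```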

There are different types of block which build up a dataflow model. One of the crucial ideas is compound blocks: blocks which are built up of other blocks. This allows us to build complex systems out of simpler entities. These compound blocks can be thought of as small dataflow programs within the wider context. Compound blocks are important because they firstly make code more understandable, and secondly maximise code reuse. Both of these concepts are critical in building any sustainable program.

Another important thing to consider is the difference between functional and stateful blocks. Functional blocks hold no state: if you put the same piece of data through, you will always get the same output. Functional blocks can act like stateful blocks; however, this requires multiple inputs/outputs…

It is possible for blocks to have multiple inputs and outputs, though this is not intrinsically supported in TPL dataflow. If a block has multiple inputs, then a firing pattern is needed to tell the block when to execute. This defines which inputs need data on them in order for the block to activate. For example, if a block has three inputs you could define a condition that there must be data on input 1, and then on one of either 2 or 3 for the block to execute. For multiple inputs, we update our overall conditions of execution to require the firing pattern to be met, and for there to be space on the output.

For a functional block to act like a stateful block, it needs at least two inputs. To retain state, one of the block’s outputs is linked up to one of its own inputs and the state is passed back. The firing pattern would then be such that the block needs a data input and a state to execute.
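A rough sketch of this feedback idea using TPL dataflow's JoinBlock (the accumulator is a hypothetical example; in a real model the state output would be linked back to the state input automatically, whereas here it is posted back by hand so each step is visible):

```csharp
using System;
using System.Threading.Tasks.Dataflow;

class StatefulDemo
{
    static void Main()
    {
        // A JoinBlock pairs a piece of data (Target1) with the current
        // state (Target2); the stateless transform emits the new state.
        var join = new JoinBlock<int, int>();
        var accumulate = new TransformBlock<Tuple<int, int>, int>(
            pair => pair.Item1 + pair.Item2);
        join.LinkTo(accumulate);

        join.Target2.Post(0);             // seed the initial state
        join.Target1.Post(5);             // first piece of data arrives
        int state = accumulate.Receive(); // new state: 5

        // Feed the state back; the firing pattern needs data AND state.
        join.Target2.Post(state);
        join.Target1.Post(7);
        state = accumulate.Receive();     // new state: 12

        Console.WriteLine(state);         // prints 12
    }
}
```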

If only I could think of a way to link these paragraphs together

Data moves between blocks by travelling down the links. In this way, a processing pipeline can be constructed. The transferred data can be either pushed or pulled down the links. Usually in dataflow systems the data is pushed (the producer offers up the data when it is ready, *cough cough* Rx *cough cough*). In TPL dataflow these links work through message passing.

In a message passing system, messages are stored in a first-in-first-out queue. Each message must be received before the following one can be accessed. Usually each message is read only once: once it has been received, it is removed from the queue. Multiple senders can add to the queue, and the data will be processed sequentially with no regard for where the messages originated. You can also hook up multiple readers to a message passing channel; however, when a message is read by any one of the receivers, it is removed from the queue and will not be read by the others. In order to send the same message to multiple receivers, you need a block with one input that sends a copy of the data down multiple output channels.
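TPL dataflow has a ready-made block for exactly this copy-to-multiple-receivers case. A minimal sketch using BroadcastBlock (receiver names are made up for the example; it assumes the System.Threading.Tasks.Dataflow package):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class BroadcastDemo
{
    static void Main()
    {
        var received = new ConcurrentBag<string>();
        var linkOpts = new DataflowLinkOptions { PropagateCompletion = true };

        // BroadcastBlock offers a copy of every message to all linked targets,
        // unlike an ordinary queue where one receiver consumes each message.
        var broadcast = new BroadcastBlock<string>(msg => msg); // clone function
        var receiverA = new ActionBlock<string>(m => received.Add("A:" + m));
        var receiverB = new ActionBlock<string>(m => received.Add("B:" + m));
        broadcast.LinkTo(receiverA, linkOpts);
        broadcast.LinkTo(receiverB, linkOpts);

        broadcast.Post("hello");
        broadcast.Complete();
        Task.WaitAll(receiverA.Completion, receiverB.Completion);

        Console.WriteLine(received.Count); // 2 – both receivers got a copy
    }
}
```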

Make way for the king!

And now onto the data itself… There are boundless possibilities for the types of data that could be processed using dataflow. However, in general, it is highly recommended that immutable data is used. The reason for this is two-fold:

• Immutable data eliminates concurrency issues surrounding reading/writing.
• If data is mutable, a deep copy needs to be made every time the data is sent to multiple blocks, which can be expensive and time consuming.

Now, it is not always possible for data to be completely immutable. But in these situations, care must be taken, especially when processing is being done in parallel.

Putting all of these pieces together, we have a reactive and highly configurable system for processing data. Look out for my next blog on how to implement these concepts using TPL dataflow!


ASP.NET Core: Replacement for Server.MapPath

Gunnar


There's no Server.MapPath() in ASP.NET Core, and things work differently than in classic ASP.NET MVC. This blog post shows how application and public web files are organized in ASP.NET Core and how to access them from web applications.

ASP.NET Core offers two different locations for files:

  • Content root – this is where application binaries and other private files are held.
  • Web root – this is where public files are held (wwwroot folder in web project).

By default the web root is located under the content root. But there are also deployments where the web root is located somewhere else; I have previously seen such deployments on Azure Web Apps. It's possible that some hosting providers also use separate directories for application files and the web root.

Getting content root and web root in code

Paths to the content root and web root are available through IHostingEnvironment in code, as shown here.

ASP.NET Core: Content root and web root may have different locations

Notice how the content root and wwwroot can be located in totally different places on the machine.
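The original post illustrates this with a screenshot. As a rough sketch (the controller and action names are hypothetical), the paths might be read like this in an ASP.NET Core 2.x controller:

```csharp
using Microsoft.AspNetCore.Hosting;
using Microsoft.AspNetCore.Mvc;

// Hypothetical controller: IHostingEnvironment is injected by ASP.NET Core
// and exposes both root paths as full file-system paths, not URLs.
public class PathsController : Controller
{
    private readonly IHostingEnvironment _env;

    public PathsController(IHostingEnvironment env)
    {
        _env = env;
    }

    public IActionResult Index()
    {
        var contentRoot = _env.ContentRootPath; // application binaries, private files
        var webRoot = _env.WebRootPath;         // public files (wwwroot by default)

        return Content($"Content root: {contentRoot}\nWeb root: {webRoot}");
    }
}
```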

Setting web root location

To set the location of the web root we need a hosting.json file in the application root folder. We also need some code to load the file – at least for Kestrel. My hosting.json is shown here.

{
  "webRoot": "c:\\temp\\wwwroot\\"
}

It is loaded when the program starts (in the Program.cs file). I made this file optional so my application doesn't crash when the hosting file is missing.

public class Program
{
    public static void Main(string[] args)
    {
        var config = new ConfigurationBuilder()
                         .AddJsonFile("hosting.json", optional: true)
                         .Build();

        CreateWebHostBuilder(args).UseConfiguration(config)
                                  .UseKestrel()
                                  .Build()
                                  .Run();
    }

    public static IWebHostBuilder CreateWebHostBuilder(string[] args) =>
        WebHost.CreateDefaultBuilder(args)
            .UseStartup<Startup>();
}

If there is no hosting file then the default configuration is used and ASP.NET Core expects the web root to be located under the application content root.

Wrapping up

Although we don't have the Server.MapPath() call anymore in ASP.NET Core, we have IHostingEnvironment, which provides us with the paths to the application content root and web root. These are full file-system paths to the mentioned locations, not URLs. We can use these paths to read files from both locations if needed.


The latest version of Visual Studio 2017 (version 15.8) supports multiple carets. This means that you can now create multiple insertion and selection points in your code file, which can improve your productivity in many situations.

In this part of my Visual Studio Productivity Tips series, we will discuss how easy it is to add or delete code at multiple cursor positions.


The latest version of Visual Studio 2017 adds the ability to insert selection points in multiple places of your code file, so that you can start typing the same code at all of those caret positions.

In the previous versions, the support was limited to a vertical selection. That means you were able to select a vertical line or rectangular area by pressing [Shift + Alt + Click] or [Shift + Alt + Arrow keys]. With Visual Studio 2017 (version 15.8 and above), this functionality has been extended to arbitrary caret positions. This can be done in three ways:

  1. Add additional carets to your document:

    As shown in the video, you can now use [Ctrl + Alt + Click] to add additional carets to your document. You can then add or delete text in multiple places at the same time:

  2. Add an additional selection that matches your current selection:

    You can also select the next matching selection in a document. When a selection already exists, press [Shift + Alt + . (dot)] to add the next matching selections one by one. This helps you verify the context of each additional selection.

    If you would like to skip over a match, you can press [Ctrl + Alt + Shift + . (dot)] to move the last matching selection to the next instance.

  3. Add all matching selections in a document:

    When you want to select all matching selections in a document, this trick is helpful. You can press [Ctrl + Alt + Shift + , (comma)] to grab all matching selections in a document, so that you can replace all of them in a single shot. It's almost like the "Find and Replace" functionality. Check out how it works:

Visual Studio Productivity Tips: Adding and deleting text at multiple caret positions

Liked the post? Don't forget to share it with your friends and colleagues; it will help them complete their jobs smoothly and efficiently. Also, don't forget to read my other posts on Visual Studio Productivity Tips. Who knows, those could be beneficial too.

Database Migration Guide – October 2018 Updates

What’s new with the latest updates?

The latest updates to the Database Migration Guide include:

  • Launched a new and enhanced user interface that makes it easier to specify your migration scenario and that surfaces the most commonly used guides, which are available with a single click.

  • Added/updated guidance for the following migration scenarios (source/target pairs):
  • Added a table of "Migration learnings and best practices from real-world engagements" to each of the following scenarios:
  • Included references to additional Partner tool offerings.

Call to action

Please take some time to review the Migration Guide and consider:

  • The overall user experience, especially with the release of the new user interface.
  • The scope and accuracy of existing content, as well as any guidance/coverage you believe that we should add.
  • Valuable information that you can contribute directly to the Migration Guide by working in our GitHub repository (sign in using your GitHub account).

Be sure to submit any feedback you might have by using the Tell us about your experience link that appears at the bottom of any migration scenario page within the Guide or by joining the conversation in our Gitter chat room.

Thanks in advance for your support!

Jim Toland, Senior Program Manager, Microsoft Database Migration Team


Over the past three years I've given many talks at conferences regarding .NET Core, implemented many solutions for customers using .NET Core, and helped consult with many organizations looking to move forward. No matter which way you slice it, it seems that .NET Core is the future, but there are still many questions out there regarding that future and how you move forward. In my time working with .NET Core I made some assumptions early on that are starting to come true. With all of the recent news and confusion, I thought this would be a good time to try and summarize where we are, and what I think it means.

I will apologize in advance for the length of this post; it will be long, and sadly mostly if not entirely text, but I think it is important to try and centralize all of this into a single post. So without further delay, here we go!

Recent .NET Core News

In the past few months Microsoft has released a lot of new information regarding the future direction of .NET Core, and the features that are being included in the upcoming releases. We are currently on .NET Core 2.1, with 2.2 (already at preview 3) and 3.0 coming in the near future. Looking back it has been a wild ride, as Version 1.0 was only released about 3 years ago.

I want to be explicitly clear: the information I am sharing here is all 100% public knowledge, based on news articles, upgrade experiences, and past records. I have no insider information on the direction, but if you piece together the news it is easy to see the future. For clarity, a few key articles help to support my hypothesis.

It is from these articles that we can start to form opinions on the direction of the platforms and what it means for us as developers, implementors, and system owners.

Microsoft’s Stance on .NET Framework

If you go digging into these articles, there is a very important quote from Microsoft that helps to explain future direction. It is important to start here in our understanding. Microsoft specifically has stated verbatim:

.NET Framework is the implementation of .NET that’s installed on over one billion machines and thus needs to remain as compatible as possible. Because of this, it moves at a slower pace than .NET Core. Even security and bug fixes can cause breaks in applications because applications depend on the previous behavior. We will make sure that .NET Framework always supports the latest networking protocols, security standards, and Windows features.

Now, what does this really mean? I think it is important to look at a few things to understand where this places us. To ensure we look at this fairly, we will do the same when we look at .NET Core.

Support & Lifecycle

Fundamentally this question is the biggest tell as it relates to the future of a platform. Sadly, you used to have to go digging for this to get the true picture. However, following the Lifecycle FAQ link I posted above we get this nugget of information.

Beginning with version 4.5.2 and later, .NET Framework is defined as a component of the Windows operating system (OS). Components receive the same support as their parent products, therefore, .NET Framework 4.5.2 and later follows the lifecycle policy of the underlying Windows OS on which it is installed.

Now, let’s be realistic, what does this really mean for us? As I look at this, it sets a future direction that gives us a very clear path. Following the official Support Lifecycle listing we see that mainstream support ends for Windows Server 2016 on 1/11/2022 and extended support ends 1/11/2027. From this information, it is a safe assumption that we have support, in some capacity for .NET 4.6 applications until 2027, remember that for later.

Feature Enhancements

Another key aspect to monitor is the development of feature enhancements, bug fixes, and related work. Microsoft has already admitted that the frequency of these updates will be limited due to a desire to maintain support. That is totally fine, as most of us prefer stability anyway.

They are still working to include feature enhancements, such as UWP support, high-resolution support, and modern browser support. The key question is what do you need added to .NET Framework to accomplish your tasks? For most people, this is not a huge concern.

Microsoft's Stance on .NET Core

Recent information from Microsoft finally gave more details on their thoughts on the role of .NET Core, summarized by Microsoft as:

.NET Core is the open source, cross-platform, and fast-moving version of .NET. Because of its side-by-side nature it can take changes that we can’t risk applying back to .NET Framework. This means that .NET Core will get new APIs and language features over time that .NET Framework cannot. At Build we showed a demo how the file APIs are faster on .NET Core. If we put those same changes into .NET Framework we could break existing applications, and we don’t want to do that.

This is where the future becomes clear! The nature of .NET Core allows Microsoft to embrace change. On the surface, this sounds great… but is it? Let's look at the same support aspects of .NET Core.

Support & Lifecycle

Support information on .NET Core has not always been managed with the highest level of grace. Thankfully we are at a point in the lifecycle of .NET Core that we have a formal plan. However, there are two different lifecycles with different lengths of time supported.

Long Term Support (LTS) Cycle

For developers desiring the longest support terms possible, it will be required to stay with LTS releases. Currently .NET Core 2.1.5 is an LTS-flagged release. For individuals using LTS releases, Microsoft explains the lifecycle as:

Customers using LTS will need the latest patch update installed to qualify for support. If a system is running 1.0 and 1.0.1 has been released, 1.0.1 will need to be installed as a first step. Once a patch update has been installed applications will begin using the update by default. LTS releases will be supported for 3-years after general availability, or for a 12 month Maintenance period after the next LTS release ships, whichever is longer.

This gives a 3-year minimum life to any project built on a particular version of .NET Core. You can continue to use the application after the end of the LTS period; however, you will then be using an application that doesn't have Microsoft support for patching, which is a risk.

Current Release Cycle

For those that want to leverage the latest and greatest, or those that want to move forward quickly, this is the cycle they will fall under. Microsoft support notes that all of the LTS requirements apply, and:

In addition to staying current with the latest patch update, customers using Current will need to update as new minor versions are released. The latest released minor version will become the minimum serviceable baseline after release. After a 3 month Maintenance period, the previous minor version will no longer be supported. For example, after 1.2 releases systems running version 1.1 will have 3 months to update to 1.2 to remain eligible for support. Applications do not automatically begin using the new minor update.

This is a KEY consideration for anyone looking to leverage .NET Core. It is possible that, without notice, you will have to upgrade to a later version within 3 months. Typically these updates are less likely to break things; however, that can be a VERY quick cycle.

Feature Enhancements

I'm including this discussion briefly just to keep the review consistent between .NET Core and .NET Framework. The reality is that .NET Core will continue to get many new features for the near future. Along with that, methodologies will continue to change and more breaking changes might be introduced, so buyer beware!

So What Does That Mean For Me?

Now that we have established common ground on the news, direction, support cycles, etc., we can have a detailed review of what this means for specific scenarios. I have had conversations with people in multiple industries, with many different backgrounds, as well as a constant discussion/argument inside of the DNN community on direction. Rather than trying to summarize in a general manner, I thought I'd go through a few common questions.

Do I need to rewrite my WebForms application?

My short answer to this is NO, you don't need to rewrite your application. Support for .NET Framework is guaranteed until at least 2027, which is a long, long runway for an application. It is true that the foundation under your application might not get as many updates, but you can rest assured that the platform will have support, that you will get security patches, and that your code will still be able to execute.

Future planning for these applications is important, however. If there is any desire to eventually migrate to .NET Core, the application will need to be rewritten. There is no support for WebForms in .NET Core and nothing on the horizon that indicates it is coming.

My stance with my consulting business IowaComputerGurus has been to look at migrating applications to .NET Core when they need a major facelift. This is not out of necessity, but out of a desire to keep as many customers on the same toolset for ease of support.

I’m creating a new application, what should I use?

In my mind, this answer is quite simple: .NET Core is the future, that is quite clear. However, there are certain situations where that might not be the best answer. Regardless of whether you work on the LTS or Current model with .NET Core, the upgrade process is something not to be forgotten. Organizations building on .NET Core need to ensure that they have proper support to move forward with these changes. It is not advisable to build and forget an application on .NET Core, as many organizations do today with .NET.

I have an existing ASP.NET MVC application, should I move it to .NET Core?

Transition to .NET Core in this situation is possible; although not automatic, it is still fairly easy to do. We have converted a number of applications, and each of them has benefitted from new features, improved performance, and a better developer experience. I would recommend that anyone with an existing MVC application migrate when possible; the transition isn't difficult and the benefits will be great.

I have an existing .NET Core application, but it runs on .NET Framework, what should I do?

This situation is one of the "odd duck" situations. Early on in the .NET Core days, Microsoft supported a model where you could have a .NET Core application that would run on .NET Framework. This was a bridge process, and with .NET Core 3.0 it has been announced that this model will no longer be supported for ASP.NET. This puts applications in this arena in a place where a change is mandatory, so I would recommend transitioning as soon as possible.

This transition should be much easier at this point in time because of all of the APIs added to .NET Core since the early days.

I’m building a new web application, what do I use?

For 99.9% of my customers this is an easy answer: .NET Core, period. My suggestion for most organizations would be to settle on LTS releases and be sure to plan your project to upgrade at least once a year, because the platform is changing.

For organizations, or solutions, that do not have a method to easily adopt the changes, it might be worth considering .NET Framework for the ability to let things sit for a longer period of time without updates.

Microsoft On The Future

Another nugget of information from Microsoft regarding the future eloquently supports my above statements.

If you have existing .NET Framework applications, you should not feel pressured to move to .NET Core. Both .NET Framework and .NET Core will move forward, and both will be fully supported, .NET Framework will always be a part of Windows. But moving forward they will contain somewhat different features. Even inside of Microsoft we have many large product lines that are based on the .NET Framework and will remain on .NET Framework.

Conclusion

The future here is charged, there are many variables in play, and every organization is going to be different. Support policies, features, and more might push one organization in one direction and another organization in a different direction. The key is to understand what it means to be on the platform that you are on and what that means to you.

I wish it was more "black and white", but the reality here is that we have great opportunities given to us; we just have to figure out how to leverage them for our particular purposes.


| SL No | Form Name | Remarks |
| --- | --- | --- |
| 1 | Account Creation | To add a new account and to edit or delete existing ledger accounts. |
| 2 | Account Group | To add a new account group and to edit or delete existing account groups. |
| 3 | Activation | For activation purposes. Through this window, we can manage our customers' subscriptions. |
| 4 | Admin Settings | To set the admin property settings of the application. |
| 5 | Application Settings | To customize the settings of the application. |
| 6 | Backup Database | The window to make a backup file of the database. |
| 7 | Balance Sheet | To show the balance sheet report of the firm. |
| 8 | Bank Payment | The window to enter payments made through the bank. |
| 9 | Bank Receipt | The window to enter receipts received through a bank account. |
| 10 | Bank Reconciliation | To reconcile bank entries by changing the date to the right date. |
| 11 | Barcode Designing | Used to design and print barcodes. |
| 12 | Bulk Bill Print | To print invoices in bulk, from serial number to serial number. |
| 13 | Brand Creation | The window to create, update and delete different brands of products. |
| 14 | Calendar Planning | To record events and plans by date. |
| 15 | Cash Payment | The window to enter cash payments made. |
| 16 | Cash Receipt | The window to enter cash receipts. |
| 17 | Change Date | To change the date of the application. |
| 18 | Change Password | To change the password of the current user. |
| 19 | Change Database | To change the application database. |
| 20 | Company Details | To enter the name and other details of the company. |
| 21 | Cheque Report | To view post-dated cheque receipt and payment reports. |
| 22 | Contact Us | If the customer wants to contact the developer, he can use this window. |
| 23 | Credit Note | To enter credit note transactions. |
| 24 | Custom Transaction Report | To generate a report other than the given reports. |
| 25 | Daily Account Statement | To view the daily account statement; in this window you can see all the transactions made on the current date. |
| 26 | Database Auto Backup | The window to back up the database when the application is closing. |
| 27 | Day Close | To close the day; after closing a day, we can't enter any transactions on that day. |
| 28 | Database Connection | Used to change a database connection and to create, back up, restore and configure databases; all database-related work is done here. |
| 29 | Debit Note | To record debit note transactions. |
| 30 | Discount Sales Report | To view the report of discount sales between fixed dates. |
| 31 | Email Settings | To save SMTP details and the user id and password of the email provider. |
| 32 | Employee Registration | To record data about employees such as name, designation, qualification and experience. |
| 33 | Expiry Stock Report | To view the report of products that have expired. |
| 34 | Export Data | To export data to another database. |
| 35 | Financial Period Settings | Used to set the financial period. |
| 36 | First Purchase | When we purchase a product from a supplier, we have to create the supplier, product group, brand and product details; this window lets us avoid all those steps. |
| 37 | Godown Creation | Used to create a new godown. |
| 38 | Godown Stock Report | To view product stock in a certain godown. |
| 39 | Group wise Sales Report | To view group-wise sales of a product for a period. |
| 40 | Inventory wise Sales Report | To view product-wise sales for a certain period. |
| 41 | Invoice Header and Footer | To set the header and footer of the invoice printing page. |
| 42 | Inventory Print Designing | To design the printing of invoices. |
| 43 | Invoice Settings | To set the properties of invoices. |
| 44 | Journal Entry | To record journal entries. |
| 45 | Journal Entry Report | To view journal entry transactions. |
| 46 | Ledger Report | To see the transactions made on an account in a certain period. |
| 47 | Lock Application | To lock the application; after locking, no data can be entered until the application is unlocked with the password of the logged-in user. |
| 48 | Login Application | The login window. |
| 49 | Main Form | The main navigation window (MDI parent), from which all other windows are opened. |
| 50 | My Message Box | To show custom messages. |
| 51 | Opening Balance | Used to enter or change opening balance entries. |
| 52 | Outstanding Report | To view debtor and creditor balances of customers and suppliers. |
| 53 | Party List | To view a detailed report of customers and suppliers. |
| 54 | Cash Payment Report | To view cash payment details for a certain period. |
| 55 | Bank Payment Report | To view bank payment details for a certain period. |
| 56 | Cash Receipt Report | To view cash receipt details for a certain period. |
| 57 | Bank Receipt Report | To view bank receipt details for a certain period. |
| 58 | Pending Order Report | To view pending sale or purchase orders. |
| 59 | Post Dated Cheque Receipt | To record the receipt of post-dated cheques from customers. |
| 60 | Post Dated Cheque Payment | To record the payment of post-dated cheques to suppliers. |
| 61 | Price Category | To record price categories. |
| 62 | Price Comparison Report | To view a price comparison report of products sold in a certain period. |
| 63 | Price List | To view the price of products in different categories. |
| 64 | Product Creation | To enter new products and to edit or delete existing products. |
| 65 | Product Group Creation | To create, edit and delete product groups. |
| 66 | Product Combination | To create a new combination of two or more products. |
| 67 | Product Details | To view details about a product. |
| 68 | Product Editing | To edit the group, brand, price category and tax category of a product. |
| 69 | Product Flow Report | To see the number of purchases, sales, sales returns, purchase returns and current stock. |
| 70 | Product Transaction Report | To see product transactions: quantities purchased from different parties and quantities sold to customers. |
| 71 | Product Tracing | To see stock fluctuations of products. |
| 72 | Profit And Loss Account | To see the profit and loss reports of the firm. |
| 73 | Profit Report | To see the profit from sales of products. |
| 74 | Purchase Invoice | The purchase invoice window. |
| 75 | Purchase Report | To view a detailed report of purchases for a certain period. |
| 76 | Purchase Return | The purchase return invoice window. |
| 77 | Purchase Return Report | To view a detailed report of purchase returns for a certain period. |
| 78 | Reminder | To set reminders. |
| 79 | Report Bugs | To report bugs found in the application to the developer. |
| 80 | Reset Database | To reset records entered in the application database. |
| 81 | Route | To create a new sales route and to edit or delete existing routes. |
| 82 | Salary Process | To calculate employee salaries. |
| 83 | Sales And Sales Return Report | To see the sales and sales returns for a certain period. |
| 84 | Sales Estimate | To enter a sales estimate entry (sales without tax). |
| 85 | Sales Estimate Report | To view the sales made without showing tax. |
| 86 | Sales Invoice | The sales invoice window. |
| 87 | Salesman Wise Report | To view salesman-wise sales for a certain period. |
| 88 | Sales Order | To record orders for new sales. |
| 89 | Sales Order Report | To view reports about orders for sales and purchases. |
| 90 | Sales Return | The sales return window, used to enter new sales return entries. |
| 91 | Sales Return Report | To see details about products returned by customers over a period. |
| 92 | Sales Summary Report | The sales summary report. |
| 93 | Sales View | To view cash sales or credit sales on a certain date. |
| 94 | Selected Financial Year | To select a financial year. |
| 95 | Send Mail | To send mail to customers from the application. |
| 96 | Shelf | To create a new shelf and to delete or edit existing shelves. |
| 97 | Show Product | To show the details of products sold to customers, such as price variations. |
| 98 | Splash Screen | The splash screen window. |
| 99 | Staff Attendance | The window to record staff attendance. |
| 100 | Stock Adjuster | To reconcile product stock. |
| 101 | Stock Report | To view the firm's stock for certain dates. |
| 102 | Stock Transfer | To transfer products from one godown to another. |
| 103 | Stock Transfer Report | To view reports about stock transfers. |
| 104 | Tax Category | To create a new tax category. |
| 105 | Tax Sales Report | To get the tax sales report to submit to the concerned person. |
| 106 | Tax Purchase Report | To get the tax purchase report to submit to the concerned tax person. |
| 107 | Transaction Without Inventory | To record all transactions without inventory. |
| 108 | Unit | To create a new unit of products and to delete or edit existing units. |
| 109 | Unsold Product Report | To view products not sold for a certain period. |
| 110 | User Management | The window to manage users and their privileges. |
| 111 | User Privilege | To create new privileges. |
| 112 | Vehicle | To create a new vehicle and to delete or edit existing vehicles. |
| 113 | Video Tutorial | To play a tutorial about the application. |

Monday, 12 November 2018 / Published in Uncategorized

Introduction

 

In this session, I will show you the steps to implement an Angular DataTable in an ASP.NET MVC application. There is no need to add filtering and pagination manually; you get all of these features by using DataTables, which is open source.

 

Description

 

DataTables is a powerful and easy-to-use jQuery plugin for displaying tabular data, with pagination, searching, state saving, multi-column sorting with data-type detection, and much more with zero or minimal configuration. When working with a large amount of data, DataTables performance issues can be resolved with server-side filtering, sorting, and paging, and many Angular grid tools are available for this.
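For large data sets, the server-side option mentioned above boils down to one pipeline the backend must run on every request: filter, then sort, then slice out one page. A minimal sketch in plain JavaScript (the function and field names are mine, and I have simplified the real DataTables request parameters such as draw, start, length, and search[value]):

```javascript
// Sketch of the filter -> sort -> page pipeline a server-side
// DataTables endpoint has to implement for large data sets.
function processRequest(rows, { search = '', sortKey, sortDesc = false, start = 0, length = 10 }) {
  // 1. Filter: keep rows where any field contains the search term.
  const filtered = rows.filter(r =>
    Object.values(r).some(v => String(v).toLowerCase().includes(search.toLowerCase()))
  );
  // 2. Sort on the requested column.
  if (sortKey) {
    filtered.sort((a, b) => {
      const cmp = a[sortKey] < b[sortKey] ? -1 : a[sortKey] > b[sortKey] ? 1 : 0;
      return sortDesc ? -cmp : cmp;
    });
  }
  // 3. Page: slice out exactly one page of results.
  return {
    recordsFiltered: filtered.length,
    data: filtered.slice(start, start + length),
  };
}
```

The real server-side protocol also echoes back draw and recordsTotal; the shape above only illustrates the filtering, ordering, and paging steps that the endpoint must run.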

 

Note

Before you go through this session, I suggest you visit my previous DataTable session, mentioned below, as I have used the same table structure, data, and Entity Data Model in both sessions, and both DataTable-based applications were created under the same project.

Steps to be followed,

 

Step 1

 

Add the required JS and CSS files, plus the 3 image files for the sorting icons, to the application; these are the dependencies of angular-datatables.

You can find these files in the attachment, and the structure should look as shown below.

 

 

Step 2

 

Here, I have added a JavaScript file in the "Scripts" folder for the Angular components (module, controller, etc.). Right-click on your "Scripts" folder > Add > New Item > select "JavaScript File" > enter a name (here, "myApp.js") > OK.

 

var app = angular.module('MyApp', ['datatables']);
app.controller('homeCtrl', ['$scope', '$http', 'DTOptionsBuilder', 'DTColumnBuilder',
    function ($scope, $http, DTOptionsBuilder, DTColumnBuilder) {
        $scope.dtColumns = [
            DTColumnBuilder.newColumn("FirstName", "First Name"),
            DTColumnBuilder.newColumn("LastName", "Last Name"),
            DTColumnBuilder.newColumn("Age", "Age"),
            DTColumnBuilder.newColumn("Address", "Address"),
            DTColumnBuilder.newColumn("City", "City"),
            DTColumnBuilder.newColumn("State", "State")
        ];

        $scope.dtOptions = DTOptionsBuilder.newOptions().withOption('ajax', {
            url: "/home/getdata",
            type: "POST"
        })
            .withPaginationType('full_numbers')
            .withDisplayLength(10);
    }]);

Code Description

 

I have included the datatables module in our module (here "MyApp") for implementing datatables in our AngularJS application.

var app = angular.module('MyApp', ['datatables']);

Here, I have defined the controller, with its injected dependencies as function parameters.

app.controller('homeCtrl', ['$scope', '$http', 'DTOptionsBuilder', 'DTColumnBuilder',
    function ($scope, $http, DTOptionsBuilder, DTColumnBuilder)

I have mentioned the columns of the table (the entity types, as loaded via the Entity Data Model) using a scope object. The scope is a JavaScript object which basically binds the controller and the view; member variables defined on the scope within the controller can then be accessed by the view.

$scope.dtColumns = [
    DTColumnBuilder.newColumn("FirstName", "First Name"),
    DTColumnBuilder.newColumn("LastName", "Last Name"),
    DTColumnBuilder.newColumn("Age", "Age"),
    DTColumnBuilder.newColumn("Address", "Address"),
    DTColumnBuilder.newColumn("City", "City"),
    DTColumnBuilder.newColumn("State", "State")
];

Here, another scope object specifies the controller action method getdata, which loads the data from the backend via the Entity Data Model.

$scope.dtOptions = DTOptionsBuilder.newOptions().withOption('ajax', {
    url: "/home/getdata",
    type: "POST"
})

Here, I have specified the UI pagination type and the number of records per page.

.withPaginationType('full_numbers')
    .withDisplayLength(10);
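With withDisplayLength(10), the full_numbers pager shows ten records per page; the number of page buttons it offers is simply the record count divided by the page length, rounded up. A quick sketch of that arithmetic (my helper, not part of angular-datatables):

```javascript
// Number of pages the "full_numbers" pager will offer for a given
// record count and page length (at least one, even when empty).
function pageCount(totalRecords, pageLength) {
  return Math.max(1, Math.ceil(totalRecords / pageLength));
}
```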

Step 3

 

Here, I have added an "Angular" action to the "Home" controller. Please write the following code.

public ActionResult Angular()
{
    return View();
}

Step 4

 

Add another action to your controller to get the data from the database for display in the DataTable.

public ActionResult getData()
{
    using (SatyaDBEntities dc = new SatyaDBEntities())
    {
        return Json(dc.employees.ToList());
    }
}
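getData serializes whole entity objects, but the grid only binds the six properties listed in dtColumns. Projecting the rows down to those fields keeps the JSON payload small; here is the same projection sketched in plain JavaScript (the field names come from dtColumns above, the helper itself is mine):

```javascript
// Only these six fields are bound by dtColumns in myApp.js.
const GRID_FIELDS = ['FirstName', 'LastName', 'Age', 'Address', 'City', 'State'];

// Drop every property the grid does not display before serializing.
function projectForGrid(employees) {
  return employees.map(e =>
    Object.fromEntries(GRID_FIELDS.map(f => [f, e[f]]))
  );
}
```

In the C# action, the equivalent would be a Select into an anonymous type with those six properties before calling Json.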

Step 5

 

Add a view for your action and design it to show the data in the DataTable.

 

Right-click on Action Method (here right-click on Angular action) > Add View… > Enter View Name > Select View Engine (Razor) > Add.

 

Code Ref

@{
    ViewBag.Title = "Angular Datatables";
}

<h2 style="color: blue">Satyaprakash-Angular Datatable Using EF and Bootstrap</h2>

<link href="~/css/bootstrap.css" rel="stylesheet" />
<link href="~/css/jquery.dataTables.min.css" rel="stylesheet" />

<script src="~/Scripts/jquery.js"></script>
<script src="~/Scripts/jquery.dataTables.js"></script>
<script src="~/Scripts/angular.js"></script>
<script src="~/Scripts/angular-datatables.js"></script>

<script src="~/Scripts/myApp.js"></script>

<div ng-app="MyApp" class="container">
    <div ng-controller="homeCtrl">
        <table id="entry-grid" datatable="" dt-options="dtOptions" dt-columns="dtColumns" class="table table-hover">
        </table>
    </div>
</div>

Code Description

     

    For datatable CSS

    1. <link href="~/css/bootstrap.css" rel="stylesheet" />  
    2. <link href="~/css/jquery.dataTables.min.css" rel="stylesheet" />  

    JS for AngularJS and Datatable 

    <script src="~/Scripts/jquery.js"></script>
    <script src="~/Scripts/jquery.dataTables.js"></script>
    <script src="~/Scripts/angular.js"></script>
    <script src="~/Scripts/angular-datatables.js"></script>

    JS for our AngularJS module, controller. 

    <script src="~/Scripts/myApp.js"></script>

    Here, in a div tag inside the HTML, I have added the Angular module with the controller name.

    <div ng-app="MyApp" class="container">
        <div ng-controller="homeCtrl">

    Then, in the table element, I have mentioned the Angular scope objects, that is, dtOptions and dtColumns, with the CSS classes described in the earlier step.

    <div ng-app="MyApp" class="container">
        <div ng-controller="homeCtrl">
            <table id="entry-grid" datatable="" dt-options="dtOptions" dt-columns="dtColumns" class="table table-hover">
            </table>
        </div>
    </div>

    OUTPUT

     

    During Page load initially.

     

     

    It supports pagination, searching, state saving, and multi-column sorting with data-type detection.

     

    SUMMARY 

    • Implemented an Angular DataTable using MVC and Bootstrap.
    • Built the DataTable with Bootstrap using the Angular DataTables dependencies.
    • The features of DataTables make coding easier.
    • Loaded the DataTable using Entity Framework.
    Monday, 12 November 2018 / Published in Uncategorized

    • Create a table in a database
    • Configure Entity Framework with database and application
    • Get Envelope Information
    • Store Envelope Information
    • Get Envelope By Envelope Id

    Create a table in a database

     

    First, we will create a table in SQL Server to store the details of the envelope. I have created a table called "Recipient" with the following design.

     

     

    Execute the below query to create a table with the above design.

    CREATE TABLE [dbo].[Recipient](
        [Id] [bigint] IDENTITY(1,1) NOT NULL,
        [Name] [nvarchar](250) NULL,
        [Email] [nvarchar](300) NULL,
        [EnvelopeID] [nvarchar](max) NULL,
        [Documents] [varbinary](max) NULL,
        [Description] [nvarchar](max) NULL,
        [Status] [varchar](100) NULL,
        [SentOn] [datetime] NULL,
        [UpdatedOn] [datetime] NULL,
        CONSTRAINT [PK_Recipient] PRIMARY KEY CLUSTERED
        (
            [Id] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
    ) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]
    GO

    Configure Entity Framework with database and application 

     

    Here, I have already discussed how to configure and implement a database-first approach; in the meantime, map your created table with Entity Framework. Once the SQL table "Recipient" from the CSharpCorner database is configured with our application, we will see the below screen after a successful configuration.

     

     

    Get Envelope Information

     

    In our previous article, we discussed how to create a signature request on a document for a recipient. We successfully sent the signature request and got the envelope ID and much more information, but we didn't store it anywhere for future reference. We need to store the envelope information, because only with it can we track or get more information from DocuSign. Every signature request returns a unique envelope ID, so we need to store it in our database; we can't get any information without the envelope ID. Let's create a new signing request, get the envelope information, and store it in our database.

     

    As per the previous article, we get enough information in the below envelopeSummary object; it contains Status, StatusDateTime, and EnvelopeId. So, let's store that information in our database.

    EnvelopeSummary envelopeSummary = envelopesApi.CreateEnvelope(accountId, envDef);
    // print the JSON response
    var result = JsonConvert.SerializeObject(envelopeSummary);
    Recipient recipient = new Recipient();
    recipient.Description = envDef.EmailSubject;
    recipient.Email = recipientEmail;
    recipient.Name = recipientName;
    recipient.Status = envelopeSummary.Status;
    recipient.Documents = fileBytes;
    recipient.SentOn = System.Convert.ToDateTime(envelopeSummary.StatusDateTime);
    recipient.EnvelopeID = envelopeSummary.EnvelopeId;
    CSharpCornerEntities cSharpCornerEntities = new CSharpCornerEntities();
    cSharpCornerEntities.Recipients.Add(recipient);
    cSharpCornerEntities.SaveChanges();
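The code above is a field-by-field copy from the CreateEnvelope response onto a Recipient row. For clarity, here is that same mapping as a standalone function in plain JavaScript (the property names mirror the C# model; the sample values used in testing are mine):

```javascript
// Copy the interesting parts of a CreateEnvelope response, plus the
// request context, into the shape of the Recipient table row.
function toRecipientRow(envelopeSummary, ctx) {
  return {
    Name: ctx.name,
    Email: ctx.email,
    Description: ctx.subject,
    Documents: ctx.fileBytes,
    Status: envelopeSummary.status,
    SentOn: new Date(envelopeSummary.statusDateTime),
    EnvelopeID: envelopeSummary.envelopeId,
  };
}
```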

    Now, run your application.

     

    In the above image, you can see that we got all the information, and the below image shows that our data has been stored in the database. Now, we can proceed with many things with the help of this envelope ID and status.

     

    Get Envelope By Envelope Id

     

    Now, let's try to pull the envelope information by passing the envelopeId.

    public ActionResult getEnvelopeInformation() {
        ApiClient apiClient = new ApiClient("https://demo.docusign.net/restapi");
        Configuration.Default.ApiClient = apiClient;
        // provide a valid envelope ID from your account.
        string envelopeId = "3e66307b-e9ba-42a0-9f9e-b3a7016fee2a"; // Enter stored envelope ID
        MyCredential myCredential = new MyCredential();
        // call the Login() API which sets the user's baseUrl and returns their accountId
        string accountId = loginApi(myCredential.UserName, myCredential.Password);
        //===========================================================
        // Step 2: Get Envelope Information
        //===========================================================
        // |EnvelopesApi| contains methods related to creating and sending Envelopes including status calls
        EnvelopesApi envelopesApi = new EnvelopesApi();
        Envelope envInfo = envelopesApi.GetEnvelope(accountId, envelopeId);
        return View();
    } // end requestSignatu
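Envelope IDs are GUIDs (like the hard-coded "3e66307b-e9ba-42a0-9f9e-b3a7016fee2a" above), so it is worth validating the format before spending a round trip on GetEnvelope. A small sketch of that check, which is my addition, shown in JavaScript:

```javascript
// True when the string looks like a DocuSign envelope ID (a GUID).
function isEnvelopeId(id) {
  return /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i.test(id);
}
```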

    Now, run your application. In the below image, you can see that we got much information by passing the envelope ID; we will use all of these properties in the future, one by one.

     

     Complete controller

    using DocuSign.eSign.Api;
    using DocuSign.eSign.Client;
    using DocuSign.eSign.Model;
    using DocusignDemo.Models;
    using Newtonsoft.Json;
    using System.Collections.Generic;
    using System.IO;
    using System.Web;
    using System.Web.Mvc;
    using Document = DocuSign.eSign.Model.Document;

    namespace DocusignDemo.Controllers {
        public class DocusignController : Controller {
            MyCredential credential = new MyCredential();
            private string INTEGRATOR_KEY = "Enter Integrator Key";

            public ActionResult SendDocumentforSign() {
                return View();
            }

            [HttpPost]
            public ActionResult SendDocumentforSign(DocusignDemo.Models.Recipient recipient, HttpPostedFileBase UploadDocument) {
                Models.Recipient recipientModel = new Models.Recipient();
                string directorypath = Server.MapPath("~/App_Data/" + "Files/");
                if (!Directory.Exists(directorypath)) {
                    Directory.CreateDirectory(directorypath);
                }
                byte[] data;
                using (Stream inputStream = UploadDocument.InputStream) {
                    MemoryStream memoryStream = inputStream as MemoryStream;
                    if (memoryStream == null) {
                        memoryStream = new MemoryStream();
                        inputStream.CopyTo(memoryStream);
                    }
                    data = memoryStream.ToArray();
                }
                var serverpath = directorypath + recipient.Name.Trim() + ".pdf";
                System.IO.File.WriteAllBytes(serverpath, data);
                docusign(serverpath, recipient.Name, recipient.Email);
                return View();
            }

            public string loginApi(string usr, string pwd) {
                // we set the api client in global config when we configured the client
                ApiClient apiClient = Configuration.Default.ApiClient;
                string authHeader = "{\"Username\":\"" + usr + "\", \"Password\":\"" + pwd + "\", \"IntegratorKey\":\"" + INTEGRATOR_KEY + "\"}";
                Configuration.Default.AddDefaultHeader("X-DocuSign-Authentication", authHeader);
                // we will retrieve this from the login() results
                string accountId = null;
                // the authentication api uses the apiClient (and X-DocuSign-Authentication header) that are set in Configuration object
                AuthenticationApi authApi = new AuthenticationApi();
                LoginInformation loginInfo = authApi.Login();
                // find the default account for this user
                foreach (DocuSign.eSign.Model.LoginAccount loginAcct in loginInfo.LoginAccounts) {
                    if (loginAcct.IsDefault == "true") {
                        accountId = loginAcct.AccountId;
                        break;
                    }
                }
                if (accountId == null) { // if no default found set to first account
                    accountId = loginInfo.LoginAccounts[0].AccountId;
                }
                return accountId;
            }

            public void docusign(string path, string recipientName, string recipientEmail) {
                ApiClient apiClient = new ApiClient("https://demo.docusign.net/restapi");
                Configuration.Default.ApiClient = apiClient;
                // Verify account details
                string accountId = loginApi(credential.UserName, credential.Password);
                // Read a file from disk to use as a document.
                byte[] fileBytes = System.IO.File.ReadAllBytes(path);
                EnvelopeDefinition envDef = new EnvelopeDefinition();
                envDef.EmailSubject = "Please sign this doc";
                // Add a document to the envelope
                Document doc = new Document();
                doc.DocumentBase64 = System.Convert.ToBase64String(fileBytes);
                doc.Name = Path.GetFileName(path);
                doc.DocumentId = "1";
                envDef.Documents = new List<Document>();
                envDef.Documents.Add(doc);
                // Add a recipient to sign the document
                DocuSign.eSign.Model.Signer signer = new DocuSign.eSign.Model.Signer();
                signer.Email = recipientEmail;
                signer.Name = recipientName;
                signer.RecipientId = "1";
                envDef.Recipients = new DocuSign.eSign.Model.Recipients();
                envDef.Recipients.Signers = new List<DocuSign.eSign.Model.Signer>();
                envDef.Recipients.Signers.Add(signer);
                // set envelope status to "sent" to immediately send the signature request
                envDef.Status = "sent";
                // |EnvelopesApi| contains methods related to creating and sending Envelopes (aka signature requests)
                EnvelopesApi envelopesApi = new EnvelopesApi();
                EnvelopeSummary envelopeSummary = envelopesApi.CreateEnvelope(accountId, envDef);
                // print the JSON response
                var result = JsonConvert.SerializeObject(envelopeSummary);
                Recipient recipient = new Recipient();
                recipient.Description = envDef.EmailSubject;
                recipient.Email = recipientEmail;
                recipient.Name = recipientName;
                recipient.Status = envelopeSummary.Status;
                recipient.Documents = fileBytes;
                recipient.SentOn = System.Convert.ToDateTime(envelopeSummary.StatusDateTime);
                recipient.EnvelopeID = envelopeSummary.EnvelopeId;
                CSharpCornerEntities cSharpCornerEntities = new CSharpCornerEntities();
                cSharpCornerEntities.Recipients.Add(recipient);
                cSharpCornerEntities.SaveChanges();
            }

            public ActionResult getEnvelopeInformation() {
                ApiClient apiClient = new ApiClient("https://demo.docusign.net/restapi");
                Configuration.Default.ApiClient = apiClient;
                // provide a valid envelope ID from your account.
                string envelopeId = "Enter Stored Envelope Id";
                MyCredential myCredential = new MyCredential();
                // call the Login() API which sets the user's baseUrl and returns their accountId
                string accountId = loginApi(myCredential.UserName, myCredential.Password);
                //===========================================================
                // Step 2: Get Envelope Information
                //===========================================================
                // |EnvelopesApi| contains methods related to creating and sending Envelopes including status calls
                EnvelopesApi envelopesApi = new EnvelopesApi();
                Envelope envInfo = envelopesApi.GetEnvelope(accountId, envelopeId);
                return View();
            } // end requestSignatu
        }

        public class MyCredential {
            public string UserName { get; set; } = "Enter your username";
            public string Password { get; set; } = "Enter your Password";
        }
    }
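One fragile spot in the listing is loginApi, which builds the X-DocuSign-Authentication header by concatenating strings; a username or password containing a quote would corrupt the JSON. Serializing the three fields instead avoids that, sketched here in JavaScript (the field names are taken from the C# above):

```javascript
// Build the legacy X-DocuSign-Authentication header value by
// JSON-encoding the credentials instead of concatenating strings,
// so embedded quotes are escaped correctly.
function buildAuthHeader(username, password, integratorKey) {
  return JSON.stringify({
    Username: username,
    Password: password,
    IntegratorKey: integratorKey,
  });
}
```

In the C# controller, the same idea is JsonConvert.SerializeObject over an anonymous object with those three properties.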

    Refer to the attached project. I attached the demonstrated project without the packages folder due to the size limit. Look out for my next DocuSign article to learn more about this.

     

    Summary

     

    In this article, we discussed how to maintain the DocuSign envelope information in our database using Entity Framework, and how to get envelope information by envelope ID. I hope it helps you out. Your valuable feedback and comments about this article are always welcome.

    Monday, 12 November 2018 / Published in Uncategorized

    Web designers have gotten used to turning errors into opportunities. It’s no secret that the common, most widely occurring (and surprisingly recognizable) HTTP status code 404, aka “Not Found,” was forced by developers to bring benefits to the project. In the past it scared away users, destroyed the overall impression and was a nightmare for developers. Everyone wanted it to disappear once and for all.

    Today, it is an essential detail of a website. WordPress even has a specifically assigned template for it. The “404 page” is an integral element of user experience. In the majority of cases, it has not only a beautiful design but also a theme that is aimed to contribute to the entire aesthetic of a website.


    Along with the well-thought-out design, interactions and even animations, it includes useful links and getaways that help lost users to get back on track. However, the "404" error in web design is like hipsters in the real world: They still catch our eye with their dorky glasses, "vintage" shirts and beards, but they are nothing new to us. As for the "403" error, that's a different story. It's not as popular as its next of kin, but still, it occurs, and not once in a blue moon.

    Just for background, HTTP status code 403, aka “403 Forbidden”, means that you do not have permission to access the page. Reasons can vary, starting with inappropriate folder permissions and ending with a banal requirement of login credentials. Nevertheless, the rule of thumb dictates that any error is an opportunity to effect improvements. So why not turn the dummy “403 page” into a place that will serve the same duty as the “404 page”?

    Let’s consider a dozen splendid takes on this type of error. They not only serve as a source of inspiration but also a source of ready-to-use solutions.

    You Shall Not Pass

    “You shall not pass” – was said once by one of the most powerful white-bearded wizards in the fictional world (I hope all the fans of Dumbledore forgive me for this). The final phrase of Gandalf the Grey (note Grey, not White) perfectly fits into the context here. And Noah Rodenbeek, A van Hagen and Jhey show this in practice. Their code snippets are impregnated with a spirit and charm of “The Hobbit” novel. While the first two artists re-created Gandalf with his staff, the latter just hinted at the scene, yet quite successfully.

    HODOR 403

    If the motifs from fictional novels featured above are not enough for you, then you should set your eyes on this code snippet from Yasio. Surprise-surprise, he got his inspiration from George R. R. Martin’s series of fantasy novels. He has come up with a work called HODOR 403. I believe for the majority out there this solution is self-explanatory. For the rest, I recommend switching to HBO and seeing for yourself why this fictional character goes perfectly well with this type of an error.

    Use of Illustration and Animation

    Other solutions in our list were guided by the notion that “403” symbolizes a forbidden area so that animated and static CSS illustrations were recreated namely with this idea in mind.

    Error 403 – Forbidden by Aimie depicts a classic scene from the fairy tale. The animated bats, witch’s house, bare trees and creepy typography that are featured in the hours of darkness certainly do the trick here.

    403 Forbidden by Dylan and 403 Forbidden by Visual Composer have some unique medieval allure. “Close the Gates”: The projects evoke namely these associations. The first one features the classic wooden guard gate door that closes before your very eyes; the second one also goes for a guard gate topic and depicts a mechanism with cogs and chains that reveals the forbidden sign.

    Arturo Wibawa’s vision of the forbidden area is presented via marvelous, highly-detailed and even partially animated CSS illustration of the famous Chinese ‘The Purple Forbidden City,’ aka Palace Museum nowadays.

    It’s Watching You

    403 Forbidden Page by Mariana is marked by a whimsical monster-like character that, thanks to direction-aware effect, follows your mouse cursor everywhere. It recreates a feeling of being watched all the time. It also imitates a fancy fairy guard that does not allow moving forward. The project feels fun in spite of the “menacing” look of the mascot.

    Be Persistent

    Gabriele Corti also offers a vision of a "403" error page. His "Persistence is a key" project depicts an entire process of initially denying access and granting it after the right user action. The right action involves inserting a key into a keyhole. Nevertheless, you can always use this concept as a base for some advanced actions, like inputting a login and password.

    Keeping it Simple

    403 by lsgrrd is an oversimplified take on a “403 Page” that certainly has a right to exist. It is minimal but straight to the point. It has a certain digital quality that oozes techno vibe inherent to the computer sphere. The blinking cursor at the end in tandem with the digital typography produces a fantastic effect. The solution is clean, elegant and straightforward.

    Are You on the List?

    We are going to end our collection with the project made by Cassidy Williams. Unlike the majority featured here, this solution is a metaphor from the real world that illustrates the typical situation in any popular nightclub. The bouncer is the heart and soul of this code snippet. The character was even partially animated to make everything look lifelike.

    Another Opportunity to Engage Users

    Truth be told, “403 Error” is not as widespread as “404 Error”, nor is it as popular and recognizable. Nevertheless, it still exists and occurs time after time. That creates a hole in a website that can break the entire user experience. So, seize the opportunity and turn it into a valid part of the project. It will undoubtedly win over some new visitors and will prevent you from losing the old ones.

    The post Access Denied: 403 Page Code Snippets appeared first on Speckyboy Web Design Magazine.






    Monday, 12 November 2018 / Published in Uncategorized

    21 Mid Century Modern Fonts that Capture the 1950s and 60s

    Fonts that are SO Mid-Century

    Mid-Century modern fonts capture the optimism and hope that embodied the postwar world, with fun and bold lines and dramatic swashes.

    I’ve always found it extremely interesting how the culture of an era can influence the design and style of that time.

    That sounds super pretentious, so let’s move along.

    Mid-century fonts are a perfect example of a culture being seen in design. In the 50s and 60s, the world was recovering from a horrific world war. People were trying to move on from the trauma and focus on the good and the happy.

    That spirit of optimism is seen in the designs of that era, including in the fonts. Think about the lettering used in classic TV shows, like I Love Lucy, The Honeymooners, and Leave it to Beaver. They were fun, quirky, and lighthearted.

    The style is classic and fonts are still being modeled after them to this day.

    We’ve collected a list of premium and free mid-century modern fonts that capture the spirit of the 50s and 60s.

    Let us know your favorites in the comments below!

    Drive-In Mid-Century Font – $7

    It almost seems unfair to start with the best, because it’ll be all downhill from here. But, alas, here we are! Medialoot has created this gorgeous mid-century sans serif typeface that embodies the style and feel of that era.

    With your purchase, you’ll get both the solid and inline varieties of the font.

    Stiff Staff Font – Free

    Stiff Staff is an exaggerated, decorative font that has all the precise lines and edges that were so well-loved in the 1950s and 1960s. The free font comes with all the symbols and letterings you could ever need, so go get it!

    Milan Vintage Sans Serif Font – $15

    Milan has found a way to blend a modern look with the more aged feel of a sans serif font. The vintage typeface is an all-caps option that works in body text, headers, and logos alike.

    Remachine Script – Free

    It doesn’t get more mid-century than Remachine Script. The font is dramatic and bold, with thick lines and graceful curves. It screams old-timey diner or drive-in movie, all the vibes you want for a mid-century, vintage font.

    Windpeak Script Font – $14

    What makes Windpeak so unique is that it was actually modeled after American apparel from the 1950s. So, when we call this a genuine mid-century font, we’re not joking. The swirls and vertical strokes make it feel vintage and rustic, something that is felt in every letter.

    The Lunch Box Font Set – $28

    The Lunch Box Font Set is the epitome of the 1950s and 1960s. And with this pack, you don’t just get one amazing font, you’ll get 10! From a neon hybrid font to a typeface that is reminiscent of the show Bewitched, there is no shortage of mid-century options for you!

    KTF Roadstar Font – Free

    KTF Roadstar is a simple, but effective, font that embraces all that is mid-century typeface. The strong lines, the dramatic swirls, and the hand-drawn feeling are all elements that make KTF Roadstar such a versatile font option.

    Airstream Font – Free

    If you’re looking for a script that cries retro, Airstream is the choice for you. With its thick yet subtle lines, Airstream is perfect for branding or logos.

    Lucy Script – Free

    What is more 1950s than I Love Lucy? The classic show is memorialized in this classic mid-century font. Lucy Script is so dainty and so beautiful, it’s best used in moderation. Then again, when did Lucy ever do anything in moderation?

    The Brown Bag Font Set – $28

    We’re going super classic with this set of gorgeous fonts. The Brown Bag pack comes with 10 diverse and unique fonts that will be perfect for any of your mid-century projects.

    Avelana Bold Font – Free

    This is one of the more delicate font options on this list and also a fast favorite. Avelana, which looks like it could grace a Cuban nightclub sign, comes in thin, medium, and bold weights. That variety makes it both gorgeous and versatile.

    The Roxers Typeface – $13

    Coming in Normal and Rough forms, The Roxers Typeface is a handmade font that is beautifully retro and vintage. The font is also very versatile, making it an ideal option for branding, ads, print, and textiles.

    Hamburger Heaven – Free

    With its 1950s feel, Hamburger Heaven is the perfect mid-century font for headings and titles. The freebie comes with classic straight lines and slight swishes to add a bit of drama.

    Madness Hyperactive Font – Free

    Madness Hyperactive lives up to its name. The quirky typeface has a retro comic feel that makes it a perfect ode to 1950s cartoons.

    TV Dinner Font Set – $28

    Not all of the fonts that come in this set are specifically mid-century, but we had to include the pack because it really has some gems. Singlesville Script and Dry Cleaners both point to 1950s and 1960s styles.

    American Captain Font – Free

    American Captain gets its inspiration from a very obvious source. And while that specific story might have taken place in the 1940s, the style carried through the 50s and 60s.

    Seaside Resort Font – Free

    Maybe it’s just me, but Seaside Resort reminds me of Agatha Christie novels. The shadowed, thick lettering brings to mind old seaside villas where there’s bound to be a murder. Just saying.

    Casino Buffet Font Set – $28

    Here is another set of amazing mid-century fonts that just scream the golden age of television. The various typefaces range from the classic, like Lamplighters Script, to the zany, like Mirage Zanzibar.

    Studebaker Font – Free

    Like the classic car it was named after, Studebaker is a sleek and timeless font. The clean lines make it perfect for headers and branding, but the varying weights give it a fun personality.

    Palm Canyon Drive – $19

    Inspired by 1950s California, Palm Canyon screams old Hollywood. Because the font was based on retro road signs, matchbooks, and postcards, it is perfect for social media and branding. Of course, if you have a diner sign to make, no font will serve you better!

    The Boller Typeface – $15

    Boller might be one of the quirkiest font options on this list. But its perfect blend of mid-century modern and 1960s drama makes it a great choice for print, stationery, and textiles.
