Tuesday, 29 May 2018 / Published in Uncategorized

I am currently part of a closed beta program testing the all-new cloud-based Transport Management Service. It will offer functionality comparable to the ABAP-based enhanced Change and Transport System (CTS+), but without the need for an SAP NetWeaver application server coordinating the transports. Additionally, the Transport Management Service allows not only the transport of development artifacts but also covers application-specific content.

We have now reached a high enough maturity level of the service that I can give a preview of what will be coming…



The Transport Management Service is a service running in the Cloud Foundry environment:

It allows you to model transport landscapes in which so-called transport nodes represent, for example, Neo subaccounts or Cloud Foundry spaces. The nodes are connected to the ‘real’ transport targets via destinations using the standard SAP Cloud Platform destination service. The flow of the content to be transported is modeled with transport routes specifying a source and a target node. Using several transport routes, you can model larger or more complex transport landscapes.

The architecture of the Transport Management Service is flexible enough to support a wide variety of transportable content as well as different types of target environments.

Currently we are transporting SAP Cloud Platform Integration (CPI) packages, Multi-Target Applications (MTA) and SAP HANA XS classic model delivery units. The first example in particular shows an area which has not yet been covered by existing solutions: the transport of application-specific content. We are planning to support further scenarios here in the near future.

On the target environment side we are currently supporting SAP Cloud Platform Neo subaccounts and SAP HANA XS classic databases running in SAP Cloud Platform. The next step will be SAP Cloud Platform Cloud Foundry spaces.

Now, let’s have a brief look into the Transport Management Service.


Entry Screen

The entry screen gives an overview of successful, failed and pending transports. It also allows navigation to configuration activities, log files and documentation.


Transport Nodes

The Transport Nodes view shows all configured Transport Nodes. It allows the configuration of new nodes, as well as changing and deleting existing nodes. All changes to the configured landscape are tracked in the Landscape Action Logs.

The Transport Nodes view also gives access to the import queues of the nodes:

Here you can find the transport requests which are targeting this node. You can trigger the import of the transport requests and also access the logs of import activities.


Transport Routes

This view shows the Transport Routes connecting two Transport Nodes (source and target). By combining several routes you can set up more complex landscapes. In this example, I have set up a linear landscape (ConsDev -> ConsTest -> ConsProd) and a star-shaped landscape with one source (StarSource) and three targets (Star1, Star2 and Star3).



As mentioned above, the Transport Management Service is currently in beta testing. We are planning general availability later this year.

Although we are still in beta, the documentation is already available, if you would like to read more…

One of our beta testers already wrote a blog about his first experiences. I am also planning to provide you with further blogs about the Transport Management Service.

So stay tuned!




This is Chandra Gajula, working at Mindtree Limited, Bangalore.

This blog will help you establish a navigation (link) call to another application (UI) from a custom field in a custom OWL.

Most custom requirements for a new solution are designed around interaction with existing business applications such as Customer, Sales Order, Opportunity, etc. That means a few standard fields are attached to the custom fields, which then form a report, an OWL UI, or similar.

For example, the customer account UI (TI – Thing Inspector) can be called from a custom OWL UI. A standard customer account field such as “Account Number” is added to the OWL and assigned “Link” as the Display Type in its properties.


First, add an association to Customer in the custom BO:

association ToCustomer to Customer.

In the OWL UI, add a new field and bind it to SAP_ToCustomer under the custom BO (select BO Model).

Go to the Controller and create an Outport:

Add a Key Parameter bound to SAP_ToCustomer, as shown below.

The SAP_ToCustomer field is added automatically when you add the association to Customer in the BO.

Create OBN Navigation:

Create a new OBN Navigation and assign the standard “Customer” BO Model under the Business Partner/Global namespace. This function navigates to the Customer Account TI UI with the help of the Outport created above.

Now, Create Event Handler to trigger outport:

Create a new event handler to fire the outport, as shown below.


I hope this gives you a basic idea of UI customization for navigation.



Chandra Gajula.




In a previous post, we discussed the new features of ASP.NET vNext. In this post, we will discuss the top new features and enhancements introduced in Microsoft C# 6.0. But before we dive deeper into the features of the latest version of C#, let’s look at how C# has evolved and what key features were introduced in the previous versions.

New Features in C# 6.0

Auto Property Initializer

This feature enables us to set the value of a property right where it is declared. Previously, we used a constructor to initialize auto-properties to non-default values, but with this new feature in C# 6.0 a constructor is no longer required for that, as shown below:
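The code screenshot is missing here; a minimal sketch of the feature (the class and property names are my own, illustrative ones):

```csharp
using System;

public class Customer
{
    // Getter/setter auto-property initialized at its declaration;
    // no constructor needed for the non-default value.
    public string Country { get; set; } = "Unknown";

    // Getter-only auto-property: it can only be assigned here or in a
    // constructor, which makes immutability easy to achieve.
    public Guid Id { get; } = Guid.NewGuid();
}
```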

Note: We can use this feature on getter/setter as well as getter-only properties, as demonstrated above. Using getter-only properties helps to easily achieve immutability.

Exception Filters

Microsoft introduced this CLR feature to C# in version 6.0, but it was already available in Visual Basic and F#. To use an exception filter in C#, we declare the filter condition on the same line as the catch block, and the catch block executes only if the condition is met, as shown below:
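The screenshot is missing here; a sketch of the kind of filter described (the method names are illustrative, and SqlException lives in System.Data.SqlClient):

```csharp
try
{
    ExecuteQuery(); // hypothetical database call
}
catch (Exception ex) when (!(ex is SqlException))
{
    // This block runs only when the exception is NOT a SqlException;
    // SqlExceptions continue to propagate up the stack.
    LogError(ex);
}
```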

The above code checks whether the exception that occurred is of type SqlException; if it is not, the catch block handles the exception, while SqlExceptions continue to propagate.

Here is another example that shows how we can check the Message property of the exception and specify a condition accordingly.
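Again the screenshot is gone; a sketch of filtering on the Message property (the message text and method names are assumptions):

```csharp
try
{
    ProcessOrder(); // hypothetical operation
}
catch (Exception ex) when (ex.Message.Contains("timeout"))
{
    // Handles only failures whose message mentions a timeout;
    // every other exception propagates unhandled.
    Retry();
}
```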

Note: Remember that Exception Filters are more of a debugging feature than a coding feature. For a very nice and detailed discussion, please follow here.

await in catch and finally block

We frequently log exceptions to a file or a database. Such operations are resource-intensive and lengthy, as they involve I/O. In such circumstances, it would be great if we could make asynchronous calls inside our exception blocks. We may also need to perform cleanup operations in a finally block, which can be resource-intensive as well. C# 6.0 allows await in both catch and finally blocks.
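A sketch of what this looks like (the async helper methods here are hypothetical):

```csharp
using System;
using System.Threading.Tasks;

public async Task SaveAsync()
{
    try
    {
        await WriteRecordsAsync();
    }
    catch (Exception ex)
    {
        // Before C# 6.0, await was not allowed inside a catch block.
        await LogToDatabaseAsync(ex);
    }
    finally
    {
        // ...nor inside finally. Now async cleanup is straightforward.
        await CloseConnectionAsync();
    }
}
```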

Dictionary Initializer

As opposed to the older way of initializing a dictionary, C# 6.0 introduces a cleaner way for dictionary initialization, as follows:

Previously, the same was done in the following way:
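The screenshots are missing here; sketches of both forms (the key/value content is illustrative):

```csharp
using System.Collections.Generic;

// C# 6.0 index-initializer syntax
var statusText = new Dictionary<int, string>
{
    [404] = "Not Found",
    [500] = "Server Error"
};

// The older (pre-6.0) collection-initializer syntax
var statusTextOld = new Dictionary<int, string>
{
    { 404, "Not Found" },
    { 500, "Server Error" }
};
```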

Important Note: The Primary Constructor feature has been removed from C# 6.0. For more details, please follow here.



The post Top New Features of C# 6.0 appeared first on Web Development Tutorial.


This post was inspired by Scott Brady's recent post on implementing "passwordless authentication" using ASP.NET Core Identity. In this post I show how to implement his "optimisation" suggestions to reduce the lifetime of "magic link" tokens.

I start by providing some background on the use case, but I strongly suggest reading Scott's post first if you haven't already, as mine builds strongly on his. I'll show:

I'll start with the scenario: passwordless authentication.

Passwordless authentication using ASP.NET Core Identity

Scott's post describes how to recreate a login workflow similar to that of Slack's mobile app, or Medium:

Instead of providing a password, you enter your email and they send you a magic link:

Clicking the link automatically logs you into the app. In his post, Scott shows how you can recreate the "magic link" login workflow using ASP.NET Core Identity. In this post, I want to address the very final section of his post, titled Optimisations: Existing Token Lifetime.

Scott points out that the implementation he provided uses the default token provider, the DataProtectorTokenProvider, to generate tokens. This provider generates large, long-lived tokens, something like the following:





By default, these tokens last for 24 hours. For a passwordless authentication workflow, that's quite a lot longer than we'd like. Medium uses a 15 minute expiry for example.

Scott describes several options you could use to solve this:

  • Change the default lifetime for all tokens that use the default token provider

  • Use a different token provider, for example one of the TOTP-based providers

  • Create a custom data-protection base token provider with a different token lifetime

All three of these approaches work, so I'll discuss each of them in turn.

Changing the default token lifetime

When you generate a token in ASP.NET Core Identity, by default you will use the DataProtectorTokenProvider. We'll take a closer look at this class shortly, but for now it's sufficient to know it's used by workflows such as password reset (when you click the "forgot your password?" link) and for email confirmation.

The DataProtectorTokenProvider depends on a DataProtectionTokenProviderOptions object which has a TokenLifespan property:

public class DataProtectionTokenProviderOptions
{
    public string Name { get; set; } = "DataProtectorTokenProvider";
    public TimeSpan TokenLifespan { get; set; } = TimeSpan.FromDays(1);
}
This property defines how long tokens generated by the provider are valid for. You can change this value using the standard ASP.NET Core Options framework inside your Startup.ConfigureServices method:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.Configure<DataProtectionTokenProviderOptions>(
            x => x.TokenLifespan = TimeSpan.FromMinutes(15));

        // other services configuration
    }

    public void Configure() { /* pipeline config */ }
}

In this example, I've configured the token lifespan to be 15 minutes using a lambda, but you could also configure it by binding to IConfiguration etc.

The downside to this approach is that you've now reduced the token lifetime for all workflows. 15 minutes might be fine for password reset and passwordless login, but it's potentially too short for email confirmation, so you might run into issues with lots of rejected tokens if you choose to go this route.

Using a different provider

As well as the default DataProtectorTokenProvider, ASP.NET Core Identity uses a variety of TOTP-based providers for generating short multi-factor authentication codes. For example, it includes providers for sending codes via email or via SMS. These providers both use the base TotpSecurityStampBasedTokenProvider to generate their tokens. TOTP codes are typically very short-lived, so seem like they would be a good fit for the passwordless login scenario.

Given we're emailing the user a short-lived token for signing in, the EmailTokenProvider might seem like a good choice for our passwordless login. But the EmailTokenProvider is designed for providing 2FA tokens, and you probably shouldn't reuse providers for multiple purposes. Instead, you can create your own custom TOTP provider based on the built-in types, and use that to generate tokens.

Creating a custom TOTP token provider for passwordless login

Creating your own token provider sounds like a scary (and silly) thing to do, but thankfully all of the hard work is already available in the ASP.NET Core Identity libraries. All you need to do is derive from the abstract TotpSecurityStampBasedTokenProvider<> base class, and override a couple of simple methods:

public class PasswordlessLoginTotpTokenProvider<TUser> : TotpSecurityStampBasedTokenProvider<TUser>
    where TUser : class
{
    public override Task<bool> CanGenerateTwoFactorTokenAsync(UserManager<TUser> manager, TUser user)
    {
        return Task.FromResult(false);
    }

    public override async Task<string> GetUserModifierAsync(string purpose, UserManager<TUser> manager, TUser user)
    {
        var email = await manager.GetEmailAsync(user);
        return "PasswordlessLogin:" + purpose + ":" + email;
    }
}
I've set CanGenerateTwoFactorTokenAsync() to always return false, so that the ASP.NET Core Identity system doesn't try to use the PasswordlessLoginTotpTokenProvider to generate 2FA codes. Unlike the SMS or Authenticator providers, we only want to use this provider for generating tokens as part of our passwordless login workflow.

The GetUserModifierAsync() method should return a string consisting of

… a constant, provider and user unique modifier used for entropy in generated tokens from user information.

I've used the user's email as the modifier in this case, but you could also use their ID for example.

You still need to register the provider with ASP.NET Core Identity. In traditional ASP.NET Core fashion, we can create an extension method to do this (mirroring the approach taken in the framework libraries):

public static class CustomIdentityBuilderExtensions
{
    public static IdentityBuilder AddPasswordlessLoginTotpTokenProvider(this IdentityBuilder builder)
    {
        var userType = builder.UserType;
        var totpProvider = typeof(PasswordlessLoginTotpTokenProvider<>).MakeGenericType(userType);
        return builder.AddTokenProvider("PasswordlessLoginTotpProvider", totpProvider);
    }
}

and then we can add our provider as part of the Identity setup in Startup:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddIdentity<IdentityUser, IdentityRole>()
            // ... other Identity configuration elided ...
            .AddPasswordlessLoginTotpTokenProvider(); // Add the custom token provider
    }
}
To use the token provider in your workflow, you need to provide the key "PasswordlessLoginTotpProvider" (that we used when registering the provider) to the UserManager.GenerateUserTokenAsync() call.

var token = await userManager.GenerateUserTokenAsync(

user, "PasswordlessLoginTotpProvider", "passwordless-auth");

If you compare that line to Scott's post, you'll see that we're passing "PasswordlessLoginTotpProvider" as the provider name instead of "Default".

Similarly, you'll need to pass the new provider key in the call to VerifyUserTokenAsync:

var isValid = await userManager.VerifyUserTokenAsync(

user, "PasswordlessLoginTotpProvider", "passwordless-auth", token);

If you're following along with Scott's post, you will now be using tokens with a much shorter lifetime than the 1 day default!

Creating a data-protection based token provider with a different token lifetime

TOTP tokens are good for tokens with very short lifetimes (nominally 30 seconds), but if you want your link to be valid for 15 minutes, then you'll need to use a different provider. The default DataProtectorTokenProvider uses the ASP.NET Core Data Protection system to generate tokens, so they can be much more long lived.

If you want to use the DataProtectorTokenProvider for your own tokens, and you don't want to change the default token lifetime for all other uses (email confirmation etc), you'll need to create a custom token provider again, this time based on DataProtectorTokenProvider.

Given that all you're trying to do here is change the passwordless login token lifetime, your implementation can be very simple. First, create a custom Options object, that derives from DataProtectionTokenProviderOptions, and overrides the default values:

public class PasswordlessLoginTokenProviderOptions : DataProtectionTokenProviderOptions
{
    public PasswordlessLoginTokenProviderOptions()
    {
        // update the defaults
        Name = "PasswordlessLoginTokenProvider";
        TokenLifespan = TimeSpan.FromMinutes(15);
    }
}

Next, create a custom token provider, that derives from DataProtectorTokenProvider, and takes your new Options object as a parameter:

public class PasswordlessLoginTokenProvider<TUser> : DataProtectorTokenProvider<TUser>
    where TUser : class
{
    public PasswordlessLoginTokenProvider(
        IDataProtectionProvider dataProtectionProvider,
        IOptions<PasswordlessLoginTokenProviderOptions> options)
        : base(dataProtectionProvider, options)
    {
    }
}
As you can see, this class is very simple! Its token generating code is completely encapsulated in the base DataProtectorTokenProvider<>; all you're doing is ensuring the PasswordlessLoginTokenProviderOptions token lifetime is used instead of the default.

You can again create an extension method to make it easier to register the provider with ASP.NET Core Identity:

public static class CustomIdentityBuilderExtensions
{
    public static IdentityBuilder AddPasswordlessLoginTokenProvider(this IdentityBuilder builder)
    {
        var userType = builder.UserType;
        var provider = typeof(PasswordlessLoginTokenProvider<>).MakeGenericType(userType);
        return builder.AddTokenProvider("PasswordlessLoginProvider", provider);
    }
}

and add it to the IdentityBuilder instance:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddIdentity<IdentityUser, IdentityRole>()
            // ... other Identity configuration elided ...
            .AddPasswordlessLoginTokenProvider(); // Add the token provider
    }
}

Again, be sure you update the GenerateUserTokenAsync and VerifyUserTokenAsync calls in your authentication workflow to use the correct provider name ("PasswordlessLoginProvider" in this case). This will give you almost exactly the same tokens as in Scott's original example, but with the TokenLifespan reduced to 15 minutes.


You can implement passwordless authentication in ASP.NET Core Identity using the approach described in Scott Brady's post, but this will result in tokens and magic-links that are valid for a long time period: 1 day by default. In this post I showed three different ways you can reduce the token lifetime: you can change the default lifetime for all tokens; use very short-lived tokens by creating a TOTP provider; or use the ASP.NET Core Data Protection system to create medium-length lifetime tokens.


After the official release of Magento 2, a number of people went on to install it on their localhost or their hosting service providers. For developers, Magento 2 installation on localhost is a pretty painful experience.

Magento 2 comes with many requirements. It needs a lot of server configuration and tweaking, along with a Composer setup, which is not in place by default. Let me clarify, for those who are wondering, why Composer is really needed in Magento 2: Composer enables us to manage the Magento system, its extensions and their dependencies, and also allows us to declare libraries for the project. In this article, I will explain how you can install Magento 2 on your localhost by using XAMPP.

Before you start your installation, you need to have these things on hand:

1: XAMPP Server installed in your computer.

2: Magento 2 Downloaded (You can download Magento 2 from this link)

Magento 2 has some installation requirements like:

Apache Version 2.2 or 2.4

PHP Version 5.5.x or 5.6.x

MySQL Version 5.6.x

Now we can go towards the installation processes. I will be covering the whole installation step-by-step to make it easy for you:

  • Open your XAMPP server and start Apache and MySQL applications

  • Extract Magento 2 files in your xampp/htdocs folder

Install Composer

Download and run Composer-Setup.exe. It will install the latest Composer version and set up your PATH so that you can just call "composer" from any directory on your command line.

Click next and select “Install Shell Menus”

In the next step, you will need the php.exe path to proceed with the Composer setup; provide the path to php.exe.

Now click on install to see the final step

After installing Composer, you must enable the intl extension (php_intl.dll) in your php.ini file. To enable it, open php.ini and uncomment the line "extension=php_intl.dll" by removing the semicolon ";" from the start of the line, then restart your XAMPP control panel.
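For reference, the change in php.ini is just removing the leading semicolon (the exact position of the line varies by XAMPP/PHP version):

```ini
; before (extension disabled):
;extension=php_intl.dll

; after (extension enabled):
extension=php_intl.dll
```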

After this, go to the Magento 2 folder inside your htdocs folder, hold the Shift key, right-click, and select "Open command window here". This will open a command prompt at that location.

In the command prompt, execute the command "composer install". This will install Magento 2's dependencies on your localhost.

Note: If you see the error shown in the image below, go to your Magento 2 directory, right-click on the composer.lock file, and select "composer update".

Get your username and password from the Magento website: click My Account, then click Secure Keys under the Developers tab on the left-hand side.

Your Public key is your Username and your Private key is your password.

Now create an empty database with the correct permissions in phpMyAdmin.


  • Enter your Magento 2 URL in browser e.g. localhost/m2

Click on Agree and Set Up Magento.

Magento 2 will check your environment for readiness. Click on Start Readiness Check.

Now enter your database information that you previously created.

Click next, and you will be asked for web configurations like store URL, admin URL, Apache rewrites and https options.

Click Next, and select your Timezone, Currency and language in “Customize Your Store” section.

Click next, and insert your back-end Admin username, email and password to setup Admin credentials.

Now click Next, and Magento 2 is ready to be installed on your localhost. Click Install Now and, as a word of caution, don't close your browser until setup is done.

Congratulations! You have now successfully installed Magento 2 on your localhost. You can now start making your preferred customizations and launch your own ecommerce store based on the most famous Magento software.

Worried about hosting? You don’t need to worry as long as Cloudways Managed Magento Hosting is here for increasing your everyday sales and taking away your server side management hassles. You just focus on building an awesome Magento 2 based ecommerce website and we will take care of the rest. And if you have any issues while installing Magento 2, you can go ahead and ask me via the comments section.

Note: If you face an error with your Magento 2 admin URL, go to your Magento 2 database, find the "core_config_data" table, and change the following two rows:

web/unsecure/base_url to

web/secure/base_url to



This article is about using Quartz.NET in a more modular and extensible way. When using Quartz.NET for job queuing and execution, there is a possibility that we might face issues of volume, memory usage and multithreading. This project provides a solution by creating custom Quartz schedulers which pick up jobs from the queue and execute them. It gives us the flexibility to schedule a job from the web application and decide which remote server should execute it. Depending on the nature of the job, we can select the scheduler/listener which will be responsible for picking the job up from the queue and executing it.


It will help you gain a better understanding of the architecture of the framework and how it is used here. Quartz.NET provides its own Windows service that helps you offload some of the work to a remote server, but if you plan on having a custom implementation, this article will be helpful.


Before going through this article, please go over the basics of the Quartz scheduler. You will also need SQL Server to get it to work.

Using the code

The first step is to get the Quartz.NET package from NuGet.

The next step is to set up the database. Quartz requires a set of tables when we plan on using SQL Server as our datastore; you can find the SQL script in the project. Once the database is updated, we need to configure Quartz. There are multiple ways of doing this:

  • Web.config/App.config

  • A separate config file for Quartz

  • Writing it in a class file as a NameValueCollection

For simplicity, we're writing it in a class file, i.e. QuartzConfiguration.cs. Here we have two separate configurations pointing to the same database. The only difference between the two is the instance name. So when scheduling, we'll define which scheduler the job needs to run under, and that particular scheduler will select the job and run it. All other schedulers will not execute it due to the instance name difference.

public class QuartzConfiguration
{
    public static NameValueCollection RemoteConfig()
    {
        NameValueCollection configuration = new NameValueCollection
        {
            { "quartz.scheduler.instanceName", "RemoteServer" },
            { "quartz.scheduler.instanceId", "RemoteServer" },
            { "quartz.jobStore.type", "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz" },
            { "quartz.jobStore.useProperties", "true" },
            { "quartz.jobStore.dataSource", "default" },
            { "quartz.jobStore.tablePrefix", "QRTZ_" },
            { "quartz.dataSource.default.connectionString", "Server=(servername);Database=(databasename);Trusted_Connection=true;" },
            { "quartz.dataSource.default.provider", "SqlServer" },
            { "quartz.threadPool.threadCount", "1" },
            { "quartz.serializer.type", "binary" },
        };
        return configuration;
    }

    public static NameValueCollection LocalConfig()
    {
        NameValueCollection configuration = new NameValueCollection
        {
            { "quartz.scheduler.instanceName", "LocalServer" },
            { "quartz.scheduler.instanceId", "LocalServer" },
            { "quartz.jobStore.type", "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz" },
            { "quartz.jobStore.useProperties", "true" },
            { "quartz.jobStore.dataSource", "default" },
            { "quartz.jobStore.tablePrefix", "QRTZ_" },
            { "quartz.dataSource.default.connectionString", "Server=(servername);Database=(databasename);Trusted_Connection=true;" },
            { "quartz.dataSource.default.provider", "SqlServer" },
            { "quartz.threadPool.threadCount", "1" },
            { "quartz.serializer.type", "binary" },
        };
        return configuration;
    }
}

So when we're scheduling the job, we'll use the appropriate configuration. Another thing to be careful about is starting the scheduler. If a scheduler instance isn't running, it can still schedule jobs onto the queue, but it cannot execute them. In this example, I'm using the web application only to schedule the job; I'm not starting the scheduler, so my web app will never pick up any jobs from the queue.
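As a sketch, scheduling from the web app looks roughly like this (Quartz.NET 3.x API; the job class and job/trigger identities are illustrative, and note that Start() is deliberately never called):

```csharp
using Quartz;
using Quartz.Impl;
using System.Threading.Tasks;

public async Task ScheduleNightlyReportAsync()
{
    // Build a scheduler bound to the "RemoteServer" instance name.
    var factory = new StdSchedulerFactory(QuartzConfiguration.RemoteConfig());
    IScheduler scheduler = await factory.GetScheduler();

    IJobDetail job = JobBuilder.Create<ReportJob>()   // ReportJob is illustrative
        .WithIdentity("nightlyReport", "reports")
        .Build();

    ITrigger trigger = TriggerBuilder.Create()
        .WithIdentity("nightlyReportTrigger", "reports")
        .StartNow()
        .Build();

    // Persists the job to the QRTZ_ tables. Because scheduler.Start() is
    // never called here, the web app only enqueues; the remote service
    // running with the same instanceName picks the job up and executes it.
    await scheduler.ScheduleJob(job, trigger);
}
```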

How it works

When you schedule a job using the SQL Server datastore, the job gets saved to the [dbo].[QRTZ_JOB_DETAILS] table.

The highlighted section is the name of the scheduler the job is meant to run under. So when I start my Remote scheduler, it will only pick up the second and third jobs, not the first; the first will only be picked up when I start the Local scheduler.

Here is a screenshot of the execution. If you notice, two jobs were executed by the Remote scheduler and one by the Local scheduler.

If we were to use App.config or Web.config to configure the schedulers, all we'd have to do is change the scheduler name in the file. The code will automatically start picking up jobs from the queue with the same scheduler name. This can be very beneficial when one of your schedulers is overloaded with jobs and you need additional resources to execute them more quickly. Without updating or replacing a single DLL, you can change the configuration of the Quartz schedulers and they will continue to work seamlessly.

Points of Interest

The most important thing that I learned is that the Quartz.NET framework is very modular and flexible, making it easy to have plug-and-play components. A modular architecture is really helpful when working with larger jobs, higher volumes and long-running jobs. As we're working off the SQL Server datastore, this concept can also be used across load-balanced servers. The only catch is to make sure that the service has the required DLLs/binaries in its folder and that the app.config file is kept up to date.


A while ago on one of my older articles, I came across the following comment:

Maybe, he’s right? Maybe the REAL editors are vim and Emacs:


Then it all became obvious to me. Why would someone want a nice GUI when we can use terminal-based editors such as vim or emacs? I wasn’t questioning the reality behind all the GUIs… In order to question reality like a REAL programmer, I need to stop using graphical interfaces; they’re evil. Just like the evil corporations that make us pay for proprietary operating systems that are constantly spying on us. The corporations that steal our privacy and activate our webcams while we sleep. Corporations which control us subconsciously in order to be in control of our lives; who we vote for, how much we earn and who we become. I can see it now, but I’ve been so blind all these years. How could I never see this? How come? Is it too late for me? What can I do to save others? Do others even exist? I took the red pill and now I am a completely different person. Thank you John, if that is in fact your real name, for this revelation.

I should start by installing a Linux distribution, covering my webcam with tape and destroying my microphone. I will also need to remove the SIM card from my phone, destroy the Bluetooth module, disconnect all the network cards and scratch the fingerprint sensor on my phone to the point it’s unusable. Real programmers don’t use mice or trackpads, so I will also throw mine away. Real programmers certainly don’t need the internet. When it comes to programming, I will only use nano/vim/emacs. Real programmers also build websites and apps in Assembly or machine code itself. I should also use a magnetized needle and a magnetic disk to write that machine code.

But seriously, how did we get from choosing an editor to questioning reality? Do we really need to make such a big deal out of using a GUI? I personally really like emacs’ shortcuts and I’m a massive fan of using a terminal, but I still don’t use them simply because IDEs such as WebStorm, Visual Studio or Visual Studio Code are much better. However, it doesn’t matter.

You can use any text editor, any operating system and almost any browser (except for IE) and still be a great developer. Because it’s not about what tools you use, but it’s about how you use them and what you build with them.


I recently wanted to try out the preview of .NET Core 2.1, so that I could experiment a bit with the new Span<T> functionality, which is something I'm very excited about.

Which SDK do I need?

But when I visited the .NET homepage (found at an easy-to-remember URL) and navigated to the all downloads page, I must confess to being a little confused by the experience…

That's right, there are two 2.1 SDKs available. The one called 2.1.300-preview1 is the one I wanted – that's the SDK for the .NET Core 2.1 preview runtime. Confusingly, the 2.1.103 SDK actually contains the 2.0 runtime (not 2.1)! There's a new versioning strategy going forward that will make the relationship between runtime and SDK easier to understand, but before that, the runtime and SDK version numbers did not necessarily correspond.

Will bad things happen if I install a preview SDK?

So I've worked out that I need 2.1.300-preview1, but like many developers, I've been burned in the past by preview bits that messed up my development machine. So I had two key questions before I was ready to pull the trigger: first, will installing a preview .NET Core SDK break something? And second, can I easily switch back to the release version of the SDK?

There's good news on both fronts. First of all, .NET Core SDKs are installed side by side, so installing a new one will not modify any previous SDK installations. When I install the preview, all that happens is that I get a new folder under C:\Program Files\dotnet\sdk. You can see here all the various .NET Core SDK versions I've got installed on this PC:

And the same is true for the .NET Core runtime. Inside the C:\Program Files\dotnet\shared\Microsoft.NETCore.App folder we can see all the versions of the runtime I've installed, including the 2.1.0 preview that came with the SDK:

So that's great, I've got the 2.1 preview SDK and runtime available on my machine, and I can experiment with it. But now we need to know how to identify which one of these SDKs is currently active, and how to switch between them.

What version of the SDK am I currently running?

If I go to a command line and type dotnet --version, it will tell me the SDK version I'm currently running, which will be the preview version I just installed:

The way this works is that I'm actually calling dotnet.exe which resides in C:\Program Files\dotnet. That is just a proxy which passes on my command to whatever the latest SDK is, which by default will just be the folder in C:\Program Files\dotnet\sdk with the highest version number.
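The resolution rule can be sketched roughly like this. To be clear, this is a minimal illustration of the behaviour described above, not the actual dotnet.exe implementation, and the version comparison is simplified (real SDK version ordering follows SemVer):

```python
# Sketch of how the dotnet proxy picks an SDK: honour an exact version
# pinned in global.json if one applies, otherwise fall back to the
# highest installed version number.
import json
import os

def pick_sdk(installed, global_json_path=None):
    """Return the SDK version the dotnet proxy would dispatch to."""
    if global_json_path and os.path.exists(global_json_path):
        with open(global_json_path) as f:
            pinned = json.load(f).get("sdk", {}).get("version")
        if pinned in installed:
            return pinned
    def sort_key(version):
        core, _, prerelease = version.partition("-")
        numeric = [int(part) for part in core.split(".")]
        # A final release sorts above its own pre-releases.
        return (numeric, prerelease == "", prerelease)
    return max(installed, key=sort_key)

print(pick_sdk(["2.1.4", "2.1.103", "2.1.300-preview1-008174"]))
# -> 2.1.300-preview1-008174 (the preview carries the highest number)
```

With no global.json in scope, the preview wins because 2.1.300 is numerically the highest; pin "2.1.103" in a global.json and the same list resolves to 2.1.103 instead, which is exactly the switching trick described later in this post.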

What SDK and runtime versions are installed?

Two other handy commands to know are dotnet --list-sdks to see all installed SDKs:

and dotnet --list-runtimes to see all installed runtimes:

How can I switch between .NET Core SDK versions?

Since my current version is 2.1.300-preview1-008174, if I run dotnet new console it will create a .csproj that targets the .NET Core 2.1 runtime (which is of course still in preview at the time of writing):

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.1</TargetFramework>
  </PropertyGroup>

</Project>
That's great if I want to experiment with the .NET Core 2.1 preview, but what if I just want to get back to the version I was using before, and build .NET Core 2.0 apps? In other words, when I type the dotnet command in the console, how can I control which version of the SDK I am actually using?

The answer is that we can easily switch between versions of the SDK by creating a global.json file. It simply needs to reside in the folder you're working in (or any parent folder). So if there is a global.json file in my current folder or any parent folder with the following contents, I'll be back to using the 2.1.103 version of the SDK (which if you remember targets the 2.0 runtime).


"sdk": {

"version": "2.1.103"



Having created this global.json file, running dotnet --version now returns 2.1.103:

And if I do a dotnet new console we'll see that the generated .csproj targets the 2.0 runtime, as we'd expect:

<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp2.0</TargetFramework>
  </PropertyGroup>

</Project>
Easily create a global.json file

One final tip – you can easily create a global.json file in the current working folder with the dotnet new globaljson command. This will create a global.json file pre-populated with the latest version of the SDK available. You can also control the version that gets written to the file with the --sdk-version argument. For example, dotnet new globaljson --sdk-version 2.0.2 generates the following global.json:


"sdk": {

"version": "2.0.2"




It's always daunting to install preview bits on your developer machine when you don't know what might break, but with .NET Core, SDKs and runtimes are installed side by side, and as long as you know about global.json it's very easy to control which version of the SDK you're using. The version of the runtime can also be controlled using the TargetFramework (and, if necessary, the RuntimeFrameworkVersion) properties in your .csproj file.
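For illustration, a .csproj combining both properties might look like the sketch below. The 2.0.6 patch version here is just an example value I've picked; you would substitute whatever runtime patch version you actually have installed:

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <!-- Selects the target runtime family -->
    <TargetFramework>netcoreapp2.0</TargetFramework>
    <!-- Optionally pins an exact patch version of the runtime -->
    <RuntimeFrameworkVersion>2.0.6</RuntimeFrameworkVersion>
  </PropertyGroup>

</Project>
```

Without RuntimeFrameworkVersion, the app simply rolls forward to the latest installed patch of the 2.0 runtime.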

Tuesday, 29 May 2018 / Published in Uncategorized

Object Browser: The World's Worst-Named Tool

So, you know the class you need but you don't know what class library it's in. How do you add the right reference to your project? Object Browser will let you do it in two steps.

You can do that because "Object Browser" is patently misnamed: to begin with, it displays classes, not objects. And it isn't limited to classes either; it also displays namespaces, enums, structs, interfaces, and class members (events, properties, and so on).

If you know what class (or interface or enum or, even, member) you want, you can search for it in Object Browser using the search box at the top of Object Browser's window. Once you find what you're looking for, just click on the Add to References icon at the top of Object Browser to add a reference to the relevant library to whatever project you have selected in Solution Explorer.

So, it isn't just a browser, either.

Worst. Name. Ever.

Posted by Peter Vogel on 04/05/2018 at 5:18 AM

Tuesday, 29 May 2018 / Published in Uncategorized

Sign up here to receive the weekly blockchain dispatch on Friday mornings PST.

Whether it’s being hailed as a game-changer or derided as hype, blockchain is everywhere these days. But it’s hard to get trustworthy, unbiased news and tutorials about the tech. If you’re trying to enter the blockchain dev world, there’s a high price of entry.

Blockchain technology is useful for more than just cryptocurrency, and this newsletter will highlight some of the more interesting and exciting developments in this emerging field, along with ways to get started yourself.

You’ll get a curated selection of links sourced by me, Adam. You may know me from weekly Back-end and Design newsletters, or from Versioning, my daily newsletter and membership publication focused on the bleeding-edge of web dev and design. Now I want to expand my knowledge to the blockchain world – and I want you to join me.

Every Friday, you’ll receive a newsletter full of curated links that I’ve found useful in my own research, along with links to useful tools, tutorials and projects I think are worthwhile. Also, no scams!

Sign up, and I promise it’ll be both on-the-chain and off-the-chain!

Continue reading: Announcing the SitePoint Blockchain Newsletter!