I am currently part of a closed beta program testing the all-new cloud-based Transport Management Service. It will offer functionality comparable to the ABAP-based enhanced Change and Transport System (CTS+), but without the need for an SAP NetWeaver application server to coordinate the transports. Additionally, the Transport Management Service not only allows the transport of development artifacts but also covers application-specific content.

The service has now reached a maturity level at which I can give a preview of what will be coming…

 

Overview

The Transport Management Service is a service running in the Cloud Foundry environment:

It allows you to model transport landscapes in which so-called transport nodes represent, for example, Neo subaccounts or Cloud Foundry spaces. The nodes are connected to the ‘real’ transport targets via destinations, using the standard SAP Cloud Platform destination service. The flow of the content to be transported is modeled with transport routes, each specifying a source and a target node. Using several transport routes, you can model larger or more complex transport landscapes.

The architecture of the Transport Management Service is quite flexible, so it will support a wide variety of transportable content as well as different types of target environments.

Currently we are transporting SAP Cloud Platform Integration (CPI) packages, Multi-Target Applications (MTAs), and SAP HANA XS classic model delivery units. The first example in particular covers an area that has not yet been addressed by existing solutions: the transport of application-specific content. Here we are planning to support further scenarios in the near future.

On the target environment side, we currently support SAP Cloud Platform Neo subaccounts and SAP HANA XS classic databases running in SAP Cloud Platform. The next step will be SAP Cloud Platform Cloud Foundry spaces.

Now, let’s have a brief look into the Transport Management Service.

 

Entry Screen

The entry screen gives an overview of successful, failed, and pending transports. It also allows navigation to configuration activities, log files, and documentation.

 

Transport Nodes

The Transport Nodes view shows all configured Transport Nodes. It allows the configuration of new nodes, as well as changing and deleting existing nodes. All changes to the configured landscape are tracked in the Landscape Action Logs.

The Transport Nodes view also gives access to the import queues of the nodes:

Here you can find the transport requests which are targeting this node. You can trigger the import of the transport requests and also access the logs of import activities.

 

Transport Routes

This view shows the Transport Routes connecting two Transport Nodes (source and target). By combining several routes you can set up more complex landscapes. In this example I have set up a linear landscape (ConsDev -> ConsTest -> ConsProd) and a star-shaped landscape with one source (StarSource) and three targets (Star1, Star2, and Star3).

 

Outlook

As mentioned above, the Transport Management Service is currently in beta testing. We are planning global availability for later this year.

Although we are still in beta, the documentation is already available if you would like to read more…

One of our beta testers already wrote a blog about his first experiences. I am also planning to provide you with further blogs about the Transport Management Service.

So stay tuned!

 

Hi,

This is Chandra Gajula, working at Mindtree Limited, Bangalore.

This blog will help you establish a navigation (link) call to another application (UI) from a custom field in a custom OWL.

Most custom requirements for a new solution are designed around interaction with existing business applications such as Customer, Sales Order, Opportunity, etc. This means a few standard fields are attached to the custom fields and then formed into a report, an OWL UI, or similar.

For example: here the customer account UI (TI – Thing Inspector) is called from a custom OWL UI. The standard Customer Account field “Account Number” is added to the OWL and assigned “Link” as its Display Type in the properties.

Solution:

First, add an association to Customer in the custom BO:

association ToCustomer to Customer.

In the OWL UI, add a new field and bind it to SAP_ToCustomer under the custom BO (select the BO Model).

Go to the Controller and create an Outport:

Add a Key Parameter bound to SAP_ToCustomer, as shown below.

The SAP_ToCustomer field is added automatically when you add the association to Customer in the BO.

Create OBN Navigation:

Create a new OBN Navigation and assign the standard object “Customer” BO model under the Business Partner/Global namespace. This function will navigate to the Customer Account TI UI with the help of the Outport created above.

Now, create an Event Handler to trigger the outport:

Create a new event handler to fire the outport as shown below.

 

I hope this gives you a basic idea of UI customizing for navigation.

 

Regards,

Chandra Gajula.

 

 

In a previous post, we discussed and learned about the New Features of ASP.NET vNEXT. In this post, we will discuss the top new features and enhancements introduced in Microsoft C# 6.0. But before we dive deeper into the features of the latest version of C#, let's look at how C# has evolved and which key features were introduced in its previous versions.

New Features in C# 6.0

Auto Property Initializer

This feature enables us to set the values of properties right at the place where they are declared. Previously, we used a constructor to initialize auto properties to non-default values, but with this new feature in C# 6.0, initializing these properties no longer requires a constructor, as shown below:
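The original code sample did not survive here, so the following is a minimal sketch of the syntax (the class and property names are illustrative):

public class Customer
{
    // getter/setter auto property initialized right at the declaration
    public string Country { get; set; } = "Pakistan";

    // getter-only auto property: it can only be assigned here or in a
    // constructor, which makes immutability easy to achieve
    public string CustomerType { get; } = "Retail";
}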

Note: We can use the feature on getter/setter as well as getter-only properties, as demonstrated above. Using getter-only properties helps to easily achieve immutability.

Exception Filters

Microsoft introduced this CLR feature to C# in version 6.0, but it was already available in Visual Basic and F#. To use exception filters in C#, we declare the filter condition on the same line as the catch block; the catch block will execute only if the condition is met, as shown below:
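The original sample is likewise missing, so here is a minimal sketch (RunDatabaseQuery is a hypothetical method; SqlException comes from System.Data.SqlClient):

try
{
    RunDatabaseQuery();
}
catch (Exception ex) when (ex is SqlException)
{
    // this block runs only when the exception is a SqlException;
    // any other exception type passes through uncaught
}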

The above code checks whether the exception that occurred is of type SqlException. If it is not, the exception is not handled by this catch block and continues to propagate up the call stack.

Here is another case that shows how we can check the Message property of the exception and filter on a condition appropriately.
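A sketch of that variant (the message text is illustrative):

try
{
    RunDatabaseQuery();
}
catch (Exception ex) when (ex.Message.Contains("timeout"))
{
    // handles only exceptions whose message mentions a timeout
}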

Note: Remember that Exception Filter is a debugging feature rather than a coding feature. For a very nice and detailed discussion, please follow here.

await in catch and finally block

We frequently log exceptions to a file or a database. Such operations are resource-intensive and lengthy, as we need time to perform the I/O. In such circumstances, it would be great if we could make asynchronous calls inside our exception blocks. We may additionally need to perform some cleanup operations in the finally block, which may also be resource-intensive. With C# 6.0, await can be used in both catch and finally blocks.
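A minimal sketch, assuming hypothetical DoWorkAsync, LogExceptionAsync, and CleanupAsync helpers:

public async Task ProcessAsync()
{
    try
    {
        await DoWorkAsync();
    }
    catch (Exception ex)
    {
        // new in C# 6.0: await is allowed inside a catch block
        await LogExceptionAsync(ex);
    }
    finally
    {
        // ... and inside a finally block
        await CleanupAsync();
    }
}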

Dictionary Initializer

As opposed to the older way of initializing a dictionary, C# 6.0 introduces a cleaner, index-based syntax for dictionary initialization, as follows:
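A sketch of the new index-based syntax (the keys and messages are illustrative):

var webErrors = new Dictionary<int, string>
{
    [404] = "Page not found",
    [302] = "Page moved",
    [500] = "Internal server error"
};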

Previously, the same was done in the following way:
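The older collection-initializer equivalent of the same dictionary:

var webErrors = new Dictionary<int, string>
{
    { 404, "Page not found" },
    { 302, "Page moved" },
    { 500, "Internal server error" }
};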

Important Note: Primary constructors have been removed from C# 6.0. For more details, please follow here.



This post was inspired by Scott Brady's recent post on implementing "passwordless authentication" using ASP.NET Core Identity. In this post I show how to implement his "optimisation" suggestions to reduce the lifetime of "magic link" tokens.

I start by providing some background on the use case, but I strongly suggest reading Scott's post first if you haven't already, as mine builds strongly on his. I'll show:

  • How to change the default token lifetime for all token providers

  • How to create a custom TOTP token provider for very short-lived passwordless login tokens

  • How to create a data-protection based token provider with a different token lifetime

I'll start with the scenario: passwordless authentication.

Passwordless authentication using ASP.NET Core Identity

Scott's post describes how to recreate a login workflow similar to that of Slack's mobile app, or Medium:

Instead of providing a password, you enter your email and they send you a magic link:

Clicking the link automatically logs you into the app. In his post, Scott shows how you can recreate the "magic link" login workflow using ASP.NET Core Identity. In this post, I want to address the very final section of his post, titled Optimisations: Existing Token Lifetime.

Scott points out that the implementation he provided uses the default token provider, the DataProtectorTokenProvider, to generate tokens. This provider generates large, long-lived tokens, something like the following:

CfDJ8GbuL4IlniBKrsiKWFEX/Ne7v/fPz9VKnIryTPWIpNVsWE5hgu6NSnpKZiHTGZsScBYCBDKx/
oswum28dUis3rVwQsuJd4qvQweyvg6vxTImtXSSBWC45sP1cQthzXodrIza8MVrgnJSVzFYOJvw/V
ZBKQl80hsUpgZG0kqpfGeeYSoCQIVhm4LdDeVA7vJ+Fn7rci3hZsdfeZydUExnX88xIOJ0KYW6UW+
mZiaAG+Vd4lR+Dwhfm/mv4cZZEJSoEw==

By default, these tokens last for 24 hours. For a passwordless authentication workflow, that's quite a lot longer than we'd like. Medium uses a 15 minute expiry for example.

Scott describes several options you could use to solve this:

  • Change the default lifetime for all tokens that use the default token provider

  • Use a different token provider, for example one of the TOTP-based providers

  • Create a custom data-protection base token provider with a different token lifetime

All three of these approaches work, so I'll discuss each of them in turn.

Changing the default token lifetime

When you generate a token in ASP.NET Core Identity, by default you will use the DataProtectorTokenProvider. We'll take a closer look at this class shortly, but for now it's sufficient to know it's used by workflows such as password reset (when you click the "forgot your password?" link) and for email confirmation.

The DataProtectorTokenProvider depends on a DataProtectionTokenProviderOptions object which has a TokenLifespan property:

public class DataProtectionTokenProviderOptions
{
    public string Name { get; set; } = "DataProtectorTokenProvider";
    public TimeSpan TokenLifespan { get; set; } = TimeSpan.FromDays(1);
}

This property defines how long tokens generated by the provider are valid for. You can change this value using the standard ASP.NET Core Options framework inside your Startup.ConfigureServices method:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.Configure<DataProtectionTokenProviderOptions>(
            x => x.TokenLifespan = TimeSpan.FromMinutes(15));

        // other services configuration
    }

    public void Configure() { /* pipeline config */ }
}

In this example, I've configured the token lifespan to be 15 minutes using a lambda, but you could also configure it by binding to IConfiguration etc.

The downside to this approach is that you've now reduced the token lifetime for all workflows. 15 minutes might be fine for password reset and passwordless login, but it's potentially too short for email confirmation, so you might run into issues with lots of rejected tokens if you go this route.

Using a different provider

As well as the default DataProtectorTokenProvider, ASP.NET Core Identity uses a variety of TOTP-based providers for generating short multi-factor authentication codes. For example, it includes providers for sending codes via email or via SMS. These providers both use the base TotpSecurityStampBasedTokenProvider to generate their tokens. TOTP codes are typically very short-lived, so seem like they would be a good fit for the passwordless login scenario.

Given we're emailing the user a short-lived token for signing in, the EmailTokenProvider might seem like a good choice for our passwordless login. But the EmailTokenProvider is designed for providing 2FA tokens, and you probably shouldn't reuse providers for multiple purposes. Instead, you can create your own custom TOTP provider based on the built-in types, and use that to generate tokens.

Creating a custom TOTP token provider for passwordless login

Creating your own token provider sounds like a scary (and silly) thing to do, but thankfully all of the hard work is already available in the ASP.NET Core Identity libraries. All you need to do is derive from the abstract TotpSecurityStampBasedTokenProvider<> base class, and override a couple of simple methods:

public class PasswordlessLoginTotpTokenProvider<TUser> : TotpSecurityStampBasedTokenProvider<TUser>
    where TUser : class
{
    public override Task<bool> CanGenerateTwoFactorTokenAsync(UserManager<TUser> manager, TUser user)
    {
        return Task.FromResult(false);
    }

    public override async Task<string> GetUserModifierAsync(string purpose, UserManager<TUser> manager, TUser user)
    {
        var email = await manager.GetEmailAsync(user);
        return "PasswordlessLogin:" + purpose + ":" + email;
    }
}

I've set CanGenerateTwoFactorTokenAsync() to always return false, so that the ASP.NET Core Identity system doesn't try to use the PasswordlessLoginTotpTokenProvider to generate 2FA codes. Unlike the SMS or Authenticator providers, we only want to use this provider for generating tokens as part of our passwordless login workflow.

The GetUserModifierAsync() method should return a string consisting of

… a constant, provider and user unique modifier used for entropy in generated tokens from user information.

I've used the user's email as the modifier in this case, but you could also use their ID for example.

You still need to register the provider with ASP.NET Core Identity. In traditional ASP.NET Core fashion, we can create an extension method to do this (mirroring the approach taken in the framework libraries):

public static class CustomIdentityBuilderExtensions
{
    public static IdentityBuilder AddPasswordlessLoginTotpTokenProvider(this IdentityBuilder builder)
    {
        var userType = builder.UserType;
        var totpProvider = typeof(PasswordlessLoginTotpTokenProvider<>).MakeGenericType(userType);
        return builder.AddTokenProvider("PasswordlessLoginTotpProvider", totpProvider);
    }
}

and then we can add our provider as part of the Identity setup in Startup:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddIdentity<IdentityUser, IdentityRole>()
            .AddEntityFrameworkStores<IdentityDbContext>()
            .AddDefaultTokenProviders()
            .AddPasswordlessLoginTotpTokenProvider(); // Add the custom token provider
    }
}

To use the token provider in your workflow, you need to provide the key "PasswordlessLoginTotpProvider" (that we used when registering the provider) to the UserManager.GenerateUserTokenAsync() call.

var token = await userManager.GenerateUserTokenAsync(
    user, "PasswordlessLoginTotpProvider", "passwordless-auth");

If you compare that line to Scott's post, you'll see that we're passing "PasswordlessLoginTotpProvider" as the provider name instead of "Default".

Similarly, you'll need to pass the new provider key in the call to VerifyUserTokenAsync:

var isValid = await userManager.VerifyUserTokenAsync(
    user, "PasswordlessLoginTotpProvider", "passwordless-auth", token);

If you're following along with Scott's post, you will now be using tokens with a much shorter lifetime than the 1 day default!

Creating a data-protection based token provider with a different token lifetime

TOTP tokens are good for tokens with very short lifetimes (nominally 30 seconds), but if you want your link to be valid for 15 minutes, then you'll need to use a different provider. The default DataProtectorTokenProvider uses the ASP.NET Core Data Protection system to generate tokens, so they can be much more long lived.

If you want to use the DataProtectorTokenProvider for your own tokens, and you don't want to change the default token lifetime for all other uses (email confirmation etc), you'll need to create a custom token provider again, this time based on DataProtectorTokenProvider.

Given that all you're trying to do here is change the passwordless login token lifetime, your implementation can be very simple. First, create a custom Options object, that derives from DataProtectionTokenProviderOptions, and overrides the default values:

public class PasswordlessLoginTokenProviderOptions : DataProtectionTokenProviderOptions
{
    public PasswordlessLoginTokenProviderOptions()
    {
        // update the defaults
        Name = "PasswordlessLoginTokenProvider";
        TokenLifespan = TimeSpan.FromMinutes(15);
    }
}

Next, create a custom token provider, that derives from DataProtectorTokenProvider, and takes your new Options object as a parameter:

public class PasswordlessLoginTokenProvider<TUser> : DataProtectorTokenProvider<TUser>
    where TUser : class
{
    public PasswordlessLoginTokenProvider(
        IDataProtectionProvider dataProtectionProvider,
        IOptions<PasswordlessLoginTokenProviderOptions> options)
        : base(dataProtectionProvider, options)
    {
    }
}

As you can see, this class is very simple! Its token generating code is completely encapsulated in the base DataProtectorTokenProvider<>; all you're doing is ensuring the PasswordlessLoginTokenProviderOptions token lifetime is used instead of the default.

You can again create an extension method to make it easier to register the provider with ASP.NET Core Identity:

public static class CustomIdentityBuilderExtensions
{
    public static IdentityBuilder AddPasswordlessLoginTokenProvider(this IdentityBuilder builder)
    {
        var userType = builder.UserType;
        var provider = typeof(PasswordlessLoginTokenProvider<>).MakeGenericType(userType);
        return builder.AddTokenProvider("PasswordlessLoginProvider", provider);
    }
}

and add it to the IdentityBuilder instance:

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddIdentity<IdentityUser, IdentityRole>()
            .AddEntityFrameworkStores<IdentityDbContext>()
            .AddDefaultTokenProviders()
            .AddPasswordlessLoginTokenProvider(); // Add the token provider
    }
}

Again, be sure you update the GenerateUserTokenAsync and VerifyUserTokenAsync calls in your authentication workflow to use the correct provider name ("PasswordlessLoginProvider" in this case). This will give you almost exactly the same tokens as in Scott's original example, but with the TokenLifespan reduced to 15 minutes.
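For completeness, the updated calls mirror the earlier ones, with only the provider name changed:

var token = await userManager.GenerateUserTokenAsync(
    user, "PasswordlessLoginProvider", "passwordless-auth");

var isValid = await userManager.VerifyUserTokenAsync(
    user, "PasswordlessLoginProvider", "passwordless-auth", token);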

Summary

You can implement passwordless authentication in ASP.NET Core Identity using the approach described in Scott Brady's post, but this will result in tokens and magic-links that are valid for a long time period: 1 day by default. In this post I showed three different ways you can reduce the token lifetime: you can change the default lifetime for all tokens; use very short-lived tokens by creating a TOTP provider; or use the ASP.NET Core Data Protection system to create medium-length lifetime tokens.

Install Magento 2 On Localhost


After the official release of Magento 2, a number of people went on to install it on their localhost or their hosting service providers. For developers, Magento 2 installation on localhost is a pretty painful experience.

Magento 2 comes with many requirements. It needs a lot of server configuration and tweaking, along with a Composer setup, none of which is in place by default. Let me clarify, for those who are wondering, why Composer is really needed in Magento 2: Composer enables us to manage the Magento system, extensions, and their dependencies, and also allows us to declare libraries for the project. In this article, I will explain how you can install Magento 2 on your localhost by using XAMPP.

Before you start your installation, you need to have these things on hand:

1: XAMPP Server installed on your computer.

2: Magento 2 Downloaded (You can download Magento 2 from this link)

Magento 2 has some installation requirements like:

Apache Version 2.2 or 2.4

PHP Version 5.5.x or 5.6.x

MySQL Version 5.6.x

Now we can go towards the installation processes. I will be covering the whole installation step-by-step to make it easy for you:

  • Open your XAMPP server and start Apache and MySQL applications

  • Extract Magento 2 files in your xampp/htdocs folder

Install Composer

Download and run Composer-Setup.exe. It will install the latest composer version and set up your path so that you can just call “composer” from any directory in your command line.

Click next and select “Install Shell Menus”

In the next step, you will need the path to php.exe to proceed with the Composer setup; provide the path to php.exe.

Now click on Install to see the final step.

After installing Composer, you must enable the intl extension (php_intl.dll) in your php.ini file. To do so, open php.ini, uncomment the line “extension=php_intl.dll” by removing the semicolon “;” from the start of the line, and restart your XAMPP control panel.

After this, go to the Magento 2 folder inside your htdocs folder. Hold the Shift key, right-click, and select “Open command window here”. This will open a command prompt at that location.

In the command prompt, execute the command “composer install”. This will install Magento 2's dependencies on your localhost.

Note: If you see the error shown in the image below, go to your Magento 2 directory, right-click on the composer.lock file, and select “composer update”.

To get your username and password, log in to Magento.com and click My Account. Then click Secure Keys under the Developers tab on the left-hand side.

Your public key is your username and your private key is your password.

Now create an empty database with the correct permissions in phpMyAdmin.

Installation

  • Enter your Magento 2 URL in browser e.g. localhost/m2

Click on Agree and Set Up Magento.

Magento 2 will check your environment for readiness. Click on Start Readiness Check.

Now enter the information for the database that you previously created.

Click next, and you will be asked for web configurations like store URL, admin URL, Apache rewrites and https options.

Click Next, and select your Timezone, Currency and language in “Customize Your Store” section.

Click next, and insert your back-end admin username, email, and password to set up the admin credentials.

Now click next, and Magento 2 is ready to be installed on your localhost. Click on Install Now and, as a word of caution, don't close your browser until the setup is done.

Congratulations! You have now successfully installed Magento 2 on your localhost. You can now start making your preferred customizations and launch your own ecommerce store based on the famous Magento software.

Worried about hosting? You don't need to worry as long as Cloudways Managed Magento Hosting is here to increase your everyday sales and take away your server-side management hassles. Just focus on building an awesome Magento 2 based ecommerce website and we will take care of the rest. And if you have any issues while installing Magento 2, go ahead and ask me via the comments section.

Note: If you face an error with your Magento 2 admin URL, go to your Magento 2 database, find the “core_config_data” table, and change the following two rows:

web/unsecure/base_url to http://127.0.0.1/magento2/

web/secure/base_url to https://127.0.0.1/magento2/

Introduction

This article is about using Quartz.NET in a more modular and extensible way. When we use Quartz.NET for job queuing and execution, there is a possibility that we might face issues with volume, memory usage, and multithreading. This project provides a solution by creating custom Quartz schedulers that pick up jobs from the queue and execute them. It gives us the flexibility to schedule a job using the web application and decide which remote server should execute it. Depending on the nature of the job, we can select the scheduler/listener that will be responsible for picking up the job from the queue and executing it.

Background

It will help you gain a better understanding of the architecture of the framework and how it is used here. Quartz.NET provides its own Windows service that helps you offload some of the jobs to a remote server, but if you plan on having a custom implementation, this article will be helpful.

Requirement

Before going through this article, please go over the basics of the Quartz scheduler. You'll also need SQL Server to get it to work.

Using the code

The first step is to get the Quartz.NET package from NuGet.

Next step is to set up the database. Quartz requires a set of tables when we plan on using SQL Server as our datastore. You can find the SQL script in the project. Once the database is updated, we need to configure Quartz. There are multiple ways of doing it:

  • Web.config/App.config

  • A separate config file for Quartz

  • Writing it in a class file as a NameValueCollection

For simplicity, we're writing it in a class file, i.e. QuartzConfiguration.cs. Here we have two separate configurations pointing to the same database. The only difference between the two is the instance name. So while scheduling, we'll define which scheduler the job needs to run under, and only that scheduler will select the job and run it. All other schedulers will not execute it due to the instance name difference.

public class QuartzConfiguration
{
    public static NameValueCollection RemoteConfig()
    {
        NameValueCollection configuration = new NameValueCollection
        {
            { "quartz.scheduler.instanceName", "RemoteServer" },
            { "quartz.scheduler.instanceId", "RemoteServer" },
            { "quartz.jobStore.type", "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz" },
            { "quartz.jobStore.useProperties", "true" },
            { "quartz.jobStore.dataSource", "default" },
            { "quartz.jobStore.tablePrefix", "QRTZ_" },
            { "quartz.dataSource.default.connectionString", "Server=(servername);Database=(databasename);Trusted_Connection=true;" },
            { "quartz.dataSource.default.provider", "SqlServer" },
            { "quartz.threadPool.threadCount", "1" },
            { "quartz.serializer.type", "binary" },
        };
        return configuration;
    }

    public static NameValueCollection LocalConfig()
    {
        NameValueCollection configuration = new NameValueCollection
        {
            { "quartz.scheduler.instanceName", "LocalServer" },
            { "quartz.scheduler.instanceId", "LocalServer" },
            { "quartz.jobStore.type", "Quartz.Impl.AdoJobStore.JobStoreTX, Quartz" },
            { "quartz.jobStore.useProperties", "true" },
            { "quartz.jobStore.dataSource", "default" },
            { "quartz.jobStore.tablePrefix", "QRTZ_" },
            { "quartz.dataSource.default.connectionString", "Server=(servername);Database=(databasename);Trusted_Connection=true;" },
            { "quartz.dataSource.default.provider", "SqlServer" },
            { "quartz.threadPool.threadCount", "1" },
            { "quartz.serializer.type", "binary" },
        };
        return configuration;
    }
}

So when we're scheduling the job, we'll use the appropriate configuration. Another thing to be careful about is starting the scheduler. If a scheduler isn't running, jobs can still be scheduled on the queue, but they will not be executed. In this example, I'm using the web application only to schedule the job; I'm not starting the scheduler. Hence my web app will never pick up any jobs from the queue, as its scheduler instance is not running.
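To make that concrete, here is a minimal sketch of scheduling against one of the configurations above, assuming the synchronous Quartz.NET 2.x API and a hypothetical HelloJob class implementing IJob:

// build a scheduler from the remote configuration; Start() is deliberately
// never called, so this instance can enqueue jobs but will not execute them
ISchedulerFactory factory = new StdSchedulerFactory(QuartzConfiguration.RemoteConfig());
IScheduler scheduler = factory.GetScheduler();

IJobDetail job = JobBuilder.Create<HelloJob>()
    .WithIdentity("hello-job", "demo")
    .Build();

ITrigger trigger = TriggerBuilder.Create()
    .WithIdentity("hello-trigger", "demo")
    .StartNow()
    .Build();

// persists the job and trigger to the QRTZ_ tables; only a running scheduler
// whose instance name is "RemoteServer" will pick the job up and execute it
scheduler.ScheduleJob(job, trigger);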

How it works

When you schedule a job using the SQL Server datastore, the job gets saved to the [dbo].[QRTZ_JOB_DETAILS] table.

The highlighted section (the scheduler name column) shows the name of the scheduler each job is meant to run under. So when I start my Remote scheduler, it will only pick up the second and third jobs, not the first. That one will only be picked up when I start the Local scheduler.

Here is a screenshot of the execution. If you notice, two jobs were executed by the Remote scheduler and one by the Local scheduler.

If we were to use App.config or Web.config to configure the schedulers, all we would have to do is change the scheduler name in the file. The code would automatically start picking up jobs from the queue with the same scheduler name. This can be very beneficial when one of your schedulers is overloaded with jobs and you need additional resources to execute them more quickly. Without updating or replacing a single DLL, you can change the configuration of the Quartz schedulers and they will continue to work seamlessly.

Points of Interest

The most important thing I learned is that the Quartz.NET framework is very modular and flexible, making it easy to build plug-and-play components. A modular architecture is really helpful when working with larger jobs, higher volumes, and long-running jobs. As we're working off the SQL Server datastore, this concept can also be used across load-balanced servers. The only catch is to make sure that the service has the required DLLs/binaries in its folder and that the App.config file is kept up to date.
