Tuesday, 29 May 2018 / Published in Uncategorized

Last September, we announced the general availability of Azure App Service on Linux, allowing developers to bring their code or Docker-formatted containers to run on this high-productivity platform with Linux. Today, we are pleased to announce public previews for multi-container support, Linux support for App Service Environment (ASE), and simplified diagnostics, debugging, authentication, and authorization.

Over the years, Azure App Service has helped developers quickly build, deploy, and scale applications without having to maintain the underlying web servers or operating systems. With these new investments, we now support more app patterns and deployment choices. 

Multi-container support

Web apps do not live in a vacuum. They typically have a frontend, need to talk to APIs, or use a caching service. To increase portability, isolation, and agility, you might choose to containerize each component but manage them as one integral unit.  Using multi-container support from App Service, you can now deploy web apps that are composed of multiple Docker-formatted containers into a single virtual machine (VM) host. This VM host can be scaled out horizontally, either dynamically or manually, or scaled up to more powerful hardware if needed. You can then operate this composite as an atomic unit while leveraging App Service’s powerful capabilities such as built-in CI/CD, autoscaling, and intelligent diagnostics without worrying about container orchestration or hosting infrastructure. 

Take a shopping cart web app as an example: you can compose a cache service, a backend payment API, a monitoring service, and the front-end ordering app, each from its respective container. Each container can be managed separately following the “separation of concerns” design pattern and reused by different applications or teams to speed up development. Additionally, hardware usage is optimized by having the composite of multiple containers run on a single host VM. With the backend APIs and cache service in the same unit, you can reduce processing latency and persist state for user information without having to deal with service discovery. We recommend using a database as a service (e.g., Azure Database for MySQL) separately for your data needs to optimize scale and performance.

Here’s an example of the multi-container architecture for the shopping cart web app.

Deploying multi-container web apps to Azure App Service is easy. Simply describe your web app with a Docker Compose file or Kubernetes pod definition and upload the configuration file, or copy/paste a URL pointing to the config file in the Azure portal. App Service takes care of pulling your containers from a registry, instantiating them, and configuring communications between them.

Linux support for App Service Environment in public preview

With Linux on App Service Environment, you can deploy web applications into an Azure virtual network (VNet), by bringing your own container or Linux based code. Both Windows and Linux web applications can be deployed into one ASE, sharing the same VNet. You can choose to have an Internet accessible endpoint by deploying the ASE with an external endpoint, or a private address in a VNet by deploying the web app into an ASE with an internal load balancer. Apps deployed into an ASE are available in the Isolated plan, where you can scale up to 100 Dv2 VMs per ASE and provide network isolation that meets your compliance needs.

To get started, please refer to this quick start.

Enhanced App Service diagnostics

Last year, we introduced an App Service diagnostics capability, offering a guided intelligent troubleshooting experience that points you to the right direction to diagnose and solve your app issues. You can now leverage this tool not only for your Windows web apps, but also for web apps running on Linux, App Service Environment, and Azure Functions. Besides identifying platform and application issues, you can get code-level insights using Application Insights within the App Service diagnostics experience.

Easier debugging, authentication, and authorization

We are excited to share the following set of innovations that make building and debugging apps on Azure even easier.

  • Remote debugging, in public preview: You can now choose to remotely debug your Node.js applications running on App Service on Linux using Visual Studio Code or the Azure CLI to resolve issues much faster.
  • Support for the SSH client of your choice, in public preview: In addition to the Kudu web SSH client, you can now use PuTTY or any other SSH client from your own console or shell program. We also support the SFTP protocol, so you can use the SFTP client of your choice.
  • Easier authentication and authorization, in public preview for App Service on Linux: Sites can now authorize users and restrict access to site contents using Azure Active Directory, Facebook, Google, Twitter, and Microsoft Account identities.

As we keep innovating on App Service, we are also making it easier for developers to use it. Today we are announcing that each new Azure subscription will get its first month (722 hours) of B1 Linux consumption for free. This is effective starting June 1, 2018. We encourage you to use this offer to try out our new capabilities and let us know what you think on the forum.

Tuesday, 29 May 2018 / Published in Uncategorized



The SOLID design principles were promoted by Robert C. Martin and are some of the best-known design principles in object-oriented software development. SOLID is a mnemonic acronym for the following five principles:

  • Single Responsibility Principle
  • Open/Closed Principle
  • Liskov Substitution Principle
  • Interface Segregation Principle
  • Dependency Inversion Principle

Each of these principles can stand on its own and aims to improve the robustness and maintainability of object-oriented applications and software components. But they also build on each other, so that applying all of them makes the implementation of each principle easier and more effective.

I explained the first four design principles in previous articles. In this one, I will focus on the Dependency Inversion Principle. It is based on the Open/Closed Principle and the Liskov Substitution Principle. You should, therefore, at least be familiar with these two principles before you read this article.

Definition of the Dependency Inversion Principle

The general idea of this principle is as simple as it is important: High-level modules, which provide complex logic, should be easily reusable and unaffected by changes in low-level modules, which provide utility features. To achieve that, you need to introduce an abstraction that decouples the high-level and low-level modules from each other.

Based on this idea, Robert C. Martin’s definition of the Dependency Inversion Principle consists of two parts:

  1. High-level modules should not depend on low-level modules. Both should depend on abstractions.
  2. Abstractions should not depend on details. Details should depend on abstractions.

An important detail of this definition is that both the high-level and the low-level modules depend on the abstraction. The design principle does not just change the direction of the dependency, as you might have expected when you read its name for the first time. It splits the dependency between the high-level and low-level modules by introducing an abstraction between them (a small sketch follows the list below). So in the end, you get two dependencies:

  1. the high-level module depends on the abstraction, and
  2. the low-level module depends on the same abstraction.
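To make these two dependencies concrete, here is a tiny, hypothetical sketch; the names MessageSender, Notifier, and EmailSender are made up for illustration and are not part of the coffee machine example used later in this article.

interface MessageSender {          // the abstraction both sides depend on
    void send(String message);
}

class Notifier {                   // high-level module
    private final MessageSender sender;

    Notifier(MessageSender sender) {
        this.sender = sender;      // depends only on the abstraction
    }

    void notifyUser(String message) {
        sender.send(message);
    }
}

class EmailSender implements MessageSender {   // low-level module
    @Override
    public void send(String message) {
        // send the message as an e-mail; also depends only on the abstraction
    }
}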

Based on other SOLID principles

This might sound more complex than it often is. If you consistently apply the Open/Closed Principle and the Liskov Substitution Principle to your code, it will also follow the Dependency Inversion Principle.

The Open/Closed Principle requires a software component to be open for extension, but closed for modification. You can achieve that by introducing interfaces for which you can provide different implementations. The interface itself is closed for modification, and you can easily extend it by providing a new interface implementation.

Your implementations should follow the Liskov Substitution Principle so that you can replace them with other implementations of the same interface without breaking your application.

Let’s take a look at the CoffeeMachine project in which I will apply all three of these design principles.

Brewing coffee with the Dependency Inversion Principle

You can buy lots of different coffee machines. Rather simple ones that use water and ground coffee to brew filter coffee, and premium ones that include a grinder to freshly grind the required amount of coffee beans and which you can use to brew different kinds of coffee.

If you build a coffee machine application that automatically brews you a fresh cup of coffee in the morning, you can model these machines as a BasicCoffeeMachine and a PremiumCoffeeMachine class.
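The code snippets in this article reference several supporting types (CoffeeSelection, Configuration, GroundCoffee, CoffeeBean, Coffee, BrewingUnit, Grinder, and CoffeeException) that are not shown. As an orientation only, a minimal sketch of a few of them could look like the following; the fields, units, and constructors are assumptions inferred from how the types are used below, not the article's actual code, and in a real project each type would live in its own file.

// Assumed supporting types, inferred from how they are used in the coffee machine classes below.
enum CoffeeSelection {
    FILTER_COFFEE, ESPRESSO
}

class Configuration {
    private final int quantityCoffee;  // grams of ground coffee per brew (assumed unit)
    private final int quantityWater;   // milliliters of water per brew (assumed unit)

    Configuration(int quantityCoffee, int quantityWater) {
        this.quantityCoffee = quantityCoffee;
        this.quantityWater = quantityWater;
    }

    int getQuantityCoffee() { return quantityCoffee; }
    int getQuantityWater() { return quantityWater; }
}

class GroundCoffee {
    private String name;
    private double quantity;

    String getName() { return name; }
    double getQuantity() { return quantity; }
    void setQuantity(double quantity) { this.quantity = quantity; }
}

class CoffeeException extends Exception {
    CoffeeException(String message) { super(message); }
}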

Implementing the BasicCoffeeMachine

The implementation of the BasicCoffeeMachine is quite simple. It only implements a constructor and two public methods. You can call the addGroundCoffee method to refill ground coffee, and the brewFilterCoffee method to brew a cup of filter coffee.

import java.util.Map;

public class BasicCoffeeMachine {

    private Configuration config;
    private Map<CoffeeSelection, GroundCoffee> groundCoffee;
    private BrewingUnit brewingUnit;

    public BasicCoffeeMachine(Map<CoffeeSelection, GroundCoffee> coffee) {
        this.groundCoffee = coffee;
        this.brewingUnit = new BrewingUnit();
        this.config = new Configuration(30, 480);
    }

    public Coffee brewFilterCoffee() {
        // get the coffee
        GroundCoffee groundCoffee = this.groundCoffee.get(CoffeeSelection.FILTER_COFFEE);
        // brew a filter coffee
        return this.brewingUnit.brew(CoffeeSelection.FILTER_COFFEE, groundCoffee, this.config.getQuantityWater());
    }

    public void addGroundCoffee(CoffeeSelection sel, GroundCoffee newCoffee) throws CoffeeException {
        GroundCoffee existingCoffee = this.groundCoffee.get(sel);
        if (existingCoffee != null) {
            if (existingCoffee.getName().equals(newCoffee.getName())) {
                existingCoffee.setQuantity(existingCoffee.getQuantity() + newCoffee.getQuantity());
            } else {
                throw new CoffeeException("Only one kind of coffee supported for each CoffeeSelection.");
            }
        } else {
            this.groundCoffee.put(sel, newCoffee);
        }
    }
}

Implementing the PremiumCoffeeMachine

The implementation of the PremiumCoffeeMachine class looks very similar. The main differences are:

  • It implements the addCoffeeBeans method instead of the addGroundCoffee method.
  • It implements the additional brewEspresso method.

The brewFilterCoffee method is identical to the one provided by the BasicCoffeeMachine.

import java.util.HashMap;
import java.util.Map;

public class PremiumCoffeeMachine {
    private Map<CoffeeSelection, Configuration> configMap;
    private Map<CoffeeSelection, CoffeeBean> beans;
    private Grinder grinder;
    private BrewingUnit brewingUnit;

    public PremiumCoffeeMachine(Map<CoffeeSelection, CoffeeBean> beans) {
        this.beans = beans;
        this.grinder = new Grinder();
        this.brewingUnit = new BrewingUnit();
        this.configMap = new HashMap<>();
        this.configMap.put(CoffeeSelection.FILTER_COFFEE, new Configuration(30, 480));
        this.configMap.put(CoffeeSelection.ESPRESSO, new Configuration(8, 28));
    }

    public Coffee brewEspresso() {
        Configuration config = configMap.get(CoffeeSelection.ESPRESSO);
        // grind the coffee beans
        GroundCoffee groundCoffee = this.grinder.grind(
            this.beans.get(CoffeeSelection.ESPRESSO),
            config.getQuantityCoffee());
        // brew an espresso
        return this.brewingUnit.brew(CoffeeSelection.ESPRESSO, groundCoffee,
            config.getQuantityWater());
    }

    public Coffee brewFilterCoffee() {
        Configuration config = configMap.get(CoffeeSelection.FILTER_COFFEE);
        // grind the coffee beans
        GroundCoffee groundCoffee = this.grinder.grind(
            this.beans.get(CoffeeSelection.FILTER_COFFEE),
            config.getQuantityCoffee());
        // brew a filter coffee
        return this.brewingUnit.brew(CoffeeSelection.FILTER_COFFEE, groundCoffee,
            config.getQuantityWater());
    }

    public void addCoffeeBeans(CoffeeSelection sel, CoffeeBean newBeans) throws CoffeeException {
        CoffeeBean existingBeans = this.beans.get(sel);
        if (existingBeans != null) {
            if (existingBeans.getName().equals(newBeans.getName())) {
                existingBeans.setQuantity(existingBeans.getQuantity() + newBeans.getQuantity());
            } else {
                throw new CoffeeException("Only one kind of coffee supported for each CoffeeSelection.");
            }
         } else {
             this.beans.put(sel, newBeans); 
         }
    }
}

To implement a class that follows the Dependency Inversion Principle and can use the BasicCoffeeMachine or the PremiumCoffeeMachine class to brew a cup of coffee, you need to apply the Open/Closed and the Liskov Substitution Principle. That requires a small refactoring during which you introduce interface abstractions for both classes.

Introducing abstractions

The main task of both coffee machine classes is to brew coffee. But they enable you to brew different kinds of coffee. If you use a BasicCoffeeMachine, you can only brew filter coffee, but with a PremiumCoffeeMachine, you can brew filter coffee or espresso. So, which interface abstraction would be a good fit for both classes?

As all coffee lovers will agree, there are huge differences between filter coffee and espresso. That’s why we use different machines to brew them, even though some machines can do both. I therefore suggest creating two independent abstractions:

  • The CoffeeMachine interface defines the Coffee brewFilterCoffee() method and gets implemented by all coffee machine classes that can brew a filter coffee.
  • All classes that you can use to brew an espresso, implement the EspressoMachine interface, which defines the Coffee brewEspresso() method.

As you can see in the following code snippets, the definition of both interfaces is pretty simple.

 
public interface CoffeeMachine {
    Coffee brewFilterCoffee();
}

public interface EspressoMachine {
    Coffee brewEspresso();
}

In the next step, you need to refactor both coffee machine classes so that they implement one or both of these interfaces.

Refactoring the BasicCoffeeMachine class

Let’s start with the BasicCoffeeMachine class. You can use it to brew a filter coffee, so it should implement the CoffeeMachine interface. The class already implements the brewFilterCoffee() method. You only need to add implements CoffeeMachine to the class definition.

public class BasicCoffeeMachine implements CoffeeMachine {
    private Configuration config;
    private Map<CoffeeSelection, GroundCoffee> groundCoffee;
    private BrewingUnit brewingUnit;

    public BasicCoffeeMachine(Map<CoffeeSelection, GroundCoffee> coffee) {
        this.groundCoffee = coffee;
        this.brewingUnit = new BrewingUnit();
        this.config = new Configuration(30, 480);
    }

    @Override
    public Coffee brewFilterCoffee() {
        // get the coffee
        GroundCoffee groundCoffee = this.groundCoffee.get(CoffeeSelection.FILTER_COFFEE);
        // brew a filter coffee
        return this.brewingUnit.brew(CoffeeSelection.FILTER_COFFEE, groundCoffee, this.config.getQuantityWater());
    }

    public void addGroundCoffee(CoffeeSelection sel, GroundCoffee newCoffee) throws CoffeeException {
        GroundCoffee existingCoffee = this.groundCoffee.get(sel);
        if (existingCoffee != null) {
            if (existingCoffee.getName().equals(newCoffee.getName())) {
                existingCoffee.setQuantity(existingCoffee.getQuantity() + newCoffee.getQuantity());
            } else {
             throw new CoffeeException("Only one kind of coffee supported for each CoffeeSelection.");
           }
        } else {
            this.groundCoffee.put(sel, newCoffee);
        }
    } 
}

Refactoring the PremiumCoffeeMachine class

The refactoring of the PremiumCoffeeMachine also doesn’t require a lot of work. You can use the coffee machine to brew filter coffee and espresso, so the PremiumCoffeeMachine class should implement the CoffeeMachine and the EspressoMachine interfaces. The class already implements the methods defined by both interfaces. You just need to declare that it implements the interfaces.

import java.util.HashMap;
import java.util.Map;

public class PremiumCoffeeMachine implements CoffeeMachine, EspressoMachine {
    private Map<CoffeeSelection, Configuration> configMap;
    private Map<CoffeeSelection, CoffeeBean> beans;
    private Grinder grinder;
    private BrewingUnit brewingUnit;

    public PremiumCoffeeMachine(Map<CoffeeSelection, CoffeeBean> beans) {
        this.beans = beans;
        this.grinder = new Grinder();
        this.brewingUnit = new BrewingUnit();
        this.configMap = new HashMap<>();
        this.configMap.put(CoffeeSelection.FILTER_COFFEE, new Configuration(30, 480));
        this.configMap.put(CoffeeSelection.ESPRESSO, new Configuration(8, 28)); 
    }

    @Override
    public Coffee brewEspresso() {
        Configuration config = configMap.get(CoffeeSelection.ESPRESSO);
        // grind the coffee beans
        GroundCoffee groundCoffee = this.grinder.grind(
           this.beans.get(CoffeeSelection.ESPRESSO),
           config.getQuantityCoffee());
       // brew an espresso
       return this.brewingUnit.brew(CoffeeSelection.ESPRESSO, groundCoffee,
           config.getQuantityWater());
    }

    @Override
    public Coffee brewFilterCoffee() {
        Configuration config = configMap.get(CoffeeSelection.FILTER_COFFEE);
        // grind the coffee beans
        GroundCoffee groundCoffee = this.grinder.grind(
            this.beans.get(CoffeeSelection.FILTER_COFFEE),
            config.getQuantityCoffee());
        // brew a filter coffee
        return this.brewingUnit.brew(CoffeeSelection.FILTER_COFFEE, 
            groundCoffee,config.getQuantityWater());
    }

    public void addCoffeeBeans(CoffeeSelection sel, CoffeeBean newBeans) throws CoffeeException {
        CoffeeBean existingBeans = this.beans.get(sel);
        if (existingBeans != null) {
            if (existingBeans.getName().equals(newBeans.getName())) {
                existingBeans.setQuantity(existingBeans.getQuantity() + newBeans.getQuantity());
            } else {
                throw new CoffeeException("Only one kind of coffee supported for each CoffeeSelection.");
            }
        } else {
            this.beans.put(sel, newBeans);
        }
    }
}

The BasicCoffeeMachine and the PremiumCoffeeMachine classes now follow the Open/Closed and the Liskov Substitution principles. The interfaces enable you to add new functionality without changing any existing code by adding new interface implementations. And by splitting the interfaces into CoffeeMachine and EspressoMachine, you separate the two kinds of coffee machines and ensure that all CoffeeMachine and EspressoMachine implementations are interchangeable.

Implementing the coffee machine application

You can now create additional, higher-level classes that use one or both of these interfaces to manage coffee machines without directly depending on any specific coffee machine implementation.

As you can see in the following code snippet, due to the abstraction of the CoffeeMachine interface and its provided functionality, the implementation of the CoffeeApp is very simple. It requires a CoffeeMachine object as a constructor parameter and uses it in the prepareCoffee method to brew a cup of filter coffee.

public class CoffeeApp {
    private CoffeeMachine coffeeMachine;

    public CoffeeApp(CoffeeMachine coffeeMachine) {
        this.coffeeMachine = coffeeMachine;
    }

    public Coffee prepareCoffee(CoffeeSelection selection)
        throws CoffeeException {
        Coffee coffee = this.coffeeMachine.brewFilterCoffee();
        System.out.println("Coffee is ready!");
        return coffee;
    }  
}

The only code that directly depends on one of the implementation classes is the CoffeeAppStarter class, which instantiates a CoffeeApp object and provides an implementation of the CoffeeMachine interface. You could avoid this compile-time dependency entirely by using a dependency injection framework, like Spring or CDI, to resolve the dependency at runtime.

import java.util.HashMap;
import java.util.Map;

public class CoffeeAppStarter {
    public static void main(String[] args) {
        // create a Map of available coffee beans
        Map<CoffeeSelection, CoffeeBean> beans = new HashMap<CoffeeSelection, CoffeeBean>();
        beans.put(CoffeeSelection.ESPRESSO, new CoffeeBean(
            "My favorite espresso bean", 1000));
        beans.put(CoffeeSelection.FILTER_COFFEE, new CoffeeBean(
             "My favorite filter coffee bean", 1000))
        // get a new CoffeeMachine object
        PremiumCoffeeMachine machine = new PremiumCoffeeMachine(beans);
        // Instantiate CoffeeApp
        CoffeeApp app = new CoffeeApp(machine);
        // brew a fresh coffee
        try {
           app.prepareCoffee(CoffeeSelection.ESPRESSO);
        } catch (CoffeeException e) {
            e.printStackTrace();
        }
    }
}

Summary

The Dependency Inversion Principle is the fifth and final design principle that we discussed in this series. It introduces an interface abstraction between higher-level and lower-level software components to remove the dependencies between them.

As you have seen in the example project, you only need to consistently apply the Open/Closed and the Liskov Substitution principles to your code base. After you have done that, your classes also comply with the Dependency Inversion Principle. This enables you to change higher-level and lower-level components without affecting any other classes, as long as you don’t change any interface abstractions.

If you enjoyed this article, you should also read my other articles about the SOLID design principles:

Tuesday, 29 May 2018 / Published in Uncategorized

The App Service team is happy to announce the public preview of Multi-container support in Web App for Containers.

Multi-container Web App Concept

The App Service Linux community has repeatedly asked for the capability to deploy multiple containers for a single app. Customers want additional containers to complement the primary container and have these containers form an “operable” unit. The benefits of multi-container support are:

  1. Customers can separate capabilities such as proxies, cache, and storage into additional containers and manage the source code and container images independently, following the “separation of concerns” design pattern in containerization.
  2. Customers can operate those containers as an atomic unit and leverage App Service’s rich feature set for application lifecycle management, scaling, and diagnostics, without the need to stand up a container orchestrator or manage the hosting infrastructure themselves.

Today we’re happy to announce the public preview of multi-container support in Web App for Containers!

Primary Use Case

In the App Service Linux community, the primary multi-container use case is a customer deploying a stateless web app with multiple containers (web, proxy, cache, or Git mirror) to a single App Service plan. For example, a customer can have one container for the web frontend and another container for a session cache using Redis, all deployed to the same App Service plan. All containers can communicate with each other through a Docker bridge network using internal IP addresses.

Supported Configurations 

In public preview, we support the Docker Compose and Kubernetes configuration formats, as they’re the “standard” ways to describe multi-container applications; we don’t want to invent another format. It’s also convenient for customers because the formats are well documented and widely used by the Docker community.

Customers can create or configure a multi-container app from the Azure portal or through the Azure CLI. A multi-container app is described in a YAML file using the Docker Compose or Kubernetes configuration format. Customers can upload the multi-container config file through the portal UI or point to its URL if the config file is hosted elsewhere (note: URL link support will come soon after this announcement); a portal screenshot is shown below.

For example, customers can use the Docker Compose format to describe a multi-container app:

docker-compose.yml

version: '3'
services:
  web:
    image: "appsvcsample/flaskapp"  # this image repo's source code comes from "Get started with Docker Compose" on docker.com
    ports:
      - "80:80"
  redis:
    image: "redis:alpine"

CLI command to create a multi-container app:

$ az webapp create --resource-group [resource group] --plan [service plan] --name [app name] --multicontainer-config-type "compose" --multicontainer-config-file [path to "docker-compose.yml"]

CLI command to configure a multi-container app:

$ az webapp config container set --resource-group [resource group] --name [app name] --multicontainer-config-type "compose" --multicontainer-config-file [path to "docker-compose.yml"]

Customers can also use the Kubernetes configuration format to describe a multi-container app:

my-kube.yml

apiVersion: v1
kind: Pod
metadata:
  name: python
spec:
  containers:
    - name: web
      image: appsvcsample/flaskapp  # this image repo's source code comes from "Get started with Docker Compose" on docker.com
      ports:
        - containerPort: 80
    - name: redis
      image: redis:alpine

CLI command to create a multi-container app:

$ az webapp create --resource-group [resource group] --plan [service plan] --name [app name] --multicontainer-config-type "kube" --multicontainer-config-file [path to "my-kube.yml"]

CLI command to configure a multi-container app:

$ az webapp config container set --resource-group [resource group] --name [app name] --multicontainer-config-type "kube" --multicontainer-config-file [path to "my-kube.yml"]

Samples

We’re working to add more multi-container web app samples. To get you started quickly, please feel free to copy the samples provided in this blog post, or download more from GitHub.

Scaling Multi-container Web App

Customers can scale a stateless multi-container app up and/or out just like any other web app hosted on App Service, using the same scaling features provided by App Service.

If you would like to use a database in a container for dev/test purposes in a single-host app, please make sure to use the persisted shared storage for your database files, so they survive app restarts. First, enable the App Service shared storage following the instructions here. Then, mount the shared storage to the directory within the Docker container where you store the database files. A MySQL example in docker-compose.yml:

services:
  mysql:
    image: mysql:5.7
    volumes:
      - ${WEBAPP_STORAGE_HOME}/site:[/path/in/container/where/mysqlfiles/needs/to/be/mounted]

If you would like to scale out a multi-container app to multiple hosts in an App Service plan and use the app for production purposes, we strongly recommend you use Azure Database services instead of putting the database in a container. For example, for a WordPress app you can move the database to an Azure Database for MySQL by pointing the WordPress database settings at it:

WORDPRESS_DB_HOST: [mysql server name].mysql.database.azure.com
WORDPRESS_DB_USER: [db user name]@[mysql server name]
WORDPRESS_DB_PASSWORD: [database password]

  • Test the configuration locally with docker-compose up before you push it to App Service.

Limitations in Public Preview

We wanted to put this feature out as soon as possible so customers can validate it and provide feedback during the preview. There are certain limitations in this public preview release.

We support the Docker Compose and Kubernetes formats to describe a multi-container app, but we don’t support all of their configuration objects during public preview. Our goal is to support the configuration objects that are meaningful to App Service during this release. The supported objects and limitations are as follows:

Docker-Compose

Supported configuration in Public Preview:

services

A service definition contains configuration that is applied to each container started for that service, much like passing command-line parameters to docker container create.

image

Specify the image to start the container from. Can either be a repository/tag or a partial image ID.

ports

Expose ports by mapping them in the HOST:CONTAINER pattern; we recommend explicitly specifying your port mapping as a string. Specific to App Service, we only expose one service port externally; we identify the single service port to expose based on the HOST port provided for each service, looking for port 80 or port 8080.

environment

Add environment variables. You can use the dictionary format as input. The array format is not supported in the current release; we will add support for it in the next release.

environment:  # supported
  RACK_ENV: development
  SHOW: 'true'
  SESSION_SECRET:

environment:  # not supported
  - RACK_ENV=development
  - SHOW=true
  - SESSION_SECRET

volumes

Mount host paths or named volumes, specified as sub-options to a service. We support both persisted and non-persisted volumes. To use a persisted volume, enable the shared storage by setting WEBSITES_ENABLE_APP_SERVICE_STORAGE=TRUE. You can reference the shared storage using ${WEBAPP_STORAGE_HOME}.

For example, you can mount the shared storage to /tmp in the container:

volumes:
  - ${WEBAPP_STORAGE_HOME}/site/wwwroot:/tmp

command

Override the default command. Currently we support the collection format, for example: command: ["bundle", "exec", "thin", "-p", "3000"]. We will add support for a single string after public preview.

entrypoint

Override the default entrypoint.

restart

“no” is the default restart policy, and it does not restart a container under any circumstance. When “always” is specified, the container always restarts. More info at https://docs.docker.com/compose/compose-file/#restart.

Configuration not supported in Public Preview:

(besides the list below, any other Docker-Compose syntax not explicitly called out in the “Supported Configuration” section will not be supported in Public Preview)

build

Configuration options that are applied at build time. We don’t support building an image locally, as we need an explicit image to start the container from.

depends_on

Express dependency between services. We don’t currently support “depends_on” to specify the dependencies among containers, but we plan to support this shortly after public preview.

networks

Networks to join. We don’t support additional “networks” as we run all containers on one Bridge network.

secrets

Grant access to secrets on a per-service basis using the per-service secrets configuration. We don’t currently support it, but we plan to support this shortly after Public Preview.

Kubernetes

Supported configuration in Public Preview:

spec

The spec schema is a description of a pod. It’s the parent-level object for containers.

containers

A list of containers belonging to the pod.

name

The name of the pod. If it’s within a containers object, it’s the name of the container.

image

Docker container image name.

command

The entrypoint array; it overrides the default entrypoint provided by the container image.

args

A command array containing arguments to the entrypoint. The Docker image’s CMD is used if this is not provided.

ports

A list of ports to expose from the container.

Configuration not supported in Public Preview:

Any other Kubernetes syntax not explicitly called out in the “Supported Configuration” section will not be supported in Public Preview.

Tuesday, 29 May 2018 / Published in Uncategorized

Interested in programming? Like to read about programming without seeing a constant flow of technology and political news into your proggit? That’s what /r/coding is for. A pure discussion of programming with a strict policy of programming-related discussions.

As a general policy, if your article doesn’t have a few lines of code in it, it probably doesn’t belong here. However, things directly related to the actual process of programming – libraries, tools, and so on – are all okay, but please use discretion.

To avoid a lot of self.coding posts, consider these subreddits:

This is also NOT AN OPERATING SYSTEM SUBREDDIT. Post those where they belong: /r/linux, /r/mac, or /r/windows.

Tuesday, 29 May 2018 / Published in Uncategorized


WEBINAR:On-Demand

Application Security Testing: An Integral Part of DevOps

I love games; some of you may know that about me. I love reading and try to dabble with writing every now and then. A game that combines both of these passions is Scrabble. Scrabble is a word game: you get a certain number of tiles, each with a letter on it, and each letter is worth a particular point value. Difficult letters are worth more than others. In the game, you continuously add letters onto existing letters to form words and score points. It gets pretty difficult if you do not have a pocket dictionary by your side or a huge vocabulary. Even if you have a decent vocabulary, you will struggle sometimes because the tiles and the letters do not work in your favor.

My aim today is to improve your vocabulary a bit! I will show you how to traverse dictionary files in search of a certain word. The same principle is applied in Scrabble games to verify matching words. Who knows, perhaps in a second installment I will delve a bit deeper into creating the game engine. So for now, let’s call this Part 1.

There is a ton of work, so let’s get started right away!

Our Project

Before you begin with creating the project, you first need a dictionary. Luckily, the Internet is huge and you can just about find anything on it. Navigate to this dictionary and download the files. Remember the location of the files; you will need the location a bit later, because you will read through all these files to find your matching word and all its variants.

Create a new Windows Forms application in C# or VB.NET. Once the project has been created, design it to resemble Figure 1.

Figure 1: Design

Create a new class and name it something descriptive, such as clsRecursion. The recursion class will be used to work out all the permutations of the letters in the word you enter; these permutations will later be checked against the dictionary files one by one. If you do not know what recursion is, here is a very helpful link.

To be honest, I did not create this class, but it is probably the best recursion class that can be found out there. Add the class now.

C#

using System;
using System.Collections.Generic;

namespace Scrabble_C
{
   /*
    * The "Recursion" class below, is written in C# by
    * Gary Stafford
   * who references
   * Algorithm Source: A. Bogomolny, Counting And Listing
   * All Permutations from Interactive Mathematics Miscellany
   *    and Puzzles
   * http://www.cut-the-knot.org/do_you_know/AllPerm.shtml,
   * Accessed 11 June 2009
   * who references

   * 1. A. Levitin, Introduction to The Design & Analysis of
       * Algorithms,
       * Addison Wesley, 2003
   * 2. E. W. Dijkstra, A Discipline of Programming,
       * Prentice-Hall,
   * 3. R. Sedgewick, Algorithms in C, Addison-Wesley,
       * 3rd edition (August 31, 2001)
   */
   public class clsRecursion
   {

      private int lvlElement = -1;
      private int intElements;
      private int[] intPermutationVal = new int[0];

      private string strList = "";
      private LinkedList<string> lstList = new
         LinkedList<string>();

      private char[] chrInput;
      public char[] InputSet
      {

         get
         {

            return chrInput;
         }

         set
         {

            chrInput = value;

         }

      }

      private int intCountPermutation = 0;
      public int PermutationCount
      {

         get

         {

            return intCountPermutation;
         }

         set
         {

            intCountPermutation = value;
         }

      }

      public char[] CreateCharArray(string strInput)
      {

         char[] charString = strInput.ToCharArray();


         Array.Resize(ref intPermutationVal, charString.Length);
         intElements = charString.Length;

         return charString;

      }

      public void CalcPermutation(int intIndex)
      {

         lvlElement++;

         intPermutationVal.SetValue(lvlElement, intIndex);

         if (lvlElement == intElements)
         {

            OutputPermutation(intPermutationVal);

         }

         else
         {

            for (int i = 0; i < intElements; i++)
            {

               if (intPermutationVal[i] == 0)
               {

                  CalcPermutation(i);

               }

            }

         }

         lvlElement--;

         intPermutationVal.SetValue(0, intIndex);

      }

      private void OutputPermutation(int[] intVal)
      {

         string strWord = "";

         foreach (int i in intVal)
         {

            strList += (chrInput.GetValue(i - 1));
            strWord += (chrInput.GetValue(i - 1));

         }

         strList += "\r\n";

         if (lstList.Contains(strWord))
         {

         }

         else
         {

            lstList.AddLast(strWord);

         }

         PermutationCount++;

      }

      public LinkedList<string> GetList()
      {

         return lstList;

      }

   }

}

VB.NET

Imports System
Imports System.Collections.Generic

Public Class clsRecursion

   Private lvlElement As Integer = -1

   Private intElements As Integer

   Private intPermutationVal As Integer() = New Integer(-1) {}

   Private strList As String = ""

   Private lstList As LinkedList(Of String) = New _
      LinkedList(Of String)()

   Private chrInput As Char()

   Public Property InputSet As Char()

      Get

         Return chrInput

      End Get

      Set(ByVal value As Char())

         chrInput = value

      End Set

   End Property

   Private intCountPermutation As Integer = 0

   Public Property PermutationCount As Integer

      Get

         Return intCountPermutation

      End Get

      Set(ByVal value As Integer)

         intCountPermutation = value

      End Set

   End Property

   Public Function CreateCharArray(ByVal strInput As String) _
         As Char()

      Dim charString As Char() = strInput.ToCharArray()
      Array.Resize(intPermutationVal, charString.Length)

      intElements = charString.Length

      Return charString

   End Function

   Public Sub CalcPermutation(ByVal intIndex As Integer)

      lvlElement += 1

      intPermutationVal.SetValue(lvlElement, intIndex)

      If lvlElement = intElements Then

         OutputPermutation(intPermutationVal)

      Else

         For i As Integer = 0 To intElements - 1

            If intPermutationVal(i) = 0 Then

               CalcPermutation(i)

            End If

         Next

      End If

      lvlElement -= 1
      intPermutationVal.SetValue(0, intIndex)

   End Sub

   Private Sub OutputPermutation(ByVal intVal As Integer())

      Dim strWord As String = ""

      For Each i As Integer In intVal

         strList += (chrInput.GetValue(i - 1))
         strWord += (chrInput.GetValue(i - 1))

      Next

      strList += vbCrLf

      If lstList.Contains(strWord) Then

      Else

         lstList.AddLast(strWord)

      End If

      PermutationCount += 1

   End Sub

   Public Function GetList() As LinkedList(Of String)

      Return lstList

   End Function


End Class

This class takes a set of characters and recursively generates every possible permutation of them, storing the unique results in a linked list. On the form, add the necessary namespaces.

C#

using System.IO;
using System.Text.RegularExpressions;

VB.NET

Imports System.IO
Imports System.Text.RegularExpressions

The System.IO namespace allows you to work with files and folders. System.Text.RegularExpressions enables you to make use of Regular Expressions in your text. Regular Expressions are a great way to filter input strings. Here is more information on Regular Expressions.

Add the following member variables to your form class.

C#

      private LinkedList<string> lstDictionary = new
         LinkedList<string>();

      public static LinkedList<string> lstCombinations = new
         LinkedList<string>();

VB.NET

   Private lstDictionary As LinkedList(Of String) = New _
      LinkedList(Of String)()

   Public Shared lstCombinations As LinkedList(Of String) = New _
      LinkedList(Of String)()

lstDictionary will contain all the words read from the dictionary files, and lstCombinations will contain all possible combinations of the input word. Each of these will be stored inside a LinkedList generic list object.

Add the following code behind the button.

C#

      private void btnGetWords_Click(object sender, EventArgs e)
      {
         lstCombinations.Clear();
         txtPossibles.Clear();

         char[] Letter = {
            'A','B','C','D','E','F','G','H','I','J','K','L','M',
            'N','O','P','Q','R','S','T','U','V','W','X','Y','Z'
         };

         if (lstDictionary.Count == 0)
         {

            for (int i = 0; i < Letter.Length; i++)
            {

               StreamReader srWords;
               srWords = File.OpenText("C:\\scrabble\\dictionary\\gcide_" + Letter[i] + ".xml");

               string strFullList = "";

               strFullList = srWords.ReadToEnd();
               srWords.Close();

               Regex rgPattern = new Regex(@"<p> (.*?) </p>",
                  RegexOptions.Singleline);

               MatchCollection mcMatch =
                  rgPattern.Matches(strFullList);

               MatchCollection mcWords = Regex.Matches(strFullList,
                  @"<hw> (.*?) </hw>",
                  RegexOptions.Multiline);

               for (int j = 0; j < mcWords.Count; j++)
               {

                  lstDictionary.AddLast(Regex.Replace(mcWords[j].ToString().ToLower(),
                     "<hw>|</hw>|\\.|\\]|\\[|,|\\?|:|\\(|\\)|;|-|!|\\*|\"|`", "")
                     .ToString().Trim().ToLower());

               }

            }

         }

         string strInput = txtInput.Text;

         GetSubPermutations(strInput);

         LinkedList<string> strCombo = new LinkedList<string>();
         clsRecursion rcRecursion = new clsRecursion();

         foreach (string strComb in lstCombinations)
         {

            rcRecursion.InputSet = rcRecursion.CreateCharArray
               (strComb);
            rcRecursion.CalcPermutation(0);

            strCombo = rcRecursion.GetList();

         }

         lblDictionary.Text = "Dictionary: " + lstDictionary.Count;
         lblCombination.Text = "Combination: " + strCombo.Count;

         foreach (var word in strCombo)
         {

            // only display permutations that appear in the dictionary
            if (lstDictionary.Contains(word))
            {

               txtPossibles.Text += word.ToUpper();

               txtPossibles.Text += Environment.NewLine;
               txtPossibles.Text += Environment.NewLine;

            }

         }

      }

VB.NET

   Private Sub btnGetWords_Click(ByVal sender As Object, _
         ByVal e As EventArgs)

      lstCombinations.Clear()
      txtPossibles.Clear()

      Dim Letter As Char() = {"A"c, "B"c, "C"c, "D"c, "E"c, "F"c, _
         "G"c, "H"c, "I"c, "J"c, "K"c, "L"c, "M"c, "N"c, "O"c, _
         "P"c, "Q"c, "R"c, "S"c, "T"c, "U"c, "V"c, "W"c, "X"c, _
         "Y"c, "Z"c}

      If lstDictionary.Count = 0 Then

         For i As Integer = 0 To Letter.Length - 1

            Dim srWords As StreamReader

            srWords = File.OpenText("C:\scrabble\dictionary\gcide_" & Letter(i) & ".xml")

            Dim strFullList As String = ""
            strFullList = srWords.ReadToEnd()
            srWords.Close()

            Dim rgPattern As Regex = New Regex("<p> (.*?) </p>", RegexOptions.Singleline)

            Dim mcMatch As MatchCollection = _
               rgPattern.Matches(strFullList)

            Dim mcWords As MatchCollection = _
               Regex.Matches(strFullList, "<hw> (.*?) </hw>", RegexOptions.Multiline)

            For j As Integer = 0 To mcWords.Count - 1

               lstDictionary.AddLast(Regex.Replace(mcWords(j).ToString().ToLower(), _
                  "<hw>|</hw>|\.|\]|\[|,|\?|:|\(|\)|;|-|!|\*|""|`", "") _
                  .ToString().Trim().ToLower())

            Next

         Next

      End If

      Dim strInput As String = txtInput.Text

      GetSubPermutations(strInput)

      Dim strCombo As LinkedList(Of String) = New _
         LinkedList(Of String)()
      Dim rcRecursion As clsRecursion = New clsRecursion()

      For Each strComb As String In lstCombinations

         rcRecursion.InputSet = _
            rcRecursion.CreateCharArray(strComb)
         rcRecursion.CalcPermutation(0)

         strCombo = rcRecursion.GetList()

      Next

      lblDictionary.Text = "Dictionary: " & lstDictionary.Count
      lblCombination.Text = "Combination: " & strCombo.Count

      For Each word In strCombo

         If lstDictionary.Contains("" & word) Then

            txtPossibles.Text += word.ToUpper()

            txtPossibles.Text += Environment.NewLine
            txtPossibles.Text += Environment.NewLine

         End If

      Next

   End Sub

Here, you set up the Letter array that contains every letter of the alphabet. You did this because you will need to loop through each of the dictionary files that you downloaded earlier. You then specified the location of your dictionary files. Remember your location, as stated earlier.

Now, these files contain a lot of jargon. They not only contain words, but also their definitions as well as some unnecessary (well, to me at least) comments. You need to sift through these files to get only the words that can potentially match the entered word. In come Regular Expressions. As you can see, Regular Expressions allow you to set up a filter once, and it gets applied to all the strings that need to be filtered. Lastly, add the GetSubPermutations method that adds all combinations to the LinkedList object.

C#

      public static void GetSubPermutations(string strPerm)
      {

         if (lstCombinations.Contains(strPerm))
         {

         }

         else
         {

            lstCombinations.AddLast(strPerm);

         }

         if (strPerm.Length > 2)
         {

            for (int i = 0; i < strPerm.Length; i++)
            {

               if (lstCombinations.Contains(strPerm.Remove(i, 1)))
               {

               }

               else
               {

                  lstCombinations.AddLast(strPerm.Remove(i, 1));

               }

               GetSubPermutations(strPerm.Remove(i, 1));

            }

         }

      }

VB.NET

   Public Shared Sub GetSubPermutations(ByVal strPerm As String)

      If lstCombinations.Contains(strPerm) Then

      Else

         lstCombinations.AddLast(strPerm)

      End If

      If strPerm.Length > 2 Then

         For i As Integer = 0 To strPerm.Length - 1

            If lstCombinations.Contains(strPerm.Remove(i, 1)) Then

            Else

               lstCombinations.AddLast(strPerm.Remove(i, 1))

            End If

            GetSubPermutations(strPerm.Remove(i, 1))

         Next

      End If
   End Sub

GitHub C# Source

GitHub VB.NET Source

Conclusion

Now that you have made the word interpretation engine, creating a decent user interface for a game such as Scrabble should not be too complicated. Hopefully, I will post a second part for this game soon. But first, I need a much deserved holiday. See you afterwards.

Tuesday, 29 May 2018 / Published in Uncategorized

Team Foundation Server 2018 Update 2 is now available

Today we announce the release of Team Foundation Server 2018 Update 2. There are a lot of new features in this release, which you can see in our release notes.

One big change in Update 2 is that we have re-enabled legacy XAML builds to unblock those customers that still require them in their environment. Although we’ve made this change, please keep in mind that XAML builds are deprecated, meaning there will be no further investment in this feature. To learn more about how you can start migrating from XAML builds to more supported features, read our migration documentation.

To get started, here are your key resources:

  • TFS 2018.2 Release Notes
  • TFS 2018.2 Web Installer
  • TFS 2018.2 ISO
  • TFS 2018.2 Express Web Installer
  • TFS 2018.2 Express ISO

Update 2 includes many Visual Studio Team Services features from our September 5 to March 5 deployments (sprints 123 – 131). Here are a few highlights:

Pull Requests

There have been many updates to pull requests, including pull request labels and mentions for pull requests. Also, the PR notifications now include the thread context. When a reply is made to a PR comment, the prior replies show in the body of the notification, so you won’t need to open the web view to get the context.

Work

There are now two new macros for queries. @MyRecentActivity shows the work items you’ve viewed recently, and @RecentMentions returns the items you were mentioned in over the last 30 days.

We have also added support for the Not In query operator. You can query for work items “Not In” a list of states, IDs, or many other fields, without nested “Or” clauses.

Build and Release

We have made several enhancements to multi-phase builds. You can use a different agent queue for each build phase, run tests in parallel, give scripts access to the OAuth token for each phase, and run a phase only under specific conditions.

In Release, you can now use release gates to react to health checks in your release pipelines.

Package

You can now set retention policies in package feeds to automatically clean up older, unused package versions. Also, the Packages page has been updated to use the standard layout and filter bar.

Wiki

There are a bunch of improvements to the wiki, such as search, referencing work items, and pasting rich text. You can also preview the wiki page as you’re editing.

Following our updated release approach, we will have one more TFS 2018 update, TFS 2018 Update 3, which will be mainly bug fixes.

Please report any problems on Developer Community or call customer support if you need immediate assistance.

 

Tuesday, 29 May 2018 / Published in Uncategorized

Introduction

 

Recently, Angular released its latest version, Angular 6.0. In the latest version, the team has focused more on the toolchain, which provides us with a way to start our tasks quickly and easily, as well as synchronized versions of packages such as @angular/core, @angular/compiler, etc.

 

With the release of Angular 6.0, one of the features which I like the most is Material Dashboard, which is kind of a starter component with a list of dynamic card components.

 

You can find the official blog post for the release of Angular 6.0 here.

 

In this article, we are going to learn how to make use of Material Dashboard starter component into our application to get a Dashboard structure with the following few steps. 

 

Material Dashboard Using Angular 6.0

 

Material Dashboard is completely inspired by Google’s Angular Material Design Component. In this dashboard, there are a number of Material Components used, which are listed below.

  • mat-grid-tile 
  • mat-grid-list
  • mat-card
  • mat-menu
  • mat-icon 

There are also a few other components used, like mat-card-title, mat-card-header, mat-card-content, etc.

 

Let’s start by implementing the Dashboard using Angular 6.0. If you are new to Angular, do not worry; just follow a few steps and you will be able to create an application in Angular. I hope you have already installed a stable or the latest version of Node.js, as well as npm.

 

Create a new Angular 6.0 application by executing the following Angular CLI command.

ng new materialdashboardangular6

After executing the above command, you can see that the Angular version is now updated to 6.0.0.

 

 

You can see in the above image that core dependencies like core, compiler, http, forms, and others are updated to the latest version, which is now 6.0.0.

 

Other developer dependencies are also updated in the latest version of Angular. 

 

For working with the Material Dashboard, we need to install a few Angular Material dependencies by using the npm commands listed below.

 

Old approach for using Angular Material: the base dependency is @angular/material, so just type the below npm command into the console.

npm install @angular/material

The next move is to install @angular/cdk, which provides tools for developers to create great components. We can install the CDK by using the npm command shown below.
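Assuming the standard package name, the command is:

npm install @angular/cdk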

Apart from these two dependencies, we should have @angular/animations installed, because some of the effects in the Angular Material components depend on animations. Angular 6.0 already comes with @angular/animations, so we do not need to install it again.

 

New Updates In Angular 6.0 

 

In Angular 6.0, there is a tree of components which is used to create a group of components, and by using these components, we can get a materialized dashboard using Angular Material + Material CDK components.

 

One of the most important parts is the Angular Material starter component. By installing @angular/material, we will be able to generate a starter component.

 

For that, we need to install @angular/material by executing the following command.
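This is the same package installed earlier; for reference, the command is:

npm install @angular/material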

Generating Material Dashboard

 

Previously, we installed @angular/material. For generating the Material Dashboard, we can use a single command to generate the starter dashboard component. Syntax:

ng generate @angular/material:material-dashboard --name=<Give_Any_Name>

material-dashboard is a starter component, which generates a group of materialized components in the form of a dashboard by just providing a name, as I’ve provided in the below command.

ng generate @angular/material:material-dashboard --name material-dashboard

And you can see in the console that there are a few files created inside the app directory with the provided component name. The file structure looks like the below screen.

 

Now, let’s see the content of all the files generated one by one.

 

material-dashboard.component.html 

<div class="grid-container">
  <h1 class="mat-h1">Dashboard</h1>
  <mat-grid-list cols="2" rowHeight="350px">
    <mat-grid-tile *ngFor="let card of cards" [colspan]="card.cols" [rowspan]="card.rows">
      <mat-card class="dashboard-card">
        <mat-card-header>
          <mat-card-title>
            {{card.title}}
            <button mat-icon-button class="more-button" [matMenuTriggerFor]="menu" aria-label="Toggle menu">
              <mat-icon>more_vert</mat-icon>
            </button>
            <mat-menu #menu="matMenu" xPosition="before">
              <button mat-menu-item>Expand</button>
              <button mat-menu-item>Remove</button>
            </mat-menu>
          </mat-card-title>
        </mat-card-header>
        <mat-card-content class="dashboard-card-content">
          <div>Card Content Here</div>
        </mat-card-content>
      </mat-card>
    </mat-grid-tile>
  </mat-grid-list>
</div>

Basically, the HTML file contains the group of Material components. All of the components are the same as in older versions of Angular Material, and they are arranged in a tree structure.

<mat-grid-list>  <!-- Main grid container -->

  <!-- divides the structure into small chunks -->
  <mat-grid-tile *ngFor="let card of cards" [colspan]="card.cols" [rowspan]="card.rows">

    <!-- every chunk acts as a card component -->
    <mat-card class="dashboard-card">

      <!-- Card Header -->
      <mat-card-header>
        <mat-card-title>

          <mat-menu #menu="matMenu" xPosition="before">

          </mat-menu>

        </mat-card-title>
      </mat-card-header>

      <!-- Content of Card -->
      <mat-card-content class="dashboard-card-content">

        Card Content Here

      </mat-card-content>

    </mat-card>

  </mat-grid-tile>

</mat-grid-list>

The above snippet is a basic building block which was generated by the component generator; it is repeated a number of times based on the items of the cards array using the *ngFor directive.

material-dashboard.component.ts

import { Component } from '@angular/core';

@Component({
  selector: 'material-dashboard',
  templateUrl: './material-dashboard.component.html',
  styleUrls: ['./material-dashboard.component.css']
})
export class MaterialDashboardComponent {

  // Number of cards to be generated, with the columns and rows each one covers
  cards = [
    { title: 'Card 1', cols: 2, rows: 1 },
    { title: 'Card 2', cols: 1, rows: 1 },
    { title: 'Card 3', cols: 1, rows: 2 },
    { title: 'Card 4', cols: 1, rows: 1 }
  ];
}

The TypeScript file contains the array of cards to be generated when the dashboard is created. For each card, we specify how many rows and columns it should occupy, as illustrated in the small example below.
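For instance, a hypothetical fifth card spanning both columns of the grid would just be one more entry in the array (this card is not part of the generated file; it is only an illustration):

  cards = [
    { title: 'Card 1', cols: 2, rows: 1 },
    { title: 'Card 2', cols: 1, rows: 1 },
    { title: 'Card 3', cols: 1, rows: 2 },
    { title: 'Card 4', cols: 1, rows: 1 },
    // hypothetical extra card: occupies both columns and two rows
    { title: 'Card 5', cols: 2, rows: 2 }
  ];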

material-dashboard.component.css

The stylesheet file contains the classes that control positioning and layout throughout the page.

  .grid-container {
    margin: 20px;
  }

  .dashboard-card {
    position: absolute;
    top: 15px;
    left: 15px;
    right: 15px;
    bottom: 15px;
  }

  .more-button {
    position: absolute;
    top: 5px;
    right: 10px;
  }

  .dashboard-card-content {
    text-align: center;
  }

These files are generated automatically to create the Material dashboard. Now, the last step is to run our dashboard app.

 

For that, just go to the app.component.html file, remove all of its content, and add the markup tag as in the following snippet:

  <!-- To use the dashboard on the application home page -->

  <material-dashboard>
  </material-dashboard>
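Note that the schematic also registers the new component and the Material modules it uses in app.module.ts. If you ever need to wire this up by hand, the registration would look roughly like the sketch below; the module names are inferred from the components used in the template, so treat this as an assumption rather than the exact generated file:

  import { NgModule } from '@angular/core';
  import { BrowserModule } from '@angular/platform-browser';
  import { BrowserAnimationsModule } from '@angular/platform-browser/animations';
  import {
    MatGridListModule, MatCardModule, MatMenuModule,
    MatIconModule, MatButtonModule
  } from '@angular/material';

  import { AppComponent } from './app.component';
  import { MaterialDashboardComponent } from './material-dashboard/material-dashboard.component';

  @NgModule({
    // the dashboard component must be declared alongside the root component
    declarations: [AppComponent, MaterialDashboardComponent],
    imports: [
      BrowserModule,
      BrowserAnimationsModule,
      MatGridListModule,
      MatCardModule,
      MatMenuModule,
      MatIconModule,
      MatButtonModule
    ],
    bootstrap: [AppComponent]
  })
  export class AppModule { }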

Now, let's run our Material Dashboard app with the [ng serve -o] command, and you can see the output.

 

Conclusion

 

In this article, we have covered the topics listed below.

  • Overview of the Material Dashboard and its components
  • Package.json file version updates
  • New updates in Angular 6.0
  • The Material Dashboard generation process
  • Introduction to all the files generated by the Material schematic

Thus, the generated dashboard is just a basic structure. We can enhance it by customizing it to our own requirements.

 

Follow my other articles on Angular Material.

I hope you have learned something from this article. Thanks for reading!

Tuesday, 29 May 2018 / Published in Uncategorized

In this article, we will see what a Project Collection in TFS is.

But before we go deep into what a collection in TFS is, let us get a basic idea of what TFS is and what it is used for.

What is TFS? According to Wikipedia, TFS is a product that was initially released by Microsoft in 2005. It covers the entire application development lifecycle and now even has DevOps capabilities. It provides features such as source code management, project management, automated builds and releases, testing, etc. It is available in two forms: on-premises and online. It uses SQL Server as a back end for managing everything. In this article, we will be using a TFS instance installed on a local server.

What is a project collection, or simply a collection, in TFS? A collection, or project collection, is a group of projects that share common logical or physical resources. In TFS we can have multiple team projects, but not all of them have the same requirements or objectives, so we can group the projects with similar requirements and objectives into a collection. In TFS, every collection has its own database, code base, user groups, etc.

 

To create a new collection open “Team Foundation Administration Console”.

Click on "Create Collection" button. This will open up Create Team Project Collection, as shown below.

Enter the collection name and description and click Next. The wizard will open a window to provide the SQL Server database instance and give us the option to create a new database or select an existing empty database. Each collection has its own database, so we will select the "Create a new database for this collection" radio button and click Next.

In the Review Configuration window, click "Verify". After verification is done, click the "Create" button. This will take some time to complete.

Click the "Complete" button and then the "Close" button. Now we have our new collection added to the Team Project Collections.

Now that we have the "DevOps Collection" ready and online, we will add a new project to the collection.

To create a new project in "DevOps Collection", go to the TFS URL and click the gear icon. This will open the TFS admin page. Select the collection from the left menu and click the "View the collection administration page" link. Alternatively, we can go to the URL "http://servername:8080/tfs/collectionname/_admin".

Select the "New Team Project" link. A "Create New Team Project" modal dialog will open. Enter the project name, description, and process template, choose Team Foundation Version Control as the version control, and click the "Create Project" button. We can also select Git as the version control.

After the project has been created, go to the TFS home page, where we can find "TFSAutoBuild" under "Recent projects & teams". Click on the project and it will take you to the project dashboard, as shown below. This page contains various menu items such as Code, Build, Test, Release, etc.

Now we will create an application and add it to this team project.

For the purpose of this demo, I have created a simple ASP.NET MVC application.

We will now add this project to the "DevOps Collection". Open Team Explorer in Visual Studio and click the "Manage Connections" link.

In the "Connect to a Project" window, click the "Add TFS Server" link. Provide the TFS server URL and click Add. This will show the list of all project collections. Select "DevOps Collection" and click the "Connect" button.

Right-click the solution and select "Add Solution to Source Control". After the solution has been added, right-click the solution and click "Check In". After the code has been checked in, it will appear under the Code link; a command-line alternative is sketched below.
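If you prefer the command line, the same add and check-in can be done with the tf.exe client that ships with Visual Studio. This is only a rough sketch; it assumes the local folder is already mapped to a TFVC workspace, and the comment text is just an example:

  tf add *.* /recursive
  tf checkin /comment:"Initial check-in of the MVC application"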

 

So, in this demo, we created a new collection in TFS and added a project to it.

Tuesday, 29 May 2018 / Published in Uncategorized

Introduction

In my last article, I discussed the Azure Service Bus Queue, which is basically one-to-one communication. This article shows how to work with Azure Service Bus Topics, using a sample console application.

Content

  • Service Bus Topics
  • Create Service Bus Topics and subscriptions on the Azure portal
  • Send and receive a message using service bus topics

Prerequisites

  • Visual Studio 2017 update 3 or later
  • Azure Subscription
  • Basic Knowledge of the Azure portal

Azure Service Bus Topics

A topic is similar to a queue, but inside a topic we have subscriptions, and each subscription can have multiple subscribers. The process is simple: data is sent to the topic, and one or more subscribers pull the data out through their subscriptions.

Creating a Service Bus Topic in Azure Portal

Step 1 – Log in to the Azure portal: https://portal.azure.com/

Step 2 – I'm going to use an existing Azure Service Bus namespace, which was used for creating a queue. Please refer to my last article, where I explained how to create an Azure Service Bus and its namespace.

Step 3 – Once the Azure Service Bus has been created successfully, we can start to create a topic for the service bus. Working with topics is similar to working with queues in Azure Service Bus.

Step 4 – Go to the Azure Service Bus namespace. In my case, I'm using my existing namespace msgdemobus.

Step 5 – In the overview, you can find Topics; click on it to add a new topic, as shown in the figure below.

Step 6 – Add Topic

  • Give a name for the topic; in my case, I named it mytopics
  • Set a max topic size based on the requirement
  • Message time to live: by default it is 14 days
  • By enabling duplicate detection, the topic will not store any duplicate messages
  • By enabling partitioning, the topic is spread across multiple storage systems, which means more than one message broker will be backing the messages

Finally, click Create.

Step 7 – Once the topic has been created successfully, we can create a subscription for the topic. In the topic overview, click on Subscriptions, as shown below.

Step 8 – Add the subscription and give it a name; I am going with the default values for the other fields, as shown below.
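If you would rather create the topic and subscription from code instead of the portal, the NamespaceManager class from the same Microsoft.ServiceBus SDK used later in this article can do it. The snippet below is only a sketch; the connection string, topic name, and subscription name are placeholders:

  using Microsoft.ServiceBus;

  // Create the topic and subscription if they do not exist yet (sketch; values are placeholders)
  var manager = NamespaceManager.CreateFromConnectionString("<service bus connection string>");

  if (!manager.TopicExists("mytopics"))
  {
      manager.CreateTopic("mytopics");
  }

  if (!manager.SubscriptionExists("mytopics", "MyAppSubscription"))
  {
      manager.CreateSubscription("mytopics", "MyAppSubscription");
  }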

Send and receive a message using Service Bus Topics

I'm going to create a new C# console application to send and receive a message with Azure Service Bus Topics using the TopicClient.

MyTopicApp.cs

  using Microsoft.ServiceBus.Messaging;
  using System;

  namespace MyTopicApp
  {
      class Program
      {
          private static string _serviceBusConn = "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<policy name>;SharedAccessKey=<key>";
          private static string _serviceBustopic = "<Topic Name>";

          static void Main(string[] args)
          {
              SendMessage("Hello I'm Azure Service Bus Topic");
          }

          static void SendMessage(string message)
          {
              var topicClient = TopicClient.CreateFromConnectionString(_serviceBusConn, _serviceBustopic);
              var msg = new BrokeredMessage(message);
              topicClient.Send(msg);
          }

          static void ReadMessage()
          {
              var subClient = SubscriptionClient.CreateFromConnectionString(_serviceBusConn, _serviceBustopic, "<subscription name>");
              subClient.OnMessage(m =>
              {
                  Console.WriteLine(m.GetBody<string>());
              });
          }
      }
  }

_serviceBustopic holds the topic name.

_serviceBusConn holds the Azure Service Bus connection string. We can get the shared access policy name and key from the Azure Service Bus shared access policies, as shown in the figure below.

 

 Policy Name  

Pick a key.

Send Message

The SendMessage function sends a message to the Azure Service Bus topic using the TopicClient, and a BrokeredMessage is used to hold the message.

  static void Main(string[] args)
  {
      SendMessage("Hello I'm Azure Service Bus Topic");
  }

  static void SendMessage(string message)
  {
      var topicClient = TopicClient.CreateFromConnectionString(_serviceBusConn, _serviceBustopic);
      var msg = new BrokeredMessage(message);
      topicClient.Send(msg);
  }

Run the program

Switch to the Azure portal and you will notice that the message count has been updated from 0 to 1.

Read Message

The ReadMessage function receives messages for a given subscription using the SubscriptionClient.

  static void Main(string[] args)
  {
      ReadMessage();
      Console.ReadKey();
  }

  static void ReadMessage()
  {
      var subClient = SubscriptionClient.CreateFromConnectionString(_serviceBusConn, _serviceBustopic, "MyAppSubscription");
      subClient.OnMessage(m =>
      {
          Console.WriteLine(m.GetBody<string>());
      });
  }

Run the program – Result

 

The message is received from the topic through the subscription.

 

Switch to the Azure portal and you will notice that the message count has been updated from 1 to 0.

I hope you have enjoyed this article. Your valuable feedback, questions, or comments about this article are always welcome.

Tuesday, 29 May 2018 / Published in Uncategorized

Today, I am writing about the light level feature of the micro:bit. The micro:bit uses its LED array to detect the light level. In shadow it reads the light level as 0, and whenever it senses light it reads more than zero. Basically, the light level tells us how bright or dark the place is where our micro:bit (and we) happen to be. The light level ranges from 0 to 255, where 0 means darkness and values up to 255 mean increasing brightness.

 

So let us see how we can access the light level using our micro:bit.

 

Tools You Need

  1. micro:bit (1 pc)
  2. USB cable (1 pc)
  3. Battery box (1 pc)
  4. AA battery (2 pcs)

In this tutorial, there is no wiring involved since we're not working with any sensor, so we just need a USB cable to connect the micro:bit to the computer to upload the program. Go to the makecode website as always, create a new project, and follow the steps.

 

Steps

 

Go to the Input block category and grab the "light level" block.

 

 

The "light level" block reads the light level from the micro:bit and returns a number from 0 to 255.

 

Now, go to variables and choose "set item to".

 

 

This block sets the value of a variable. If you want, rename the item variable to level or whatever you like; I have renamed it to level and attached the light level to it. In this step, we read the light level and map it to a variable so that we can do whatever we want with it. I have used the "on start" block, so it will display the light level only once; if you want, you can use the "forever" block instead, as sketched below.
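For reference, the "forever" variant in the JavaScript view of MakeCode would look roughly like this (a sketch; the variable name level is the one chosen above):

  // Keep reading and displaying the light level (assumed "forever" variant)
  let level = 0
  basic.forever(() => {
      level = input.lightLevel()
      basic.showNumber(level)
  })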

 

 

Now go to Basic, choose the "show number" block, and attach the light level to it.

 

Now, download this code and upload it to your micro:bit. You will be able to see the light level.    

 

That's all you need to do. You can see a working demo here.

 

Now, what I am going to do is plot a graph, so that whenever our micro:bit reads the light level we can see it as a bar graph on the LED display.

 

So for that just go to LED block and grab plot bar graph.

 

 

Now attach the light level variable to the first 0; since we know the range goes up to 255, the second value will be 255.

 

The final code is:

Get the code here.

Next, I am going to create a burglar alarm system by combining this article with this one. So, please go through this article once and stay tuned.

JavaScript code

  let level = 0

  // Plot the light level as a bar graph when button A is pressed
  input.onButtonPressed(Button.A, () => {
      led.plotBarGraph(
      input.lightLevel(),
      255
      )
  })

  // On start: read the light level once and show it on the display
  level = input.lightLevel()
  basic.showNumber(input.lightLevel())