Monday, November 12, 2018

Whether you’re working on a brand new website or a stylish redesign, inspiration and dreaming up the style you want to portray is always the first step. This may sound like a difficult task if you’ve been in an inspo-rut. On the other hand, you could be brimming with inspiration but struggling to organize all of your fun ideas into a cohesive plan.

As you read through this guide, you will realize that it is easy to hone in on website design inspiration, organize it into digestible concepts, and drum up new ideas to make your site its best. This mood boarding process works well whether your website is for your personal brand, your own company, or for a client! Mood boarding is the best way to synthesize everyone’s input, too.

What is Mood boarding? 

Mood boarding refers to a creative process used by web designers to collect a wide range of images or pictures, as well as other things that define and express a personal or brand identity. If you are thinking of developing a website for yourself, then creating a mood board will help you dream up the color palette, the stylistic elements, and other visual notes. The use of cohesive visual elements on your website is ideal for inspiration, communication, guidance, and affirmation for the ideal target audience. The key purpose of using images and visual elements is to effectively express the mood and position of your inspiration to the target customer – so drawing inspiration from a mood or attitude first will make sure that your audience feels spoken to on every page of your site.

The Purpose of Mood boards in Web Design

There are several reasons why mood boards are created by designers. Typically, the purpose of a mood board is to help you pull together inspiration that wows clients and drives your work. Plus, establishing a color palette, and ensuring that you stay on-brand with everything you design, can all be driven from the mood or overall visual aesthetic. For example, if you’re creating a website for a bakery and the mood of that bakery is vintage, cozy, and European – you could throw together images that evoke those motifs into a mood board and you’ll know that if your website sticks to those colors, those feelings, and those aesthetics, it will please the client.

How to create a website design mood board

Coming up with a mood board involves a lot of creativity because there’s no right or wrong format for a mood board. You could cut pictures out of magazines to make a collage, doodle together a bunch of different inspirational elements, or make your mood board digitally via Pinterest, in Illustrator, or in an Instagram collection. The process should then be to take the information about what your client wants in a website and translate that into a visual mood with inspiration trimmings. You can decide if it benefits your business to share this mood board with your client or whether it will just drive your internal decisions.

Elements of a Design Mood Board

Since each mood board is unique, a designer has to select what to add to the project. The objective is to include as many elements as necessary to put across the look and feel of the design being crafted. Below are some ideas that can guide you through the process:

  • Images: Did you know that images woo clients more rapidly than describing your vision through copy? You can get photos from Instagram, Pinterest and other photo sharing platforms. As long as you don’t publish them, you don’t need to worry about their origins. This is just an internal project. Illustrations, vectors, logos, icons and your own sketches are welcome here, too.
  • Bits of Text: Choose the quotes, phrases or words that can tie into the message intended to be communicated. Designers ought to slot in specific fonts whenever they are using text in a mood board, to create more visual impact.
  • Colors: Use your creativity and mind’s eye to figure out the best colors that define your brand or make it stand out. If your client or your own brand already has branded colors, start with those. However, those one to three colors are not the only ones your website will employ, so make sure you find complementary shades to fit the theme of the site. Pair them together on your mood board to get a feel for how they’ll play together online. This way, you can settle on the exact hex codes for the colors that express the mood you want.
  • Shapes and Silhouettes: Natural shapes like leafy vines or ocean waves are ideal for reflecting a sense of tranquility. Angular, geometric silhouettes will bring a modern and urban feel to your site. If you’re looking for a classy and luxurious mood, play with elegant shapes like fleur-de-lis, filigrees, and other dainty curls.
  • Patterns and Textures: Likewise, add swatches of different patterns or textures to your mood board. If you’re looking for a regal mood, use a silk swatch or something in a marble pattern. For casual, comfortable websites – try a woodgrain texture or a familiar, nautical stripe. Recurring patterns play an important role in creating visual energy and familiarity, as well as movement.

Piecing it all Together

After you have gathered and sorted your images, you can arrange them on your mood board to achieve the desired organization and hierarchy. At this stage, it is important to work on key themes with larger pictures and position the smaller images along with text to make your message visually clear. Once you start layering, feel free to discard any image or bit of text that starts to look out of place, or explore a new direction if you’re drawn to one. The best part of a mood board is its impermanence – since these aren’t final products, there’s no wrong answer and no wasted time creating something that doesn’t work.

Crafting a mood board successfully for your website is an abstract and fluid process. Some designers will find this process excessive or taxing, while others will see it as just the kick-off to ensure that the website they design will be on brand, mood-suitable, and inspired.

The post How to Create a Mood Board to Inspire Your Website Design appeared first on Line25.

Monday, November 12, 2018

Cookies are omnipresent online, and while some browsers block trackers automatically, most of us are followed by invisible eyes everywhere we go on the internet. The option to clear your cache or your cookies is buried in settings, subtly deterring users from cleaning them out.

Martina Huynh [Photo: courtesy Martina Huynh/Design Academy Eindhoven]

What if clearing those pesky cookies was as easy as physically wiping your computer screen? That’s the idea behind Augmented Mundanity OS. The project, by Martina Huynh, a designer and recent graduate of the Design Academy Eindhoven, envisions a new kind of operating system where mundane, everyday gestures are an interface for maintaining your privacy and security.

To make privacy tools easier to use, Huynh designed gestural interfaces to control them–like wiping your screen to wipe out trackers, spraying a room fragrance to automatically mask your online presence through a VPN, and lowering the blinds in your house to pull up the encrypted Tor browser to hide your digital activity.

“The digital space where we dwell, in forums or platforms, these have become part of our living spaces,” Huynh explains. “It’d be nice to connect [digital spaces] to tangible gestures we already know that are mundane and a part of our daily life.”

Right now, most of Augmented Mundanity OS is still a concept–though Huynh has a working prototype for the ability to wipe your screen to wipe away your cookies. But she thinks there’s an opportunity to make our physical living spaces part of a more intuitive interface for security-related digital actions, the same way other gestural interfaces have made aspects of the digital world more convenient to access or navigate.

Huynh sees these concepts as interventions that give the user more power over their digital lives. “The interface shows what you can do and what you’re not allowed to do–if there’s only one button, that’s the only path you can take,” Huynh says. “On a more broader level I see this as a humble start to how we can design interfaces that are different and empower the user in the digital space.”

[Image: courtesy Martina Huynh/Design Academy Eindhoven]

Huynh is currently looking for technical collaborators to help her bring some of these ideas to life, including the idea that you could flick pop-up ads away with your finger, or clean out all your personal data by using special soap when you wash your hands.

In a world where all privacy settings are difficult to access and not at all user-friendly, Augmented Mundanity points to an alternative way of interacting with computers in physical space. And as data privacy becomes more and more important, controlling it should be as easy as a wipe.

Monday, November 12, 2018

If you are among those who prefer not to call in the pros to build a website, then check out these 20 best WordPress themes compatible with Gutenberg and make it happen. To help you figure out which WordPress theme is best for you, take time to explore all of these beautiful templates. Remember, premium templates are second to none when it comes to breaking into a professional field.

The truth is that WordPress has made it very easy even for a newbie to create sites from day one. In most cases, you get what you pay for. That doesn’t mean you have to break the bank to have a great website; your task is simply to consider how professional you want your future website to look.

Why Are Premium WordPress Themes Compatible with Gutenberg Worth the Trouble?

Of course, it’s easy to get overwhelmed by the wealth of beautiful WordPress themes today’s web market brings. Yet everyone is preparing for the ambitious new WordPress Gutenberg update, and so should you. Thus, it makes sense to pick a WordPress theme compatible with Gutenberg, so your website will be up to date and designed in line with the latest web design standards.

In fact, building the website of your dreams is easier than you think, and big cheers to WordPress for that. And it’s not only about simplicity: a myriad of eye-catching designs and SEO-optimized pages are only the tip of the iceberg. As a result, your visitors enjoy the best user experience, mobile-friendly layouts and a wealth of interactive functions. Take advantage of flexible customization options and give your website wings to take your brand into the hearts of potential clients.

So, if you want a piece of Gutenberg cake, enjoy these amazing WordPress designs below. Just grab one and get your visitors into the habit of enjoying your site.

Gutentype | A Trendy Gutenberg WordPress Theme for Modern Blog

Well, Gutentype offers an easy way to build a modern blog or personal site using powerful drag’n’drop tools. A clean design, responsive layout and beautiful pre-made pages make Gutentype a perfect solution for blogs, guides or writer sites. Besides, integration with WooCommerce gives you the freedom to add an online store to your site within minutes.

Details | Demo

Jacqueline | Spa & Massage Salon WordPress Theme

Need to build a healthcare website? Take a look at Jacqueline. Designed in a trendy style, it looks awesome and appealing on any modern screen size. Sure, Jacqueline has a lot to offer. Apart from multiple customization options, it includes appointments management & online store functionality. Also, it’s incredibly easy to use, even without touching a line of code. Make it look the way you want.

Details | Demo

Revirta | Virtual Assistant WordPress Theme

As simple as it sounds, Revirta can be your safe bet when building a professional website. Apart from being one of the top WordPress themes compatible with Gutenberg, Revirta is easy to customize thanks to the most popular WPBakery builder. Its modern and clean design fits small and large-scale businesses. Indeed, responsive and SEO-friendly, it’s integrated with the ThemeREX Addons plugin.

Details | Demo

Consultor | A Business Consulting WordPress Theme

Looking for a win-win solution for your business consulting or investment advising website? End your search here. Consultor is a clean specimen of WordPress themes compatible with Gutenberg that you can use to your advantage. Give your website a leg up on the competition. Choose from multiple beautiful pre-designed layouts, different Contact Forms and a set of custom shortcodes, which keeps the editing process simple.

Details | Demo

MaxiNet | Broadband & Telecom WordPress Theme

To help you build a professional website, explore the power of MaxiNet. This GDPR-compliant WP template fits Internet companies, telecom agencies, computer networks or cable television stores. Another key thing to remember is that the popular WPBakery builder allows you to create unique designs in no time. To get you up to speed, it’s WooCommerce ready, so you can start selling products or services within minutes.

Details | Demo

WealthCo | A Fresh Business & Financial Consulting WordPress Theme

Designed with the purpose of easing your pain when building a website, WealthCo is the theme to trust. Apart from its professional design, this specimen of WordPress themes compatible with Gutenberg gives wings to your website. It fits modern business, financial, investment and corporate websites. What’s more, it’s also great for financial blogs and business news. Enjoy its live demo right now.

Details | Demo

Alliance | Intranet & Extranet WordPress Theme

Wish to close online leads effectively? Sure, Alliance is here to help you. With an awful lot of amazing go-to tools, it gives you freedom in terms of customization. Thus, you can improve communication inside your company by providing stylish and modern access to corporate data. What’s more, it’s easy to build a fully functional community and share documents or reports using the BuddyDrive plugin. Not only that, but you can also run internal polls and surveys. Poke around.

Details | Demo

Hoverex | Cryptocurrency & ICO WordPress Theme + Spanish

Available in two languages, English & Spanish, Hoverex can help take your brand into the heart of a new culture. Well, it fits both financial & cryptocurrency sites. Even if you are a newbie, there’s no need to call in the pros: Hoverex has a lot to offer in terms of customization and management. Start selling your coins or accepting donations using the ThemeREX Addons plugin. Indeed, this responsive and SEO-friendly WP template can come in handy every time you need help.

Details | Demo

HotLock | Locksmith & Security Systems WordPress Theme

Looking for tools to help turn your business into a major source of revenue? Don’t miss out on the opportunity to build a professional website and present your business in a more appealing way. HotLock can help showcase your services and products in a simple way. Let your visitors interact with your site and take action. Enjoy its responsive design, beautiful pre-designed pages and a single-click installation.

Details | Demo

Dr.Patterson | Medicine & Healthcare WordPress Theme

Whether you are in the healthcare or medical business, a professional website is imperative. Enough wasting time; stay ahead of the game with a beautiful site that impresses. Being one of the top WordPress themes compatible with Gutenberg, Dr.Patterson gives you a leg up on the competition. Besides, it includes 6 pre-made homepages, appointment management, advanced Contact forms and a set of custom shortcodes. With the WPBakery builder, it’s easy to make every page worth the visit.

Details | Demo

AutoParts | Car Parts Store & Auto Services WordPress Theme

Are you interested in building a website to represent your business? Great! With AutoParts, it’s easy to build a great-looking business site if you want to stand out from the crowd. Thanks to its responsive design, your site gives users the best possible viewing experience. What’s more, the powerful WPBakery plugin allows you to create new designs on the go. Designed with a high-class bold style, AutoParts fits different car-related businesses best.

Details | Demo

ProDent | Dental Clinic & Healthcare WordPress Theme

ProDent is another go-to option for your next website. Being part of the best WordPress themes compatible with Gutenberg, ProDent can save the day. An awful lot of helpful customization options come in handy when matching your brand’s style. Apart from this, ProDent works with most modern browsers and is easy to enjoy on both mobile and desktop devices. Indeed, WooCommerce integration allows you to start selling your products or services online hassle-free. Poke around. It’s fun.

Details | Demo

Drone Media | Aerial Photography & Videography WordPress Theme

Face it, your website is your professional tool to promote your business. With Drone Media, it’s easy to build a modern site and give your clients a great online experience. If your business is focused on aerial photography or videography, then Drone Media is your safe bet. With a mobile-friendly site, it’s easy to tap into the enormous potential of web marketing and earn the trust of potential clients.

Details | Demo

Lymcoin | Cryptocurrency & ICO WordPress Theme

Lymcoin is a nice and clean WP template designed to help improve your online presence. Built for business and corporate sites, Lymcoin fits cryptocurrency projects, too. In light of GDPR compliance, Lymcoin follows the best web design practices and standards. Don’t miss a beat; let Lymcoin help you reach out to your potential audience in the online world. A ton of useful features come included to help make your dreams a reality.

Details | Demo

LeGrand | A Modern Multi-Purpose Business WordPress Theme

If you want to build an end-user friendly website, LeGrand is your win-win solution. It’s easy to build an attractive and professional website that stands out from day one. Choose from different homepage styles to make LeGrand a perfect fit for your specific business purposes. Its clean and flexible layout suits marketing, financial and advertising companies well. Explore its awesome functionality and tons of advanced features that come in handy big time.

Details | Demo

Deviox | A Trendy Multi-Purpose Business WordPress Theme

Wish your business to stay ahead of the game? With Deviox, it’s easy to make every page of your website worth the visit. For that reason, learn what else comes included with Deviox. Apart from being one of the best WordPress themes compatible with Gutenberg, it’s flexible and SEO-friendly. The powerful WPBakery builder allows you to modify any page content easily. Besides, you can start a blog and spread the word about your business.

Details | Demo

Windsor | Apartment Complex / Single Property WordPress Theme

Looking for a solution designed to impress and convert? Take a look at Windsor. With this modern WP template, it’s easy to build a great-looking site offering smooth navigation. Whether you need a website for an office center, a rental business or construction purposes, Windsor is here to help your properties make a statement. Sure, its responsive design allows you to collect inquiries and leads with an eye-catching and high-converting layout.

Details | Demo

AlphaColor | Type Design & Printing Services WordPress Theme

If you are involved in design or printing services, your website should take full advantage of the web to display your work. Take time to enjoy AlphaColor. Designed in a stylish manner, it’s one of the best WordPress themes compatible with Gutenberg and GDPR standards. Integration with the WooCommerce plugin allows you to start selling products within minutes. Let your competitors pale in comparison to your eye-catching website.

Details | Demo

PJ | Life & Business Coaching WordPress Theme

Create a cutting-edge business website with PJ. Thus, you can showcase the uniqueness of your products or services in a more appealing way. The thing is that PJ comes with several outstanding homepage layouts, multiple pre-designed modules and dozens of easy-to-use shortcodes. Besides, with Event Calendar, you can easily inform your users about upcoming events and meetings. Explore the power of PJ right now.

Details | Demo

Hope | Non-Profit, Charity & Donations WordPress Theme + RTL

If you are focused on a church community or a foundation, Hope is your win-win solution. Designed in a modern style, it fits fundraising, child care or government social program websites. You might be happy to know that Hope is one of the best WordPress themes compatible with Gutenberg. What’s more, the powerful WPBakery builder comes in handy to create new unique layouts without much effort.

Details | Demo

In a nutshell, these 20 best WordPress themes compatible with Gutenberg should be your first stop when looking to save time. All of them can help you cut corners from day one. So get straight to the matter and make your pick.

 

The post Best 20 WordPress Themes Compatible with Gutenberg That Can Give Your Website Wings appeared first on EGrappler.

Monday, November 12, 2018

Data filtering is one of the most popular interactive features of our WPF controls. In an effort to improve usability, we analyzed user interaction patterns for this functionality.

In v18.2, we added the following data filtering features to the Data Grid:

  • Filter Elements
  • New Data Filter Editor
  • New Date Operators
  • Record Count Display
  • Predefined Filters

In upcoming releases, we will support these features in other data-bound WPF controls, including Charts and Pivot Grid.

Filter Elements

Using Filter Elements, you can build your own UI to filter control data. As an example, here is a UI that uses Filter Elements for a filtering panel displayed next to the Data Grid.

This is an overview of the Filter Elements available in v18.2:

  • Checkbox
  • Radio List
  • Checked List
  • Checked Tree List
  • Predefined Filters
  • Range
  • Calendar

It is easy to connect Filter Elements to your control. Three steps are required:

  1. Add your Elements to a container (for instance the Accordion Control)
  2. Specify the field names of columns you want to filter, using the FieldName properties
  3. Set the attached property FilterElement.Context on the container to associate it with the filtering context of the data-bound control. The Filter Elements use this context to retrieve values, formatting settings and other details, and the context is configured with criteria from the Filter Elements in return.

Here is a XAML example:

<dxa:AccordionControl
  dxfui:FilterElement.Context="{Binding Path=FilteringContext, ElementName=grid}">
  <dxa:AccordionItem Header="Price ($)">
    <dxfui:RangeFilterElement FieldName="Price"/>
  </dxa:AccordionItem>
  <dxa:AccordionItem Header="Trademark">
    <dxfui:CheckedListFilterElement FieldName="TrademarkID"/>
  </dxa:AccordionItem>
  <dxa:AccordionItem Header="Transmission Type">
    <dxfui:RadioListFilterElement FieldName="TransmissionTypeID" />
  </dxa:AccordionItem>
</dxa:AccordionControl>

<dxg:GridControl Name="grid"/>

You can see a sample setup in the Filtering UI demo. If you are reading this post on a machine that has the WPF demos installed, please follow this link to start the demo.

Documentation is available for Filter Elements.

New Data Filter Editor

We received many requests for enhancements to the Filter Editor. In order to deliver these without introducing breaking changes, we implemented a new Filter Editor. The old editor is still the default in v18.2. To enable the new editor, set the property DataViewBase.UseLegacyFilterEditor to false.
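
As a minimal XAML sketch (an assumption on my part, not code from the original post: it presumes a GridControl whose view is a TableView, which derives from DataViewBase, and that the property can be set directly in markup), opting in to the new editor might look like this:

<dxg:GridControl Name="grid">
  <dxg:GridControl.View>
    <!-- opt in to the new Filter Editor; the legacy editor remains the default in v18.2 -->
    <dxg:TableView UseLegacyFilterEditor="False" />
  </dxg:GridControl.View>
</dxg:GridControl>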

In the new editor you can now use a search box to quickly find a required field:

You can select values from the data source to configure a filter. Each value in the list displays a record count.

You can apply predefined filters from the selection menu:

The Filter Editor demo shows this functionality in action. If you are reading this post on a machine that has the WPF demos installed, please follow this link to start the demo.

Here is the link to the feature documentation.

New Date Operators

In previous versions, if you applied a filter that included several dates…

… the Filter Panel displayed an expression similar to this:

In v18.2 we added the Is Between Dates and Is On Dates operators to optimize the expression:

You can also use these date operators in the new Filter Editor:

Record Count Display

When you apply a filter, it can be useful to know how many records match the value you’re filtering for. The Excel-inspired Filter Drop-Down now displays record counts next to filter values:

You can enable this feature for a column using the property ColumnBase.FilterPopupMode, or for a view using DataViewBase.ColumnFilterPopupMode – set these properties to ExcelSmart. The new Filter Editor and the Filter Elements support this feature, too.
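
For instance, a hedged XAML sketch (the grid, column and field names here are illustrative, not taken from the original post) could apply the setting either per column or for the whole view:

<dxg:GridControl Name="grid">
  <dxg:GridControl.Columns>
    <!-- show record counts in the filter drop-down for this column only -->
    <dxg:GridColumn FieldName="Price" FilterPopupMode="ExcelSmart" />
  </dxg:GridControl.Columns>
  <dxg:GridControl.View>
    <!-- or enable record counts for every column in the view -->
    <dxg:TableView ColumnFilterPopupMode="ExcelSmart" />
  </dxg:GridControl.View>
</dxg:GridControl>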

This link runs the demo Excel Style Filtering if it is installed on your machine.

Predefined Filters

Our Filtering UI allows end users to create complex filters, but you may want to save them time by providing predefined filters out of the box. You can now specify such filters using the property BaseColumn.PredefinedFilters:

<dxg:GridColumn FieldName="MPGCity">
  <dxg:GridColumn.PredefinedFilters>
    <dxfui:PredefinedFilterDescriptionCollection>
      <dxfui:PredefinedFilterDescription Filter="?p>=25" Name="More than 25" />
      <dxfui:PredefinedFilterDescription
        Filter="?p>15 AND ?p&lt;25" Name="From 15 to 25" />
      <dxfui:PredefinedFilterDescription Filter="?p&lt;15" Name="Less than 15" />
    </dxfui:PredefinedFilterDescriptionCollection>
  </dxg:GridColumn.PredefinedFilters>
</dxg:GridColumn>

You can then show these filters in the PredefinedFiltersElement:

You can select predefined filters in the new Filter Editor as well as the Excel-inspired Filter Drop-Down.

What’s Next?

v18.2 supports the enhanced filtering functionality for the Data Grid, including the TreeListView. In future releases we will support the same feature set for Instant Feedback UI Mode and Virtual Sources, and we will make the functionality available in other data-bound DevExpress WPF controls. Additionally, we plan to introduce these features:

  • Conditional Format Filters
  • Grouped filter items in the Excel-inspired Filter Drop-down, with results displayed as a checked tree list
  • Enhancements to the Filter Editor API

Your Feedback Is Important To Us!

Many of the features described in this post are based on your feedback – thank you! Please feel free to make new suggestions in the comments below or by opening Support Center tickets.


Monday, November 12, 2018

As we celebrate Code.gov’s second birthday, it seems like just yesterday Alvand Salehi was introducing Code.gov from the main stage at GitHub Universe. But now two years and over 5,200 projects later, Code.gov (and the Federal Source Code policy that created it) are starting to hit their stride. I wanted to take this opportunity to highlight some of the exciting government projects currently on GitHub, and dive into the data around how the government community uses GitHub to collaborate. Like the Code.gov team says, “[we] believe in innovation, and are passionate in making these open source projects all available to you.”

Government and open source

Out of the 4,800 publicly accessible government projects, more than 3,600 (or 75%) are hosted on GitHub.com. This makes sense, as the majority of the world’s open source is already on GitHub. However, it’s also a pretty big deal. Government agencies like NASA and the U.S. Army are using GitHub to share their tools and resources with the greater open source community around the world. Take NASA’s 3D Resources project, for example:

Interested in textures, models, and images from NASA itself? The NASA-3D-Resources repository has it all, including pictures of earth from the Apollo missions and models of the satellite used in the Clementine mission.

You can’t 3D print your own Mars rover–yet. But with contributors like the NASA Jet Propulsion Laboratory and NASA Goddard Space Flight Center, “yet” may definitely be the operative word.

Another exciting government project is ZFS, a file management system released by the Department of Energy that runs specifically on Linux. This open source project has not only been embraced by other agencies, but has been adopted by private companies as part of their day-to-day operations.

Notable adopters of ZFS on Linux include GE Healthcare Systems, Intel, and Netflix. As for the Lawrence Livermore National Laboratory (LLNL)–the research facility answering to the Department of Energy and those behind this OSS–they continue to utilize ZFS, and continue to develop and improve the platform. LLNL is working closely with Intel to use a variation of ZFS— ZFS+Lustre—to manage the first planned U.S. exascale system, Aurora. Aurora is capable of a billion-billion calculations per second. (Yes, a billion-billion.) Aurora is slated for 2021 at Argonne National Lab.

How the government community uses GitHub

Aside from how the government is sharing projects, we also took a look at the numbers to find out how the community is using GitHub to collaborate on these projects.

Top 10 projects by stars

| Rank | Repository | Stars |
|---|---|---|
| 1 | nasa/openmct | 5282 |
| 2 | USArmyResearchLab/Dshell | 5098 |
| 3 | scipy/scipy | 5079 |
| 4 | nasa/NASA-3D-Resources | 1422 |
| 5 | GSA/data | 1353 |
| 6 | GSA/data.gov | 1278 |
| 7 | Code-dot-mil/code.mil | 1229 |
| 8 | openscenegraph/OpenSceneGraph | 1177 |
| 9 | WhiteHouse/petitions | 1777 |
| 10 | NREL/api-umbrella | 1172 |

Top 10 projects by forks

| Rank | Repository | Forks |
|---|---|---|
| 1 | scipy/scipy | 2556 |
| 2 | USArmyResearchLab/Dshell | 1164 |
| 3 | openscenegraph/OpenSceneGraph | 720 |
| 4 | nasa/openmct | 585 |
| 5 | spack/spack | 539 |
| 6 | lammps/lammps | 534 |
| 7 | idaholab/moose | 460 |
| 8 | WhiteHouse/petitions | 373 |
| 9 | GSA/data.gov | 356 |
| 10 | materialsproject/pymatgen | 309 |

Top 10 projects by watchers

| Rank | Repository | Watchers |
|---|---|---|
| 1 | USArmyResearchLab/Dshell | 673 |
| 2 | scipy/scipy | 312 |
| 3 | GSA/data.gov | 251 |
| 4 | nasa/openmct | 233 |
| 5 | nasa/NASA-3D-Resources | 220 |
| 6 | WhiteHouse/petitions | 214 |
| 7 | openscenegraph/OpenSceneGraph | 201 |
| 8 | 18F/api-standards | 173 |
| 9 | nsacyber/Windows-Secure-Host-Baseline | 172 |
| 10 | Code-dot-mil/code.mil | 169 |

Top 10 projects by contributors

| Rank | Repository | Contributors |
|---|---|---|
| 1 | scipy/scipy | 669 |
| 2 | trilinos/Trilinos | 197 |
| 3 | SchedMD/slurm | 162 |
| 4 | 18F/18f.gsa.gov | 139 |
| 5 | Kitware/ParaView | 136 |
| 6 | GSA/wordpress-seo | 119 |
| 7 | department-of-veterans-affairs/vets-website | 116 |
| 8 | idaholab/moose | 114 |
| 9 | materialsproject/pymatgen | 113 |
| 10 | petsc/petsc | 113 |

And more

Our top 10 findings are just a few examples of how government projects use GitHub. Looking deeper into the data can tell us even more about how they contribute to the entire open source community. With thousands upon thousands of commits, many have sparked the attention of both the public and private sector:

  • From the Environmental Protection Agency, WNTR (pronounced “winter”) is a Python package designed to simulate and analyze resilience of water distribution networks.
  • The Department of Transportation’s ITS ODE offers real-time data to a network of vehicles, infrastructure, and traffic management centers, providing logistics to subscribing transportation management applications and other similar devices.
  • Then there is Walkoff, from the National Security Agency, enabling security teams to automate and integrate apps, workflows, and analytics tools.

This is what Code.gov is all about. All of the government projects we’ve mentioned in this post are designated as open source. That means that you can access a repo, test, debug, submit pull requests, or download your own copy and adapt it for your own use.

As the Code.gov team has shared with us, they believe in innovation and providing everyone the opportunity to perform a civic duty on a digital platform. They’re passionate about making these open source government projects available for all. This spirit is embodied in their hashtag, seen often on their Twitter account: #CodeOn. The invitation to reach out to them on Twitter or LinkedIn is always open, and we highly encourage you to do so.

Want to learn more about Code.gov? Follow them on Medium and Twitter. You can also see what else GitHub is doing to help governments across the country and around the world.

Open source helps people create new and exciting things every day–including the code we used to collect data for this post. Check it out here.

Monday, November 12, 2018

Problem

User-defined functions have been an Achilles heel my entire career. Seemingly simple routines to apply formatting or perform a lookup caused inexplicable performance degradation, and often the evidence was obscured, or made excruciatingly worse by observer overhead. In this tip, I’m going to discuss the three types of functions in SQL Server, and what the next version does to address scalar UDF performance specifically.

Solution

There are three types of user-defined functions in SQL Server:

  1. Scalar user-defined functions (UDFs) – these are bad, since they have to be called for every row, and there is a lot of overhead there, which can lead to sub-optimal performance. They also are costed minimally and inhibit parallelism. The pain caused by scalar UDFs became easier to find when they added sys.dm_exec_function_stats in SQL Server 2016, if you knew to look for it. However, in all of this time they haven’t done much else to improve performance here (with one exception).
  2. Multi-statement table-valued functions (TVFs) – these are also bad, since they have a fixed cardinality estimate (1 or 100, depending on version), which can lead to sub-optimal execution plans. The pain caused by multi-statement TVFs has been eased a little bit by interleaved execution, introduced in SQL Server 2017. Essentially, this pauses execution, determines cardinality, and adjusts optimizations, accordingly.
  3. Inline TVFs – no problems here. The logic is "inlined" into the query, ensuring fewer issues with runtime performance or cardinality estimations. You will notice when you review the execution plan of a query that references an inline TVF, the function is nowhere to be found. This is a key point in how scalar UDFs will be handled going forward as well.

Last night, I installed SQL Server 2019 CTP 2.1, and restored a copy of AdventureWorks. I ran the following query:

DBCC FREEPROCCACHE;
GO
SELECT TOP (100) SalesOrderID, [Status] = dbo.ufnGetSalesOrderStatusText([Status]) 
  FROM Sales.SalesOrderHeader
  ORDER BY SalesOrderID DESC;
GO
SELECT execution_count, total_elapsed_time
  FROM sys.dm_exec_function_stats 
  WHERE [object_id] = OBJECT_ID(N'dbo.ufnGetSalesOrderStatusText');

I got the following plan:

Which included CPU and duration allocated to the user-defined function, as well as information about the function:

And the DMV query yielded 10 executions with an elapsed time of 26 microseconds. The time in this case is negligible; the important point is that the DMV records every execution.

Then I remembered that, when I’ve restored an older database to a new version of SQL Server, I should change the compatibility level of the database to match the new version. This allows me to take advantage of any new optimizations:

ALTER DATABASE AdventureWorks SET COMPATIBILITY_LEVEL = 150;

I tried my query again, and "noticed" that it ran a bit quicker (I didn’t really notice, but work with me here). I checked the plan, and it seemed more complex – and your gut instinct is probably to think that this must be worse:

But this is actually better and, if this query had been a candidate for parallelism, it could have now gone parallel, too.

Looking deeper, I noticed that the query against sys.dm_exec_function_stats now came back empty, and looking at the XML, no time was allocated to any UDF, and in fact there wasn’t even a <UserDefinedFunction> node anymore:

This is, coincidentally, how you determine that your UDF was inlined – it is not present in the XML. You can also tell when it hasn’t happened, with a new Extended Event that fires when the optimizer encounters a UDF that it can’t inline: tsql_scalar_udf_not_inlineable.
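
As a hedged sketch (assuming the event is exposed under the sqlserver package; the session and target names here are illustrative, not from the original tip), a session capturing that event might look like this:

-- create and start an Extended Events session that records non-inlined UDF occurrences
CREATE EVENT SESSION [udf_not_inlined] ON SERVER
  ADD EVENT sqlserver.tsql_scalar_udf_not_inlineable
  ADD TARGET package0.ring_buffer;
GO
ALTER EVENT SESSION [udf_not_inlined] ON SERVER STATE = START;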

If a function did not inline, it should be easy to determine why. First, you can check if a function is inlineable in the first place, by looking at the new column is_inlineable in sys.sql_modules. This will be 1 for any function that *might* be inlined. It is important to note that this does not mean the function will always be inlined. It must not only conform to the requirements laid out in the official documentation, but also must pass other checks by the optimizer. These include things like complexity, level of nesting or recursion, and presence in a GROUP BY clause, as well as compatibility level, database scoped configuration settings, and hints. Basically, a lot of stars must align in order to make inlining happen. On the plus side, when it can happen, it will happen automatically – you don’t have to go change your functions, or recompile the queries or modules that call them.
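
For example, a minimal check (a sketch reusing the function from the example above) might look like this:

SELECT OBJECT_NAME([object_id]) AS [function], is_inlineable
  FROM sys.sql_modules
  WHERE [object_id] = OBJECT_ID(N'dbo.ufnGetSalesOrderStatusText');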

Unlike older versions of SQL Server, where you had to know esoteric trace flags to enable/disable certain optimizer features, scalar UDF inlining can be turned on or off in a variety of ways (a sketch of these follows the list):

  • At the database level, using compatibility level
  • At the database level, using the database scoped configuration TSQL_SCALAR_UDF_INLINING
  • At the specific function level, using WITH INLINE = ON | OFF
  • At the query level; to disable, you can use OPTION (USE HINT('DISABLE_TSQL_SCALAR_UDF_INLINING'))
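
Here is a hedged sketch of the database- and query-level options, reusing the AdventureWorks objects from the earlier example; the function-level option is noted only in a comment because it belongs in the function’s own definition:

-- at the database level, via compatibility level
ALTER DATABASE AdventureWorks SET COMPATIBILITY_LEVEL = 150;

-- at the database level, via the database scoped configuration
ALTER DATABASE SCOPED CONFIGURATION SET TSQL_SCALAR_UDF_INLINING = OFF;

-- at the query level, disabling inlining for a single statement
SELECT TOP (100) SalesOrderID, [Status] = dbo.ufnGetSalesOrderStatusText([Status])
  FROM Sales.SalesOrderHeader
  ORDER BY SalesOrderID DESC
  OPTION (USE HINT('DISABLE_TSQL_SCALAR_UDF_INLINING'));

-- at the function level, add WITH INLINE = ON | OFF to the WITH clause
-- when you CREATE or ALTER the function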

There was a trace flag in earlier versions of the SQL Server 2019 CTP to enable this functionality (which was disabled by default), but that flag is no longer necessary.

Summary

SQL Server 2019 provides a new mechanism to vastly improve the performance of scalar user-defined functions. The change is transparent, provided you are running in the current compatibility level and your function conforms to all of the requirements. This is simply one more "it just runs faster" win you’ll enjoy when you upgrade. Ideally you will be working toward removing scalar UDFs, but this is one way you can restore decent performance without that work.

Next Steps

Read on for related tips and other resources:

Last Updated: 2018-11-12

About the author

Aaron Bertrand (@AaronBertrand) is a Product Manager at SentryOne, with industry experience dating back to Classic ASP and SQL Server 6.5. He is editor-in-chief of the performance-related blog, SQLPerformance.com, and serves as a community moderator for the Database Administrators Stack Exchange.


Monday, November 12, 2018

App Dev Manager Mike Barker builds on his previous article about Azure API Manager, exploring the pros and cons of various high availability and disaster recovery strategies.

In a previous post I discussed an approach to handling backend redundancy using Azure API Manager (APIM). In this post I want to discuss the various options for providing a high-availability (HA) and disaster recovery (DR) to your services exposed by API Manager.

Redundant Services in Azure

Before getting into the various options available let’s discuss the components which are relevant to delivering a high-availability solution within Azure.

Azure API Manager

Firstly, it is important to understand the redundancy available with API Manager. API Manager is composed of two components. There is the Service Management component which is responsible for servicing all of the administration and configuration calls, and the API Gateway component which services the actual endpoints which you are surfacing to your consumers and which routes the calls (via policy) to the backend services.

Using API Manager you can provision multiple API Gateway components (called “units” in the portal) and, under the Premium tier, these can be distributed across multiple regions (read more here). All of these gateways have the same configuration and policy definition, and therefore by default all point to the same backend endpoint. If you want your API Gateways to point to different backend endpoints you will need to modify the policy definition to route calls accordingly. Calls into API Manager will automatically be load-balanced to the closest available API Gateway, providing highly-available redundancy, as well as performance improvements to global users.

Notice however that the Service Management component does not replicate across regions, rather it exists only in the primary region. If the primary region experiences an outage, configuration changes cannot be made to the API Manager (including updating settings or policies).

Azure Traffic Manager

Azure Traffic Manager is a high-availability service which operates across Azure regions. It is a DNS-based load balancer which routes traffic globally to provide performance and resilience to your services. It uses endpoint monitoring to detect failures and, if a failure is detected, it will automatically “switch” the DNS entry served up to calls to re-route traffic to healthy endpoints.

You can read more about how Traffic Manager works here.

Azure Front Door

Azure Front Door is a new service offered by Microsoft (at the time of writing, in public preview). It offers a point of entry into Microsoft’s global network and provides you with automatic failover, high-availability and optimised performance. These entry points are globally distributed to ensure that, no matter where your consumers are based geographically, an entry point exists close to them. They form the edge of the Microsoft network, and once the traffic is inside this network it can be routed to your services in whatever configuration is required.

Routing is done at layer 7, which means that traffic is directly controlled by Front Door and does not rely on DNS switching.

Like Traffic Manager, health probes in Front Door monitor your services to provide automatic failover in the case of failure.

Creating a Highly-Available API Service

In a typical scenario we have backend services which are fronted by API Manager, and these endpoints are exposed to consumers. If we want to provide a truly highly-available solution we need redundancy at every level, including the backend services and the API Manager layer. We also want to provide maximum performance to our consumers and therefore route them to the closest available endpoint to service their request.

The high-level architecture diagram would look something like the following:

In this diagram the backend services are represented as Azure Functions, but the solutions discussed here are independent of the technology used to service the endpoint. Also, for simplicity of the diagram I have shown the consumers existing in the Azure regions, however for all the options discussed below the consumers may be globally distributed and need not originate from within Azure.

The dark blue lines show the flow of traffic under normal operating conditions; and the light blue shows the possible fail-over traffic routes.

It is now clear from this diagram that there are two places where we must provide redundancy. Firstly, redundancy of the consumer facing API (i.e. from the consumer to the API Manager); and secondly, the backend API (i.e. from the API Manager to the backend service).

Redundancy of Consumer-Facing API

Option 1: Retry logic in the consumer

The first option when considering high-availability of the consumer-facing API is to provide two separate API Manager instances, and leave the responsibility with the consumer to retry each endpoint in the case of a failure. All logic and responsibility for doing this is pushed to the consumer. Viz:

PROs:

  • This allows each consumer to access their closest region by default, and to have automatic and instantaneous failover to a redundant region in the case of a failure.
  • As there are no additional Azure components there is no additional running cost, and if required lower-tier instances of API Manager (i.e. Basic or Standard) may be used, making this a relatively low-cost implementation.

CONs:

  • This design is inflexible in terms of deploying instances to new regions, or, worse, removing an instance. Each consumer must be aware of the endpoint options which are currently available.
  • The consumer is left to implement this retry pattern, causing multiple implementations for the same access logic. This can be mitigated to some extent by providing published SDKs for your services, but these must be provided in all consumer languages (C#, Java, Python, etc).
  • The code required to access your service becomes expensive to implement and maintain.
  • The API Manager instances must be separately provisioned, maintained and configured. These instances must then be kept in-sync.
  • The maximum SLA guaranteed for either API Manager by Microsoft will be 99.9% in this configuration. No additional SLA is given for the “combined” pair.
Option 2: Use Traffic Manager In Front of API Manager

The API Manager can be fronted by Azure Traffic Manager to provide load balancing and automatic failover.

PROs:

  • The consumer has only one endpoint to access your service and obtain high-availability.
  • The two instances of API Manager may be provided on a cheaper low-tier (Basic or Standard) option.
  • Failover is automatic when a fault occurs in either API Manager instance.
  • Adding and removing API Manager instances is as easy as updating the Traffic Manager.

CONs:

  • Azure Traffic Manager cannot probe the endpoints (for health) faster than 10 seconds. One would likely want to allow room for transient errors (as in all cloud deployments) and so only fail on, at least, the second failed probe. This means the fail-over time for switching the DNS entry is 20 seconds.
  • Consumers may (and most likely will) cache the results from DNS lookups. If multiple layers of DNS caching take place between the consumer and the Azure Traffic Manager this can cause significant delays between switching the DNS entry and the consumer using the new record.
  • The multiple API Manager instances must be kept in-sync for any configuration changes.
  • The maximum SLA guaranteed for either API Manager by Microsoft will be 99.9% in this configuration. No additional SLA is given for the “combined” pair.
Option 3: Using Azure DNS

Just like Traffic Manager, we could front the API Manager with a manually configured DNS entry. This won’t supply automatic failover, nor will it route users automatically to the closest API Manager instance, but it will provide a level of disaster recovery to our system. This is not a high-availability solution but is included here for completeness of the discussion.

PROs:

  • DNS is a very cheap solution.
  • The consumer has only one endpoint to access your service.
  • The two instances of API Manager may be provided on a cheaper low-tier (Basic or Standard) option.
  • The active region can be provisioned at a higher tier, whilst the inactive region is provisioned at a lower tier. The inactive region need only be scaled up to the higher tier when switching the DNS entry. This reduces the running cost of the solution.
  • Adding and removing API Manager instances requires no additional configuration over provisioning the new region.

CONs:

  • No automatic health monitoring is available, and automatic fail-over does not occur. Manual intervention is required to initiate the failover. This provides disaster recovery but not high-availability.
  • Additional monitoring must be utilised to alert operations when a failover is required.

Notice that, in the event of a planned failover, you should first switch the DNS entry and monitor the traffic. When all traffic is routing to the secondary region it is safe to bring down the services in the primary.

Option 4: Azure Front Door

Using Azure Front Door purely to provide a high-availability layer in front of API Manager would be like trying to crack a nut with a sledgehammer. However, if you are already making use of the Front Door service, you can certainly embrace its load balancing functionality.

PROs:

  • The consumers have only one endpoint to access your service and obtain high-availability.
  • Adding and removing gateway instances is as easy as adding to the Front Door backend pool.
  • Failover is automatic when a fault occurs in any region.
  • Layer 7 switching is used to provide traffic routing, meaning no failover latency due to DNS caching.

CONs:

  • The fastest health probe rate for Azure Front Door is 5 seconds. This gives a much quicker failover time than Azure Traffic Manager, but it is not instantaneous.
  • The multiple API Manager instances must be kept in-sync for any configuration changes.
  • The maximum SLA guaranteed for either API Manager by Microsoft will be 99.9% in this configuration. No additional SLA is given for the “combined” pair.
Option 5: API Manager Redundancy

As mentioned above, the Azure API Manager’s API Gateway can be redundantly deployed, even across global regions. Using this setup our diagram becomes:

PROs:

  • The consumers have only one endpoint to access your service and obtain high-availability.
  • Adding and removing gateway instances is as easy as a configuration change.
  • Failover is automatic when a fault occurs in any region.
  • Configuration changes are automatically rolled-out to all gateways, keeping them in-sync.
  • Utilising multiple gateway deployments results in higher throughput.
  • The SLA provided by Microsoft for Azure API Manager when utilising multiple multi-region deployments inside a single instance in the Premium tier increases to 99.95%.

CONs:

  • API Manager must be deployed at the Premium tier to take advantage of multi-region deployment, which can be prohibitively expensive for some use cases.
  • If the primary region fails the API Manager Gateway instances will still operate but no configuration changes can be made until the primary region is restored, since the Service Manager is only hosted in the primary region.

Redundancy of Backend API

That covers the various automatic options available for automatic failover of the consumer facing endpoint. Let’s look now at the backend service endpoint redundancy options.

The redundancy options for the backend API are independent of the option chosen for the consumer facing API. In the examples shown below the diagram has two separate API Manager instances but this could equally be a single API Manager instance with region gateway deployments (i.e. option 5, above).

Option 1: Per-region deployment

We could choose to connect each API Manager instance to its own backend service, with no failover.

In this deployment we are not providing a totally highly-available solution, but we do protect against regional Azure outages. We can simplify the deployment of each API Manager instance by re-using the same policy in all deployments but using either configuration or custom switching logic to route to the required endpoint.

The following inbound policy fragment shows how the deployment-region (context.Deployment.Region) can be used to route traffic accordingly:
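
Since the original fragment is not reproduced here, a minimal sketch follows; the region name and backend URLs are illustrative assumptions, not values from the original post:

<inbound>
  <base />
  <choose>
    <!-- route each gateway to the backend deployed in its own region -->
    <when condition="@(context.Deployment.Region == &quot;West Europe&quot;)">
      <set-backend-service base-url="https://backend-westeurope.example.net" />
    </when>
    <otherwise>
      <set-backend-service base-url="https://backend-eastus.example.net" />
    </otherwise>
  </choose>
</inbound>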

PROs:

  • Very easy policy configuration in the API Manager.

CONs:

  • Provides protection only against total regional outages, not against a failure or planned outage of an individual backend
Option 2: Use Traffic Manager behind API Manager

Just as we used Traffic Manager in front of API Manager to provide resiliency to the consumer, so too can we use it in front of the backend service. Viz:

PROs:

  • API Manager requires no custom routing logic at all; it has only one endpoint to access the backend service and obtain high-availability.
  • Failover is automatic when a fault occurs in either backend service.
  • There is no requirement to have as many API Manager deployments as backend services. We could conceivably have (say) two API Manager instances serviced by five backend services.

CONs:

  • As before there is a time delay from the moment a failure occurs to when Azure Traffic Manager detects the failure and re-routes the DNS entry.
  • No specifics are published on the TTL time for DNS caching within Azure API Manager. There could be an additional delay from when the DNS switch happens in Traffic Manager to when the API Manager honours the new entry.
Option 3: Custom Fail-over Logic in API Manager

As discussed in my previous article we are able to utilise logic in the API Manager’s policies to provide custom retry and failover logic in the event of a failure in the default service endpoint (the detail is omitted here for brevity).

PROs:

  • Failover can be instantaneous and even protects against transient errors from the backends.
  • No further Azure services are required over those of the API Manager and the backend.

CONs:

  • The custom routing logic can be complex to write, and difficult to understand and maintain.
  • Adding or removing backend services requires changes to be deployed to the API Manager’s policies.

Discussion of regional endpoints

Before concluding this article I also wanted to point out that, when deploying multiple gateways in a single API Manager instance, each gateway receives its own addressable endpoint URL. These are in addition to API Manager’s “global” gateway URL. The regional URLs are discoverable from the API Manager blade in the Azure portal under the Settings -> Properties.

This may be useful for testing the system’s performance and accessibility via the various global endpoints.

Summary

In this article I have attempted to give a complete picture to the redundancy options available within Azure to provide disaster recovery and high-availability to your services presented through API Manager. We showed that redundancy must be considered between the consumer and API Manager, and between API Manager and the backend service. A number of options were discussed for each “hop”.

Monday, November 12, 2018

Folks have been trying to fix (read: supercharge) the console/command line on Windows since Day One. There’s a ton of open source projects over the years that try to take over or improve on "conhost.exe" (the thing that handles consoles like Bash/PowerShell/cmd on Windows). Most of these 3rd party consoles have weird or subtle issues. For example, I like Hyper as a terminal but it doesn’t support Ctrl-C at the command line. I use that hotkey often enough that this small bug means I just won’t use that console at all.

Per the CommandLine blog:

One of those weaknesses is that Windows tries to be "helpful" but gets in the way of alternative and 3rd party Console developers, service developers, etc. When building a Console or service, developers need to be able to access/supply the communication pipes through which their Terminal/service communicates with command-line applications. In the *NIX world, this isn’t a problem because *NIX provides a "Pseudo Terminal" (PTY) infrastructure which makes it easy to build the communication plumbing for a Console or service, but Windows does not…until now!

Looks like the Windows Console team is working on making 3rd party consoles better by creating this new PTY mechanism:

We’ve heard from many, many developers, who’ve frequently requested a PTY-like mechanism in Windows – especially those who created and/or work on ConEmu/Cmder, Console2/ConsoleZ, Hyper, VSCode, Visual Studio, WSL, Docker, and OpenSSH.

Very cool! Until it’s ready I’m going to continue to try out new consoles. A lot of people will tell you to use the cmder package that includes ConEmu. There’s a whole world of 3rd party consoles to explore. Even more fun are the choices of color schemes and fonts to explore.

For a while I was really excited about Hyper. Hyper is – wait for it – an electron app that uses HTML/CSS for the rendering of the console. This is a pretty heavyweight solution to the rendering that means you’re looking at 200+ megs of memory for a console rather than 5 megs or so for something native. However, it is a clever way to just punt and let a browser renderer handle all the complex font management. For web-folks it’s also totally extensible and skinnable.

As much as I like Hyper and its look, the inability to support hitting "Ctrl-C" at the command line is just too annoying. It appears it’s a very well-understood issue that will ultimately be solved by the ConPTY work as the underlying issue is a deficiency in the node-pty library. It’s also a long-running issue in the VS Code console support. You can watch the good work that’s starting in this node-pty PR that will fix a lot of issues for node-based consoles.

Until this all fixes itself, I’m personally excited about (and using) these two terminals for Windows that you may not have heard of.

Terminus

Terminus is open source over at https://github.com/Eugeny/terminus and works on any OS. It’s immediately gorgeous, and while it’s in alpha, it’s very polished. Be sure to explore the settings and adjust things like Blur/Fluent, Themes, opacity, and fonts. I’m using FiraCode Retina with Ligatures for my console and it’s lovely. You’ll have to turn ligature support on explicitly under Settings | Appearance.

Terminus also has some nice plugins. I’ve added Altair, Clickable-Links, and Shell-Selector to my loadout. The shell selector makes it easy on Windows 10 to have PowerShell, Cmd, and Ubuntu/Bash open all at the same time in multiple tabs.

I did do a little editing of the default config file to set up Ctrl-T for new tab and Ctrl-W for close-tab for my personal taste.

FluentTerminal

FluentTerminal is a Terminal Emulator based on UWP. Its memory usage on my machine is about 1/3 of Terminus and under 100 megs. As a Windows 10 UWP app it looks and feels very native. It supports ALT-ENTER Fullscreen, and tabs for as many consoles as you’d like. You can right-click and color specific tabs which was a nice surprise and turned out to be useful for on-the-fly categorization.

FluentTerminal has a nice themes setup and includes a half-dozen to start, plus supports imports.

It’s not yet in the Windows Store (perhaps because it’s in active development) but you can easily download a release and install it with a PowerShell install.ps1 script.

I have found the default keybindings very intuitive, with the usual Ctrl-T and Ctrl-W tab management already set up, as well as Shift-Ctrl-T for opening a new tab for a specific shell profile (cmd, powershell, wsl, etc.).

Both of these are great new entries in the 3rd party terminal space and I’d encourage you to try them both out and perhaps get involved on their respective GitHubs! It’s a great time to be doing console work on Windows 10!

Sponsor: Check out the latest JetBrains Rider with built-in spell checking, enhanced debugger, Docker support, full C# 7.3 support, publishing to IIS and more advanced Unity support.

© 2018 Scott Hanselman. All rights reserved.


Monday, 12 November 2018 / Published in Uncategorized

Some people say ‘friends don’t let friends right click publish’ but is that true? If they mean that there are great benefits to setting up a CI/CD workflow, that’s true and we will talk more about these benefits in just a minute. First, let’s remind ourselves that the goal isn’t always coming up with the best long-term solution.

Technology moves fast and as developers we are constantly learning and experimenting with new languages, frameworks and platforms. Sometimes we just need to prototype something rather quickly in order to evaluate its capabilities. That’s a classic scenario where right click publish in Visual Studio provides the right balance between how much time you are going to spend (just a few seconds) and the options that become available to you (quite a few depending on the project type), such as publish to IIS, FTP & Folder (great for xcopy deployments and integration with other tools).

Continuing with the theme of prototyping and experimenting, right click publish is the perfect way for existing Visual Studio customers to evaluate Azure App Service (PaaS). By following the right click publish flow you get the opportunity to provision new instances in Azure and publish your application to them without leaving Visual Studio.

When the right click publish flow has been completed, you immediately have a working application running in the cloud.

Platform evaluations and experiments take time and during that time, right click publish helps you focus on the things that matter. When you are ready and the demand rises for automation, repeatability and traceability that’s when investing into a CI/CD workflow starts making a lot of sense:

  • Automation: builds are kicked off and tests are executed as soon as you check in your code
  • Repeatability: it’s impossible to produce binaries without having the source code checked in
  • Traceability: each build can be traced back to a specific version of the codebase in source control, which can then be compared with another build to figure out the differences

The right time to adopt CI/CD typically coincides with a maturity milestone, either for the application or for the team building it. If you are the only developer working on your application you may feel that setting up CI/CD is overkill, but automation and traceability can be extremely valuable even to a single developer once you start shipping to your customers and have to support multiple versions in production.

With a CI/CD workflow you are guaranteed that all binaries produced by a build can be linked back to the matching version of the source code. You can go from a customer bug report to looking at the matching source code easily, quickly and with certainty. In addition, the automation aspects of CI/CD save you valuable time performing common tasks like running tests and deploying to testing and pre-production environments, lowering the overhead of good practices that ensure high quality.

As always, we want to see you successful, so if you run into any issues using publish in Visual Studio or setting up your CI/CD workflow, let me know in the comment section below and I’ll do my best to get your question answered.

Wednesday, 05 September 2018 / Published in Uncategorized

Introduction

I have an ASP.NET Core MVC Web Application with Razor Views and Razor Pages. In this case, the Razor Pages are the login and account management pages generated by Identity. For every request, I wanted to be able to read the host domain from the request and customize the style accordingly.

Background

Filters allow you to run code before or after a request is processed. Some filters come out of the box, such as Authorization, and you can also create your own custom filters. I found these documents about adding filters in ASP.NET Core. As mentioned in that link, filters for ASP.NET Core MVC views do not apply to Razor Pages. For Razor Pages, I found information here.

Using the code

Since in this case I use an asynchronous filter, I have to implement the OnActionExecutionAsync method of the IAsyncActionFilter interface to apply the filter to Razor Views:

public async Task OnActionExecutionAsync(ActionExecutingContext context, ActionExecutionDelegate next)
{
    // Before the action executes: expose the request's host name to Razor Views.
    // Always call next() so the pipeline is not silently short-circuited when
    // the controller does not derive from Controller.
    if (context.Controller is Controller controller)
    {
        controller.ViewData["Host"] = context.HttpContext.Request.Host.Host;
    }

    var resultContext = await next();
    // After the action executes.
}

On the other hand, for the Razor Pages I have to implement the IAsyncPageFilter interface:

public async Task OnPageHandlerExecutionAsync(PageHandlerExecutingContext context,
                                              PageHandlerExecutionDelegate next)
{
    // Called asynchronously before the handler method is invoked, after model binding is complete.
    if (context.HandlerInstance is PageModel page)
    {
        page.ViewData["Host"] = context.HttpContext.Request.Host.Host;
    }

    var resultContext = await next();
}

public async Task OnPageHandlerSelectionAsync(PageHandlerSelectedContext context)
{
   //Called asynchronously after the handler method has been selected, but before model binding occurs.  
   await Task.CompletedTask;
}

The resulting filter is the following class:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;
using Microsoft.AspNetCore.Mvc.Filters;
using Microsoft.AspNetCore.Mvc.RazorPages;

namespace Filters
{
    public class HostFilter : IAsyncActionFilter, IAsyncPageFilter
    {
        public async Task OnActionExecutionAsync(ActionExecutingContext context, ActionExecutionDelegate next)
        {
            // Before the action executes: expose the request's host name to Razor Views.
            if (context.Controller is Controller controller)
            {
                controller.ViewData["Host"] = context.HttpContext.Request.Host.Host;
            }

            var resultContext = await next();

            // After the action executes.
        }

        public async Task OnPageHandlerExecutionAsync(PageHandlerExecutingContext context, PageHandlerExecutionDelegate next)
        {
            // Called asynchronously before the handler method is invoked, after model binding is complete.
            if (context.HandlerInstance is PageModel page)
            {
                page.ViewData["Host"] = context.HttpContext.Request.Host.Host;
            }

            var resultContext = await next();
        }

        public async Task OnPageHandlerSelectionAsync(PageHandlerSelectedContext context)
        {
            // Called asynchronously after the handler method has been selected, but before model binding occurs.
            await Task.CompletedTask;
        }
    }
}

To enable the filter, register it in the Startup class:

services.AddMvc(options =>
{
    options.Filters.Add(new HostFilter());
});

So, in this way, with a single filter we apply logic that executes before the handling of any Razor View or Razor Page in the application.
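To close the loop on the original goal (customizing the style per host), a view or layout can read the value the filter stored. Here is a minimal sketch, assuming a hypothetical _Layout.cshtml and hypothetical stylesheet names:

@* _Layout.cshtml (hypothetical): pick a stylesheet based on the host stored by HostFilter *@
@{
    var host = ViewData["Host"] as string ?? string.Empty;
    var stylesheet = host.Contains("contoso") ? "contoso.css" : "default.css";
}
<link rel="stylesheet" href="~/css/@stylesheet" />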

