Saturday, May 12, 2018

Deploying a .NET Core web job to an Azure web app

Lately I wrote a .NET Core web job and wanted to publish it via CD (continuous deployment) from Visual Studio Online. I soon figured out that the Azure WebJobs SDK doesn't support .NET Core yet. The work I expected to take 10 minutes took about an hour.

If you are trying to figure this out too, this blog post is what you are looking for.

I will describe the steps and provide a PowerShell script that does the deployment via the Kudu API. Kudu is the source control management service behind Azure App Service; it exposes a zip API that allows us to deploy a zipped folder into an Azure app service.
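To make the mechanics concrete, here is a minimal Python sketch of what the deployment boils down to. The app name and credentials below are placeholders, not real values; the actual PowerShell version follows later in the post.

```python
import base64

def kudu_auth_header(user, password):
    # Kudu expects an HTTP Basic auth header built from the
    # web app's publishing credentials.
    token = base64.b64encode(f"{user}:{password}".encode("ascii")).decode("ascii")
    return f"Basic {token}"

# The zip deploy endpoint for a (hypothetical) web app named "my-web-job":
zipdeploy_url = "https://my-web-job.scm.azurewebsites.net/api/zipdeploy"
# A POST of the zip file body to this URL, carrying the header above,
# deploys the package.
```

The PowerShell script below performs exactly this call with `Invoke-WebRequest`.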

Here are the steps you need to follow. Start by creating a simple .NET Core console application. Add a PowerShell file to the project that will do the deployment in your Visual Studio Online release pipeline. The PowerShell script will do the following:

  • Publish the project (using dotnet publish)
  • Make a zip out of the artifacts
  • Deploy the zip into the Azure web app


Publishing the project

We will use the dotnet publish command to publish our project.

$resourceGroupName = "my-resource-group"
$webAppName = "my-web-job"
$projectName = "WebJob"
$outputRoot = "webjobpublish"
$outputFolderPath = "webjobpublish\App_Data\Jobs\Continuous\my-web-job"
$zipName = "publishwebjob.zip"

$projectFolder = Join-Path `
    -Path "$((get-item $PSScriptRoot ).FullName)" `
    -ChildPath $projectName
$outputFolder = Join-Path `
    -Path "$((get-item $PSScriptRoot ).FullName)" `
    -ChildPath $outputFolderPath
$outputFolderTopDir = Join-Path `
    -Path "$((get-item $PSScriptRoot ).FullName)" `
    -ChildPath $outputRoot
$zipPath = Join-Path `
    -Path "$((get-item $PSScriptRoot ).FullName)" `
    -ChildPath $zipName

if (Test-Path $outputFolder) { Remove-Item $outputFolder -Recurse -Force; }
if (Test-Path $zipPath) { Remove-Item $zipPath -Force }
$fullProjectPath = "$projectFolder\$projectName.csproj"

dotnet publish "$fullProjectPath" --configuration release --output $outputFolder

Create a compressed artifact folder

We will use the System.IO.Compression.FileSystem assembly to create the zip file.

Add-Type -AssemblyName "System.IO.Compression.FileSystem"
[IO.Compression.ZipFile]::CreateFromDirectory($outputFolderTopDir, $zipPath)

Upload the zip into Azure web app

The next step is to upload the zip file to the Azure web app. We first need to fetch the publishing credentials for the Azure web app and then use the Kudu API to upload the content. Here's the script:


function Get-PublishingProfileCredentials($resourceGroupName, $webAppName) {
 
    $resourceType = "Microsoft.Web/sites/config"
    $resourceName = "$webAppName/publishingcredentials"
 
    $publishingCredentials = Invoke-AzureRmResourceAction `
                 -ResourceGroupName $resourceGroupName `
                 -ResourceType $resourceType `
                 -ResourceName $resourceName `
                 -Action list `
                 -ApiVersion 2015-08-01 `
                 -Force 
    return $publishingCredentials
}
 
function Get-KuduApiAuthorisationHeaderValue($resourceGroupName, $webAppName) {
 
    $publishingCredentials = Get-PublishingProfileCredentials $resourceGroupName $webAppName
 
    return ("Basic {0}" -f `
        [Convert]::ToBase64String( `
        [Text.Encoding]::ASCII.GetBytes(("{0}:{1}" -f $publishingCredentials.Properties.PublishingUserName, `
        $publishingCredentials.Properties.PublishingPassword))))
}

$kuduHeader = Get-KuduApiAuthorisationHeaderValue `
    -resourceGroupName $resourceGroupName `
    -webAppName $webAppName

$Headers = @{
    Authorization = $kuduHeader
}
    
 
# use kudu deploy from zip file
Invoke-WebRequest `
    -Uri "https://$webAppName.scm.azurewebsites.net/api/zipdeploy" `
    -Headers $Headers `
    -InFile $zipPath `
    -ContentType "multipart/form-data" `
    -Method Post


# Clean up the artifacts now
if (Test-Path $outputFolder) { Remove-Item $outputFolder -Recurse -Force; }
if (Test-Path $zipPath) { Remove-Item $zipPath -Force }

PowerShell task in Visual Studio Online

Now we can leverage the Azure PowerShell task in a Visual Studio Online release pipeline and invoke the script to deploy the web job.
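As a rough illustration, a pipeline step invoking the script might look like the following YAML fragment. The task name, version, and input names here are assumptions based on the common Azure PowerShell task; the service connection name and script path are placeholders you would adjust for your own setup.

```yaml
# Illustrative sketch only - verify the task name/inputs in your VSTS setup.
- task: AzurePowerShell@3
  inputs:
    azureSubscription: 'my-azure-connection'   # service endpoint name (placeholder)
    ScriptType: 'FilePath'
    ScriptPath: '$(System.DefaultWorkingDirectory)/deploy-webjob.ps1'
    azurePowerShellVersion: 'LatestVersion'
```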

The entire script is simply the snippets above combined into a single file.

That's it!

Thanks for reading, and have a nice day!

Tuesday, March 13, 2018

Secure Azure websites with Web Application Gateway with end-to-end SSL connections

The Problem

In order to meet higher compliance demands, and often as a security best practice, we want to put an Azure website behind a Web Application Firewall (WAF). The WAF provides mitigations for known malicious attack vectors, as defined in the OWASP top 10 security vulnerabilities. Azure Application Gateway is a layer 7 load balancer that provides a WAF out of the box. However, restricting web app access with Application Gateway is not trivial.
To achieve the best isolation, and hence protection, we can provision an Azure App Service Environment (ASE) and put all the web apps inside the virtual network of the ASE. This is by far the most secure way to lock down a web application and other Azure resources from internet access. But an ASE deployment has other consequences: it is costly, and because the web apps are totally isolated in a private VNET, the dev team needs to adopt an unusual deployment pipeline to continuously deploy changes to the web apps. That is not an ideal solution for many scenarios.
However, there's an intermediate architecture that provides a WAF without the complexities that an ASE brings, allowing sort of the best of both worlds. The architecture looks like the following:




The idea is to provision an Application Gateway inside a virtual network and configure it as a reverse proxy for the Azure web app. This means the web app should never receive traffic directly, but only through the gateway. The gateway needs to be configured with the custom domain and SSL certificates. Once a request is received, the gateway terminates the SSL connection and creates another SSL connection to the back-end web apps configured in a back-end pool. For development purposes, the back-end apps can use the Azure wildcard certificate (*.azurewebsites.net), but for production scenarios it's recommended to use a custom certificate. To make sure no traffic bypasses the gateway, we also need to whitelist the gateway's IP address in the web apps. This blocks every request except the ones coming through the gateway.

How to do that?

I have prepared an Azure Resource Manager template in this GitHub repo that will provision the following:

  • Virtual network (Application Gateway needs a Virtual network).
  • Subnet for the Application Gateway into the virtual network.
  • Public IP address for the Application Gateway.
  • An Application Gateway that is pre-configured to protect any Azure website.

How to provision?

Before you run the scripts you need the following:
  • Azure subscription
  • Azure web site to guard with WAF
  • SSL certificate to configure the front-end listeners. (This is the gateway certificate that the end users of your apps (browsers, basically) will connect to.) Typically a Personal Information Exchange (.pfx) file.
  • The password of the pfx file.
  • SSL certificate used to protect the Azure websites, typically a .cer file. This can be the *.azurewebsites.net certificate for development purposes.
You need to fill out the parameters.json file with the appropriate values; some examples are given below:
        "vnetName": {
            "value": "myvnet"
        },
        "appGatewayName": {
            "value": "mygateway"
        },
        "azureWebsiteFqdn": {
            "value": "myapp.azurewebsites.net"
        },
        "frontendCertificateData": {
            "value": ""
        },
        "frontendCertificatePassword": {
            "value": ""
        },
        "backendCertificateData": {
            "value": ""
        }
Here, frontendCertificateData needs to be the Base64-encoded content of your .pfx file.
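If you need to produce that Base64 string, a small sketch (in Python, for illustration; the file name below is a placeholder for your actual certificate) could be:

```python
import base64

def pfx_to_base64(path):
    # Read the raw .pfx bytes and return the Base64 string that goes
    # into the "frontendCertificateData" parameter value.
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

# Example (placeholder path):
# encoded = pfx_to_base64("gateway-cert.pfx")
```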
Once you have the prerequisites, open PowerShell and run:
    $> ./deploy.ps1 `
        -subscriptionId "< enter your subscription id >" `
        -resourceGroupName "< enter your resource group name >"
This will provision the Application Gateway in your resource group.

Important !

The final piece of work you need to do is to whitelist the IP address of the Application Gateway in your Azure Web App. This makes sure nobody can get direct access to your Azure web app; all traffic must come through the gateway.
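One common way to do this (at the time of writing) is the `ipSecurity` element in the web app's web.config. The fragment below is illustrative; `203.0.113.10` is a placeholder for the public IP address of your Application Gateway.

```xml
<!-- web.config fragment (illustrative); replace 203.0.113.10 with the
     public IP address of your Application Gateway -->
<system.webServer>
  <security>
    <ipSecurity allowUnlisted="false">
      <add ipAddress="203.0.113.10" allowed="true" />
    </ipSecurity>
  </security>
</system.webServer>
```

With `allowUnlisted="false"`, every request is rejected unless it comes from an explicitly allowed address.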

Contribute

Contribution is always appreciated.

Wednesday, November 22, 2017

CQRS and ES on Azure Table Storage

Lately I have been playing with the Event Sourcing and Command Query Responsibility Segregation (CQRS) patterns on Azure Table storage, and I thought of creating a lightweight library that facilitates writing such applications. I ended up with a NuGet package to do this. Here is the GitHub repository.

A lightweight CQRS supporting library with Event Store based on Azure Table Storage.

Quick start guide

Install

Install the SuperNova.Storage Nuget package into the project.

Install-Package SuperNova.Storage -Version 1.0.0

The dependencies of the package are:

  • .NETCoreApp 2.0
  • Microsoft.Azure.DocumentDB.Core (>= 1.7.1)
  • Microsoft.Extensions.Logging.Debug (>= 2.0.0)
  • SuperNova.Shared (>= 1.0.0)
  • WindowsAzure.Storage (>= 8.5.0)

Implementation guide

Write Side - Event Sourcing

Once the package is installed, we can start sourcing events in an application. For example, let's start with a canonical example of UserController in a Web API project.

We can use dependency injection to make the EventStore available in our controller.

Here's an example where we register an instance of Event Store with DI framework in our Startup.cs

// Config object encapsulates the table storage connection string
services.AddSingleton<IEventStore>(new EventStore( ... provide config )); 

Now the controller:

[Produces("application/json")]
[Route("users")]
public class UsersController : Controller
{   
    public UsersController(IEventStore eventStore)
    {
        this.eventStore = eventStore; // Here capture the event store handle
    }   
    
    ... other methods skipped here
}

Aggregate

Implementing event sourcing becomes much easier when it's combined with Domain-Driven Design (DDD). We are going to assume familiarity with DDD concepts (especially aggregate roots).

An aggregate is our consistency boundary (read: transactional boundary) in event sourcing. (Technically, aggregate IDs are our partition keys in the Event Store table; therefore, we can only apply an atomic operation at the level of a single aggregate root.)

Let's create an Aggregate for our User domain entity:

using SuperNova.Shared.Messaging.Events.Users;
using SuperNova.Shared.Supports;

public class UserAggregate : AggregateRoot
{
    private string _userName;
    private string _emailAddress;
    private Guid _userId;
    private bool _blocked;

    // ... other members skipped here
}

Once we have the aggregate class written, we should come up with the events that are relevant to this aggregate. We can use Event storming to come up with the relevant events.

Here are the events that we will use for our example scenario:

public class UserAggregate : AggregateRoot
{

    ... skipped other codes

    #region Apply events
    private void Apply(UserRegistered e)
    {
        this._userId = e.AggregateId;
        this._userName = e.UserName;
        this._emailAddress = e.Email;            
    }

    private void Apply(UserBlocked e)
    {
        this._blocked = true;
    }

    private void Apply(UserNameChanged e)
    {
        this._userName = e.NewName;
    }
    #endregion

    ... skipped other codes
}

Now that we have our business events defined, we will define our commands for the aggregate:

public class UserAggregate : AggregateRoot
{
    #region Accept commands
    public void RegisterNew(string userName, string emailAddress)
    {
        Ensure.ArgumentNotNullOrWhiteSpace(userName, nameof(userName));
        Ensure.ArgumentNotNullOrWhiteSpace(emailAddress, nameof(emailAddress));

        ApplyChange(new UserRegistered
        {
            AggregateId = Guid.NewGuid(),
            Email = emailAddress,
            UserName = userName                
        });
    }

    public void BlockUser(Guid userId)
    {            
        ApplyChange(new UserBlocked
        {
            AggregateId = userId
        });
    }

    public void RenameUser(Guid userId, string name)
    {
        Ensure.ArgumentNotNullOrWhiteSpace(name, nameof(name));

        ApplyChange(new UserNameChanged
        {
            AggregateId = userId,
            NewName = name
        });
    }
    #endregion


    ... skipped other codes
}

So far so good!

Now we will modify the Web API controller to send the correct command to the aggregate.

public class UserPayload 
{  
    public string UserName { get; set; } 
    public string Email { get; set; } 
}

// POST: User
[HttpPost]
public async Task<JsonResult> Post(Guid projectId, [FromBody]UserPayload user)
{
    Ensure.ArgumentNotNull(user, nameof(user));

    var userId = Guid.NewGuid();    

    await eventStore.ExecuteNewAsync(
        Tenant, "user_event_stream", userId, async () => {

        var aggregate = new UserAggregate();

        aggregate.RegisterNew(user.UserName, user.Email);

        return await Task.FromResult(aggregate);
    });

    return new JsonResult(new { id = userId });
}

And another API action to modify existing users in the system:

//PUT: User
[HttpPut("{userId}")]
public async Task<JsonResult> Put(Guid projectId, Guid userId, [FromBody]string name)
{
    Ensure.ArgumentNotNullOrWhiteSpace(name, nameof(name));

    await eventStore.ExecuteEditAsync<UserAggregate>(
        Tenant, "user_event_stream", userId,
        async (aggregate) =>
        {
            aggregate.RenameUser(userId, name);

            await Task.CompletedTask;
        }).ConfigureAwait(false);

    return new JsonResult(new { id = userId });
}

That's it! We have our WRITE side completed. The event store now contains the events for the user event stream.


Read Side - Materialized Views

We can consume the events in a separate console worker process and generate the materialized views for the READ side.

The readers (console applications – an Azure Web Worker, for instance) act like feed processors and have their own lease collection, which makes them fault tolerant and resilient: if a reader crashes, it catches up from the last event version that was materialized successfully. It deliberately polls the store instead of using a message broker (Service Bus, for instance), to speed things up and avoid latency during event propagation. Scalability is ensured by dedicating a lease per tenant and per event stream, which scales pretty well.
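The catch-up mechanism can be sketched roughly as follows. This is an illustrative Python sketch, not the library's actual API: `event_store`, `lease_store`, `read_model`, and their method names are all hypothetical stand-ins for the real components.

```python
import time

def run_consumer(event_store, lease_store, read_model, stream, stop):
    # Poll the event stream, resuming from the version recorded in the lease.
    while not stop():
        last_version = lease_store.get_checkpoint(stream)   # 0 on first run
        events = event_store.read_after(stream, last_version)
        for event in events:
            read_model.project(event)                       # update materialized view
            lease_store.save_checkpoint(stream, event.version)  # crash-safe progress
        if not events:
            time.sleep(1)  # back off briefly when the stream is idle
```

Because the checkpoint is saved after each successful projection, a crashed reader simply resumes from the last materialized version on restart.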

How to listen for events?

In a worker application (typically a console application) we will listen for events:

private static async Task Run()
{
    var eventConsumer = new EventStreamConsumer(        
        ... skipped for simplicity
        "user-event-stream", 
        "user-event-stream-lease");
    
    await eventConsumer.RunAndBlock((evts) =>
    {
        foreach (var evt in evts)
        {
            if (evt is UserRegistered userAddedEvent)
            {
                readModel.AddUserAsync(new UserDto
                {
                    UserId = userAddedEvent.AggregateId,
                    Name = userAddedEvent.UserName,
                    Email = userAddedEvent.Email
                }, evt.Version);
            }

            else if (evt is UserNameChanged userChangedEvent)
            {
                readModel.UpdateUserAsync(new UserDto
                {
                    UserId = userChangedEvent.AggregateId,
                    Name = userChangedEvent.NewName
                }, evt.Version);
            }
        }

    }, CancellationToken.None);
}

static void Main(string[] args)
{
    Run().Wait();
}

Now we have a document collection (we are using Azure Cosmos DB's DocumentDB API in this example for materialization, but it could be any database, essentially) that is updated as we store events in the event stream.

Conclusion

The library is very lightweight and heavily influenced by Greg Young's event store and aggregate models. Feel free to use it or contribute.

Thank you!

Monday, August 28, 2017

3D Docker Swarm visualizer with ThreeJS

A few days ago I wrote about creating a Docker Swarm visualizer here. I enjoyed writing that piece of software very much. However, I wanted a more fun way to look at the cluster. So last weekend I thought of creating a 3D viewer of the cluster that might run on a large screen, showing the machines and their workloads in real time.

After hacking a little, I ended up creating a 3D perspective of the Docker swarm cluster that shows the nodes (workers in green, managers in blue, and drained nodes in red) and microservice containers as small blue cubes, all in real time, leveraging the ThreeJS library. It's just a fun addition to the existing visualizer. Hope you will like it!

You need to rebuild the Docker image from the Dockerfile in order to have the 3D view enabled; I didn't want to include it in the moimhossain/viswarm image.

Thursday, August 17, 2017

Docker Swarm visualizer

VISWARM

A visual representation facilitates understanding of any complex system design. Running microservices on a cluster is no exception. Since I am involved in running and operating containers in a Docker swarm cluster, I often wondered how I could stay on top of the container task distribution at any given moment, the health of the tasks, the health of the machines underneath, and so on. I found docker-swarm-visualizer on GitHub, which does a nice job of visualizing the nodes and tasks graphically. But I wanted to associate a little more information (i.e. image, ports, node's IP address, etc.) with the visualization, which could help me take follow-up actions as needed. That led me to write a swarm visualizer that fulfils my wishes.

Here’s video glimpse for inspirations.

Well, that’s one little excuse of mine. Of course writing a visualizer is always a fun activity for me. The objective of the visualizer was following:

  • Run the visualizer as a container in the swarm
  • Real-time updates when nodes are added, removed, drained, failed, etc.
  • Real-time updates when services are created, updated, or their replica counts changed
  • Real-time updates when tasks are added, removed, etc.
  • Display task images and tags to track versions (when rolling updates take place)
  • Display IP addresses and ports for nodes (helps troubleshooting)

How to run

To get the visualizer up and running in your cluster

> docker run -p 9009:9009 -v /var/run/docker.sock:/var/run/docker.sock moimhossain/viswarm

How to develop

This is what you need to do in order to start hacking on the codebase:

> npm install

It might be the case that webpack is only installed locally and you receive a "webpack command not recognized" error. In that scenario, you can either install webpack globally or run it from the local installation with the following command:

node_modules\.bin\webpack

Start the development server (changes will now update live in browser)

> npm run dev

To view your project, go to: http://localhost:9009/

Some of the wishes I am working on:

  • Stop tasks from the user interface
  • Update service definitions from user interface
  • Leave, drain a node from the user interface
  • Alert on crucial events like node drain, service down etc.

The application is written with Express 4 on Node.js, with React-Redux for the user interface. The Docker Remote API sits behind a proxy with a Node API on top, which avoids exposing the Docker API directly – especially when IPv6 is used.

If you are playing with Docker swarm clusters and reading this, do give it a go – you might like it as well. And of course, it's on GitHub, waiting for helping hands to take it to the next level.

Tuesday, July 25, 2017

Azure template to provision Docker swarm mode cluster

What is a swarm?

The cluster management and orchestration features embedded in the Docker Engine are built using SwarmKit. Docker engines participating in a cluster are running in swarm mode. You enable swarm mode for an engine by either initializing a swarm or joining an existing swarm. A swarm is a cluster of Docker engines, or nodes, where you deploy services. The Docker Engine CLI and API include commands to manage swarm nodes (e.g., add or remove nodes), and deploy and orchestrate services across the swarm.

I was recently trying to come up with a script that generates a Docker swarm cluster ready to take container workloads on Microsoft Azure. I thought Azure Container Service (ACS) would already support that; however, I figured out that's not the case. Azure doesn't support Docker swarm mode in ACS yet – at least as of today (25th July 2017) – which forced me to come up with my own RM template to do the job.

What's in it?

The RM template will provision the following resources:

  • A virtual network
  • An availability set for manager nodes
  • 3 virtual machines in the availability set created above (the numbers and names can be parameterized as per your needs)
  • A load balancer (with a public port that round-robins to the 3 VMs on port 80, and inbound NAT from ports 5000, 5001 and 5002 to SSH port 22 on the 3 machines)
  • Configuration of the 3 VMs as Docker swarm mode managers
  • A virtual machine scale set (VMSS) in the same VNET
  • 3 nodes joined as workers into the above swarm
  • A load balancer for the VMSS (with inbound NAT from a port range starting at 50000 to SSH port 22 on the VMSS instances)

The design can be visualized with the following diagram:

There's a handy PowerShell script that helps automate provisioning these resources. But you can also just click the "Deploy to Azure" button below.

Thanks!

The entire set of scripts can be found in this GitHub repo. Feel free to use them as needed!

Monday, May 8, 2017

ASP.NET 4.5 applications in Docker containers

Docker makes application deployment easier than ever before. However, most Docker articles describe how to containerize applications that run on Linux boxes. But since Microsoft released Windows containers, legacy .NET web apps (yes, we can consider .NET 4.5 apps legacy) are not left out anymore. I was playing with an ASP.NET 4.5 web application recently, trying to make a container out of it. I found the possibility exciting, not to mention that I enjoyed the process entirely. There are a few blogs I found very useful, especially this article on FluentBytes. I have, however, developed my own scripts for my application, which differ a bit from the ones in the FluentBytes article (thanks to the author): I combined a few steps in the dockerfile, which helped me get going with my own containers.
If you are reading this and trying to build a container for your ASP.NET 4.5 application, here are the steps. To explain the process, we'll assume our application is called LegacyApp – a typical ASP.NET 4.5 application.
  • We will install Docker host on windows.
  • Create a directory (i.e. C:\package) that will contain all the files needed to build the container.
  • Create the web deploy package of the project from Visual Studio. The following images hint at the steps we need to follow.
Image: Create a new publish profile

Image: Use 'Web Deploy Package' option

Important note: We should name the site in the following format, Default Web Site\LegacyApp, to avoid more configuration work.

  • Download the WebDeploy_2_10_amd64_en-US.msi from the Microsoft website
  • I ran into an ACL issue while deploying the application. Thanks to the author of the FluentBytes article, which provides one way to solve the issue by using a PowerShell file during the installation. We will create that file in the same directory; let's name it fixAcls.ps1. The content of the file can be found here
  • We will create the dockerfile in the same directory, with the content of this sample dockerfile

At this moment our package directory will look something like the following:

  • Now we will open a command prompt and navigate to the directory (i.e. C:\package) we have worked in so far.
  • Build the container
     c:/> docker build -t legacyappcontainer .
  • Run the container
    c:/> docker run -p 80:80 legacyappcontainer 
Now that the container is running, we can start browsing the application. We can't use localhost or 127.0.0.1 on the host machine to browse the application (unlike Linux containers); we need to use the machine name or the IP address in the URL. Here's what we can do:
  • We will run the inspect command to see the container IP.
     C:\> docker inspect < container id >
  • Now we will take the IP from the JSON output and use it in the URL to navigate to the application.
The complete dockerfile can be found in the GitHub repo.