Wednesday, November 22, 2017

CQRS and ES on Azure Table Storage

Lately I have been playing with Event Sourcing and the Command Query Responsibility Segregation (aka CQRS) pattern on Azure Table Storage, and I thought of creating a lightweight library that facilitates writing such applications. I ended up with a NuGet package; here is the GitHub repository.

A lightweight CQRS-supporting library with an Event Store based on Azure Table Storage.

Quick start guide

Install

Install the SuperNova.Storage NuGet package into the project.

Install-Package SuperNova.Storage -Version 1.0.0
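
Alternatively, with the .NET Core CLI:

dotnet add package SuperNova.Storage --version 1.0.0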

The dependencies of the package are:

  • .NETCoreApp 2.0
  • Microsoft.Azure.DocumentDB.Core (>= 1.7.1)
  • Microsoft.Extensions.Logging.Debug (>= 2.0.0)
  • SuperNova.Shared (>= 1.0.0)
  • WindowsAzure.Storage (>= 8.5.0)

Implementation guide

Write Side - Event Sourcing

Once the package is installed, we can start sourcing events in an application. For example, let's start with the canonical example of a UsersController in a Web API project.

We can use dependency injection to make the EventStore available in our controller.

Here's an example where we register an instance of the Event Store with the DI framework in our Startup.cs:

// Config object encapsulates the table storage connection string
services.AddSingleton<IEventStore>(new EventStore( ... provide config )); 

Now the controller:

[Produces("application/json")]
[Route("users")]
public class UsersController : Controller
{
    private readonly IEventStore eventStore;

    public UsersController(IEventStore eventStore)
    {
        this.eventStore = eventStore; // Capture the injected event store handle
    }   
    
    ... other methods skipped here
}

Aggregate

Implementing event sourcing becomes much easier when it is combined with Domain Driven Design (aka DDD). We will assume familiarity with DDD concepts (especially Aggregate Roots).

An aggregate is our consistency boundary (read: transactional boundary) in Event Sourcing. (Technically, aggregate IDs are our partition keys on the Event Store table - therefore, we can only apply atomic operations at the level of a single aggregate root.)
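
To make that concrete, here is a minimal sketch of how an event row could be laid out on the underlying table. This is an illustration of the partitioning scheme only - not the library's actual entity class:

using System;
using Microsoft.WindowsAzure.Storage.Table;

// Hypothetical entity shape - it illustrates why the aggregate is the
// transactional boundary: all events of one aggregate share a partition.
public class EventEntity : TableEntity
{
    public EventEntity() { } // parameterless constructor required by the table SDK

    public EventEntity(Guid aggregateId, long version)
    {
        PartitionKey = aggregateId.ToString("N"); // aggregate ID = partition key
        RowKey = version.ToString("d19");         // version = row key, zero-padded so rows sort in order
    }

    public string EventType { get; set; } // CLR type name of the event
    public string Payload { get; set; }   // JSON-serialized event body
}

Azure Table Storage only supports entity-group transactions within a single partition, so appending a batch of events is atomic per aggregate - exactly the guarantee we need here.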

Let's create an Aggregate for our User domain entity:

using SuperNova.Shared.Messaging.Events.Users;
using SuperNova.Shared.Supports;

public class UserAggregate : AggregateRoot
{
    private string _userName;
    private string _emailAddress;
    private Guid _userId;
    private bool _blocked;

    ... other code skipped here
}

Once we have the aggregate class written, we should come up with the events that are relevant to this aggregate; Event Storming is a good way to discover them.

Here are the event handlers (the Apply methods) for the events we will use in our example scenario:

public class UserAggregate : AggregateRoot
{

    ... other code skipped here

    #region Apply events
    private void Apply(UserRegistered e)
    {
        this._userId = e.AggregateId;
        this._userName = e.UserName;
        this._emailAddress = e.Email;            
    }

    private void Apply(UserBlocked e)
    {
        this._blocked = true;
    }

    private void Apply(UserNameChanged e)
    {
        this._userName = e.NewName;
    }
    #endregion

    ... other code skipped here
}
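
The event classes themselves are omitted in the snippet above; they are plain DTOs. A minimal sketch could look like the following - assuming a common base class that carries the aggregate ID and event version (the actual classes live in SuperNova.Shared):

using System;

// Hypothetical base class - the real one ships with SuperNova.Shared.
public abstract class DomainEvent
{
    public Guid AggregateId { get; set; }
    public long Version { get; set; }
}

public class UserRegistered : DomainEvent
{
    public string UserName { get; set; }
    public string Email { get; set; }
}

public class UserBlocked : DomainEvent { }

public class UserNameChanged : DomainEvent
{
    public string NewName { get; set; }
}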

Now that we have our business events defined, we will define our commands for the aggregate:

public class UserAggregate : AggregateRoot
{
    #region Accept commands
    public void RegisterNew(string userName, string emailAddress)
    {
        Ensure.ArgumentNotNullOrWhiteSpace(userName, nameof(userName));
        Ensure.ArgumentNotNullOrWhiteSpace(emailAddress, nameof(emailAddress));

        ApplyChange(new UserRegistered
        {
            AggregateId = Guid.NewGuid(),
            Email = emailAddress,
            UserName = userName                
        });
    }

    public void BlockUser(Guid userId)
    {            
        ApplyChange(new UserBlocked
        {
            AggregateId = userId
        });
    }

    public void RenameUser(Guid userId, string name)
    {
        Ensure.ArgumentNotNullOrWhiteSpace(name, nameof(name));

        ApplyChange(new UserNameChanged
        {
            AggregateId = userId,
            NewName = name
        });
    }
    #endregion


    ... other code skipped here
}
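
The ApplyChange method used by the command handlers comes from the AggregateRoot base class. Following Greg Young's canonical model, it dispatches the event to the matching private Apply method and records it as an uncommitted change. A simplified sketch - not the library's exact implementation - could look like this:

using System;
using System.Collections.Generic;
using System.Reflection;

public abstract class AggregateRoot
{
    private readonly List<DomainEvent> _changes = new List<DomainEvent>();

    // Events raised since the aggregate was loaded; the event store persists these.
    public IEnumerable<DomainEvent> GetUncommittedChanges() => _changes;

    // Rebuild state by replaying historical events (nothing gets re-recorded).
    public void LoadFromHistory(IEnumerable<DomainEvent> history)
    {
        foreach (var e in history) ApplyChange(e, isNew: false);
    }

    protected void ApplyChange(DomainEvent @event) => ApplyChange(@event, isNew: true);

    private void ApplyChange(DomainEvent @event, bool isNew)
    {
        // Dispatch to the private Apply(ConcreteEventType) overload via reflection.
        GetType().GetMethod("Apply",
                BindingFlags.Instance | BindingFlags.NonPublic,
                null, new[] { @event.GetType() }, null)
            ?.Invoke(this, new object[] { @event });

        if (isNew) _changes.Add(@event);
    }
}

This is also why the Apply methods contain no validation: they must succeed unconditionally when the aggregate is rehydrated from history, while the public command methods guard the invariants.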

So far so good!

Now we will modify the Web API controller to send the correct commands to the aggregate.

public class UserPayload 
{  
    public string UserName { get; set; } 
    public string Email { get; set; } 
}

// POST: User
[HttpPost]
public async Task<JsonResult> Post(Guid projectId, [FromBody]UserPayload user)
{
    Ensure.ArgumentNotNull(user, nameof(user));

    var userId = Guid.NewGuid();    

    await eventStore.ExecuteNewAsync(
        Tenant, "user_event_stream", userId, async () => {

        var aggregate = new UserAggregate();

        aggregate.RegisterNew(user.UserName, user.Email);

        return await Task.FromResult(aggregate);
    });

    return new JsonResult(new { id = userId });
}

And another API to modify existing users in the system:

//PUT: User
[HttpPut("{userId}")]
public async Task<JsonResult> Put(Guid projectId, Guid userId, [FromBody]string name)
{
    Ensure.ArgumentNotNullOrWhiteSpace(name, nameof(name));

    await eventStore.ExecuteEditAsync<UserAggregate>(
        Tenant, "user_event_stream", userId,
        async (aggregate) =>
        {
            aggregate.RenameUser(userId, name);

            await Task.CompletedTask;
        }).ConfigureAwait(false);

    return new JsonResult(new { id = userId });
}

That's it! We have our WRITE side completed. The event store now contains the events for the user event stream.

Image: EventStore table

Read Side - Materialized Views

We can consume the events in a separate console worker process and generate the materialized views for the READ side.

The readers (the console application - an Azure Web Worker, for instance) act like feed processors and have their own lease collection, which makes them fault tolerant and resilient: if a reader crashes, it catches up from the last event version that was materialized successfully. The reader polls the event stream on purpose - instead of going through a message broker (Service Bus, for instance) - to speed things up and avoid latency during event propagation. Scalability is ensured by dedicating a lease per tenant and per event stream.
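
Conceptually, each reader runs a poll-and-checkpoint loop like the one sketched below (reusing the DomainEvent sketch from the write side). The delegate names and signatures here are illustrative only - the real EventStreamConsumer shown in the next section hides all of these details:

using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

public static class PollingSketch
{
    public static async Task RunAsync(
        Func<Task<long>> readCheckpoint,                        // last materialized version, from the lease
        Func<long, Task<IReadOnlyList<DomainEvent>>> readAfter, // events newer than the checkpoint
        Func<DomainEvent, Task> project,                        // update the materialized view
        Func<long, Task> saveCheckpoint,                        // advance the lease
        CancellationToken cancellation)
    {
        while (!cancellation.IsCancellationRequested)
        {
            var events = await readAfter(await readCheckpoint());
            foreach (var evt in events)
            {
                await project(evt);                // must be idempotent (see below)
                await saveCheckpoint(evt.Version); // crash-safe: on restart we resume from here
            }
            await Task.Delay(TimeSpan.FromSeconds(1), cancellation); // poll interval
        }
    }
}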

How to listen for events?

In a worker application (typically a console application) we will listen for events:

private static async Task Run()
{
    var eventConsumer = new EventStreamConsumer(        
        ... skipped for simplicity
        "user-event-stream", 
        "user-event-stream-lease");
    
    await eventConsumer.RunAndBlock((evts) =>
    {
        foreach (var evt in evts)
        {
            if (evt is UserRegistered userAddedEvent)
            {
                readModel.AddUserAsync(new UserDto
                {
                    UserId = userAddedEvent.AggregateId,
                    Name = userAddedEvent.UserName,
                    Email = userAddedEvent.Email
                }, evt.Version);
            }

            else if (evt is UserNameChanged userChangedEvent)
            {
                readModel.UpdateUserAsync(new UserDto
                {
                    UserId = userChangedEvent.AggregateId,
                    Name = userChangedEvent.NewName
                }, evt.Version);
            }
        }

    }, CancellationToken.None);
}

static void Main(string[] args)
{
    Run().Wait();
}

Now we have a document collection (we are using Azure Cosmos DB (DocumentDB) in this example for materialization, but it could essentially be any database) that is updated as we store events in the event stream.
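
Because the consumer polls and may replay events after a crash, read-model methods such as AddUserAsync and UpdateUserAsync should be idempotent. A common approach - sketched below with hypothetical helper methods, not the library's API - is to store the event version on the document and ignore anything that is not newer:

// Version-guarded (idempotent) read-model update.
// FindUserDocumentAsync and UpsertUserDocumentAsync are hypothetical helpers.
public async Task UpdateUserAsync(UserDto user, long eventVersion)
{
    var existing = await FindUserDocumentAsync(user.UserId);

    // Skip stale or already-applied events, so replays are harmless.
    if (existing != null && existing.Version >= eventVersion)
        return;

    user.Version = eventVersion;
    await UpsertUserDocumentAsync(user); // upsert into the Cosmos DB collection
}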

Conclusion

The library is very lightweight and heavily influenced by Greg Young's event store and aggregate model. Feel free to use it and contribute.

Thank you!

Monday, August 28, 2017

3D Docker Swarm visualizer with ThreeJS

A few days ago I wrote about creating a Docker Swarm visualizer here. I enjoyed writing that piece of software very much. However, I wanted a more fun way to look at the cluster. Therefore, last weekend I thought of creating a 3D viewer of the cluster that might run on a large screen, showing the machines and their workloads in real time.

After hacking a little bit, I ended up creating a 3D perspective of the Docker swarm cluster that shows the nodes (workers in green, managers in blue and drained nodes in red) and microservice containers as small blue cubes, all in real time, leveraging the ThreeJS library. It's just an added fun view on top of the existing visualizer. Hope you will like it!

You need to rebuild the Docker image from the dockerfile in order to have the 3D view enabled. I didn't want to include this in the moimhossain/viswarm image.

Thursday, August 17, 2017

Docker Swarm visualizer

VISWARM

A visual representation facilitates understanding of any complex system design, and running microservices on a cluster is no exception. Since I am involved in running and operating containers in a Docker swarm cluster, I often wonder how I could stay better on top of the container task distribution at any given moment, the health of the tasks, the health of the machines underneath, etc. I found docker-swarm-visualizer on GitHub - it does a nice job of visualizing the nodes and tasks graphically. But I was striving to associate a little more information (i.e. image, ports, node's IP address, etc.) with the visualization, which could help me take follow-up actions as needed. That led me to write a swarm visualizer that fulfils my wishes.

Here's a video glimpse for inspiration.

Well, that's one little excuse of mine. Of course, writing a visualizer is always a fun activity for me. The objectives of the visualizer were the following:

  • Run the visualizer as a container into the swarm
  • Real time updates when nodes added, removed, drained, failed etc.
  • Real time updates when services created, updated, replications changed.
  • Real time updates when task added, removed etc.
  • Display task images and tags to track versions (when rolling updates take place)
  • Display IP addresses and ports for nodes (helps troubleshooting)

How to run

To get the visualizer up and running in your cluster

> docker run -p 9009:9009 -v /var/run/docker.sock:/var/run/docker.sock moimhossain/viswarm

How to develop

This is what you need to do in order to start hacking the codebase

> npm install

It might be the case that you have installed webpack locally and received a 'webpack' command not recognized error. In such scenarios, you can either install webpack globally or run it from the local installation by using the following command:

node_modules\.bin\webpack

Start the development server (changes will now update live in browser)

> npm run dev

To view your project, go to: http://localhost:9009/

Some of the wishes I am working on:

  • Stop tasks from the user interface
  • Update service definitions from user interface
  • Leave, drain a node from the user interface
  • Alert on crucial events like node drain, service down etc.

The application is written with Express 4 on Node.js, with React-Redux for the user interface. The Docker Remote APIs sit behind a proxy with a Node API on top, which avoids exposing the Docker API directly - a concern when IPv6 is used.

If you are playing with Docker swarm clusters and reading this, I insist: give it a go, you might like it as well. And of course, it's on GitHub, waiting for helping hands to take it to the next level.

Tuesday, July 25, 2017

Azure template to provision Docker swarm mode cluster

What is a swarm?

The cluster management and orchestration features embedded in the Docker Engine are built using SwarmKit. Docker engines participating in a cluster are running in swarm mode. You enable swarm mode for an engine by either initializing a swarm or joining an existing swarm. A swarm is a cluster of Docker engines, or nodes, where you deploy services. The Docker Engine CLI and API include commands to manage swarm nodes (e.g., add or remove nodes), and deploy and orchestrate services across the swarm.
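
For context, enabling swarm mode on a set of Docker hosts boils down to commands like the following (the template described below automates these steps on the VMs):

> docker swarm init --advertise-addr <manager-ip>
> docker swarm join --token <worker-token> <manager-ip>:2377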

I was recently trying to come up with a script that provisions a Docker swarm cluster - ready to take container workloads on Microsoft Azure. I thought Azure Container Service (ACS) would already support that; however, I figured that's not the case. Azure doesn't support Docker swarm mode in ACS yet - at least as of today (25th July 2017) - which forced me to come up with my own RM template that does the job.

What's in it?

The RM template will provision the following resources:

  • A virtual network
  • An availability set for manager nodes
  • 3 virtual machines in the availability set created above (the numbers and names can be parameterized as per your needs)
  • A load balancer (with a public port that round-robins to the 3 VMs on port 80, and allows inbound NAT to the 3 machines via ports 5000, 5001 and 5002 to SSH port 22)
  • Configuration of the 3 VMs as Docker swarm mode managers
  • A virtual machine scale set (VMSS) in the same VNET
  • 3 nodes that are joined as workers into the above swarm
  • A load balancer for the VMSS (that allows inbound NAT rules starting from port 50000 to SSH port 22 on the VMSS)

The design can be visualized with the following diagram:

There's a handy PowerShell script that can help automate provisioning these resources. But you can also just click the "Deploy to Azure" button below.

Thanks!

The entire set of scripts can be found in this GitHub repo. Feel free to use them as needed!

Monday, May 8, 2017

ASP.net 4.5 applications on Docker Container

Docker makes application deployment easier than ever before. However, most Docker articles are written about containerizing applications that run on Linux boxes. But since Microsoft released Windows containers, legacy .NET 4.5 web apps (yes, we can consider .NET 4.5 apps legacy) are not left out anymore. I was playing with an ASP.net 4.5 web application recently, trying to make a container out of it. I found the possibility exciting, not to mention that I enjoyed the process entirely. There are a few blogs that I found very useful, especially this article on FluentBytes. I have, however, developed my own scripts for my own application, which is not very different from the one on FluentBytes (thanks to the author), but I have combined a few steps into the dockerfile - that helped me get going with my own containers.
If you are reading this and trying to build a container for your ASP.net 4.5 application, here are the steps. To explain the process, we'll assume our application is called LegacyApp - a typical ASP.net 4.5 application.
  • We will install Docker host on windows.
  • Create a directory (i.e. C:\package) that will contain all the files needed to build the container.
  • Create the web deploy package of the project from Visual Studio. The following images hint at the steps we need to follow.
Image: Create a new publish profile

Image: Use 'Web Deploy Package' option

Important note: We should name the site in the format Default Web Site\LegacyApp to avoid additional configuration work.

  • Download the WebDeploy_2_10_amd64_en-US.msi from the Microsoft web site.
  • I ran into an ACL issue while deploying the application. Thanks to the author of the FluentBytes article, who provides one way to solve the issue by using a PowerShell file during the installation. We will create that file in the same directory; let's name it fixAcls.ps1. The content of the file can be found here.
  • We will create the dockerfile in the same directory, with the content of this sample dockerfile.

At this point, our package directory will look something like the following:

  • Now we will open a command prompt and navigate to the directory (i.e. C:\package) we have worked in so far.
  • Build the container
     c:/> docker build -t legacyappcontainer .
  • Run the container
    c:/> docker run -p 80:80 legacyappcontainer 
Now that the container is running, we can start browsing the application. Unlike with Linux containers, we can't use localhost or 127.0.0.1 on the host machine to browse the application; we need to use the machine name or the IP address in the URL. Here's what we can do:
  • We will run the inspect command to see the container's IP address.
     C:\> docker inspect <container id>
  • Now we will take the IP address from the JSON output and use it in the URL to navigate to the application.
The complete dockerfile can be found in the GitHub repo.