Wednesday, November 19, 2014

Continuous Deployment Notes

Preface

Continuous integration and deployment are important practices for efficiently delivering products. Here are some notes I've made about setting up a continuous deployment pipeline using TFS, TeamCity, and Octopus Deploy. It's a work in progress, but I hope it will help someone else out there.

The Process

The process is pretty simple. Code is checked into source control. A build server picks up the changes. It uses a build script to create a .nupkg file containing the build artifacts. This .nupkg file is uploaded to an Octopus-hosted NuGet repository. Octopus is then used to deploy the artifacts to a target environment.

The Build Server

Here are some details on the build server. It's a Windows Server 2008 R2 SP1 machine with .NET 4.5.1, Visual Studio, and TeamCity installed. A few tools were dropped into a 'Tools' directory on the main drive: MSBuildTasks, a custom Regex task, NuGet, NUnit, and Octo.exe.

The Sample Project

I'll be using a small sample project to illustrate where things go and how they are used. The project structure, as checked into source control, is similar to the following:

./SampleProject/
 SampleProject.proj
 src/
  SampleProject.sln
  Version.cs
  SampleProject.Host/
  SampleProject.Library/
  SampleProject.Library.UnitTests/

The Project File

The sample project uses the SampleProject.proj file to define the steps necessary for building the solution. Using a project file allows us to have a (mostly) product-independent build sequence. All the major steps for creating a product artifact are codified in the project file. Be sure to check out the project file documentation on MSDN's site.
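
The file itself isn't embedded here, so here's a rough sketch of its shape. The target names, paths, and the nuspec are illustrative, not the exact contents of my file:

    <Project DefaultTargets="Package" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
      <PropertyGroup>
        <Configuration Condition="'$(Configuration)' == ''">Release</Configuration>
        <PackageVersion Condition="'$(PackageVersion)' == ''">0.0.0</PackageVersion>
      </PropertyGroup>

      <!-- Compile the solution. -->
      <Target Name="Build">
        <MSBuild Projects="src\SampleProject.sln" Properties="Configuration=$(Configuration)" />
      </Target>

      <!-- Run the unit tests with the NUnit console runner. -->
      <Target Name="Test" DependsOnTargets="Build">
        <Exec Command="C:\Tools\NUnit\nunit-console.exe src\SampleProject.Library.UnitTests\bin\$(Configuration)\SampleProject.Library.UnitTests.dll" />
      </Target>

      <!-- Package the build output into a .nupkg for Octopus. -->
      <Target Name="Package" DependsOnTargets="Test">
        <Exec Command="C:\Tools\NuGet\nuget.exe pack src\SampleProject.Host\SampleProject.Host.nuspec -Version $(PackageVersion) -OutputDirectory artifacts" />
      </Target>
    </Project>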



The real file could use some cleanup, but it's what's running now. It was originally designed to work with either Jenkins or TeamCity. It's being updated to work only with TeamCity.

Why use a file instead of setting the steps up in TeamCity? With the exception of the NUnitTeamCity add-in, the script can be used in Jenkins. That means you can pick this file up and go with whichever CI server you want. I'm hoping to post something about that later. It's also easier for me to visualize the build process in one file, versus the million option pages that make up TeamCity.


Versioning

Mike Hadlow has a pretty nifty trick for assembly versions in a solution. It uses one file to set the version information for all the artifacts in the solution. His blog post explains it. I'm a big fan of Semantic Versioning. Using the one-file trick really eases the process of maintaining changes to the version numbers.

With the one-file trick in place, a regex task can update that single file during the build, which lets the version number be driven by the TeamCity build number. A co-worker found the build task, so I'm not sure where it originally came from. This custom task is also added to the build server's Tools directory.
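
Here's a sketch of what the shared file might look like; each project adds it as a link so every assembly picks up the same numbers (the values are placeholders):

    // Version.cs - linked into every project in the solution.
    using System.Reflection;

    [assembly: AssemblyVersion("1.2.0")]
    [assembly: AssemblyFileVersion("1.2.0")]
    [assembly: AssemblyInformationalVersion("1.2.0")]

Since the custom task isn't shown here, it's worth noting that the FileUpdate task from MSBuildTasks can do the same job during the build:

    <!-- Requires the MSBuild Community Tasks targets to be imported. -->
    <FileUpdate Files="src\Version.cs"
                Regex="\d+\.\d+\.\d+"
                ReplacementText="$(PackageVersion)" />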




TeamCity

The bummer about TeamCity is the clicky-ness of the interface. There are roughly a million different links, each leading to a new page. Each page has a dozen or so things you can set. Sure, it's amazingly powerful and flexible. But, it's easy to get lost. This isn't a knock on TeamCity. I'm just easily confused.

The first thing to set in TeamCity is the build number format. This is accessible on the first page of the build configuration settings.

Note: The format of the variable changes when used in an MSBuild file. In the project file, any '.' in the variable name must be replaced with an '_'. That means 'build.number' becomes 'build_number'.
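
For example, the TeamCity build number can feed the version property from the project-file sketch above (again, the property name is illustrative):

    <PropertyGroup>
      <!-- TeamCity passes build.number; inside MSBuild it's build_number. -->
      <PackageVersion Condition="'$(build_number)' != ''">$(build_number)</PackageVersion>
    </PropertyGroup>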



Octopus

Using Octopus to deploy is a straightforward process: create an environment, add some machines, and create a release. Installing Octopus and the Tentacles on target machines is covered in the Octopus online documentation.

The first step is to create an environment and add machines to it. Next, a project is needed. Finally, a release is created to actually deploy the artifacts. Once all of that is set up, deployments proceed as normal.
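
Creating a release can also be scripted with the Octo.exe sitting in the Tools directory. Flag spellings have varied between Octo.exe versions, so treat this as a sketch rather than the exact command:

    octo.exe create-release --project=SampleProject --version=1.2.0 --deployto=Dev --server=http://octopus-server/ --apikey=API-XXXXXXXX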



Octopus is pretty flexible in terms of scripting and other custom install actions. The sample project is a Topshelf service, so installing and uninstalling it just needs a couple of custom actions around the deploy action.

The scripts can be as simple as C:\Services\SampleProject\SampleProject.Host.exe uninstall.
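
One way to wire those up is with Octopus's convention scripts, which run automatically when included in the package. This is a sketch; the install directory is an assumption from my setup:

    # PreDeploy.ps1 - stop and remove the previous version of the service, if present.
    $exe = "C:\Services\SampleProject\SampleProject.Host.exe"
    if (Test-Path $exe) {
        & $exe uninstall
    }

    # PostDeploy.ps1 - install and start the newly deployed service.
    & "C:\Services\SampleProject\SampleProject.Host.exe" install
    & "C:\Services\SampleProject\SampleProject.Host.exe" start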


Wrapping It Up

Hopefully this will help someone resolve some of the issues with setting up a CI build process.

Thursday, October 9, 2014

Toggling With Windsor

Preface

We've all had to implement new features or change the implementation behind an existing code base. An intern recently asked me how to feature toggle something using Castle.Windsor. This post will show how to use some Castle.Windsor features to toggle implementations. The example code can be found on GitHub.

Primitive Dependencies

The toggle will be an app setting in the application's config file. It will be loaded by Castle.Windsor using a custom dependency resolver. This resolver is taken from Mark Seemann's post on AppSettings.
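
Here's a sketch of that kind of resolver, assuming the toggle lives in <appSettings> as a string (Seemann's post has the fuller, convention-based version):

    using System.Configuration;
    using System.Linq;
    using Castle.Core;
    using Castle.MicroKernel;
    using Castle.MicroKernel.Context;

    // Resolves string constructor arguments from <appSettings> when the
    // dependency name matches an app setting key.
    public class AppSettingsResolver : ISubDependencyResolver
    {
        public bool CanResolve(CreationContext context, ISubDependencyResolver contextHandlerResolver,
            ComponentModel model, DependencyModel dependency)
        {
            return dependency.TargetType == typeof(string)
                   && ConfigurationManager.AppSettings.AllKeys.Contains(dependency.DependencyKey);
        }

        public object Resolve(CreationContext context, ISubDependencyResolver contextHandlerResolver,
            ComponentModel model, DependencyModel dependency)
        {
            return ConfigurationManager.AppSettings[dependency.DependencyKey];
        }
    }

It gets attached to the container with container.Kernel.Resolver.AddSubResolver(new AppSettingsResolver());.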

The Service

We'll be using a simple interface as our service definition. There will be two implementations: one represents an old implementation, the other a new one.
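
Something along these lines, with hypothetical names (the real code is in the GitHub repo):

    using System;

    public interface IPrintService
    {
        void Print();
    }

    public class OldPrintService : IPrintService
    {
        public void Print() { Console.WriteLine("Old implementation"); }
    }

    public class NewPrintService : IPrintService
    {
        public void Print() { Console.WriteLine("New implementation"); }
    }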


It's useful to note that this is a common way to achieve branching by abstraction. Calls to a concrete service are replaced with calls through an interface; the interface is the abstraction. Once the calls to the old service are routed through it, you are free to implement a new service. When ready, the new service can be substituted for the old without the consumers being aware, since they depend on the interface, not the concrete type.

Typed Factory Selector

Castle.Windsor comes with a handy little bit: the typed factory facility. It lets Castle.Windsor create a factory implementation from an interface you define, relieving you of the task of implementing the factory on your own. It is especially useful if you want to defer the creation of an object.


Our class will use this factory to get an instance of our service and call the .Print() method. By default, the factory resolves the first implementation registered in the container. This behavior can be overridden by implementing a custom selector.
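
A sketch of the factory interface and a selector that picks the component by name based on the toggle (the names are again hypothetical):

    using System.Configuration;
    using System.Reflection;
    using Castle.Facilities.TypedFactory;

    // Castle.Windsor generates an implementation of this interface at runtime.
    public interface IPrintServiceFactory
    {
        IPrintService Create();
        void Release(IPrintService service);
    }

    // Overrides the default selection: resolve by a name taken from config.
    public class ToggleComponentSelector : DefaultTypedFactoryComponentSelector
    {
        protected override string GetComponentName(MethodInfo method, object[] arguments)
        {
            return ConfigurationManager.AppSettings["Toggle"] == "New"
                ? "NewPrintService"
                : "OldPrintService";
        }
    }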


The typed factory and selector must both be registered with the container. The selector must also be specified in the configuration of the typed factory. This is done on lines 21 and 22 of the ContainerFactory class.
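
Since that snippet isn't embedded here, the registration looks roughly like this sketch (the example code registers the selector as a component; passing an instance, as below, is the simplest form):

    using Castle.Facilities.TypedFactory;
    using Castle.MicroKernel.Registration;
    using Castle.Windsor;

    var container = new WindsorContainer();
    container.AddFacility<TypedFactoryFacility>();
    container.Register(
        Component.For<IPrintService>().ImplementedBy<OldPrintService>().Named("OldPrintService"),
        Component.For<IPrintService>().ImplementedBy<NewPrintService>().Named("NewPrintService"),
        Component.For<IPrintServiceFactory>()
                 .AsFactory(config => config.SelectedWith(new ToggleComponentSelector())));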


Using IHandlerSelector

Mike Hadlow provides a very good example of using a custom IHandlerSelector.

We can use a similar technique to pull in a config value and supply the appropriate implementation at run time. The custom IHandlerSelector uses the config value to select the appropriate handler; if no matching handler is found, it throws an exception. The selected handler is then returned.
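
A sketch of such a selector; the toggle key and type names are carried over from the earlier sketches:

    using System;
    using System.Configuration;
    using System.Linq;
    using Castle.MicroKernel;

    // Chooses between the registered IPrintService handlers based on the config toggle.
    public class ToggleHandlerSelector : IHandlerSelector
    {
        public bool HasOpinionAbout(string key, Type service)
        {
            return service == typeof(IPrintService);
        }

        public IHandler SelectHandler(string key, Type service, IHandler[] handlers)
        {
            var name = ConfigurationManager.AppSettings["Toggle"] == "New"
                ? "NewPrintService"
                : "OldPrintService";

            var handler = handlers.FirstOrDefault(h => h.ComponentModel.Name == name);
            if (handler == null)
            {
                throw new InvalidOperationException("No handler found for " + name);
            }
            return handler;
        }
    }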


This handler selector must be registered and added to the container's kernel. Line 19 of the ContainerFactory class shows the registration. Line 26 shows the selector being added to the kernel. While there is only one selector in this example, the snippet shows how to add more than one.
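
Roughly, the kernel wiring is a one-liner per selector:

    // Additional selectors can be added with further AddHandlerSelector calls.
    container.Kernel.AddHandlerSelector(new ToggleHandlerSelector());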

Running the Console

Changing the config value and running the console app shows that the selectors are functioning correctly.



Wrapping It Up

Feature toggles and branching by abstraction are powerful ways to control whether new code is being used in production. They provide a way to replace old behavior with new, while maintaining the integrity of the product's build. Hopefully these two examples will help you integrate feature toggling into your builds.

Tuesday, July 22, 2014

Basics: Removing the 'if' (Using Polymorphism)

Preface

Performing transformations from one object type to another is a very common task in programming. It might be publishing an event based on a command, or mapping an internal DTO to an externally known one. It's pretty common to see the if or switch keywords used to determine the code flow. I thought I'd take a minute to show how we can go from a typical implementation using if statements to one that uses Linq and AutoMapper to reduce the coupling in the implementation.

The example code uses AutoMapper and Linq.

The Interface

For this example, we'll have three different publishers. Each will implement a common interface: IPublisher. The implementations will be responsible for accepting a Command object and publishing the associated Event object. We'll be using two commands and events: Start -> Started, Stop -> Stopped.

The Publish method on the IPublisher interface is intentionally not using a generic declaration.
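
Here's a sketch of the contract and the message types; the names are mine, not necessarily the repo's:

    using System;

    public interface IPublisher
    {
        // Intentionally non-generic: callers hand over any command.
        void Publish(Command command);
    }

    public abstract class Command { public Guid Id { get; set; } }
    public class Start : Command { }
    public class Stop : Command { }

    public abstract class Event { public Guid Id { get; set; } }
    public class Started : Event { }
    public class Stopped : Event { }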

Overloading

The first Publisher accepts a command. It checks the type of the command received, and calls the appropriate overload. Each overloaded method creates the appropriate event, and publishes it.
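
A sketch of that first version, building on the types above:

    using System;

    public class OverloadingPublisher : IPublisher
    {
        public void Publish(Command command)
        {
            // The 'if' we'd like to get rid of.
            if (command is Start)
            {
                Publish((Start)command);
            }
            else if (command is Stop)
            {
                Publish((Stop)command);
            }
        }

        private void Publish(Start command)
        {
            PublishEvent(new Started { Id = command.Id });
        }

        private void Publish(Stop command)
        {
            PublishEvent(new Stopped { Id = command.Id });
        }

        private void PublishEvent(Event @event)
        {
            // Stands in for the real event sink.
            Console.WriteLine("Published " + @event.GetType().Name);
        }
    }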


This works, but it has a few problems: it uses an if to determine which type to publish, and it manually maps the inbound command to the outbound event. That means this class is responsible both for determining what kind of event to publish and for creating that event.

Adding AutoMapper

AutoMapper removes the responsibility of creating the event from the publisher class. AutoMapper Profiles could be used for more complex associations, but the DynamicMap method works just fine here. The publisher is left with just one job: sorting out the type of event to be published.
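
The overloads shrink down to mapping calls. This sketch uses the static DynamicMap API that AutoMapper had at the time of writing:

    // These replace the private overloads in the publisher above
    // (requires 'using AutoMapper;').
    private void Publish(Start command)
    {
        PublishEvent(Mapper.DynamicMap<Started>(command));
    }

    private void Publish(Stop command)
    {
        PublishEvent(Mapper.DynamicMap<Stopped>(command));
    }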

It still has the problem of using the if statement to determine the type of command received (and the event to be published).

Removing the 'if'

Introducing a map from commands to events and a Linq query allows us to remove the if statements. The class is still responsible for selecting the appropriate action, but the concept of associating commands to events is distilled into the dictionary. This leaves the class' methods to simply select the appropriate action and execute it.

The Dictionary was left inside the publisher class to keep everything in one class. It wouldn't take much to move the mappings out of the class, which would further reduce the coupling on the Publisher. More complex mappings could be introduced by swapping the Dictionary out for a custom type.
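
Here's a sketch of that final shape, using a Type-to-Type dictionary and the non-generic DynamicMap overload:

    using System;
    using System.Collections.Generic;
    using System.Linq;
    using AutoMapper;

    public class MappedPublisher : IPublisher
    {
        // The command-to-event associations, distilled into data.
        private static readonly Dictionary<Type, Type> EventMap = new Dictionary<Type, Type>
        {
            { typeof(Start), typeof(Started) },
            { typeof(Stop), typeof(Stopped) }
        };

        public void Publish(Command command)
        {
            // Select the event type associated with this command; no 'if' required.
            var eventType = EventMap
                .Where(pair => pair.Key == command.GetType())
                .Select(pair => pair.Value)
                .Single();

            var @event = (Event)Mapper.DynamicMap(command, command.GetType(), eventType);
            PublishEvent(@event);
        }

        private void PublishEvent(Event @event)
        {
            Console.WriteLine("Published " + @event.GetType().Name);
        }
    }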

Wrapping It Up

This was a quick demonstration of removing two concerns from a class. The manual mapping of one class to another was removed by introducing AutoMapper; the if statement was removed by introducing a map between the two types. I hope this helped describe a different way of building classes with reduced responsibilities.

Thursday, February 6, 2014

RabbitMQ Federated Queues

Preface

RabbitMQ added support for federated queues in v3.2.0. This feature gives a simple way to move messages from one Rabbit cluster to another. I'll show you one way to set this up. The sample code can be found on GitHub. I'm using EasyNetQ to handle the publishing and subscription. It's a very nice RabbitMQ client library. Check it out.

The Clusters

Note: The virtual host names are not the same on the two clusters. Broker A is using the virtual host FederationDemo. Broker B is using the virtual host FederatedStuff.

The RabbitMQ documentation covers the federation plug-in. In our scenario, there are two clusters, each an upstream for the other. The max-hops setting is 1, to prevent messages from circling back to the publisher. Below are pictures of the upstreams defined on each of the clusters.

Shows the upstream definition on Broker A.
Broker A Upstream

Shows the upstream definition on Broker B.
Broker B Upstream


Each broker will need to have a policy defined. The broker uses this policy to figure out which things are federated from the upstream cluster. The policies are very similar for both clusters. Below are pictures of the policies:

Federation Policy on Broker A

Federation Policy on Broker B
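
If you'd rather script the setup than click through the management UI, the equivalent can be sketched with rabbitmqctl. The names, URI, and pattern below are placeholders, not my actual values:

    rabbitmqctl -p FederationDemo set_parameter federation-upstream broker-b '{"uri":"amqp://broker-b","max-hops":1}'
    rabbitmqctl -p FederationDemo set_policy --apply-to queues federate-queues "." '{"federation-upstream-set":"all"}'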


The Clients

This example will use two console applications, one connected to each cluster. Each will subscribe to two messages and publish two messages, so the messages ping-pong between the two clients. The first client, FooConsole, publishes Start and Stop. The second client, BarConsole, responds with Started and Stopped. The full sequence is Start, Started, Stop, Stopped.

I've put the publishing and subscriptions into one class so we can see everything that's going on. Both classes, Foo and Bar, are very similar. Here's a look at the Foo class:
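
The gist isn't embedded here, so this is a reconstruction of its shape against EasyNetQ's Subscribe/Publish API; the subscription id and handler names are assumptions:

    using System;
    using EasyNetQ;

    public class Foo
    {
        private readonly IBus _bus;

        public Foo(IBus bus)
        {
            _bus = bus;
        }

        public void Run()
        {
            // Listen for Bar's replies...
            _bus.Subscribe<Started>("foo", OnStarted);
            _bus.Subscribe<Stopped>("foo", OnStopped);

            // ...and kick things off.
            _bus.Publish(new Start());
        }

        private void OnStarted(Started message)
        {
            Console.WriteLine("Received Started");
            _bus.Publish(new Stop());
        }

        private void OnStopped(Stopped message)
        {
            Console.WriteLine("Received Stopped");
        }
    }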


Foo subscribes to two messages: Started and Stopped. It then publishes a Start message to get the ball rolling. On the other end, Bar subscribes to Start and Stop messages. It responds to each message with one of its own messages. A Start from Foo causes Bar to send Started. A Stop from Foo causes Bar to send a Stopped message.

Wiring Two Joints

When an app uses EasyNetQ to subscribe, EasyNetQ creates the queues and exchanges for us. That doesn't carry across a federation: the queues will be federated, but no bindings are created on the upstream cluster. The pictures below show the downstream and upstream clusters after a client subscribes to a message.

Upstream (publisher) Federated Queues

Downstream (subscriber) Federated Queues


The last little bit is to bind the exchange to the queue on the upstream cluster. This allows the messages to flow from the upstream publisher to the downstream subscriber.

Federated Queue Without Binding
Publishing Exchange Bound To Queue
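
I made the binding through the management UI, as pictured. It could also be sketched with rabbitmqadmin; the exchange and queue names below are placeholders for the ones EasyNetQ generates from the message types:

    rabbitmqadmin -V FederationDemo declare binding source="Messages.Start:Messages" destination="Messages.Start:Messages_bar" destination_type="queue" routing_key="#"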


With everything put together, it's now possible to show the two console apps passing messages across the federation. Note: I've disabled EasyNetQ's default debug logging by using a null logger.
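
The logger swap looked something like this. The registration hook has moved around between EasyNetQ versions, so treat this as a sketch; if your version doesn't ship a NullLogger, a do-nothing IEasyNetQLogger is only a few lines:

    // Replace EasyNetQ's default console logger with a no-op implementation.
    var bus = RabbitHutch.CreateBus(
        "host=localhost;virtualHost=FederationDemo",
        register => register.Register<IEasyNetQLogger>(provider => new NullLogger()));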

The Console Display


Wrapping It Up

The addition of federated queues to RabbitMQ really simplifies transporting messages between clusters. Hopefully this helps show how you can get two clients to communicate across a federation.