Wednesday, November 19, 2014

Continuous Deployment Notes

Preface

Continuous integration and deployment are important concepts in efficiently delivering products. Here are some notes I've made about setting up a continuous deployment pipeline using TFS, TeamCity, and Octopus Deploy. It's a work in progress, but I hope it will help someone else out there.

The Process

The process is pretty simple. Code is checked into source control. A build server picks up the changes. It uses a build script to create a .nupkg file containing the build artifacts. This .nupkg file is uploaded to an Octopus-hosted NuGet repository. Octopus is then used to deploy the artifacts to a target environment.

The Build Server

Here are some details on the build server. It's a Windows Server 2008 R2 SP1 machine with .NET 4.5.1, Visual Studio, and TeamCity installed. A few things were dropped into a 'Tools' directory on the main drive: MsBuildTasks, a custom regex task, NuGet, NUnit, and Octo.exe.

The Sample Project

I'll be using a small sample project to illustrate where things go and how they are used. The project structure, as it is checked into source control, is similar to the following:

./SampleProject/
 SampleProject.proj
 src/
  SampleProject.sln
  Version.cs
  SampleProject.Host/
  SampleProject.Library/
  SampleProject.Library.UnitTests/

The Project File

The sample project uses the SampleProject.proj file to define the steps necessary for building the solution. Using a project file allows us to have a (mostly) product-independent build sequence. All the major steps for creating a product artifact are codified in the project file. Be sure to check out the MSBuild project file documentation on MSDN.

<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" DefaultTargets="default" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)'=='' ">Release</Configuration>
    <ProjName Condition=" '$(ProjName)'=='' ">$(MSBuildProjectName)</ProjName>
    <OutDir Condition=" '$(OutDir)'=='' ">$(MSBuildThisFileDirectory)bin\</OutDir>
    <PackDir Condition=" '$(PackDir)'=='' ">$(MSBuildThisFileDirectory)pkg\</PackDir>
    <SourceHome Condition=" '$(SourceHome)'=='' ">$(MSBuildThisFileDirectory)src\</SourceHome>
    <ToolsHome Condition=" '$(ToolsHome)'=='' ">c:\tools\</ToolsHome>
    <PackageVersion>0</PackageVersion>
    <TPath>C:\Program Files\MSBuild\ExtensionPack\4.0\MSBuild.ExtensionPack.tasks</TPath>
  </PropertyGroup>
  <Import Project='$(TPath)' />
  <Import Project='$(ToolsHome)MsBuildTasks\RegexTransformTask.tasks' />
  <UsingTask TaskName="NUnitTeamCity" AssemblyFile="$(teamcity_dotnet_nunitlauncher_msbuild_task)" />
  <Target Name='default' DependsOnTargets='UpdateAssemblyVersion; ReadAssemblyVersion; Clean; RestorePackages; Build; RunTests; Package; Publish' />
  <ItemGroup>
    <Solution Include="$(SourceHome)*.sln">
      <AdditionalProperties>Configuration=$(Configuration)</AdditionalProperties>
    </Solution>
  </ItemGroup>
  <!-- Stamp the TeamCity build number into the revision segment of Version.cs -->
  <ItemGroup>
    <RegexTransform Include="$(SourceHome)Version.cs">
      <Find>(\[assembly: AssemblyVersion\(")([^\n]*)\.([^\n]*)\.([^\n]*)\.([^\n]*)("\)\])</Find>
      <ReplaceWith>$1$2.$3.$4.$(build_number)$6</ReplaceWith>
    </RegexTransform>
  </ItemGroup>
  <Target Name="UpdateAssemblyVersion">
    <RegexTransform Items="@(RegexTransform)" />
  </Target>
  <!-- Read the stamped version back out of Version.cs for use as the package version -->
  <Target Name="ReadAssemblyVersion">
    <ReadLinesFromFile File="$(SourceHome)Version.cs">
      <Output TaskParameter="Lines" ItemName="ItemsFromFile"/>
    </ReadLinesFromFile>
    <PropertyGroup>
      <Pattern>\[assembly: AssemblyVersion\("([^\n]*)\.([^\n]*)\.([^\n]*)\.([^\n]*)"\)\]</Pattern>
      <In>@(ItemsFromFile)</In>
      <MatchedExpression>$([System.Text.RegularExpressions.Regex]::Match($(In), $(Pattern)))</MatchedExpression>
      <MatchedExpressionCleanup>$(MatchedExpression.Remove(0, 28))</MatchedExpressionCleanup>
      <Subtract>$([MSBuild]::Subtract($(MatchedExpressionCleanup.Length),3))</Subtract>
      <PackageVersion>$(MatchedExpressionCleanup.Substring(0, $(Subtract)))</PackageVersion>
    </PropertyGroup>
  </Target>
  <Target Name='Clean'>
    <MSBuild Targets='Clean' Projects='@(Solution)' />
  </Target>
  <Target Name='RestorePackages'>
    <Exec Command='"$(ToolsHome)NuGet\NuGet.exe" restore "%(Solution.Identity)" -source "OPTIONAL_NUGET_SERVER_SOURCE"' />
  </Target>
  <Target Name='Build'>
    <MSBuild Targets='Build' Projects='@(Solution)' />
  </Target>
  <Target Name='RunTests'>
    <ItemGroup>
      <TestAssemblies Include="$(SourceHome)**\bin\**\*Tests.dll"/>
      <TestAssemblies Include="$(SourceHome)**\bin\**\*tests.dll"/>
    </ItemGroup>
    <NUnitTeamCity Assemblies='@(TestAssemblies)' NUnitVersion='NUnit-2.6.3' ExcludeCategory='Database' />
  </Target>
  <Target Name='Package'>
    <ItemGroup>
      <MainBinaries Include='$(SourceHome)SampleProject.Host\bin\$(Configuration)\*.*' />
    </ItemGroup>
    <MakeDir Directories='$(OutDir)' />
    <Folder TaskAction='RemoveContent' Path='$(OutDir)' />
    <Copy SourceFiles='@(MainBinaries)' DestinationFolder='$(OutDir)' />
    <MakeDir Directories='$(PackDir)' />
    <Folder TaskAction='RemoveContent' Path='$(PackDir)' />
    <!-- The space before each closing quote keeps the trailing backslash from escaping it -->
    <Exec Command='"$(ToolsHome)Octopus\Octo.exe" pack --id="SampleProject.Library" --version="$(PackageVersion)" --basePath="$(OutDir) " --outFolder="$(PackDir) "' />
  </Target>
  <Target Name='Publish'>
    <Exec Command='"$(ToolsHome)NuGet\NuGet.exe" push "$(PackDir)*.nupkg" YOUR_API_HERE -s "YOUR_NUGET_SERVER_ADDY"' />
  </Target>
</Project>


Yes, it could use some cleanup, but this is what's running now. It was originally designed to work with either Jenkins or TeamCity. It's being updated to work only with TeamCity.

Why use a file instead of setting the steps up in TeamCity? With the exception of the NUnitTeamCity add-in, the script can be used in Jenkins. That means you can pick this file up and go with whatever CI server you want. I'm hoping to post something about that later. It's also easier for me to visualize the build process in one file, versus the million option pages that make up TeamCity.


Versioning

Mike Hadlow has a pretty nifty trick for assembly versions in a solution. It uses one file to set the version information for all the artifacts in a solution. His blog post explains it. I'm a big fan of Semantic Versioning. Using the one-file trick really eases the process of maintaining changes to the version numbers.
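The shared file ends up looking something like this. This is a minimal sketch; the version values are placeholders that the build rewrites.

// Version.cs - added to every project as a linked file ("Add As Link" in
// Visual Studio) so all assemblies in the solution share one version number.
using System.Reflection;

[assembly: AssemblyVersion("1.2.3.0")]
[assembly: AssemblyFileVersion("1.2.3.0")]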

With the one-file trick in place, it became possible to use a regex task to update the file, which in turn made it possible to base the version number on the TeamCity build number. A co-worker found the build task, so I'm not sure where it originally came from. This custom task is also added to the build server's Tools directory.


<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="4.0" DefaultTargets="Go" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <UsingTask TaskName="RegexTransform" TaskFactory="CodeTaskFactory" AssemblyFile="$(MSBuildToolsPath)\Microsoft.Build.Tasks.v4.0.dll">
    <ParameterGroup>
      <Items ParameterType="Microsoft.Build.Framework.ITaskItem[]" />
    </ParameterGroup>
    <Task>
      <Using Namespace="System.IO" />
      <Using Namespace="System.Text.RegularExpressions" />
      <Using Namespace="Microsoft.Build.Framework" />
      <Code Type="Fragment" Language="cs">
        <![CDATA[
        foreach (ITaskItem item in Items)
        {
            string fileName = item.GetMetadata("FullPath");
            string find = item.GetMetadata("Find");
            string replaceWith = item.GetMetadata("ReplaceWith");
            if (!File.Exists(fileName))
            {
                Log.LogError(null, null, null, null, 0, 0, 0, 0, String.Format("Could not find version file: {0}", fileName), new object[0]);
                continue; // skip this item rather than throwing in ReadAllText below
            }
            string content = File.ReadAllText(fileName);
            File.WriteAllText(fileName, Regex.Replace(content, find, replaceWith));
        }
        ]]>
      </Code>
    </Task>
  </UsingTask>
</Project>


TeamCity

The bummer about TeamCity is the clicky-ness of the interface. There are roughly a million different links, each leading to a new page. Each page has a dozen or so things you can set. Sure, it's amazingly powerful and flexible. But, it's easy to get lost. This isn't a knock on TeamCity. I'm just easily confused.

The first thing to set in TeamCity is the build number format. This is accessible on the first page of the build configuration settings.

Note: The format of the variable changes when used in an MSBuild file. In the project file, any '.' in the variable name must be replaced with an '_'. That means 'build.number' becomes 'build_number'.
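For example, a throwaway target like the following (a minimal sketch) will echo the build number TeamCity hands to the MSBuild runner:

<Target Name="ShowBuildNumber">
  <!-- 'build.number' from TeamCity surfaces in MSBuild as $(build_number) -->
  <Message Text="TeamCity build number: $(build_number)" />
</Target>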



Octopus

Using Octopus to deploy is a straightforward process: create an environment, add some machines, and create a release. Installing Octopus and Tentacles on target machines is covered in the Octopus online documentation.

The first step is to create an environment. Once the environment is set up with machines, a project is needed. Finally, a release is created to actually deploy the artifacts. From there, deployments can be performed as normal.



Octopus is pretty flexible in terms of scripting and other custom install actions. The sample project is a TopShelf service, so installing and uninstalling it just needs a couple of custom actions around the deploy action.


The scripts can be as simple as C:\Services\SampleProject\SampleProject.Host.exe uninstall.
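For example, a pre-deploy and post-deploy script pair might look like the following. This is a sketch: it assumes the service is deployed to C:\Services\SampleProject, and it uses the standard TopShelf command-line verbs.

rem pre-deploy: stop and remove the currently installed service
C:\Services\SampleProject\SampleProject.Host.exe uninstall

rem post-deploy: install and start the freshly deployed build
C:\Services\SampleProject\SampleProject.Host.exe install
C:\Services\SampleProject\SampleProject.Host.exe start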


Wrapping It Up

Hopefully this will help someone resolve some of the issues with setting up a CI build process.

Thursday, October 9, 2014

Toggling With Windsor

Preface

We've all had to implement a new feature or swap out the implementation behind an existing code base. An intern recently asked me how to feature toggle something using Castle.Windsor. This post will show how to use some Castle.Windsor features to toggle implementations. The example code can be found on GitHub.

Primitive Dependencies

The toggle will be an app setting in the application's config file. It will be loaded by Castle.Windsor using a custom dependency resolver. This resolver is taken from Mark Seemann's post on AppSettings.
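The resolver looks roughly like this. This is a sketch based on that post: it matches a constructor parameter to an appSettings key of the same name, and converts the string value to the parameter's type.

namespace ExampleApp
{
    using System.ComponentModel;
    using System.Configuration;
    using System.Linq;
    using Castle.Core;
    using Castle.MicroKernel;
    using Castle.MicroKernel.Context;

    public class AppSettingsConvention : ISubDependencyResolver
    {
        public bool CanResolve(CreationContext context, ISubDependencyResolver contextHandlerResolver, ComponentModel model, DependencyModel dependency)
        {
            // Resolvable when an appSettings key matches the dependency name
            // and the string value can be converted to the target type.
            return ConfigurationManager.AppSettings.AllKeys.Contains(dependency.DependencyKey)
                && TypeDescriptor.GetConverter(dependency.TargetType).CanConvertFrom(typeof(string));
        }

        public object Resolve(CreationContext context, ISubDependencyResolver contextHandlerResolver, ComponentModel model, DependencyModel dependency)
        {
            return TypeDescriptor.GetConverter(dependency.TargetType)
                .ConvertFrom(ConfigurationManager.AppSettings[dependency.DependencyKey]);
        }
    }
}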

The Service

We'll be using a simple interface as our service definition. There will be two implementations: one represents an old implementation, the other a new one.

namespace ExampleApp
{
    public interface IService
    {
        void Print(string message);
    }
}

namespace ExampleApp
{
    using System;

    public class OldService : IService
    {
        public void Print(string message)
        {
            Console.WriteLine("Old Service: {0}", message);
        }
    }
}

namespace ExampleApp
{
    using System;

    public class NewService : IService
    {
        public void Print(string message)
        {
            Console.WriteLine("New Service: {0}", message);
        }
    }
}

It's useful to note that this is a common way to achieve branching by abstraction. Calls to a service are replaced with calls to an interface; this interface is the abstraction. Once the calls to the old service are replaced, you are free to implement a new service. When ready, the new service can be substituted for the old without the consumers being aware, since they depend on the interface, not the concrete class.
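For instance, a consumer only ever sees the abstraction. A hypothetical consumer:

namespace ExampleApp
{
    public class Greeter
    {
        private readonly IService service;

        // Greeter neither knows nor cares whether it gets OldService or NewService.
        public Greeter(IService service)
        {
            this.service = service;
        }

        public void Greet()
        {
            service.Print("Hello!");
        }
    }
}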

Typed Factory Selector

Castle.Windsor comes with a handy little bit: the typed factory facility. It lets Castle.Windsor create a factory implementation from an interface defined by you, relieving you of the task of implementing the factory on your own. It is especially useful if you want to defer the creation of an object.

namespace ExampleApp
{
    public interface IServiceFactory
    {
        IService Create();
        void Release(IService dead);
    }
}

Our class will use this factory to get an instance of our service and call its .Print() method, as sketched below. The default implementation returned will be the first one registered in the container. This behavior can be overridden by implementing a custom selector.
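Here's a sketch of such a consumer (hypothetical; in the example code on GitHub the consumers are the IExample implementations):

namespace ExampleApp
{
    public class PrintExample
    {
        private readonly IServiceFactory factory;

        public PrintExample(IServiceFactory factory)
        {
            this.factory = factory;
        }

        public void Run(string message)
        {
            var service = factory.Create(); // Windsor builds this through the typed factory
            try
            {
                service.Print(message);
            }
            finally
            {
                factory.Release(service); // hand the instance back to the container
            }
        }
    }
}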

namespace ExampleApp
{
    using System.Reflection;
    using Castle.Facilities.TypedFactory;

    public class ServiceSelector : DefaultTypedFactoryComponentSelector
    {
        private readonly bool useNewService;

        public ServiceSelector(bool useNewService)
        {
            this.useNewService = useNewService;
        }

        protected override string GetComponentName(MethodInfo method, object[] arguments)
        {
            return useNewService ? typeof(NewService).FullName : typeof(OldService).FullName;
        }
    }
}

The typed factory and selector must both be registered with the container, and the selector must be specified in the configuration of the typed factory. This is done by the Component.For<IServiceFactory>().AsFactory(...) and Component.For<ServiceSelector>() registrations in the ContainerFactory class below.

namespace ExampleApp
{
    using Castle.Core.Internal;
    using Castle.Facilities.TypedFactory;
    using Castle.MicroKernel;
    using Castle.MicroKernel.Registration;
    using Castle.Windsor;

    public static class ContainerFactory
    {
        public static IWindsorContainer Create()
        {
            var container = new WindsorContainer();
            container.Kernel.Resolver.AddSubResolver(new AppSettingsConvention());
            container.AddFacility<TypedFactoryFacility>();
            container.Register(
                Classes.FromThisAssembly().BasedOn<IHandlerSelector>().WithService.FromInterface(),
                Classes.FromThisAssembly().BasedOn<IService>().WithService.FromInterface(),
                Component.For<IServiceFactory>().AsFactory(configuration => configuration.SelectedWith<ServiceSelector>()),
                Component.For<ServiceSelector>(),
                Classes.FromThisAssembly().BasedOn<IExample>().WithService.FromInterface()
            );
            container.ResolveAll<IHandlerSelector>()
                .ForEach(selector => container.Kernel.AddHandlerSelector(selector));
            return container;
        }
    }
}

Using IHandlerSelector

Mike Hadlow provides a very good example of using a custom IHandlerSelector.

We can use a similar technique to pull in a config value and supply the appropriate implementation at run time. The custom IHandlerSelector uses the config value to select the appropriate handler, which is then returned. If no matching handler is found, it throws an exception.

namespace ExampleApp
{
    using System;
    using System.Linq;
    using Castle.MicroKernel;

    public class ServiceHandler : IHandlerSelector
    {
        private readonly bool useNewService;

        public ServiceHandler(bool useNewService)
        {
            this.useNewService = useNewService;
        }

        public bool HasOpinionAbout(string key, Type service)
        {
            return service == typeof(IService);
        }

        public IHandler SelectHandler(string key, Type service, IHandler[] handlers)
        {
            var s = useNewService ? typeof(NewService) : typeof(OldService);
            var q = (from h in handlers
                     where h.ComponentModel.Implementation == s
                     select h).FirstOrDefault();
            if (q == null)
                throw new ApplicationException(string.Format("No handlers for {0}", service.Name));
            return q;
        }
    }
}

This service handler must be registered and added to the container's kernel. In the ContainerFactory class above, the Classes.FromThisAssembly().BasedOn<IHandlerSelector>() registration handles the first part, and the container.ResolveAll<IHandlerSelector>() loop adds each selector to the kernel. While there is only one selector in this example, that snippet shows how to add more than one.

Running the Console

Changing the config value and running the console app shows that the selectors are functioning correctly.
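A minimal driver looks something like this. It's a sketch, and it assumes the appSettings key is named useNewService so the AppSettingsConvention can match it to the constructor parameters above.

// App.config (sketch):
//   <appSettings>
//     <add key="useNewService" value="true" />
//   </appSettings>
namespace ExampleApp
{
    using Castle.Windsor;

    public static class Program
    {
        public static void Main()
        {
            IWindsorContainer container = ContainerFactory.Create();

            var factory = container.Resolve<IServiceFactory>();
            var service = factory.Create();
            service.Print("toggled by config"); // prints "New Service: ..." or "Old Service: ..."

            factory.Release(service);
            container.Dispose();
        }
    }
}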



Wrapping It Up

Feature toggles and branching by abstraction are powerful ways to control whether new code is being used in production. They provide a way to replace old behavior with new, while maintaining the integrity of the product's build. Hopefully these two examples will help you integrate feature toggling into your builds.

Tuesday, July 22, 2014

Basics: Removing the 'if' (Using Polymorphism)

Preface

Transforming one object type to another is a very common task in programming. It might be publishing an event based on a command, or creating an externally known DTO from an internal one. It's pretty common to see the if or switch keywords used to determine the code flow. I thought I'd take a minute to show how we can go from a typical implementation using if statements to one that uses Linq and AutoMapper to reduce the coupling in the implementation.

The example code uses Linq and AutoMapper.
The Interface

For this example, we'll have three different publishers. Each will implement a common interface: IPublisher. The implementations will be responsible for accepting a Command object and publishing the associated Event object. We'll be using two commands and events: Start -> Started, Stop -> Stopped.

The Publish method on the IPublisher interface is intentionally not using a generic declaration.

namespace PolyMap
{
    using System;

    public interface IPublisher
    {
        void Publish(Command command);
    }

    public abstract class Command
    {
        public Guid Id { get; set; }
    }

    public abstract class Event
    {
        public Guid Id { get; set; }
    }
}

Overloading

The first Publisher accepts a command. It checks the type of the command received, and calls the appropriate overload. Each overloaded method creates the appropriate event, and publishes it.

namespace PolyMap
{
    public class OverloadedPublisher : IPublisher
    {
        private readonly IBus bus;

        public OverloadedPublisher(IBus bus)
        {
            this.bus = bus;
        }

        public void Publish(Command command)
        {
            var commandType = command.GetType();
            if (commandType == typeof(Start))
                Publish((Start)command);
            if (commandType == typeof(Stop))
                Publish((Stop)command);
        }

        private void Publish(Start command)
        {
            bus.Publish(new Started { Id = command.Id });
        }

        private void Publish(Stop command)
        {
            bus.Publish(new Stopped { Id = command.Id });
        }
    }
}

This works, but it has a few problems. It both uses an if to determine which type to publish, and manually maps the inbound command to the outbound event. That means this class is responsible for both determining what kind of event to publish and creating that event.

Adding AutoMapper

AutoMapper removes the responsibility of creating the event from the publisher class. AutoMapper Profiles could be used to map more complex associations, but the DynamicMap method works just fine here. Our publisher class is relieved of this responsibility, limiting it to just sorting out the type of event to be published.

namespace PolyMap
{
    using AutoMapper;

    public class OverloadedAutomappingPublisher : IPublisher
    {
        private readonly IBus bus;
        private readonly IMappingEngine mappingEngine;

        public OverloadedAutomappingPublisher(IBus bus, IMappingEngine mappingEngine)
        {
            this.bus = bus;
            this.mappingEngine = mappingEngine;
        }

        public void Publish(Command command)
        {
            var commandType = command.GetType();
            if (commandType == typeof(Start))
                Publish((Start)command);
            if (commandType == typeof(Stop))
                Publish((Stop)command);
        }

        private void Publish(Start command)
        {
            var @event = mappingEngine.DynamicMap<Started>(command);
            bus.Publish(@event);
        }

        private void Publish(Stop command)
        {
            var @event = mappingEngine.DynamicMap<Stopped>(command);
            bus.Publish(@event);
        }
    }
}

It still has the problem of using the if statement to determine the type of command received (and the event to be published).

Removing the 'if'

Introducing a map keyed by command type, along with a Linq query, allows us to remove the if statements. The class is still responsible for selecting the appropriate action, but the concept of associating commands to events is distilled into the dictionary. This leaves the class' methods to simply look up the appropriate action and execute it.

namespace PolyMap
{
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using AutoMapper;

    public class MappedPublisher : IPublisher
    {
        private readonly IBus bus;
        private readonly IMappingEngine mappingEngine;
        private readonly Dictionary<Type, Action<Command>> publishMap = new Dictionary<Type, Action<Command>>();

        public MappedPublisher(IBus bus, IMappingEngine mappingEngine)
        {
            this.bus = bus;
            this.mappingEngine = mappingEngine;
            publishMap.Add(typeof(Start), Publish<Started>);
            publishMap.Add(typeof(Stop), Publish<Stopped>);
        }

        public void Publish(Command command)
        {
            var publishAction = GetPublishAction(command.GetType());
            publishAction(command);
        }

        private Action<Command> GetPublishAction(Type commandType)
        {
            var publishAction = (from a in publishMap
                                 where commandType == a.Key
                                 select a.Value).Single();
            return publishAction;
        }

        private void Publish<TEvent>(Command command) where TEvent : Event
        {
            var toPublish = mappingEngine.DynamicMap<TEvent>(command);
            bus.Publish(toPublish);
        }
    }
}

The Dictionary was left inside the publisher class to keep everything in one class. It wouldn't take much to move the mappings out of the class, which would further reduce the coupling on the publisher. More complex mappings could be introduced by swapping the Dictionary out for a custom type.
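For example, the associations could be pulled into their own type and handed to the publisher (a hypothetical sketch):

namespace PolyMap
{
    using System;
    using System.Collections.Generic;

    // Hypothetical: the command-to-action associations extracted from
    // MappedPublisher, so the wiring can change without touching the publisher.
    public class PublishMap
    {
        private readonly Dictionary<Type, Action<Command>> map = new Dictionary<Type, Action<Command>>();

        public void Register(Type commandType, Action<Command> publishAction)
        {
            map.Add(commandType, publishAction);
        }

        public Action<Command> ActionFor(Type commandType)
        {
            return map[commandType];
        }
    }
}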

Wrapping It Up

This was a quick demonstration of removing two concerns from a class: the manual mapping of one type to another was removed by introducing AutoMapper, and the if statement was removed by introducing a map between the two types. I hope this helped describe a different way of building classes with reduced responsibilities.

Thursday, February 6, 2014

RabbitMQ Federated Queues

Preface

RabbitMQ added support for federated queues in v3.2.0. This feature gives a simple way to move messages from one Rabbit cluster to another. I'll show you one way to set this up. The sample code can be found on GitHub. I'm using EasyNetQ to handle the publishing and subscription. It's a very nice RabbitMQ client library. Check it out.

The Clusters

Note: The virtual host names are not the same on the two clusters. Broker A is using the virtual host FederationDemo. Broker B is using the virtual host FederatedStuff.

The RabbitMQ documentation covers the federation plug-in. In our scenario, there are two clusters, each an upstream for the other. Max hops is set to 1 to prevent messages from circling back to the publisher. Below are pictures of the upstreams defined on each of the clusters.

Broker A Upstream

Broker B Upstream


Each broker will need a policy defined. The broker uses this policy to figure out what things come from the upstream cluster. The policies are very similar for both clusters. Below are pictures of the policies:

Federation Policy on Broker A

Federation Policy on Broker B
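If you'd rather script the setup than click through the management UI, the same definitions can be made with rabbitmqctl. This is just a sketch: the upstream name, URI, credentials, and queue pattern below are example values, not the actual cluster settings.

rem on Broker A (virtual host FederationDemo): point at Broker B as an upstream, max-hops 1
rabbitmqctl set_parameter -p FederationDemo federation-upstream broker-b "{""uri"":""amqp://user:pass@broker-b/FederatedStuff"",""max-hops"":1}"

rem on Broker A: apply a policy that federates the matching queues
rabbitmqctl set_policy -p FederationDemo federate-queues "^fed\." "{""federation-upstream-set"":""all""}" --apply-to queues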


The Clients

This example will use two console applications. Each will subscribe to and publish two messages, and the messages will ping-pong between the two clients: a message from one client is transported across the federation to the other, and a response message is transported back. The first client, FooConsole, will publish Start and Stop. The second client, BarConsole, will publish Started and Stopped. The message sequence is Start, Started, Stop, and Stopped.

I've put the publishing and subscriptions into one class, so we could see everything going on. Both classes, Foo and Bar, are very similar. Here's a look at the Foo class:

namespace Federations.FooConsole
{
    using System;
    using BarMessages;
    using EasyNetQ;
    using FooMessages;

    public class Foo
    {
        private readonly IBus bus;

        public Foo(IBus bus)
        {
            this.bus = bus;
        }

        public void DoIt()
        {
            bus.Subscribe<Started>("foo", started =>
            {
                Console.WriteLine("Started received: {0}", started.Id);
                bus.Publish(new Stop { Id = started.Id });
            });
            bus.Subscribe<Stopped>("foo", stopped =>
            {
                Console.WriteLine("Stopped received: {0}", stopped.Id);
            });
            Console.WriteLine("press key to send message");
            Console.ReadKey();
            var start = new Start { Id = Guid.NewGuid() };
            bus.Publish(start);
        }
    }
}

Foo subscribes to two messages: Started and Stopped. It then publishes a Start message to get the ball rolling. On the other end, Bar subscribes to Start and Stop messages. It responds to each message with one of its own messages. A Start from Foo causes Bar to send Started. A Stop from Foo causes Bar to send a Stopped message.
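Bar is the mirror image of Foo. Here's a sketch of it (the real class is in the GitHub repository):

namespace Federations.BarConsole
{
    using System;
    using BarMessages;
    using EasyNetQ;
    using FooMessages;

    public class Bar
    {
        private readonly IBus bus;

        public Bar(IBus bus)
        {
            this.bus = bus;
        }

        public void DoIt()
        {
            // Answer Start with Started, and Stop with Stopped.
            bus.Subscribe<Start>("bar", start =>
            {
                Console.WriteLine("Start received: {0}", start.Id);
                bus.Publish(new Started { Id = start.Id });
            });
            bus.Subscribe<Stop>("bar", stop =>
            {
                Console.WriteLine("Stop received: {0}", stop.Id);
                bus.Publish(new Stopped { Id = stop.Id });
            });
        }
    }
}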

Wiring Two Joints

When an app uses EasyNetQ to subscribe, EasyNetQ creates the queues and exchanges for us. That doesn't fully happen when working with federated queues: the queues will be federated, but no bindings are made on the upstream cluster. The pictures below show the downstream and upstream clusters after a client subscribes to a message.

Upstream (publisher) Federated Queues

Downstream (subscriber) Federated Queues


The last little bit is to bind the exchange to the queue on the upstream cluster. This allows the messages to flow from the upstream publisher to the downstream subscriber.

Federated Queue Without Binding
Publishing Exchange Bound To Queue
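The binding can be created in the management UI, or scripted with rabbitmqadmin. A sketch (the exchange and queue names are placeholders for the names EasyNetQ generates on the upstream):

rem on the upstream (publisher) cluster: bind the publishing exchange to the federated queue
rabbitmqadmin -V FederationDemo declare binding source="EXCHANGE_NAME" destination="FEDERATED_QUEUE_NAME" destination_type="queue" routing_key="#"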


With everything put together, it's now possible to show the two console apps passing messages across the broker. Note: I've disabled EasyNetQ's default debug logging by using a null logger.

The Console Display


Wrapping It Up

The addition of federated queues to RabbitMQ really simplifies transporting messages between clusters. Hopefully this helps show how you can get two clients to communicate across a federation.