Tuesday, February 19, 2019

Two RabbitMQ Messaging Patterns

Don't want to read this article? The short version is, "If you didn't create a queue, don't pull messages from it." Removing messages from a queue is a destructive action.

This post is really a summary of the RabbitMQ docs. It's a quick highlight of the two big messaging patterns commonly seen on the broker.

How does Rabbit work?

RabbitMQ is, at its heart, a message broker. It has one focus: delivering the messages sent to it on to consumers. That's all it does, and it does it very well. It does not offer more complex patterns like sagas.
Key to understanding how Rabbit works is knowing that there are two parts to the broker: exchanges and queues. Exchanges receive messages from publishers. Queues hold messages for subscribers. Inside the broker, messages move from exchanges to queues based on bindings and routing rules. Once a message is taken from a queue, Rabbit forgets about it. An exchange can send copies of a message to several different queues, but it will not send multiple copies to the same queue.



Above is pictured the basic message flow. This can be expanded in two ways. Multiple consumers can get messages from the same queue. Multiple queues can be wired to the same exchange. Multiple consumers pulling messages from the same queue create the competing consumer pattern. Multiple queues wired to the same exchange create the Publish/Subscribe pattern. These patterns are not exclusive; it is possible to have competing consumers on one of the pub/sub queues.

What are Competing Consumers?

The Rabbit docs describe a work queue as a queue with multiple consumers connected, with the intent of distributing tasks across all of those consumers. Want to scale horizontally? Just add more consumers. Rabbit will deal the messages out, one per consumer, until every message has been delivered. It is important to know that each message is delivered only once: once a message has been passed to a consumer, it is removed from the queue and is unavailable to any other consumer of that queue. Below is an illustration of how the messages flow.
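
In code, a consumer on a work queue might look something like the sketch below. It uses the RabbitMQ .NET client (RabbitMQ.Client); the queue name, connection settings, and message handling are placeholders, and API details vary a little between client versions.

using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class Worker
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            channel.QueueDeclare(queue: "work", durable: true, exclusive: false, autoDelete: false, arguments: null);

            // Hand this consumer only one unacknowledged message at a time so the
            // work is spread evenly across every consumer attached to the queue.
            channel.BasicQos(prefetchSize: 0, prefetchCount: 1, global: false);

            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (sender, args) =>
            {
                var message = Encoding.UTF8.GetString(args.Body);
                // ... do the actual work here ...
                channel.BasicAck(deliveryTag: args.DeliveryTag, multiple: false);
            };

            // Start several copies of this process; Rabbit deals each message to
            // exactly one of them, and an acknowledged message leaves the queue.
            channel.BasicConsume(queue: "work", autoAck: false, consumer: consumer);
            Console.ReadLine();
        }
    }
}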

What is Publish/Subscribe?

Publish/subscribe is another common use of brokers. It is the pattern to use when you want multiple copies of the same message sent to different consumers. Here, exchanges send copies of each message to multiple queues. This is especially useful for logging, auditing, or passing messages to consumers with very different purposes. It is important to remember that multiple consumers on any one of those queues still form competing consumers on that queue. The illustration below shows publish/subscribe, along with a competing consumer on one of the queues.
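
In code, wiring up publish/subscribe might look something like the sketch below, again using the RabbitMQ .NET client. The exchange and queue names, and the message itself, are placeholders.

using System.Text;
using RabbitMQ.Client;

class Publisher
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // A fanout exchange copies every message it receives to all bound queues.
            channel.ExchangeDeclare(exchange: "orders", type: ExchangeType.Fanout);

            channel.QueueDeclare(queue: "orders.audit", durable: true, exclusive: false, autoDelete: false, arguments: null);
            channel.QueueDeclare(queue: "orders.shipping", durable: true, exclusive: false, autoDelete: false, arguments: null);
            channel.QueueBind(queue: "orders.audit", exchange: "orders", routingKey: "");
            channel.QueueBind(queue: "orders.shipping", exchange: "orders", routingKey: "");

            // One publish, two copies: each bound queue gets its own copy of the message.
            var body = Encoding.UTF8.GetBytes("order 42 shipped");
            channel.BasicPublish(exchange: "orders", routingKey: "", basicProperties: null, body: body);
        }
    }
}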

In closing...

Remember that when you want only one copy of a message to be processed, distributed across a set of consumers, the competing consumer pattern is the way to go. When you want the same message delivered to several different consumers, publish/subscribe is your friend.


Thursday, January 11, 2018

Visual Studio 2017 & IIS: Unable to start debugging on the Web Server.

Can't Debug?

Ok, first I have to say that I've always considered it a bit of a failure when I have to reach for the debugger. That said, there are times when being able to hit F5 in Visual Studio, have it launch the web app, and attach the debugger to IIS is convenient.

Imagine my surprise and frustration when I hit F5 and got the following dialog:



Doing the research...

Googling the message led me to the MSDN article for the error. The section of that page which covers the remote server returning errors offered the following: "Make sure that the Application Pool is configured for the correct version of ASP.NET." I verified the Application Pool was configured correctly. Reinstalling the Framework and rebooting didn't help either.

So, what was it?

It turns out I had some features disabled. Searching the Windows Features window, I found a category (Application Development Features) buried under the Internet Information Services feature. Everything in it was disabled.


Fixing the issue was a simple matter of enabling the features and rebooting. I chose the following features.



Now, I can use the debugger again. But, I still prefer to rely on TDD. ;)

Thursday, August 10, 2017

Building a Pipeline (the Template Pattern)

Building a Simple Pipeline

A lot of my development experience has been building back-end systems. There have been plenty of times when something I've built needed to process a request that was composed of a number of tasks. Those tasks had to be performed in the correct sequence, forming a simple algorithm. I've heard these things called many different names, including pipelines.

Unfortunately, this need often leads to a class which contains both the steps of the algorithm and the logic to complete each step. These classes quickly turn into huge monsters, and they tend to become a real pain to test. It turns out there's a GoF pattern for this: the Template Method design pattern.

In other words, I wanted to express these simple algorithms like this:

public class FileCreator : IRequestHandler<CreatePackage>
{
    private readonly IPipeline<CreatePackage> pipeline;
    
    public FileCreator(IPipeline<CreatePackage> pipeline)
    {
        this.pipeline = pipeline;
    }
    
    public void Handle(CreatePackage request)
    {
        pipeline.Subject(request);
    pipeline.Do<BoxTheItems>();
    pipeline.Do<CreateTheShippingLabel>();
        pipeline.Do<ShipTheBox>();
        pipeline.Do<SendShippingNotification>();
    }
}

PS - MediatR is awesome. That's where IRequestHandler<T> comes from.

Enter LittlePipeline

LittlePipeline is a small library (or set of example code) showing how one can build a template that is easy to read, easy to test, and works with an IoC container. It's designed to be a starting point, not an end-all solution to the problem. The gist of using the library (or the classes, if you want to copy/paste the code) is simple...
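
For context, here is a minimal sketch of what the two core abstractions might look like. The names (IPipeline<T>, ITask<T>, Subject, Do) come from the usage shown in this post; the exact signatures are an assumption, not the code from the repository.

public interface ITask<in TSubject> where TSubject : class
{
    // Perform one step of the algorithm against the shared subject.
    void Run(TSubject subject);
}

public interface IPipeline<TSubject> where TSubject : class
{
    // Set the object the tasks will operate on.
    void Subject(TSubject subject);

    // Resolve a task (e.g. from an IoC container) and run it against the subject.
    void Do<TTask>() where TTask : ITask<TSubject>;
}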

Start with a subject class. This class is the guy who holds the data necessary to process the request. It can be as dumb (or smart) as you need. The only requirement is that the subject be a reference type (a class).

public class CreatePackage
{
    public int OrderId { get; set; }
    public Address ShipTo { get; set; }
    public string TrackingNumber { get; set; }
}

Create any number of tasks. These tasks should implement the ITask<T> interface where T is the subject type you created earlier.

public class SendShippingNotification : ITask<CreatePackage>
{
    private readonly IBus bus;
    
    public SendShippingNotification(IBus bus)
    {
        this.bus = bus;
    }

    public void Run(CreatePackage subject)
    {
        var notification = new ShippingNotification { TrackingNumber = subject.TrackingNumber };
        bus.Publish(notification);
    }
}

Use the baked-in pipeline creator, or your favorite IoC container, to register the tasks and create the pipeline. Then use it like the class above. Here's the built-in, no-frills, no-guarantees example.

var pipeline = MakePipeline.ForSubject<FirstTestSubject>()
    .With<Increment>(() => new Increment())
    .Build();

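For the snippet above to compile, a subject and a task along the following lines are assumed. The names FirstTestSubject and Increment come from the test code shown below; the bodies here are only illustrative.

public class FirstTestSubject
{
    public int Value { get; set; }
}

public class Increment : ITask<FirstTestSubject>
{
    public void Run(FirstTestSubject subject)
    {
        // Mutate the shared subject; the next task in the pipeline sees the change.
        subject.Value++;
    }
}
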
Oh, and this is what testing a pipeline looks like (with FakeItEasy):


public class PipelineTestExample
{
    [Test]
    public void ThePipelineCanBeTested()
    {
        var pipeline = A.Fake<IPipeline<FirstTestSubject>>();
        var example = new ThingThatUsesThePipeline(pipeline);

        var subject = new FirstTestSubject();
        example.Run(subject);

        A.CallTo(() => pipeline.Subject(subject)).MustHaveHappened()
            .Then(A.CallTo(() => pipeline.Do<Increment>()).MustHaveHappened())
            .Then(A.CallTo(() => pipeline.Do<Square>()).MustHaveHappened());
    }
}

public class ThingThatUsesThePipeline
{
    private readonly IPipeline<FirstTestSubject> pipeline;

    public ThingThatUsesThePipeline(IPipeline<FirstTestSubject> pipeline)
    {
        this.pipeline = pipeline;
    }

    public void Run(FirstTestSubject subject)
    {
        pipeline.Subject(subject);
        pipeline.Do<Increment>();
        pipeline.Do<Square>();
    }
}

That's It

There you have it, a simple template class that (hopefully) helps clean up some code by separating the algorithm steps from their implementations. The code is on GitHub. The ReadMe.md file gives some information about how it might be tested.

Tuesday, February 14, 2017

NUnit Exception: Error Loading Settings

This applies to NUnit 3.4.1.

I was doing some TDD one day, when Visual Studio crashed. After a few choice words, I booted things back up. Running the tests after starting back up threw up an error window I'd never seen before. It only showed when I used the R# test runner.


It wasn't very helpful, since half the message didn't seem to be showing. I was able to get the error message using the first trick from this article. That error message wasn't much help either.


---------------------------
ReSharper Ultimate – System.ApplicationException: Error loading settings file
---------------------------
   at NUnit.Engine.Internal.SettingsStore.LoadSettings()

   at NUnit.Engine.Services.SettingsService.StartService()

   at NUnit.Engine.Services.ServiceManager.StartServices()

   at NUnit.Engine.TestEngine.Initialize()

   at NUnit.Engine.TestEngine.GetRunner(TestPackage package)

   at JetBrains.ReSharper.UnitTestRunner.nUnit30.BuiltInNUnitRunner.<>c__DisplayClass1.<RunTests>b__0()

   at JetBrains.ReSharper.UnitTestRunner.nUnit30.BuiltInNUnitRunner.WithExtensiveErrorHandling(IRemoteTaskServer server, Action action)
---------------------------
OK
---------------------------

Fortunately, this project happened to have a build script which also ran the unit tests. Running the build script returned the full error text, and that message showed me the real problem: the NUnit settings file was empty, which was causing the "root element is missing" error.


Deleting the NUnit30Settings.xml file from the $AppData$\Local\NUnit directory cleared the problem.

Friday, December 23, 2016

MediatR, FluentValidation, and Ninject using Decorators (Pt. 2)

Validate all the Things?

The last post showed how to combine MediatR, FluentValidation, and Ninject to validate requests dispatched through MediatR. The drawback in the previous post is that every request must have a corresponding validator implemented. Here, we'll look at using the null object pattern to bypass validation when a validator isn't present.

Null Object What?

The null object pattern is a way to pass an object which represents nothing, but does not throw a runtime exception when used. This pattern can be used to add default behavior to a process when there is nothing for the process to use.


    public class ValidatingHandler<TRequest, TResponse> : IRequestHandler<TRequest, TResponse>
        where TRequest : IRequest<TResponse>
    {
        private readonly IRequestHandler<TRequest, TResponse> handler;
        private readonly IValidator<TRequest> validator;

        public ValidatingHandler(IRequestHandler<TRequest, TResponse> handler, IValidator<TRequest> validator)
        {
            this.handler = handler;
            this.validator = validator;
        }

        [DebuggerStepThrough]
        public TResponse Handle(TRequest message)
        {
            var validationResult = validator.Validate(message);

            if (validationResult.IsValid)
                return handler.Handle(message);

            throw new ValidationException(validationResult.Errors);
        }
    }

The handler above is taken from the last post. It accepts a handler and a validator for a given request. It validates the request, then either passes the request on to the inner handler or throws an exception. Ninject will throw an exception when it cannot find an appropriate validator for the request.


    public class NullValidator<TType> : AbstractValidator<TType>
    {
        
    }

Enter the null validator. This validator has no rules, so when it is used it returns no errors. Because it's defined as an open generic, it can stand in as the validator for any request. When the handler processes a request, the validation passes by default, and the request is passed down the chain.

The cool thing is there are no extra steps needed. Just add the null validator to the solution. It will be picked up by Ninject in the normal registration process. When Ninject can't find the appropriate validator for a request, it will fall back to the generic implementation.
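
For reference, an explicit open-generic fallback binding in Ninject looks like the line below. The sample project relies on the convention scan described above, so treat this as illustrative rather than a required step.

// If no closed IValidator<T> binding exists for a request type, Ninject can
// close NullValidator<T> over that type and use it instead.
kernel.Bind(typeof(IValidator<>)).To(typeof(NullValidator<>));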

PS...

That's all there is to adding a class which provides the default behavior of passing requests that have no validation rules defined. Code like this is also a great example of the open-closed principle: it was possible to extend the validation behavior without changing the code of the validating handler. So, there ya go. Default, passing validation using the null object pattern.

As always, the code is on GitHub.

Thursday, December 8, 2016

MediatR, FluentValidation, and Ninject using Decorators

Notebook

I recently had to fiddle with getting Ninject to use decorators for validation with Jimmy Bogard's library, MediatR. Sure, there are tons of blog posts out there, and there are even some articles on the MediatR and Ninject sites. I still had problems getting things to work, so here's my solution.

CQRS

I've been a fan of the CQRS pattern for some time. For me, it just seems to be a more elegant way to do things. It has some cons: there are usually more classes, and the workflow is not always as clear.

CQRS is Command Query Responsibility Segregation (or Separation). Martin Fowler has a good post which describes it. It's what it sounds like: separating code logic into commands, queries, and (sometimes) events. One common way of implementing the pattern is to have small objects which are little more than DTOs. These objects are passed to handlers which perform logic based on the contents of those objects.

MediatR

Some time ago, I created a set of classes I used for doing this in projects. These classes were based on Jimmy Bogard's work. Fast-forward a couple of years, and I've discovered his library, MediatR. It's as good as or better than the set of classes I was using. That was enough for me to make the switch, since I'm a fan of reuse when possible.

MediatR is well documented, so I won't repeat all of it. But, for a basic understanding, here's a test showing how a command can be handled by MediatR:

[Test]
public void ItShouldHandleBasicCommands()
{
    var mediator = GetMediator();

    var command = new Command();
    var response = mediator.Send(command);

    response.Should().NotBeNull();
}

The basic working pieces are the command and the handler. MediatR receives a command, and dispatches it to the appropriate handler. This is what they look like:

public class Command : IRequest<Response>
{
}

public class CommandHandler : IRequestHandler<Command, Response>
{
    public Response Handle(Command message)
    {
        return new Response();
    }
}

Registering commands and command handlers is pretty easy. Since I've been using Ninject a lot lately, here's an example of registration using Ninject's convention-based bindings. It says: find all the handler classes, and bind their interfaces.

kernel.Bind(scan => scan.FromThisAssembly()
    .SelectAllClasses()
    .Where(o => o.IsAssignableFrom(typeof(IRequestHandler<,>)))
    .BindAllInterfaces());

There are a couple of other registrations which are important when hooking MediatR and Ninject up. Registering the IMediator interface depends on three calls. The instance factory bindings tell MediatR how to resolve single or multiple instances. Then, of course, we register MediatR itself and resolve it from the kernel.

kernel.Bind<SingleInstanceFactory>()
    .ToMethod(context => (type => context.Kernel.Get(type)));
kernel.Bind<MultiInstanceFactory>()
    .ToMethod(context => (type => context.Kernel.GetAll(type)));
// Bind the mediator itself so it can be resolved from the kernel.
kernel.Bind<IMediator>().To<Mediator>();
var mediator = kernel.Get<IMediator>();

Validation and Decorators

The decorator pattern is a way of adding behavior to an object without changing the object itself. Decorators can be used to add a number of cross-cutting concerns to another class. One common use is adding input validation. Wrapping a command handler with a decorator makes it possible to validate the command before the handler processes it. The following command and command handler simply return a response.

public class Foo : IRequest<Response>
{
    public string Message { get; set; }
}

public class FooHandler : IRequestHandler<Foo, Response>
{
    public Response Handle(Foo message)
    {
        return new Response();
    }
}

If we wanted to ensure the command, Foo, has a message, we'd want to validate it. FluentValidation is a really handy validation package. Validation requires a validation class, and those classes need to be registered with Ninject.

This is an example of a simple validator. It checks to see if the Message property is empty. If it is, it will return an error.

public class FooValidator : AbstractValidator<Foo>
{
    public FooValidator()
    {
        RuleFor(ping => ping.Message).NotEmpty();
    }
}

Registering the validator with Ninject is pretty easy. This line binds all validators in the assembly.

kernel.Bind(scan => scan.FromThisAssembly()
    .SelectAllClasses()
    .InheritedFrom(typeof(AbstractValidator<>))
    .BindAllInterfaces());

The next step is to work in a class which will use the validator. The class below comes from Jimmy Bogard's site. It's a pretty common example of how to implement a decorator class which validates a command before it is passed on to the inner handler.

public class ValidatingHandler<TRequest, TResponse> : IRequestHandler<TRequest, TResponse>
    where TRequest : IRequest<TResponse>
{
    private readonly IRequestHandler<TRequest, TResponse> handler;
    private readonly IValidator<TRequest> validator;

    public ValidatingHandler(IRequestHandler<TRequest, TResponse> handler, IValidator<TRequest> validator)
    {
        this.handler = handler;
        this.validator = validator;
    }

    [DebuggerStepThrough]
    public TResponse Handle(TRequest message)
    {
        var validationResult = validator.Validate(message);

        if (validationResult.IsValid)
            return handler.Handle(message);

        throw new ValidationException(validationResult.Errors);
    }
}

The next step is working out how to configure Ninject to create a handler and decorate it. This blog post has a really good description of the process. I've distilled it down for validation below. It says, "Register the handlers. When a validating handler is created, inject a handler. When a handler is requested, return the validating handler." To be honest, I'm not quite sure why the ValidatingHandler has to be registered twice.

When Ninject is asked to create a handler, it first creates a validating handler, then injects the correct command handler and validator into it.

kernel.Bind(scan => scan.FromThisAssembly()
    .SelectAllClasses()
    .Where(o => o.IsAssignableFrom(typeof(IRequestHandler<,>)))
    .BindAllInterfaces());

kernel.Bind(scan => scan.FromThisAssembly()
    .SelectAllClasses()
    .InheritedFrom(typeof(IRequestHandler<,>))
    .BindAllInterfaces()
    .Configure(o => o.WhenInjectedInto(typeof(ValidatingHandler<,>))));

kernel.Bind(typeof(IRequestHandler<,>)).To(typeof(ValidatingHandler<,>));

The tests below capture what should happen when the command's message is present and when it is empty.

[Test]
public void ItShouldProcessCommands()
{
    var mediator = GetMediator();

    var command = new Foo { Message = "valid ping" };
    var response = mediator.Send(command);

    response.Should().NotBeNull();
}

[Test]
public void ItShouldValidateTheCommand()
{
    var mediator = GetMediator();

    var ping = new Foo();
    Action act = () => mediator.Send(ping);

    act.ShouldThrow<ValidationException>();
}

Conclusion

There it is. That's the basics of setting up command validation with Ninject, MediatR, and FluentValidation. It's also a good demonstration of how a decorator can be used to modify behavior without changing existing objects.

As always, there is a sample project on GitHub which has the code from this blog.


Thursday, November 17, 2016

NancyFX: Stateless Auth Example

An Update...

Some time ago, I did an example of using JWTs with Nancy. This post is an update to that: it shows a lighter way of consuming JWTs with a Nancy service. The example solution is hosted on GitHub.

This example uses the following libraries:
  • Nancy
  • Nancy.Hosting.Aspnet
  • Nancy.Authentication.Stateless
  • JWT
The test project adds the following libraries:
  • FakeItEasy
  • NBuilder
  • FluentAssertions
  • Nancy.Testing

A Word on JWTs

JWTs are one of the token formats that are quite common today. The jwt.io site has a pretty good introductory page on them. The short story is they are a compact way to securely move information between parties (to paraphrase the jwt.io site).

The Bootstrap Class

I'll start with the Nancy bootstrapper class. This class initializes the stateless authentication when the application starts. It gets an instance of the stateless authentication configuration and enables it with a call to StatelessAuthentication.Enable(pipelines, configuration).
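
A minimal sketch of that bootstrapper is below. The StatelessAuthConfigurationFactory type is named in the next section; its Build method name here is an assumption, and the wiring may differ slightly from the example solution.

using Nancy;
using Nancy.Authentication.Stateless;
using Nancy.Bootstrapper;
using Nancy.TinyIoc;

public class Bootstrapper : DefaultNancyBootstrapper
{
    protected override void ApplicationStartup(TinyIoCContainer container, IPipelines pipelines)
    {
        base.ApplicationStartup(container, pipelines);

        // Hook stateless (per-request) authentication into the request pipeline.
        var configuration = StatelessAuthConfigurationFactory.Build();
        StatelessAuthentication.Enable(pipelines, configuration);
    }
}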

Verifying The Token

The StatelessAuthConfigurationFactory returns the configuration used in the previous step. It contains the steps used to verify that the JWT is valid. This is done by ensuring the issuer and audience are correct, checking whether the token has expired, and checking that the user is valid.
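
Here is a rough sketch of what that factory might look like. The TokenDecoder helper is hypothetical; the real example decodes the JWT with the JWT library and performs the issuer, audience, expiry, and user checks described above.

using Nancy;
using Nancy.Authentication.Stateless;

public static class StatelessAuthConfigurationFactory
{
    public static StatelessAuthenticationConfiguration Build()
    {
        return new StatelessAuthenticationConfiguration(context =>
        {
            var authorization = context.Request.Headers.Authorization;
            if (string.IsNullOrWhiteSpace(authorization))
                return null; // no token means the request is treated as unauthenticated

            var token = authorization.Replace("Bearer ", string.Empty);

            // Decode the token and validate issuer, audience, expiration, and the user
            // (hypothetical helper). Returns a user identity on success, or null to reject.
            return TokenDecoder.GetUserIdentity(token);
        });
    }
}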

Securing the Endpoint

The last part of the process is securing the endpoint(s). Nancy has a number of extension methods to help with this. The Health module will respond to any call with an OK status. It's not a great way to report the health of a service, but it demonstrates the idea.

The Secure module demonstrates a few ways to use the extension methods. What's really cool is that the requirements can be placed at the module level as well as the endpoint level. There are two endpoints in the example: /secure and /needsclaim. Both endpoints require that the call use SSL and be made with a valid JWT. The /needsclaim endpoint further requires that the authenticated user be an administrator.
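
A sketch of what such a module might look like is below, using Nancy's 1.x route syntax. The route paths and the "admin" claim are assumptions based on the description above, not the exact code from the example solution.

using Nancy;
using Nancy.Security;

public class SecureModule : NancyModule
{
    public SecureModule()
    {
        // Module-level requirements: every route needs SSL and an authenticated user.
        this.RequiresHttps();
        this.RequiresAuthentication();

        Get["/secure"] = _ => HttpStatusCode.OK;

        Get["/needsclaim"] = _ =>
        {
            // Endpoint-level requirement: the caller must also hold the admin claim.
            this.RequiresClaims(new[] { "admin" });
            return HttpStatusCode.OK;
        };
    }
}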


The End

That's the basics for getting started with stateless authentication and Nancy.