A co-worker sent me an article, IoC Battle in 2015..., comparing the speed differences between several IoC containers. It's a topic that comes up every once in a while, usually when developers are debating which container is best. I'm a big fan of dependency injection, so I was interested to see the results.
I Ran It And Then...
After seeing the results, I forked the linked repository. My first run of the benchmark app produced results similar to the article's:
I dug into the code, wanting to see what was up with Windsor. The registration code was really funky: each component was registered in a separate registration call. Regular users of Windsor know this isn't the idiomatic way to register components.
The old registrations looked like this:
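(The original post showed the code as an image; this is a representative sketch of the one-call-per-component style, with placeholder component names rather than the benchmark's actual types.)

```csharp
// One Register() call per component -- each call pays Windsor's
// registration overhead separately.
container.Register(Component.For<IWebService>()
    .ImplementedBy<WebService>().LifestyleSingleton());
container.Register(Component.For<IAuthenticator>()
    .ImplementedBy<Authenticator>().LifestyleSingleton());
container.Register(Component.For<IStockQuoteService>()
    .ImplementedBy<StockQuoteService>().LifestyleSingleton());
```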
After I updated them, they looked like this (I'm just showing the singleton registrations):
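(Again a sketch with placeholder names: Windsor's `Register` method accepts multiple registrations, so the same components can be batched into a single call.)

```csharp
// The same singleton registrations batched into one Register() call.
container.Register(
    Component.For<IWebService>()
        .ImplementedBy<WebService>().LifestyleSingleton(),
    Component.For<IAuthenticator>()
        .ImplementedBy<Authenticator>().LifestyleSingleton(),
    Component.For<IStockQuoteService>()
        .ImplementedBy<StockQuoteService>().LifestyleSingleton());
```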
I reran the benchmark and got these results. The most striking difference was in the transient timings, which dropped to roughly 10% of the first run. I didn't spend much time on this; it would be interesting to see whether things could be optimized further.
When benchmarking, it's important to use each library the way it was designed to be used; otherwise you're measuring the misuse, not the library.
Different containers run at different speeds. The resolutions covered 1 million "web service" components, each with a fairly deep object graph. So, yes, it took Windsor some 30 seconds to do this. But it's unlikely a single app will ever do that. When you hit that kind of performance demand, it's time to start scaling horizontally (more on that some other time).
In a Nutshell
Donald Knuth is quoted as having written, "The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times; premature optimization is the root of all evil (or at least most of it) in programming." If you've reached the point where the speed of resolving 1M object graphs actually matters, then you have a problem worth solving. Until then, use the container that works for you.
I put my fork of this project on GitHub.