Wednesday, January 19, 2011

Stardate 2011.19.1: Domain Events - Powerful, Stupidly Named

Today, I thought I'd point out something that is terrifically obvious: domain events may be the strongest concept to come to domain-driven design since the ubiquitous language. I almost always have an object of type DomainEvents these days because domain events are just so useful. It becomes a kind of repository for the different things that can happen in the domain that really have no other place - see Udi Dahan's website for more on the concept of domain events.

That said, DomainEvents is probably the worst name for a type in the history of mankind, and it might be a mistake to have the object at all. The reason is that, like the Commons projects I've ranted against before, it often becomes a kind of repository for trash that just has to be maintained over time. Sometimes there are raw Action&lt;T&gt;s in it, which is a mistake, and it's often a static class that exposes every possible domain event to be raised statically. That makes it difficult to test, but there's also something more obvious: virtually no ubiquitous language contains the concept of an explicit DomainEvents object.
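To make the criticism concrete, here's a minimal sketch of the shape I'm complaining about. All of the names here are hypothetical - this isn't any particular library's code, just the static-grab-bag style as I've seen it in the wild:

```csharp
using System;
using System.Collections.Generic;

// The antipattern: one static class holding every event the domain can raise.
// Static state makes it hard to test and it accretes junk over time.
public static class DomainEvents
{
    // Raw Action<T> handlers stored statically - exactly the mistake noted above.
    private static readonly List<Delegate> handlers = new List<Delegate>();

    public static void Register<T>(Action<T> handler)
    {
        handlers.Add(handler);
    }

    public static void Raise<T>(T domainEvent)
    {
        foreach (var handler in handlers)
        {
            // Only handlers registered for this event type are invoked.
            var action = handler as Action<T>;
            if (action != null)
                action(domainEvent);
        }
    }
}
```

It works, which is why it spreads - but note that there's no way to reset or scope those handlers per test, and no domain expert would ever recognize "DomainEvents" as part of the ubiquitous language.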

None of that takes away from the usefulness of the concept, though. Often we need a place in the domain for things that aren't part of the normal flow of execution of the business domain. For example, when filing a legislative bill, there are a crapton of things that can happen before the bill even reaches its first committee, and there's no explicit "here's what's going to happen every time!" You can't effectively model it as one function following another: if you have a SendToCommittee function, you can't prefix it with every function that might run before the bill goes to committee - there are just too many possibilities!

So how do you handle it? The best way, in my opinion, is basically with an event. There are many different ways to implement events in most programming languages. Since I'm exclusively a .NET developer, I'm going to talk about developing domain events in .NET, and C# in particular. I have no quarrel with VB development, nor with VB developers; I'm just not familiar enough with the language to talk about it intelligently.

The standard DDD way of handling domain events in .NET uses the weak-eventing pattern: you keep a list of WeakReferences to actions. When the event is raised, each action is executed if its reference is still alive, and skipped if it isn't. That list then becomes your domain event. It has to be a list of WeakReferences because subscribing would otherwise create a hard reference that keeps subscribers alive after they would normally have gone out of scope.
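Here's a minimal sketch of that pattern, assuming the simplest possible approach of holding the delegate itself weakly (the class name and shape are mine, not from any particular DDD library):

```csharp
using System;
using System.Collections.Generic;

// A minimal weak-eventing sketch: handlers are held via WeakReference, so
// subscribing does not keep subscriber objects alive.
public class WeakDomainEvent<T>
{
    private readonly List<WeakReference> handlers = new List<WeakReference>();

    public void Subscribe(Action<T> handler)
    {
        handlers.Add(new WeakReference(handler));
    }

    public void Raise(T args)
    {
        // Execute live handlers; prune the ones the GC has collected.
        for (int i = handlers.Count - 1; i >= 0; i--)
        {
            var action = handlers[i].Target as Action<T>;
            if (action != null)
                action(args);
            else
                handlers.RemoveAt(i);
        }
    }
}
```

One caveat: because this holds the delegate itself weakly, the delegate can be collected as soon as nothing else references it - production-grade weak-event implementations typically hold a weak reference to the handler's *target* plus a MethodInfo to avoid that. The sketch is only meant to show the skip-or-execute mechanics.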

My next post will have more useful content. I just wanted to point out that having an explicit type called DomainEvents is probably not the right way to go. In the next post, I'll discuss how I typically handle it, and why I think it's stronger and better OO design.

Friday, January 14, 2011

Stardate 2011.14.1: Friday's Book

I like to read. Heck, I love reading. Reading is also incredibly important. As such, I'd like to start a kind of Friday book club, where I review a book I've read or am reading and make recommendations based on it.

The first book is what should largely be considered the Bible of software design: Agile Principles, Patterns, and Practices, by Uncle Bob Martin. The link above points to the C# version of the book, but I actually found the C++ version to be a bit better.

The work Robert Martin put into this book is impressive. It is built upon a solid foundation of scientific thinking about programming, and I really appreciate that aspect of it. It is reminiscent of the good old days of C++ programming with Scott Meyers and Herb Sutter. Here are some highlights:


Solid SOLID discussion: This book was the first one to completely and clearly define all of the SOLID principles, to my knowledge. These include all of the principles that I've talked about before. They are incredibly useful and can almost be used to create a quantifiable design metric that tells you the quality of a given system. There are a lot of resources on the web about the SOLID principles, including Robert Martin's own blog, so I'll leave it up to you to research them.

Surprising project design discussion: When I first read the book, I hadn't really encountered much in terms of setting up physical project structure. This book goes into great detail about the principles you can use to organize the projects in your solutions, and it works incredibly well - even though the book's main focus is patterns and principles for the code itself, not just the structure of the projects.

Practical pattern discussion: Whenever I see that someone has written a blog post or a new book on patterns, I die a little on the inside. 99% of these efforts are garbage, and half of the remainder are written without any real code or practical examples. Robert Martin's books, on the other hand, never lack for code, and it's code that makes sense. It's not always enterprise-level code, but it can easily be analyzed and adapted to enterprise-level code.


BYOBB? That's a typo: While I like the book in general, my near-OCD about proper spelling, grammar, and punctuation makes most of Robert Martin's books somewhat difficult to read. I'm not sure who edits them, but most of the time a problem passage can be figured out with a little thought. To be fair, they're not significantly worse than other programming books in this respect.

Diagram it!: There's a huge, huge focus on diagrams and the like, which I find incredibly tedious. It may be quite, let's say, youthful of me, but I find diagrams distracting. They summarize the information reasonably well, but following them can create a significant disconnect from the actual code, which I find dangerous. On top of that, diagrams rarely keep up completely or accurately with the evolution of the code over time, so they become another piece of generated waste that very few people will ever look at, preferring instead to dive right into the code.

Thursday, January 13, 2011

Stardate 2011.13.1: Interfeces - Property-only Interfaces

When it comes to abstraction, I always try to be very careful: the abstraction should not dictate the data that implementing classes must contain. Doing so is very dangerous, and is one of the reasons that inheritance can be very dangerous. Even having a base class hold a piece of data common to all of its derivatives is unwise in most instances - either the derivative needs to manage that piece of data in some way itself, or, if it doesn't, why is the data in the base class rather than in its own class outside the inheritance hierarchy?

Sometimes, properties on interfaces are genuinely useful. For instance, Ayende once mentioned to me that something like SupportsRollbacks, a boolean, is perfectly valid on an interface defining an object whose data can be rolled back. However, a property-only interface should be a design stench in any system. I have found two paths to this same conclusion, so I think it's justified.

The first path: if an interface is only properties, then the type doesn't really manage its own data - some other class must obviously be managing the changing of those values. Looked at that way, it's obvious that encapsulation is pantsed.

The second path is much more general: interfaces are intended to represent some functional unit - some specific responsibility expressed in functions. Properties in .NET are technically functions, in that getting or setting one gives you a point where you can insert code; but from the public-facing interface, you can't tell a property from a public variable. Of course, Visual Studio will tell you which it is, but I'm talking from a purely practical standpoint here. How many times did your CS teacher tell you not to use public variables?

Anyway, since properties give no indication of any functionality they actually contain - they usually represent nouns in the system, not verbs - having an interface that is only properties is like having a base class with only public variables.
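To illustrate the contrast, here's a hypothetical sketch (the names are invented for the example). The first interface is the stench; the second names a real responsibility, with a capability property like the SupportsRollbacks example alongside actual behavior:

```csharp
using System.Collections.Generic;

// Property-only: nothing but data. This is a bag of public variables hiding
// behind interface syntax - nothing here manages its own state.
public interface IBillData
{
    string Title { get; set; }
    int SponsorCount { get; set; }
}

// Behavior-focused: the interface names a responsibility, and a supporting
// boolean property describing a capability is perfectly reasonable.
public interface IRollbackable
{
    bool SupportsRollbacks { get; }
    void Rollback();
}

// A toy implementation showing the behavioral interface in action.
public class InMemoryLedger : IRollbackable
{
    private readonly Stack<int> history = new Stack<int>();
    public int Balance { get; private set; }

    public bool SupportsRollbacks { get { return true; } }

    public void Deposit(int amount)
    {
        history.Push(Balance);  // remember the prior state so we can roll back
        Balance += amount;
    }

    public void Rollback()
    {
        if (history.Count > 0)
            Balance = history.Pop();
    }
}
```

Notice that IRollbackable tells you what the object *does*; IBillData only tells you what some other class will be poking at.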

It gets worse if the properties are custom types. What if the custom type is abstract in some way? Then you're in a real pickle (especially since those abstract types are probably pure properties themselves!). I've seen all of these things on real projects, and they show just how bad this kind of system is to work with - and especially to maintain.

Wednesday, January 12, 2011

Stardate 2011.12.1: Hasty Decisions

I recently stumbled upon the blog of GM Mark Bluvshtein. He's a relatively new Canadian grandmaster (I think) of chess who is terrifically pragmatic. It occurred to me that most good chess players *are* pragmatic, because you have to be: you have to know how to do the least amount of mental work while still getting the most out of your moves - otherwise you can run out of steam midway through the game, or through the tournament. More important than conserving energy, however, is that Mark seems to be hypercritical of his own play. This is another trait that I've noticed in the highest-level chess players, and it's something I think we need more of in programming.

In programming, I've noticed a lot of architect astronauts thinking that it's more important to define up-front an architecture which encompasses every aspect of the project, and that architecture must be strictly adhered to, no matter the consequences. If a lowly dev comes up with a reason why it won't work? Then they're wrong, just because they're not an architect!

You can see I have no fondness for these kinds of developers who think they know everything and aren't afraid of pushing their ideas onto you. At the same time, I have the same problem myself. So I've taken my naturally hypercritical approach from chess (and life in general) and begun applying it to my coding. I can't tell you just how important this has been to my personal development in chess, in life, and especially in my programming.

When I first started studying chess, back in 2002 or 2003, I was a pretty big nub. I mostly played blitz, of course, but even in my tournament games - despite the fact that I was not bad at all at the club level - I rarely reviewed them. While I'd do the occasional analysis after the game with my opponent, more often I'd go play blitz with the others who were done and just relax. Blitz games are incredibly relaxing, but they also build bad thinking habits: assuming that what we're planning isn't that bad, and that we don't need deep analysis because we need to get things done quickly.

This is terrible thinking. Just because we have to get something done quickly in programming, that doesn't mean we should not analyze it as deeply as we can. We need to be very critical of our own ideas so that we can weed out the moves that are shoddy, and only proceed when we have concrete, realistic goals. For architectures, this is perhaps the most important thing to have: If we do only a shallow analysis and define by fiat a series of "Shalls" and "Shall nots" without dogfooding, then the project is likely going to die an early death when the requirements start to change even a little bit. We have to create an atmosphere of questioning every decision when it comes to the architecture, because it's going to affect the *entire* project. If it is only shallowly analyzed, then it's way more likely that the project will fail, in my opinion.

Have you ever made any hasty decisions on a project that ended up hurting the overall quality of the project? Let me know in the comments!

Tuesday, January 11, 2011

Stardate 2011.11.1: Interface Segregation Principle vs Common Sense

The Interface Segregation Principle is an incredibly useful and powerful principle of software design. The ISP basically says that you should keep relatively lightweight interfaces so that when a person implements them in order to hook into your system, they do not need to implement tons of methods that aren't relevant to their class. I usually try to have no more than five functions on an interface - that's just a rule of thumb, but I find it to be a useful one.

But it's also very important to remember that, like all principles, it can be taken too far and make your life a living hell.

In particular, the interfaces should still be large enough to define the functionality being replaced. If you go further than that, putting each function and/or property into its own interface, you get the opposite of the problem the ISP solves: implementers who want to change how your system works won't be able to tell which interfaces have to be implemented to provide a given piece of functionality. This is especially true if you've shunned type safety and used object/dynamic throughout your system.

Some people try to solve that problem by composing their small interfaces into one unified interface that has all of the required functions and properties. That leads to other problems, though. When I decide to use an interface, I always ask myself: do I have some way, anywhere in the system, to change where or how that interface is instantiated, and do I hold a reference to the interface anywhere that would make switching the underlying implementation useful? If that point of extensibility doesn't exist, then having an interface around it isn't a good idea. With that in mind, if you have no way to change which underlying implementation is used, or the only references to the interface are in an inheritance chain, I think we can safely say the interface is likely unnecessary.
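Here's a hypothetical sketch of the over-segregation failure mode versus a right-sized interface (all names are invented for illustration):

```csharp
// Over-segregated: one member per interface. Which combination do you have to
// implement to actually replace a unit of functionality? You can't tell.
public interface ICanOpen  { void Open(); }
public interface ICanRead  { string Read(); }
public interface ICanClose { void Close(); }

// Right-sized: one cohesive responsibility, still well under five members.
public interface IDocumentSource
{
    void Open();
    string Read();
    void Close();
}

// A trivial implementation - note how obvious the contract is to fulfill.
public class StringDocumentSource : IDocumentSource
{
    private readonly string contents;
    private bool open;

    public StringDocumentSource(string contents) { this.contents = contents; }

    public void Open()   { open = true; }
    public string Read() { return open ? contents : null; }
    public void Close()  { open = false; }
}
```

The rule of thumb from above applies here: IDocumentSource earns its keep only if some part of the system can actually swap one implementation for another.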

What do you think? Do you agree? Leave some feedback.

Monday, January 10, 2011

Stardate 2011.10.1: Revenge of the Commons

In my day, I've seen a lot of shoddy architectures. One was a COM system that read data as strings from the user interface, converted it to its real datatypes, turned it back into strings to send from the client to the server, parsed it back into datatypes on the server, did its work, and sent it back the same way - the entire program written in about two hundred 3,000-line switch statements, copy-pasted between the server and the client. Another was a project that had only a few successful continuous-integration builds in its entire lifespan, thanks to such a low-quality codebase and such immature developers.

Today, I'd like to discuss the Commons; not the Boston Common, but rather Commons projects. These are typically a collection of interfaces, classes, and user controls that didn't seem to have an obvious spot in the domain. They often represent purely technical concerns with little connection to any real business functionality - say, a combo-box that automatically formats its input in various ways. There's no obvious place where that fits into the domain, so why not throw it into the junk drawer?

Wait, junk drawer? Is that really all Commons projects are? It seems so. While I understand the utility of having a Big Ball of Mud for things like this, they really do tend to cause significant issues later on down the road, because they tend to have insanely high afferent coupling. In other words, a lot of projects depend on some or all of the classes in the Commons project. So when one project needs a special version of a Commons class and changes that class directly - because the change fits all the places that project is aware of - it breaks every other place that uses the class, because those teams didn't realize anything had to be fixed and didn't check (whether because they didn't know they needed to, or because their test cases weren't well-defined).

Very high afferent coupling doesn't by itself mean the architecture is terrible - it just means problems are very likely when those classes need to be changed or fixed, so the overall design quality is probably lower. The main reason is that Commons projects tend to be highly unstable. Compare them with Core projects - projects deemed core either to the domain or to the architecture. Core projects are usually extremely abstract and tend to be more stable than any Commons project, because they define the interfaces and abstract classes that are in turn implemented specifically in the modules. Since they are abstract, there's a kind of implicit understanding that they should be stable within the domain - they aren't going to change much for any reason the business people might offer - and they should be relatively stable within the architecture as well.

Does that mean we should make Commons projects more stable and abstract? I would say we should certainly make them more stable, but it would be incredibly difficult to make them abstract. Oftentimes they do have abstract components buried in them, but those abstractions are not at all stable. Better, I think, to avoid Commons projects altogether if at all possible, and instead look for ways to embed those pieces into the modules themselves. I would even go so far as to say it's "OK" to have copies of each seemingly common class distributed throughout the modules. Whoa, whoa, whoa - what about the DRY principle, don't repeat yourself? Well, as it turns out, when DRY would cost you a lot more work because one class has to handle several different use-case scenarios, DRY should take a back seat to the SRP - the single responsibility principle. The SRP lets each copy serve its own module without breaking the other modules' implementations when bug fixes, domain changes, or architecture changes are required in a particular module.

I certainly understand that that leaves a kind of gritty after-taste in your brain when you have to do it, but in my experience, it is simply much, much easier to work with/maintain this kind of system than it is one with multiple Commons projects that serve every module individually. "Find and replace" is still better than "Make one class dangerously handle every responsibility."

Friday, January 7, 2011

Stardate 2011.7.1: Netbooks

I recently purchased a netbook for my wife. They're amazingly useful, but also pretty fun and cute. I use it largely to watch Hulu while I'm exercising on the elliptical. What experiences have you had with netbooks? Which do you recommend? Which weren't so good?

Thursday, January 6, 2011

Stardate 2011.6.2: Capptain's Law

Brooks' Law states that adding people to an already late project will make the project later. I think there's a similar law that I pose here for the first time (to my knowledge, please correct me in the comments if I'm wrong):

Adding developers beyond one to the bug-fixing effort on an app will increase the number of bugs.

I find this law very plausible and very close in spirit to Brooks' Law, especially when working on a Death March project. I believe it applies even if the developers added are incredibly good. It doesn't require a badly-written app; it happens regardless of the app's quality.

I do not have a real proof of the law, which, as a mathematician, annoys me to no end. It is one of those things that, in my experience, just seems intuitively true. Basically, no developer you add will know all of the requirements and all of the previous and current bugs that need to be fixed or implemented, so adding them increases the chance of introducing a new bug. With each new person, that probability compounds, because they may now step on the toes (and bugs) of everyone else in the group in some way. And when there are multiple stakeholders - which is almost always the case - it's all but guaranteed that nothing is ever going to get fixed properly and completely.

What do you think? Agree/disagree? Let me know in the comments.

Stardate 2011.6: Interesting Exercises

Lately, I've been delving a lot into DDD, or domain-driven design. I find it to be the single most elegant way to tackle really complicated systems. Some people say it's a lot of work, especially for small apps, but I find it completely natural. For example, at a previous job, the users needed a batch publishing system that would loop over a set of Word documents, convert them to HTML, and save them to a directory that already had publishing features turned on, so that HTML files placed into it would be served up to the web. I didn't have time to write more than the bare necessities. But because every class and functional piece of code was named so that the business users understood what it meant (for example, I used an IFilingCabinet interface for the documents - a term they used all the time, of course), they were able to take it and add to it. Even though they started with the barest essentials, they ended up with something really powerful.

I think it would be interesting for a book on DDD to propose exercises. These could be considered to be a kind of kata for DDD. As such, I'm going to begin publishing exercises that I've thought of, from the very simple kinds of things you might find useful around the house, to complicated, industry-standard systems that are extraordinarily painful and convoluted.

The first kata I propose for DDD is rather childish, but comes from a joke I made in an email the other day.

Name: Mouse Trap Simulation
Description: The idea behind this kata is to write a simple simulation of how a mouse trap should work. There are many variants of mouse traps:

  1. The standard spring-loaded bar mouse trap (with cheese)
  2. Glue paper
  3. Poisonous foods

Let's just consider those three. What are the pieces that make up every mouse trap? How can you write the code so that a mouse-trap entrepreneur could come in and extend your system without having to change the original code? What does your test harness look like - does it have a human setting the trap, a random mouse encounter?

One thing to note about it is that the requirements are not well-defined. Feel free to ask questions in the comments!

Goal: The goal of this exercise is to develop a simulated home/work environment where mice may be an issue. The code should be structured in such a way that when a normal person reads it, they can get some idea of how the system works. They shouldn't need to be programmers - for example, they might just be business analysts who dislike mice. The key to this exercise is creating a truly decoupled design that allows every mouse trap to do its own work, without making it difficult for an extender of the system to implement that work. If your base classes/interfaces make it difficult to extend them, then You're Doing It All Wrong ®.

Bonus points: For extra points, design your *test harness* to be extensible as well, because what if a trap can't truly be tested in the same way that other traps can be?
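To seed the discussion, here's one possible starting point - strictly my own sketch, not "the" answer to the kata. The names are invented; the point is that a new trap type (say, a humane cage) plugs in without touching existing code:

```csharp
// A mouse with just enough state for the simulation.
public class Mouse
{
    public bool IsHungry { get; set; }
    public bool IsCaught { get; set; }
}

// The extension point: an entrepreneur adds a new trap by implementing this,
// never by modifying the existing traps or the harness.
public interface IMouseTrap
{
    string Name { get; }
    bool Attempt(Mouse mouse);   // true if the mouse is caught
}

public class SpringLoadedTrap : IMouseTrap
{
    public string Name { get { return "Spring-loaded bar (with cheese)"; } }

    public bool Attempt(Mouse mouse)
    {
        // Only a hungry mouse goes for the cheese.
        if (mouse.IsHungry)
            mouse.IsCaught = true;
        return mouse.IsCaught;
    }
}

public class GluePaperTrap : IMouseTrap
{
    public string Name { get { return "Glue paper"; } }

    public bool Attempt(Mouse mouse)
    {
        // Glue paper doesn't care whether the mouse is hungry.
        mouse.IsCaught = true;
        return true;
    }
}
```

Note how each trap decides for itself what "attempting" means - that's the decoupling the goal paragraph asks for, and the test harness can treat every trap identically through IMouseTrap.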