Wednesday, 29 September 2010

Loosely coupled systems

Rico recently posted an article, Less Loosely Coupled Than Meets The Eye, which looks at something I have been thinking about for a long time from another perspective. We all try to write loosely coupled systems, and to some extent this is possible, for example by using interfaces in code, or Web Services, to decouple things.

This works to some extent, and is getting better with features such as Code Contracts in .NET and Data Contracts in Web Services and WCF. However, there are always unwritten or assumed behaviours of an API which create dependencies the architect or developer is unaware of, and that implied behaviour can force you to rewrite and recompile parts, or all, of your application.
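As a minimal sketch of this problem (in Python, with hypothetical names), consider an interface that two implementations both satisfy formally, while differing in an unwritten behaviour that callers have come to depend on:

```python
from abc import ABC, abstractmethod

class UserStore(ABC):
    """The formal contract: find_user takes an id and returns a user record."""
    @abstractmethod
    def find_user(self, user_id: int) -> dict:
        ...

class InMemoryStore(UserStore):
    def __init__(self):
        self._users = {1: {"id": 1, "name": "Ada"}}

    def find_user(self, user_id: int) -> dict:
        # Implicit behaviour the interface never states: raises KeyError
        # for an unknown id rather than returning None.
        return self._users[user_id]

class DatabaseStore(UserStore):
    def find_user(self, user_id: int) -> dict:
        # A later implementation that returns None for an unknown id.
        # It satisfies the interface, yet callers written against
        # InMemoryStore's behaviour now break at runtime.
        return None

def display_name(store: UserStore, user_id: int) -> str:
    try:
        return store.find_user(user_id)["name"]
    except KeyError:
        return "<unknown>"

print(display_name(InMemoryStore(), 2))  # "<unknown>" -- the KeyError is handled
# display_name(DatabaseStore(), 2) instead raises TypeError: the caller
# subscripts None, because it depended on behaviour the contract never wrote down.
```

Both classes compile against the same interface; only the assumed error-handling behaviour couples the caller to one of them.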

One way around this is to use unit tests to assert expectations about the behaviour of dependent code. Mocking frameworks are useful in this situation, but they often don't help you identify when a subtle behaviour in an upstream or downstream system or dependency has changed. Integration testing can certainly help, but it can never truly capture all of these complex behaviours in systems.
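The limitation of mocks can be shown in a few lines. This sketch (Python's stdlib `unittest.mock`; the rate-service names are illustrative assumptions) pins an expectation into a mock, and the test keeps passing even after the real dependency's behaviour has drifted:

```python
from unittest.mock import Mock

# The mock encodes the behaviour we *assume* the dependency has...
rate_service = Mock()
rate_service.get_rate.return_value = 0.21  # assumption: returns a fraction

def price_with_tax(service, net: float) -> float:
    return net * (1 + service.get_rate("GB"))

assert price_with_tax(rate_service, 100.0) == 121.0  # passes forever

# ...but if the real service silently changes to returning a percentage
# (21.0 instead of 0.21), this mocked test still passes while production
# starts computing 100 * 22.0. The mock verifies our expectation of the
# dependency, not the dependency's actual behaviour.
```

This is exactly the gap the article describes: the test suite is green while the subtle upstream behaviour it encoded has already changed.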

Ricco says:
"I don’t know that it is possible to write anything like a unitary software
system in a way that is truly loosely coupled. It’s not that you can’t make
boxes and lines that are cleanly separated in all sorts of pretty ways, though
that’s hard enough. The problem is that even if you manage to do that, what you
end up with is actually still pretty tightly coupled.

What do I mean?

Well, let me use Visual Studio as an example. It’s made up of all kinds of
extensions which communicate through formal interfaces. Putting aside the warts
and just looking at how it was intended to work it seems pretty good. You can
make and replace pieces independently, there is a story for how new features
light up; it’s all pretty good. Yes, it could be better but let’s for the moment
idealize what is there. But is it loosely coupled, really?

Well the answer is heck no."

He is correct: unitary software (software hosted in the same process, on the same machine, or even on the same network) cannot be isolated from the impact of another part of the system becoming errant and interfering with your perfectly tested, perfectly working software.

The closest we can probably get at the moment, with the tools, processes and software we have, is to isolate key parts of the system (or other systems) behind a buffered queue, with data and operation contracts. This provides isolation from data changes and, to some extent, from behavioural changes, process memory requirements, CPU time, and timing and race conditions, because we depend only on the behaviour of the messaging system (though even then we are relying on subtle behaviours of the message queue technology).

Does it matter?

In simple systems, probably not. Just ensure that your classes are loosely coupled by using interfaces and dependency injection, along with unit testing and a mocking framework to verify the state and behaviour of the system.
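For the simple case, dependency injection plus a mock covers both kinds of verification. A minimal sketch (Python's stdlib `unittest.mock`; the class and method names are illustrative assumptions):

```python
from unittest.mock import Mock

class OrderProcessor:
    # Dependency injection: the notifier arrives via the constructor,
    # so a test can substitute a test double for the real implementation.
    def __init__(self, notifier):
        self._notifier = notifier

    def process(self, order_id: int) -> str:
        self._notifier.send(f"order {order_id} processed")
        return "done"

# The unit test injects a mock and verifies both state and behaviour.
notifier = Mock()
processor = OrderProcessor(notifier)

assert processor.process(42) == "done"                       # state
notifier.send.assert_called_once_with("order 42 processed")  # behaviour
```

The state assertion checks what the method returned; the behaviour assertion checks how it interacted with its collaborator, which is the distinction drawn above.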

For medium-sized projects you are going to have to do all of the above, plus ensure loose coupling between the systems involved, as well as defining your data, operation and version contracts.

Large systems will always require complex end-to-end testing. However, where I think we can make advances is in testing the interface specification itself.

With interfaces today, we write a specification, then each side of the integration goes away and builds its part; each side uses a stub or driver to check that its half works while the other side completes theirs. Then comes the big-bang integration, which costs a great deal of money and time and usually produces a system that works only for the tested scenarios. This is why systems integration projects are so difficult.

Think of an analogy: HTML and CSS. HTML is a roughly 20-year-old specification which has gone through five versions, and yet browser manufacturers still render the same HTML very differently. Things are moving in the right direction with the Acid tests and the W3C test suites, which allow the implementations to be tested against a common standard.

So this presents a way forward. Write a spec for your interface; define the data and operation contracts as well as the versioning story. Then write a program which can sit between the two sides of the interface and provide a common set of behaviours. At least then both systems will be dealing with the same "bugs" in the behaviour. Don't have each implementor write a stub or driver that will inevitably differ; provide something they can actually run against.
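One shape that shared program could take is a runnable contract suite shipped with the spec. This is a sketch under stated assumptions (the account-lookup interface, field names and contract rules are all hypothetical): both the provider and any consumer-side stand-in run the same tests, so everyone integrates against one set of behaviours rather than two divergent hand-written stubs:

```python
# A shared, runnable contract suite shipped alongside the interface spec.
# Both sides of the integration run THIS, not their own private stubs.
# (The interface and its rules here are illustrative assumptions.)

CONTRACT_VERSION = "1.0"

def run_contract_suite(lookup):
    """`lookup` is any callable implementing the agreed interface:
    lookup(account_id: str) -> dict with keys 'id', 'balance', 'version'."""
    failures = []

    resp = lookup("A-1")
    if set(resp) != {"id", "balance", "version"}:   # data contract
        failures.append("response fields differ from spec")
    if resp.get("version") != CONTRACT_VERSION:     # versioning story
        failures.append("wrong contract version")
    if not isinstance(resp.get("balance"), float):  # operation contract
        failures.append("balance must be a float")
    return failures

# A reference implementation shipped with the spec; implementors swap in
# their own and must still pass the identical suite.
def reference_lookup(account_id):
    return {"id": account_id, "balance": 0.0, "version": CONTRACT_VERSION}

print(run_contract_suite(reference_lookup))  # [] -- no failures
```

Each team runs the suite against its own implementation during development, so by integration time both sides have been exercised by the same behavioural checks, and any disagreement is with the spec rather than with the other team's stub.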

1 comment:

  1. I have no idea what you are on about - but it sounds fascinating