"Silo, where are you!"

Thursday, 27 May 2010 09:19 by ebenroux

I just read an interesting article.

The reason I say it is interesting is that silos have been denounced for quite some time now, and they should be.  Yet here is someone who appears to be a proponent thereof:

"Silos are the only way to manage increasingly complex concepts"

Almost well said.  I would paraphrase and say:

"Bounded Contexts are the only way to manage increasingly complex concepts"

But bounded contexts do not solve complexity on their own.  You definitely need a sprinkling of communication between them.  A lack of communication is what leads to silos.  Communication is a broad term, but in this sense it simply means we need a publish / subscribe mechanism between the bounded contexts so that relevant parties are notified when a particularly interesting event takes place.
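As a rough sketch of what such communication could look like (the event and handler names are purely illustrative and not tied to any particular service bus implementation): the human resources bounded context publishes an event and the payroll bounded context subscribes to it, storing only the data it cares about.

using System;

// published by the 'human resources' bounded context when an employee is registered
public class EmployeeRegisteredEvent
{
    public Guid EmployeeId { get; set; }
    public string Name { get; set; }
}

// handled in the 'payroll' bounded context by whatever subscription mechanism the bus provides
public class EmployeeRegisteredHandler
{
    public void Handle(EmployeeRegisteredEvent e)
    {
        // create the payroll-side record from the event data;
        // no call back into the human resources system is required
    }
}

The publishing context never knows who is listening, which is exactly what keeps bounded contexts communicating without collapsing into silos.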

Without such communication, systems duplicate data that does not directly relate to the core of that system.  In this way systems become bloated, difficult to maintain, and complex to integrate with other systems.  Eventually someone will come up with the grand idea of a single, unified system to represent everything in the organisation.  Enter the über silo.  At this point we can say hello to a 3 to 5 year project, and after the first 90% the second 90% can begin.

Don't do it!

There is an aggregate root in my soup!

Thursday, 8 April 2010 10:01 by ebenroux

So there is another discussion around value objects and repositories on the domain-driven design Yahoo group.  In this case it revolves around the definition of a Country.

Firstly, a repository should return an entity; never a value object.  Well, so the definition goes.  One may want to break that rule, but it probably is not necessary.  Now, the definition of a value object vs. an entity has been rehashed to the point of boredom, but another way to look at a value object is as a value (albeit a composite value) that never changes.  The date '27 April' never changes.

But getting back to a country: one often hears an argument along the lines of the name of a country being able to change.  Actually, no.  It doesn't.  A new country comes into being.  The fact that Rhodesia became Zimbabwe does not mean that the country of Rhodesia no longer exists (as a value object that we care about).  It may not be of any use in the present, or ever again in the future, but it definitely did exist.  The capital Salisbury was never in Zimbabwe; it was in Rhodesia.

It may seem a bit confusing when a value object appears to have an identity, or even when it does have an identifier.  The fact of the matter is that there are different codes by which a country may be referred to.  But the thing to note is that those codes do not change either.

Now, if we make a country an entity, suddenly 3 different databases can have 3 different IDs for it.  This happens quite readily when using synthetic keys for entities and aggregate roots, even though they may have some stable, unchanging natural key.  Something I'll discuss in another post.
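As a minimal sketch of a Country modelled as an immutable value object (the property names and the equality-by-value implementation below are my own illustration, not a prescription):

public class Country
{
    public string Code { get; private set; }    // e.g. an ISO 3166 code; codes do not change either
    public string Name { get; private set; }

    public Country(string code, string name)
    {
        Code = code;
        Name = name;
    }

    public override bool Equals(object obj)
    {
        var other = obj as Country;

        return other != null && other.Code == Code && other.Name == Name;
    }

    public override int GetHashCode()
    {
        return Code.GetHashCode() ^ Name.GetHashCode();
    }
}

Two instances with the same code and name are the same value; there is no identity to track and nothing to update.  Rhodesia and Zimbabwe are simply two different values.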


More or less exactly the same

Tuesday, 2 March 2010 09:13 by ebenroux

In high school I had a maths teacher who would show us various problems with solutions and simply state that they were "More or less exactly the same".

I recently came across the Infinite Monkey Theorem.  This got me thinking about how we humans approach some things in a really weird way.  I mean, having a random number generator create the complete works of Shakespeare (or any body of work, for that matter) is just plain silly.  Yet the probability is somehow calculated.  Now, I believe some things are simply not possible, nor even probable.

When one switches on a television that has not been tuned you see some form of Brownian motion.  This seems quite random to me.  In fact, we can simulate the same thing on a computer by simply placing dots all over the display; something, I'm sure, most programmers have done at some stage while learning to code.

Taking into account how many possible images exist, one may expect that at some stage a recognizable image would appear.  It never will.  It seems strange to say that, since it means the probability associated with a recognizable image is a big fat 0.

What is the probability of taking a tour through the universe, digging through every single planet, and finding a perfectly formed clay brick?  I mean a simple brick, not something that merely looks like a brick.  Even that seems strange.

The big thing is that we have intelligence on our side.  Our DNA contains information that didn't appear at random.  Having a bunch of monkeys type up the text in the latest copy of People magazine seems trivial compared to that.

To summarise: saying something is so doesn't necessarily make it so, even if you use a mathematical model.
Categories:   Personal | Religion

Why *I* think software fails

Friday, 12 February 2010 07:56 by ebenroux

Since the software industry has now been around for quite some time, it is possible to look at the statistics around software failure.  Although a great deal has been written and said about software development failure, there is probably not too much in the way of anything that can be done about it.  There should be.

Looking at the history of software development, it is quite easy to blame the process or method used to manage the development.  Software development is a broad discipline encompassing quite a number of spheres, so singling out any particular cause is quite difficult.  That is why there are so many lists specifying the reasons for software failure, e.g.:

There are even good suggestions in there as to what can be done.

Software fails because it sucks

Now that was simple.  But why does it suck?  What makes it suck?  I was watching a podcast by David Platt about his book on the subject and he said a very interesting thing:

"...that's using the computer to do what the computer is good at so the human can do what the human is good at."

This immediately stood out, because it has been my experience that as soon as one tries to implement flexible functionality in any system it becomes extremely complex.  Anything flexible is not only easy for humans to grasp and do, but also very obvious.  Unfortunately, the geniuses in the chip-making industry have yet to come up with the 'obvious chip'.  That is why one would, as a software developer, explain a particularly complex problem to someone (like a business analyst or a domain expert) and get a response of "So what's the problem?".  They typically don't realize that the human brain is OK with fuzzy logic, but that fuzzy logic cannot be represented very well by a simple algorithm.

Defining complexity

When I think about software complexity, Roger Sessions always comes to mind.  From what I loosely recall, he has mentioned that complexity in a system increases as the number of states increases, and that complexity between systems increases drastically as the number of connections between them increases.  And then, of course, within a system there is also the coupling aspect in terms of class interaction.

So the following are the main culprits in the complexity game:

  • Heuristic level required
  • Number of states
  • Coupling (intra- and inter-system)

How could we prevent the failures

I read Re-imagine! by Tom Peters a few years back and if memory serves that is where I read:

"Fail faster, succeed sooner."

Sounds very agile to me.  So the quicker we can prove that a specific technology, product, or development project is not going to work, the better.  Another approach would be to burn $10,000,000 of your client's money and keep on keeping on until your momentum runs out.  At *that* point there will be anger, finger-pointing, tears, blows, and in all probability a good measure of litigation.

Granted, there are definitely systems that require heuristics, and it is a rather specialised area that may require expertise in areas that the normal run-of-the-mill software developer simply does not have.  It is hard enough keeping abreast of changes in technology.  Software developers are not accountants, pharmacists, inventory managers, logistics experts, statisticians, or mathematicians.  Yet we find ourselves in situations where we are simply given a vague direction to go in and, 'since we are so sharp', we'll just figure it out.  Maybe.  Probably not.  That is why a domain expert is so crucial to software success.  A business analyst would find themselves in the same quandary as a programmer.  They document things but may not have the required expertise either (although some may).  This is why management commitment is also listed quite often among the reasons for software failure.  Management needs to ensure that the domain experts are indeed available.

Technology to the rescue! Not.

Many companies are led to believe that their problems are related to technology.  Now, sometimes the latest technology would help, and sometimes old technology simply becomes obsolete.  But more often than not it is a case of a vendor telling the client that they simply need to use a single platform along with all the merry products built thereon.  A quick visit to the common sense centre of the brain will tell the client that they may well merge with another company in the future, bringing different technologies into play, or purchase a product that relies on a different technology altogether.

A homogeneous technology environment will definitely not solve anything and is somewhat of a pipe dream in any but the simplest environments.

Products to the rescue!

OK, so we'll throw in a rules engine, workflow engine, CRM, CMS, or any other COTS product and voilà!  We'll simply build on top of that.  The thing is: each product satisfies a specific need.  By the time most folks realise that there is no one ring to rule them all, it has all gone south.

Heuristic level / Flexibility

Too many systems try to do too many things.  I once worked on a system that was, essentially, a loan management system.  It had been in development for over three years with a constant team of around twenty folks, and I don't think it will ever see the light of day.  One thing the client does in their daily life is authorise loans, or extensions on loans, based on a motivation.  Traditionally this motivation document has been typed up in a word processor.  In one instance a loan of, say, $10,000 needs to be authorised for a small company.  Not much motivation is required in this case, since it is a small risk: half a page does the trick.  In another case a $20,000,000 loan needs to be scrutinized.  Here a decent document is required, containing the company structure along with directors, lists of securities, and so on and so forth.  All of this was put into the system, along with all the input required to make the authorisation document work.  It is totally unnecessary to do so.  Absolutely no value is gained by doing so.  Just link to the original document or scan it in.  Extreme complexity and cost were added for very little value.

Number of states

This part is actually slightly related to coupling anyway.  One of the examples used by Roger Sessions is that of a coin.  A coin has two sides: heads or tails.  So there are two states that one needs to test for in your system.  But how about two coins: heads-heads, heads-tails, tails-heads, or tails-tails?  OK, that was only four.  But how about 3 coins?  Well, that would be eight combinations, since it is 2 ^ 3.

Now, to be honest, that wasn't too bad.  But let's take a die.  Six sides there, so six states.  Two dice would be 6 ^ 2 = 36 and 3 dice would be 6 ^ 3 = 216.

The only way to reduce this complexity is to reduce the number of states, or at least to reduce the coupling between the states.  In our dice example, if we could find a way to keep the 3 dice independent of each other, the number of states to deal with would be 6 + 6 + 6 = 18: a *lot* less than 216.
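A trivial snippet to make the arithmetic concrete (the names are mine, purely for illustration):

using System;

class StateCount
{
    static void Main()
    {
        const int sides = 6;
        const int dice = 3;

        var coupled = Math.Pow(sides, dice);    // 6 ^ 3 = 216 combined states
        var independent = sides * dice;         // 6 + 6 + 6 = 18 states in total

        Console.WriteLine("coupled: {0}, independent: {1}", coupled, independent);
    }
}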

Coupling

I have blogged about coupling before.  As software developers we need to identify different bounded contexts (as per Eric Evans' Domain-Driven Design) and implement them independently of each other, and I have a sneaky suspicion that this is more-or-less what Roger Sessions does with his simple iterative partitions.  The problem here is: how?

Quite a few of the lists of reasons for software failure include skill level as a factor.  However, even highly skilled developers may get this wrong.  So the software will still fail.  A highly coupled system or enterprise solution is very, very fragile.

Service-Orientation and Tacit Knowledge

The key to decoupling bounded contexts, or ending up with simple partitions, is service-orientation.  But it has to be done right.  A service bus is all well and good, but that other technology we need to talk to in our heterogeneous environment is problematic.  A possible solution to that interaction is bridging between the various service bus implementations so that the systems remain unaware of each other.  Of course, most companies probably will not find themselves in such a position, but it is a way to handle different technologies.

How would one actually go about stringing all these bits together?  The answer is obviously not that simple since all the knowledge and experience required cannot really be measured.  I think the reason for this is that it falls into the realm of tacit knowledge.

The best thing for companies to do is to keep trying new ideas on small or experimental projects and then to see whether something useful can be produced.  Take the ideas developers and others have and give them a bash.  The Big-Bang kind of approach has proven itself to be a mistake time and time again, so growing something in an organic way may be best.  There is also nothing wrong with 'shedding some pounds' as we move along.

A last word on Tasks vs. Functions

Over the years I have found that we as developers tend to develop software that is a real pain to use, since we are not the ones using it.  Giving a user a bunch of loose ends does not help.  As an example I would like to use my Room2010 site, which I actually developed for myself and then had to tweak:

Anyone can register a free accommodation listing.  At a later stage they may choose to upgrade to a premium listing.  They can then send me up to 10 images that I will upload.  So here is the problem: I had a function in my system where I could mark a listing as premium.  I could then manage the images by browsing to each image locally on my disk and uploading it.  I had to do this for every image received.  The only problem was that I received images of various sizes that all had to be scaled to 200 x 150.  So before uploading I had to use a graphics package to change the sizes.  Quite a tedious affair, really.

So I created a small Windows application where I simply specify the listing number to process.  The images I received would be stored in their original format in the relevant listing folder.  Once I say GO the system would talk to the website and set the listing to premium, the images would be resized, compressed and uploaded.  As a last step the site would be instructed to send an e-mail to the owner of the listing notifying them of the changes.
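The resizing step itself is simple enough.  A minimal sketch of what it could look like using System.Drawing follows; the class and method names are illustrative and this is not the actual Room2010 code:

using System.Drawing;
using System.Drawing.Drawing2D;

public static class ListingImage
{
    public static Bitmap Resize(Image original, int width, int height)
    {
        var resized = new Bitmap(width, height);

        using (var graphics = Graphics.FromImage(resized))
        {
            // high-quality interpolation so the scaled 200 x 150 image still looks decent
            graphics.InterpolationMode = InterpolationMode.HighQualityBicubic;
            graphics.DrawImage(original, 0, 0, width, height);
        }

        return resized;
    }
}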

So a whole host of functions were grouped together to accomplish a simple task.

Our software needs to be simple to use.

How I got a context menu onto my Google Maps API V3

Tuesday, 9 February 2010 09:34 by ebenroux

I have a jQuery context menu plug-in that I have been using for a while.  As things go, the context menu is attached to the relevant element (my map div) like so:

$('#map-canvas').contextMenu('context-menu', {options});

This is all well and good, except that when the map is rendered all kinds of magic happens which basically means my context menu is never displayed.  This is in all likelihood because Google Maps populates the map canvas div with elements that obviously do not have my context menu attached.

So the approach I took was to somehow tell the context menu to pop up where I need it.  But then I couldn't find a context menu that had a simple Show or Display or PopUp method (maybe I just missed it).  So the next step was to trigger the 'contextmenu' event on the map-canvas div.

First I needed to add an event listener for the rightclick event on the map that would call the openContextMenu function.  However, the event object passed to the function by Google Maps is a MouseEvent, which only has a latLng property specifying the latitude and longitude.  From this I needed to get the XY pixel coordinates.  Fortunately there is a MapCanvasProjection object that can return a Point from the fromLatLngToContainerPixel method.  Unfortunately the MapCanvasProjection object cannot be obtained from the map itself but rather from an overlay.  So after some googling I found a bit of code on StackOverflow that does just that.

If anyone has a more elegant way to achieve the same thing please let me know.

Here is the complete code:

var map;
var mapOverlay;
var contextMenuEvent;

$(document).ready(function() {
    var sw = new google.maps.LatLng(-34.80, 16.43);
    var ne = new google.maps.LatLng(-22.06, 32.75);
    var bounds = new google.maps.LatLngBounds(sw, ne);

    map = new google.maps.Map($("#map-canvas").get(0), { mapTypeId: google.maps.MapTypeId.ROADMAP });

    map.setCenter(bounds.getCenter());
    map.fitBounds(bounds);
    map.enableKeyDragZoom();

    mapOverlay = new MapOverlay(map);

    google.maps.event.addListener(map, "rightclick", openContextMenu);

    $.contextMenu.defaults(
      {
          menuStyle:
         {
             width: '200px'
         }
      });

    $('#map-canvas').contextMenu('context-menu', {
        bindings:
      {
          'context-menu-zoom-in': function(trigger) {
              map.setZoom(map.getZoom() + 1);
          },
          'context-menu-zoom-out': function(trigger) {
              map.setZoom(map.getZoom() - 1);
          },
          'context-menu-center-map': function(trigger) {
              map.setCenter(contextMenuEvent.latLng);
          }
      }
    });
});

function openContextMenu(e) {
    var ev = new jQuery.Event('contextmenu');

    var p = mapOverlay.getProjection().fromLatLngToContainerPixel(e.latLng);

    ev.pageX = p.x;
    ev.pageY = p.y;
   
    contextMenuEvent = e;

    $('#map-canvas').trigger(ev, [e.latLng]);
}

MapOverlay.prototype = new google.maps.OverlayView();
MapOverlay.prototype.onAdd = function() { }
MapOverlay.prototype.onRemove = function() { }
MapOverlay.prototype.draw = function() { }

function MapOverlay(map) { this.setMap(map); }

Categories:   Google Maps | jQuery

I cannot know what I do not know

Sunday, 7 February 2010 20:07 by ebenroux

Sounds silly, right?

But it is interesting to look at something like the Dreyfus model of skill acquisition and pull it through to software development and what I have experienced over the years.  Whenever there is a paradigm shift one inevitably goes back to the Novice level.  The fact that I have been doing some thing A for 10 years may not count for much when I move to some thing B.  I say may not because A and B may be related, so I may not end up all the way back at the Novice level.

I think the fact that software development is such an immensely broad field makes it terribly difficult to gauge who fits in where.  Coupled to this is the fact that software development is only a means to an end.  That end is the automation of some domain that, in most cases, has absolutely nothing to do with software development: e.g. Accounting, Inventory, Steel Sheet Cutting, Medicine, Asset Management, and so on.

It is also extremely difficult to convince someone that they are on the wrong path.  Hopefully, you yourself are on the correct path, that is, at a higher level of skill.

Another obstacle is open-mindedness.  Sometimes folks are set in their ways.  They have no intention of learning anything new, since what they have gets the job done just fine.  So maybe that is why like-minded individuals are required in very small teams to get the job done.

 


Momentum Breakers vs Momentum Makers

Friday, 5 February 2010 17:13 by ebenroux

I came across this article by John C. Maxwell.  These ideas can be equally applied to software development.


Applying CQRS

Friday, 22 January 2010 11:21 by ebenroux

Whenever a new technique comes to the fore it takes a while for it to settle in.  Quite often one tries to apply it everywhere, and that is when it seems as though it doesn't always fit.  Of course, one should at least *try* to give it a go, but there are instances where it may just not be the right tool for the job.

Command / Query Responsibility Segregation is one such animal.  It may not be applicable in every scenario.

Now don't get me wrong.  CQRS really is the way to go, but it works only when there is actually data to query.  That probably seems obvious.  So if the data you require is *not* there and you want to CQRS it, then you first need to create it.  Let me present an example or two.

One may often hear a question along the lines of:

"I need to present the total for the quote / order to the user.  This total will incorporate tax and discounts and other bells and whistles.  It seems as though I need to duplicate my domain behaviour on the front-end.  How can I use CQRS to do this?"

More recently someone asked about repeating calendar events on the DDD group.  Same thing.  The data is not there.  It cannot be queried.  So CQRS does not work when domain interaction is required. 

So how now?  There are two options:

  • Request the domain to calculate the results, persist it, and then query it - CQRS
  • Request the domain to calculate the results, and return it - no CQRS

It really is as simple as that.

So for situations where you do not need, or want, to persist the results, get the domain to simply return the transient results and throw them away when you are done.
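To make the two options concrete, here is a minimal sketch; every type in it (Quote, IQuoteRepository, IQuoteTotalQuery) is hypothetical and only illustrates the shape of the calls:

using System;

public class Quote
{
    public decimal Total { get; private set; }

    public decimal CalculateTotal()
    {
        // tax, discounts, and other domain behaviour live here
        Total = 0m;

        return Total;
    }
}

public interface IQuoteRepository
{
    Quote Get(Guid id);
    void Save(Quote quote);
}

public interface IQuoteTotalQuery
{
    decimal GetTotal(Guid id);
}

public class QuoteTotals
{
    private readonly IQuoteRepository repository;
    private readonly IQuoteTotalQuery query;

    public QuoteTotals(IQuoteRepository repository, IQuoteTotalQuery query)
    {
        this.repository = repository;
        this.query = query;
    }

    // option 1: the domain calculates, the result is persisted, and the read side queries it (CQRS)
    public decimal PersistedTotal(Guid quoteId)
    {
        var quote = repository.Get(quoteId);

        quote.CalculateTotal();
        repository.Save(quote);

        return query.GetTotal(quoteId);
    }

    // option 2: the domain calculates and the transient result is simply returned (no CQRS)
    public decimal TransientTotal(Guid quoteId)
    {
        return repository.Get(quoteId).CalculateTotal();
    }
}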

Therefore, it is not a case of CQRS Everywhere.

Categories:   CQRS | Database Design

DDD != AR

Tuesday, 24 November 2009 07:12 by ebenroux

I have been struggling with many aspects of OO development over the last couple of years, all in an effort to improve my software.  I have, along with others, been trapped in the DDD Aggregate Root thinking that appears to be everywhere.  It appears as though there is an opinion that ARs are the centre of the universe, and I have come to the conclusion that it must have something to do with the consistency boundary afforded by an AR.  This seems to have become the central theme; almost as though it became necessary to define DDD in some structural sense.

So, all-in-all, the idea that the AR changes from use-case to use-case is not true.  The consistency boundary does, however, change.  What contributed to my thinking seems to be the fact that everyone thinks that the only object that may be loaded by a Repository is an AR.  This too is a fallacy.

Any entity may be loaded from a Repository.

So what is an Aggregate Root?

The aggregate concept is not new.  The classic Order->OrderLine example comes to mind.  An OrderLine has no reason to exist without its Order.  However, people tend to confuse ownership with aggregation.  I know I have.  One may manipulate an Order directly, even though it belongs to a Customer.  One would never manipulate an OrderLine directly.  So an AR boils down to how you manipulate your objects.  The consistency is a side effect, since an AR only makes sense as a whole in the same way a class only makes sense as a whole.  Both should remain consistent.

So to manipulate an aggregate we nominate an entity within the aggregate to represent the aggregate.  This becomes the Aggregate Root.
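A minimal sketch of the classic example (the member names are my own; the point is only that the OrderLine is never manipulated directly):

using System.Collections.Generic;
using System.Linq;

public class Order
{
    private readonly List<OrderLine> lines = new List<OrderLine>();

    public void AddLine(string product, int quantity, decimal unitPrice)
    {
        // the root decides how lines are added, keeping the aggregate consistent as a whole
        lines.Add(new OrderLine(product, quantity, unitPrice));
    }

    public decimal Total()
    {
        return lines.Sum(line => line.Total());
    }
}

public class OrderLine
{
    public string Product { get; private set; }
    public int Quantity { get; private set; }
    public decimal UnitPrice { get; private set; }

    public OrderLine(string product, int quantity, decimal unitPrice)
    {
        Product = product;
        Quantity = quantity;
        UnitPrice = unitPrice;
    }

    public decimal Total()
    {
        return Quantity * UnitPrice;
    }
}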

But in many, if not most, cases this entity is only the representative.  Using Eric Evans' example from his 'What I learned...' presentation: an aggregate consisting of a Stem and a collection of Grape objects may have the Stem class as its root.  But this does not really represent the GrapeBunch aggregate properly.  In some instances one may well want a GrapeBunch class and use composition to get to the aggregate.  Mr. Evans mentions that he has no real issue with doing this.  However, I feel too many aggregates end up as abstract concepts in the domain.  It may be that the defined Ubiquitous Language has missed the concept, or that the domain experts even missed the aggregation themselves.

Anaemic Use-Case Model

It is my opinion that use-cases (or user stories, or whatever you want to call them) may not receive the necessary attention in our modelling.  Well, mine anyway.  I have been using a 'Tasks' layer but have been trying to move these into my entities and ARs.  This may have been a mistake.

I will be mentioning bits from the use case and sequence diagram Wikipedia articles.

Firstly, we need to see where a use-case fits in.  There are essentially two kinds of 'workflow' in any system: sequential and state-machine.

Now looking at what a sequence diagram does:

"A sequence diagram shows, as parallel vertical lines ("lifelines"), different processes or objects that live simultaneously, and, as horizontal arrows, the messages exchanged between them, in the order in which they occur. This allows the specification of simple runtime scenarios in a graphical manner."

And what a use-case is:

  • "Each use case focuses on describing how to achieve a goal or task."
  • "Use cases should not be confused with the features of the system under consideration."
  • "A use case defines the interactions between external actors and the system under consideration to accomplish a goal."

A use-case appears to fit the idea of a sequential workflow.  It is completed in one step, so we can call it an operation.  This operation takes place in response to a command from an actor in the system.  It may then publish an event.  These operations resemble transaction script [PoEAA], which may be why some folks choose to stay away from them, since it is confused with an anaemic domain model.  However, they are, in sooth, operation script (also Fowler).  Now, I have been defining a 'Task' for my operations, but since moving to DDD-thinking I have been trying to move these into my domain classes.  It isn't working.  I feel that use-cases need to be made explicit also.  They lie in between task-based domain interaction and state machines.  The other thing is that one operation may need to interact with another.
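A rough sketch of what making such an operation explicit could look like; every type name below (AcceptOrderCommand, AcceptOrderTask, and so on) is hypothetical and only shows the shape of the idea:

using System;

public class AcceptOrderCommand
{
    public Guid OrderId { get; set; }
}

public class OrderAccepted
{
    public Guid OrderId { get; set; }
}

public class Order
{
    public Guid Id { get; set; }

    public void Accept()
    {
        // domain behaviour for accepting the order stays on the entity
    }
}

public interface IOrderRepository
{
    Order Get(Guid id);
    void Save(Order order);
}

public interface IEventPublisher
{
    void Publish(object domainEvent);
}

// the use-case as an explicit operation: command in, event out, completed in one step
public class AcceptOrderTask
{
    private readonly IOrderRepository repository;
    private readonly IEventPublisher publisher;

    public AcceptOrderTask(IOrderRepository repository, IEventPublisher publisher)
    {
        this.repository = repository;
        this.publisher = publisher;
    }

    public void Execute(AcceptOrderCommand command)
    {
        var order = repository.Get(command.OrderId);

        order.Accept();
        repository.Save(order);

        publisher.Publish(new OrderAccepted { OrderId = command.OrderId });
    }
}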

State Machines

The term workflow is most often associated with a state machine.  The finite state machine article on Wikipedia may be referenced for more information.

Workflow does not fit into the typical use-case definition, which is probably the reason why the term process is used quite often in business requirements documentation.

Conclusion

What I find interesting is that in any business domain of reasonable complexity one will find all these concepts hidden away.  Developing a computer-based solution that takes all of these factors into consideration is a monumental effort and in many cases is under-estimated.  What I have seen over the years appears to be a tendency to focus on specific technologies to try to overcome this intricate mass.  Software such as BizTalk has been abused.

At the root of everything, though, is data.  We need data to represent real-world state.  This leads to a group of developers relying only on data manipulation to implement all these specialized areas.  But everything is built up from the data, so (bold is good):

  • Data Structures
    • Data Manipulation (Procedural Code / Transaction Script)
    • Behaviour (OO Code)
      • Entity-Based Interaction
      • Task-Based Interaction
        • Use-Case Modeling (Operation Script)
          • State Machines (Workflow / Saga)

And to top it all off we will add Service-Orientation.  SOA would wrap all of these.


Domain-Events / ORM (or lack thereof)

Friday, 16 October 2009 22:46 by ebenroux

Now, I don't particularly fancy any ORM.  I can see the usefulness of these things, but I still don't like them.  One of the useful features is change-tracking: as you fiddle with your domain objects, the ORM keeps track and will commit the changes to the data store when called upon to do so.

Now let's say I create an order with its order lines.  My use-case is such that I have to call Customer.AddOrder(order).  That is OK, I guess, but what about storing the changes?  What were the changes?  For each use case I need my repository to be aware of what to do.  Maybe my OrderRepository.Add(order) is clever enough to save the changes; well, it had better be if I am the one responsible for making the code work.  But what if adding the order changed some state in the Customer?  Perhaps an ActiveOrdersTotal?

This has bugged me for a while and has made me consider an ORM on more than one occasion.  I usually fetch a cup of coffee and, by the time I get back to my desk, the urge has left me.  But today I read about Udi Dahan's domain events again when someone brought them up on the Domain-Driven Design group.  That particular person was using them to perform persistence.  It immediately made sense and I set about giving it a bash on some of my objects.  And I like it!

You can read Udi's post here.

What I like about it is that I can have something like this in my service bus command handler:

using (var uow = UnitOfWorkFactory.Create())
{
    var customer = CustomerRepository.Get(command.CustomerId);

    customer.ProcessCommand(command);

    uow.Commit();
}

The ProcessCommand could create the new order, add it to the internal active orders collection, and also update the ActiveOrdersTotal.  So far nothing has been saved, yet the domain state has changed.  Now the 'magic' happens.  After the internal AddOrder is called, an OrderAdded domain event is raised along with an ActiveOrdersTotalChanged domain event.  The domain event handlers for these two events then use the relevant repositories to persist the data.  Since the events are raised within the unit of work, and therefore within the same transaction scope, the changes adhere to the ACID properties.
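The wiring could look something like the sketch below.  This is my loose interpretation of the pattern described in Udi's post, not his exact code, and the event type and the registration shown in the comments are illustrative only:

using System;
using System.Collections.Generic;

public static class DomainEvents
{
    private static readonly List<Delegate> callbacks = new List<Delegate>();

    public static void Register<T>(Action<T> callback)
    {
        callbacks.Add(callback);
    }

    public static void Raise<T>(T domainEvent)
    {
        foreach (var callback in callbacks)
        {
            var action = callback as Action<T>;

            if (action != null)
            {
                action(domainEvent);
            }
        }
    }
}

public class OrderAdded
{
    public Guid CustomerId { get; set; }
    public Guid OrderId { get; set; }
}

// at start-up a handler is registered that persists the change:
//     DomainEvents.Register<OrderAdded>(e => orderRepository.Add(e.CustomerId, e.OrderId));
//
// inside Customer.AddOrder the entity changes its own state and then raises the event:
//     DomainEvents.Raise(new OrderAdded { CustomerId = Id, OrderId = order.Id });

Because the handler runs at the point where Raise is called, inside the ambient unit of work, the repository call participates in the same transaction.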

I have thought about a situation previously where one may need to send an e-mail.  But what if the transaction fails after the e-mail has been sent?  Udi did mention this in his post, though.  The simple answer is to always have all operations be transactional.  So rather than sending the mail directly using some SMTP gateway, one would use a service bus and send a SendEMailMessage to the e-mail processing node using a transactional queue, for instance.  In this way, if the transaction fails the message will never be sent and the 'problem' is solved.

Categories:   Domain-Driven Design | ORM