MSMQ and TransactionScope

Thursday, 8 March 2012 07:09 by ebenroux

It turns out that when using TransactionScope you should be creating the MessageQueue instance inside the TransactionScope:

    using(var scope = new TransactionScope())
    using(var queue = new MessageQueue(path, true))
    {
        // timeout should be higher for remote queues
        // transaction type should be None for non-transactional queues
        var message = queue.Receive(TimeSpan.FromMilliseconds(0), MessageQueueTransactionType.Automatic);
        // ...normal processing...
        scope.Complete();
    }
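Purely for contrast, the following is the shape that gave me trouble — the queue instance created *before* the scope (a minimal counter-example; same path and message handling as above):

    // problematic: the MessageQueue instance is created *outside* the scope
    using (var queue = new MessageQueue(path, true))
    using (var scope = new TransactionScope())
    {
        // the receive does not reliably enlist in the ambient transaction
        var message = queue.Receive(TimeSpan.FromMilliseconds(0), MessageQueueTransactionType.Automatic);
        scope.Complete();
    }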

So MSMQ behaves the same way a database connection does: the instance has to be created inside the scope in order to enlist in the ambient transaction.


Bite-Size Chunks

Thursday, 4 November 2010 23:03 by ebenroux

I came across this article on ZDNet: "The bigger the system, the greater the chance of failure"

Now, I am all for "bite-size chunks" and Roger Sessions does make a lot of sense.  However, simply breaking a system into smaller parts is only part of the solution.  The greater problem is the degree of coupling within the system.  A monolithic system is inevitably highly coupled, and that makes it fragile and complex.  The fragility becomes apparent when a "simple" change in one part of the system has far-reaching ramifications in another part.  The complexity stems from the number of states within the system, which increases exponentially since the states are all coupled.

If we simply break the system down into smaller components without removing the high coupling we end up with the exact same problems.  This is typically what happens when folks implement JBOWS (Just a Bunch of Web Services).

What is required is to break down the system into components and remove the coupling.  One option is to make use of a service bus to facilitate sending messages between the components asynchronously.  In this way the different components rely only on the message structures in the same way that an interface (as in class interface) abstracts the contract between classes.  So in a way it is a form of dependency inversion since the components no longer depend on the concrete component implementation but rather on the message structures (the data contracts).
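As a minimal sketch of the idea (the message and handler names here are invented for illustration):

    using System;

    // the only artefact shared between the two components is the message
    // structure (the data contract)
    public class OrderAcceptedMessage
    {
        public Guid OrderId { get; set; }
        public decimal OrderTotal { get; set; }
    }

    // the fulfilment component reacts to the message without referencing
    // the ordering component's concrete implementation
    public class OrderAcceptedHandler
    {
        public void Handle(OrderAcceptedMessage message)
        {
            // ...start fulfilment for message.OrderId...
        }
    }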

Coupling comes in two forms: temporal and behavioural.  Web services are a classic case of high temporal coupling.  Since a service bus is asynchronous it results in low temporal coupling.  Behavioural coupling is tricky either way.

Behavioural Coupling

Friday, 29 October 2010 08:32 by ebenroux

When I left work yesterday I had to go through a turnstile.  One of those big ones that only one person can go through at any one time, in either direction.  So it turns both ways.  It is secured, so I have to swipe my access card to get in or out.

This got me thinking about how that system works.  Let's look at a highly coupled mechanism:

Since the turnstile allows movement either in or out, the service handling the card readers needs to know which reader does what.  When a request arrives to allow someone into the building, our card reader service checks with the authorization system whether the person has access and, if so, sends a message to the turnstile to allow it to turn in the relevant direction.  So here our card reader service needs to know how the turnstile works and which card reader is attached to which action.  This is rather high behavioural coupling.

Now for something a bit less coupled.  Let's say our turnstile simply 'unlocks', allowing it to turn once in either direction.  Now the card reader service simply has to ask whether the person has access and, if so, send the unlock message.  Our card reader service no longer cares how the turnstile works.  This is low behavioural coupling.
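In message terms the difference might look something like this (hypothetical message names):

    // high behavioural coupling: the card reader service decides what the
    // turnstile must do, so it has to know how the turnstile works
    public class TurnInwardCommand { }
    public class TurnOutwardCommand { }

    // low behavioural coupling: the card reader service only communicates
    // that access has been granted; the turnstile knows what to do with it
    public class UnlockCommand { }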

Either way can be implemented but I'd say the first is somewhat more complex to get done.

Entity vs. Value Object

Thursday, 19 August 2010 08:54 by ebenroux

This is another question that keeps popping up on the Domain-Driven Design group.  It seems to be a difficult task and some of the answers seem to complicate matters since they come from a technical context.

How do you use your object?

The same concept may very well be implemented as a value object in one context and as an entity in the next, and depending on the type of object this distinction will be more, or less, pronounced.  So it is important to understand how you intend using the object.  If you are interested in a very particular instance it is an entity, and that's that.  An object like an Employee or an Invoice can immediately be identified as an entity since we are all familiar with these concepts and 'know' that we need to work with specific instances.  So we'll take something a bit more fluid, like an Address.

Now when would an Address need to be an entity?  Well, do we care about a specific instance?  Is our application (in our Bounded Context) interested in a particular address?

Example 1: Courier - Delivery Bounded Context

Let's imagine that we are couriers and when we receive a parcel we need to deliver it to a recipient at a particular address.  Since we specialise in same-day business delivery we frequently deliver to office blocks that have the same street address but may house many of our clients.  Here we care about a particular Address and we link recipients to it.  The Address is an Entity.

Example 2: Courier - HR Bounded Context

In our courier company we also have an HR system so we store Employee objects.  Each employee has a home address stored as fields in the employee record in our database.  However, in our object model we have an Address object represented by the Employee.HomeAddress property (this is just for illustration so we won't split hairs as far as software design is concerned).  In this case it seems quite obvious that Address has to be a value object since it is purely a container.

So let's say this same Employee object can have a list of ways to contact the employee and we model a ContactMethod class.  In our data store we will have a one-to-many relationship between our Employee table and the ContactMethod table.  In fact, we go so far as to give ContactMethod an Id so that we can directly update the data in the database (for whatever reason).  ContactMethod would be aggregated with Employee so whenever we save the employee the contact methods are re-populated in the database (deleted and re-inserted).  The ContactMethod is still a value object.  Even though it may have its own life-cycle and identifier we do not care about a specific instance in our application.  We will never, and can never, say "go and fetch me contact method... {what}".  So there is no way to uniquely identify a Value Object using the Ubiquitous Language for our domain even though it may have an identifier that is universally unique.  It is simply a synthetic key used for technical efficiency.
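A rough sketch of the two flavours of Address (one namespace per Bounded Context; all names illustrative):

    using System;

    namespace Courier.Delivery
    {
        // we care about *this* address: recipients are linked to it, so it
        // carries an identity of its own
        public class Address
        {
            public Guid Id { get; private set; }
            public string Street { get; set; }

            public Address(Guid id)
            {
                Id = id;
            }
        }
    }

    namespace Courier.HR
    {
        // purely a container of values hanging off the employee
        public class Address
        {
            public string Street { get; set; }
        }

        public class Employee
        {
            public Address HomeAddress { get; set; }
        }
    }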

Immutable Value Objects

Some folks are of the opinion that value objects must be immutable.  This is not necessarily so.  As with our first address example, an immutable value object would mean we need to create a new instance to make simple changes such as fixing a spelling mistake.  It is perfectly acceptable to use immutable objects, but there is also no reason why we can't change the properties of the same object instance.  The only time that an immutable value object would be required is when the object instance is shared.  But the only time you would share a value object in this way is when you are implementing the flyweight pattern, and those cases are pretty rare.
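For completeness, the immutable flavour might look something like this, where fixing a spelling mistake means swapping in a new instance:

    // immutable: any 'change', even fixing a typo, yields a new instance
    public class Address
    {
        public string Street { get; private set; }

        public Address(string street)
        {
            Street = street;
        }

        public Address CorrectStreet(string street)
        {
            return new Address(street);
        }
    }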


Aggregate Roots vs. Single Responsibility (and other issues)

Tuesday, 27 July 2010 10:34 by ebenroux

It is interesting to note how many questions there are around Aggregate Roots.  Every so often someone will post a question on the Domain-Driven Design Group regarding how to structure the Aggregate Roots.  Now, I may be no genius, but there are too many questions for my liking.  It indicates that something is hard to understand; and when something is hard to understand it is probably highlighting a larger problem.

Aggregate vs. Aggregate Root

"Cluster ENTITIES and VALUE OBJECTS into AGGREGATES and define boundaries around each.  Choose one ENTITY to be the root of the AGGREGATE, and control all access to the objects inside the boundary through the root." (Evans, p. 129)

In one of Eric's presentations he touches on this; stating the difference between an Aggregate and an Aggregate Root.  I have mentioned Eric's bunch of grapes example in a previous post so I will not re-hash it here.

The Gang of Four list two principles of object-oriented design:

  1. "Program to an interface, not an implementation." (Gamma et. al. p. 18)
  2. "Favor object composition over class inheritance." (Gamma et. al. p. 20)

So what's the point?  At first glance it seems that we are working with composition for both the Aggregate and the Aggregate Root.  On page 128 of Eric's blue book there is a class diagram modelling a car.  The car class is adorned with two stereotypes: <<Entity>> and <<Aggregate Root>>.

Single Responsibility?

So is an Aggregate Root called Car sticking to the Single Responsibility Principle?  It is responsible for Car behaviour; but also for the consistency within the Aggregate, since it now represents an Aggregate with the Car entity as the Root.  It seems as though the Car is doing more than it should and I think that this leads to many issues.

Could it be that the Car Aggregate Root concept is a specialisation of the Car entity?  So, following this reasoning it is actually inheritance.  The reason we do not see it is because it is flattened into the Car entity and, therefore, the car no longer adheres to SRP.  My reasoning could be flawed so I am open to persuasion.

Does the Aggregate Root change depending on the use case?

The problem the Aggregate Root concept is trying to solve is consistency.  It groups objects together to ensure that they are used as a unit.  When one looks at the philosophy behind Data-Context-Interaction (DCI) it appears as though the context is pushed into the root entity.  When a different context (use-case) enters the fray that also makes use of the same Aggregate Root it appears as though the Aggregate Root is changing.

There has been some discussion around the issue of the Aggregate Root changing depending on the use case, i.e. a different entity is regarded as the root depending on the use case.  Now some folks state that the Aggregate Root isn't really changing, but the fact that it appears to be changing should be an indication that you are working with different Bounded Contexts.  Now this is probably true; especially since the word context makes an appearance.

A quick note on Bounded Contexts:

Let's stay with the Car example.  Let's say we have a car rental company and we have our own workshop.  Now our car could do something along the lines of Car.ScheduleMaintenance(dependencies) and Car.MakeBooking(dependencies).

This is where issues start creeping into the design.  The maintenance management folks don't give two hoots about rentals and, likewise, the rental folks are not too interested in maintenance; they only want to know whether the car is available for rental.  Enter the Bounded Context (BC).  We have a Maintenance Management BC and a Rental Administration BC.  Of course, we would probably also need a Fleet Management BC with e.g. Car.Commission() and Car.Decommission().

The particular Car is the same car in the real world.  Just look at the registration number.  However, the context within which it is used changes.  It is, in OO terms, a specialisation of a Car class to incorporate the context.  This inevitably leads to data duplication of sorts, since the data for each BC will probably be stored in different databases.

Assuming we view this as a problem, how could we solve it?  As proposed by the GoF we could try composition.  In DCI terms the context can aggregate the different entities.  I previously blogged about an anaemic use-case model and the context looks an awful lot like a use-case.  I have not played around with how to get these concepts into code, but we'll get there.
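Still, a first stab at the shape might be something like the following (all names invented, so take it as a sketch rather than a prescription):

    public class Car { /* entity */ }
    public class Customer { /* entity */ }
    public class Booking { /* outcome of the use case */ }

    // the context (use case) aggregates the entities it needs instead of
    // the Car entity absorbing the rental behaviour
    public class MakeBookingContext
    {
        private readonly Car _car;
        private readonly Customer _customer;

        public MakeBookingContext(Car car, Customer customer)
        {
            _car = car;
            _customer = customer;
        }

        public Booking MakeBooking()
        {
            // the interaction logic lives here, not on Car
            return new Booking();
        }
    }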

Repositories return Aggregate Roots?

Now this one is rather weird.  I have no idea where this comes from.  For those that have the Domain-Driven Design book by Eric Evans it is a simple case of opening the front cover and having a look at the diagram printed on the inside where it clearly shows two arrows that point to Repositories.  One comes from Entities and the other from Aggregates (note: not Aggregate Root), like so:

  • [Entities] --- access with --> [Repositories]
  • [Aggregates] --- access with --> [Repositories]

The fact is that a repository always returns an entity, whether or not that entity is an aggregate.  So if folks want to call it an AR when there is no aggregation, it probably will not restrict the design.

"Silo, where are you!"

Thursday, 27 May 2010 09:19 by ebenroux

I just read an interesting article.

The reason I say it is interesting is that silos have been denounced for quite some time now, and they should be.  Yet, here is someone that appears to be a proponent thereof:

"Silos are the only way to manage increasingly complex concepts"

Almost well said.  I would paraphrase and say:

"Bounded Contexts are the only way to manage increasingly complex concepts"

But bounded contexts do not solve complexity on their own.  You definitely need a sprinkling of communication between them.  A lack of communication is what leads to silos.  Communication is a broad term, but in this sense it simply means we need a publish / subscribe mechanism between the bounded contexts so that relevant parties are notified when a particularly interesting event takes place.
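A sketch of the shape I have in mind (event and handler names invented; the actual publishing mechanics depend on the service bus you use):

    using System;

    // published by the context that owns customer registration
    public class CustomerRegisteredEvent
    {
        public Guid CustomerId { get; set; }
        public string CustomerName { get; set; }
    }

    // a subscribing context keeps only the sliver of data it needs
    public class CustomerRegisteredHandler
    {
        public void Handle(CustomerRegisteredEvent e)
        {
            // ...store CustomerId and CustomerName locally...
        }
    }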

Without such communication, systems duplicate data that does not directly relate to their core.  In this way systems get bloated, difficult to maintain, and complex to integrate with other systems.  Eventually someone will come up with the grand idea of a single, unified system to represent everything in the organisation.  Enter the über silo.  At this point we can say hello to a 3 to 5 year project, and after the first 90% the second 90% can begin.

Don't do it!

Why *I* think software fails

Friday, 12 February 2010 07:56 by ebenroux

Since the software industry has now been around for quite some time it is possible to look at the statistics around software failure.  Although a great deal has been written and said about software development failure, there is not much in the way of what can actually be done about it.  There should be.

Looking at the history of software development it is quite easy to blame the process or method used to manage the development.  Software development is a broad discipline encompassing quite a number of spheres, so singling out any particular cause is quite difficult.  That is why there are so many lists specifying the reasons for software failure.

There are even good suggestions in there as to what can be done.

Software fails because it sucks

Now that was simple.  But why does it suck?  What makes it suck?  I was watching a podcast by David Platt about his book on the subject and he said a very interesting thing:

"...that's using the computer to do what the computer is good at so the human can do what the human is good at."

This immediately stood out because it has been my experience that as soon as one tries to implement flexible functionality in a system it becomes extremely complex.  Anything flexible is not only easy for humans to grasp and do, but also very obvious.  Unfortunately, the geniuses in the chip-making industry have yet to come up with the 'obvious chip'.  That is why one would, as a software developer, explain a particularly complex problem to someone (like a business analyst or a domain expert) and get a response of "So what's the problem?".  They typically don't realize that the human brain is OK with fuzzy logic, but that fuzzy logic cannot be represented very well by a simple algorithm.

Defining complexity

When I think about software complexity Roger Sessions always comes to mind.  From what I loosely recall he has mentioned that complexity in a system increases as the number of states increases, and that complexity between systems increases drastically as the number of connections between them increases.  And then, of course, within a system there is also the coupling aspect in terms of class interaction.

So the following are the main culprits in the complexity game:

  • Heuristic level required
  • Number of states
  • Coupling (intra- and inter-system)

How could we prevent the failures?

I read Re-imagine! by Tom Peters a few years back and if memory serves that is where I read:

"Fail faster, succeed sooner."

Sounds very agile to me.  So the quicker we can prove that a specific technology, product, or development project is not going to work, the better.  Another approach would be to burn $10,000,000 of your client's money and keep on keeping on until your momentum runs out.  At *that* point there will be anger, finger-pointing, tears, blows, and in all probability a good measure of litigation.

Granted, there are definitely systems that require heuristics, and it is a rather specialised area that may require expertise in areas that the normal run-of-the-mill software developer simply does not have.  It is hard enough keeping abreast of changes in technology.  Software developers are not accountants, pharmacists, inventory managers, logistics experts, statisticians, or mathematicians.  Yet we find ourselves in situations where we are simply given a vague direction to go in and 'since we are so sharp' we'll just figure it out.  Maybe.  Probably not.  That is why a domain expert is so crucial to software success.  A business analyst would find themselves in the same quandary as a programmer.  They document things but may not have the required expertise either (although some may).  This is why management commitment is also listed quite often in the lists of reasons for software failure.  Management needs to ensure that the domain experts are indeed available.

Technology to the rescue! Not.

Many companies are led to believe that their problems are related to technology.  Now sometimes the latest technology would help, and sometimes old technology simply becomes obsolete.  But more often than not it is a case of a vendor telling the client that they simply need to use a single platform along with all the merry products that are built thereon.  But a quick visit to the common sense centre of the brain will tell the client that they may well merge with another company in future, bringing different technologies into play, or purchase a product that relies on a different technology altogether.

A homogeneous technology environment will definitely not solve anything and is somewhat of a pipe dream in any but the simplest environments.

Products to the rescue!

OK, so we'll throw in a rules engine, workflow engine, CRM, CMS, or any other COTS product and voilà!  We'll simply build on top of that.  The thing is: each product satisfies a specific need.  By the time most folks realise that there is no one ring to rule them all, it has all gone south.

Heuristic level / Flexibility

Too many systems try to do too many things.  I once worked on a system that was, essentially, a loan management system.  It had been in development for over three years with a constant team of around twenty folks and I don't think it will ever see the light of day.  One thing the client does in their daily life is to authorise loans, or extensions on loans, based on a motivation.  Now this motivation document has traditionally been typed up in a word processor.  In one instance a loan of, say, $10,000 needs to be authorised for a small company.  Not much motivation is required in this case since it is a small risk; half a page did the trick.  In another case a $20,000,000 loan needs to be scrutinised.  Here a decent document is required that contains the company structure along with directors, lists of securities, and so on and so forth.  All of this was put into the system, along with all the input required to make the authorisation document work.  It is totally unnecessary to do so; absolutely no value is gained.  Just link to the original document or scan it in.  Extreme complexity and cost were added for very little value.

Number of states

This part is actually slightly related to coupling anyway.  One of the examples used by Roger Sessions is that of a coin.  A coin has two sides, so a single coin has two states that one needs to test for: heads or tails.  But how about two coins?  Heads-heads, heads-tails, tails-heads, or tails-tails.  OK, that was only four.  And three coins?  That would be eight combinations, since it is 2^3.

Now to be honest that wasn't too bad.  But let's take a die.  Six sides, so six states.  Two dice would be 6^2 = 36 and three dice would be 6^3 = 216.

The only way to reduce this complexity is to reduce the number of states.  At the very least we need to decouple the states from each other so that, in our dice example, we have 3 independent dice and the number of states is 6 + 6 + 6 = 18 (a *lot* less than 216).
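In general, n coupled components with s states each give s^n combinations to test, while the same components kept independent give only n × s.  For the dice that is 6^3 = 216 versus 6 + 6 + 6 = 18; push it to ten coupled six-state components and it is 6^10 (over 60 million) versus a mere 60.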

Coupling

I have blogged about coupling before.  As software developers we need to identify different bounded contexts (as per Eric Evans's Domain-Driven Design) and implement them independently of each other, and I have a sneaky suspicion that this is more-or-less what Roger Sessions does with his simple iterative partitions.  The problem here is: how?

Quite a few of the lists of reasons for software failure include skill level as a factor.  However, even highly skilled developers may get this wrong.  So the software will still fail.  A highly coupled system or enterprise solution is very, very fragile.

Service-Orientation and Tacit Knowledge

The key to decoupling bounded contexts, or ending up with simple partitions, is service-orientation.  But it has to be done right.  A service bus is all good-and-well, but that other technology we need to talk to in our heterogeneous environment is problematic.  A possible solution to that interaction is bridging between the various service bus implementations so that the systems remain unaware of each other.  Of course, most companies probably will not find themselves in such a position, but it is a way to handle different technologies.

How would one actually go about stringing all these bits together?  The answer is obviously not that simple since all the knowledge and experience required cannot really be measured.  I think the reason for this is that it falls into the realm of tacit knowledge.

The best thing for companies to do is to keep trying new ideas on small or experimental projects and then to see whether something useful can be produced.  Take the ideas developers and others have and give them a bash.  The Big-Bang kind of approach has proven itself to be a mistake time-and-time again, so growing something in an organic way may be best.  There is also nothing wrong with 'shedding some pounds' as we move along.

A last word on Tasks vs. Functions

Over the years I have found that we as developers tend to develop software that is a real pain to use since we are not the ones using it.  Giving a user a bunch of loose ends does not help.  As an example I would like to use my Room2010 site that I actually developed for myself and then had to tweak:

Anyone can register a free accommodation listing.  At a later stage they may choose to upgrade to a premium listing.  They can then send me up to 10 images that I will upload.  So here is the problem:  I had a function in my system where I could mark a listing as premium.  I could then manage the images by browsing to the image locally on my disk and then uploading it.  I had to do this for each image received.  The only problem was that I received images of various sizes that all had to be scaled to 200 x 150.  So before uploading I had to use a graphics package to change the sizes.  Quite a tedious affair really. 

So I created a small Windows application where I simply specify the listing number to process.  The images I received would be stored in their original format in the relevant listing folder.  Once I say GO the system would talk to the website and set the listing to premium, the images would be resized, compressed and uploaded.  As a last step the site would be instructed to send an e-mail to the owner of the listing notifying them of the changes.

So a whole host of functions were grouped together to accomplish a simple task.

Our software needs to be simple to use.

Order.Total(), CQS [Meyer], and object semantics

Friday, 16 October 2009 22:24 by ebenroux

Right, so I have seen some examples of determining an order total by adding the totals for the order lines, and then applying tax, etc.  Firstly, however, we need to determine whether an order would actually ever do this.  Why?  Because an order represents an agreement of sorts and is a near-immutable object.  One typically starts out with a quote (or shopping cart).  This quote may have items that are on special.  We may even apply a certain discount (based on our client segmentation or a discretionary discount) to the quote.  There will probably never really be any rules around how many quotes a client may have or maximum amounts or the like.

Now once a client accepts a quote it becomes an order.  The quote can, for all intents and purposes, disappear from the transaction store since it is now history.  The order, on the other hand, is now an agreement that needs to be fulfilled.  Some items may be placed on back-order or be removed totally if stock is no longer available.  But the point is that the order now has its own life cycle.  It may even be cancelled in its entirety.  So why can we not 'edit' an order?  The problem lies in determining the validity.  Let's say that yesterday there was a special on lollipops and the client ended up ordering 100.  Today he phones and says that he would rather like to order 200.  What pricing would one use?  The simplest would be to give the client a new quote and create a new order for the next 100 lollipops.  The complete data used to quote the client is then available for this new order.

Now let's get to Order.Total().  Our order may still need to calculate a total, since we may place certain of the items on back-order and need to recalculate the total based on the values stored in the remaining items.  A typical scenario is to run through the order items and add up the totals.  Now this, to me, seems to go against CQS [Meyer] since our query is changing the object state by changing the total.  "But wait!", I hear you say, "The total is a calculated value so it is not state."  This may be true when the order is the Aggregate Root for the use case, but what happens when we have a rule such as "A client may not have active orders totalling more than $5,000"?  Now I know that with an ORM the order items may be lazy-loaded.  I do not use an ORM, and lazy-loading borders on evil (IMO).  So the Customer class may have an AddOrder method for this use-case.  But how does it get the total?  Well, the total has to be stored as state.

Changing the order total

OK, so we could have a method such as CalculateTotal() on the order.  But that would result in the order not being valid at all times, since I could add an order line and call Total() before calling CalculateTotal().  So the answer is perhaps to call CalculateTotal() internally each time the state changes and to keep the method private.
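Something along these lines, perhaps (member names invented for illustration):

    using System.Collections.Generic;
    using System.Linq;

    public class OrderItem
    {
        public decimal Total { get; set; }
    }

    public class Order
    {
        private readonly List<OrderItem> _items = new List<OrderItem>();
        private decimal _total; // stored as state so that Total() is a pure query

        public void AddItem(OrderItem item)
        {
            _items.Add(item);
            CalculateTotal(); // keeps the order valid at all times
        }

        public decimal Total()
        {
            return _total;
        }

        private void CalculateTotal()
        {
            _total = _items.Sum(item => item.Total);
        }
    }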

Aggregate Roots with a collection of another AR

Wednesday, 30 September 2009 08:43 by ebenroux

I have been wondering whether an aggregate root should ever contain a collection of another aggregate root.  Back to the Customer.  Is it OK to have a collection of Orders?  One could argue that a customer may have many thousands of orders after some time so it would be impractical.  However, such a collection (in the domain) would mean that the customer only has a collection of the open orders.  So in a real sense this list should never be all that big.  Using CQS one may even remove the completed orders from the domain store.

Even so, what purpose would this collection serve?  It seems, to me anyway, that the only time such a collection of ARs should be considered is when it falls within the consistency boundary of the containing AR.  So, as per DDD, an aggregate root represents a consistency boundary.  For example, an order with its contained order items needs to be consistent; and that seems rather obvious.

Then one comes across issues such as: "A bronze customer may not have more than 2 open orders at any one time and with a total of no more than $10,000".  Similar rules apply to other customer types.  A typical reaction is to have a reference to the customer in an order to get the type (bronze, etc.) and to be able to ask the OrderRepository for the total of all open orders for the customer.  That is one way to do it, I guess.

However, suddenly the consistency boundary has moved.  It is now around the customer.  This means that one would no longer add an order willy-nilly but rather add the order to the customer.  So now we need an Orders collection on the customer.  This does not mean that an Order is no longer an aggregate.  This is also why some people reckon that the aggregate root changes depending on the use case.  It is possible that when applying an 'apply order discount' task the order stays the AR, since it doesn't affect the overall collection of orders within the customer boundary.  It also does not mean that we will not have an OrderRepository.  But it does mean that when handling the CreateOrderCommand in our domain the customer will be loaded, the newly created order will be added to the customer, the consistency checked, and if all is OK the order will be persisted using the OrderRepository.
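A sketch of what that could look like (types trimmed right down; the rule values are from the example above):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class Order
    {
        public decimal Total { get; set; }
    }

    public class Customer
    {
        private readonly List<Order> _openOrders = new List<Order>();

        public string CustomerType { get; set; } // e.g. "bronze"

        public void AddOrder(Order order)
        {
            // the consistency boundary now sits around the customer
            if (CustomerType == "bronze")
            {
                if (_openOrders.Count >= 2)
                {
                    throw new InvalidOperationException("A bronze customer may not have more than 2 open orders.");
                }

                if (_openOrders.Sum(o => o.Total) + order.Total > 10000m)
                {
                    throw new InvalidOperationException("A bronze customer's open orders may not total more than $10,000.");
                }
            }

            _openOrders.Add(order);
        }
    }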

Domain Task-Based Interaction

Tuesday, 22 September 2009 09:10 by ebenroux

After some interesting to-and-froing on the Domain-Driven Design group the penny dropped on task-based interaction.  Now this is something Udi Dahan seems to have been doing for a while and it is actually *very* nifty.  I must admit that I thought I actually understood task-based vs. entity-based interaction, but it turns out that even having something like 'Activate Account' is not necessarily task-based.  Well, the UI would appear to be task-based to the user, but how we implement this 'task' in the domain is where it gets interesting.

For our scenario we will use a disconnected environment as is the case with any real-world software solution.

Entity-based implementation

Now the 'typical' way database interaction takes place, and the way I for one have been doing this, is as follows:  The user invokes the 'Activate Account' function through some menu.  The account is loaded from the table and all the account data is returned to the client.  The user then enters, say, the activation date and clicks the 'Save' button.

Now all this data goes back to the server.  An update is issued and all the account data is overwritten for the relevant row.  "But wait!" come the cries.  What if some other user has updated the banking details?  No problem; we simply add an incrementing RowVersion to the record to indicate whether there is a concurrency violation.  Now here is an interesting bit.  Typically (well, I used to do it like this), we simply reload the record and have the user perform their action again.  This time the last RowVersion comes along for the ride and hopefully no other user gets to submit their data before we hit the server.  All is well.
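The check itself can be as simple as including the row version in the update statement (a sketch; the table and column names are invented and the variables are assumed to be in scope):

    using System.Data;
    using System.Data.SqlClient;

    using (var connection = new SqlConnection(connectionString))
    using (var command = connection.CreateCommand())
    {
        command.CommandText =
            "update Account set ActivationDate = @date, RowVersion = RowVersion + 1 " +
            "where Id = @id and RowVersion = @rowVersion";

        command.Parameters.AddWithValue("@date", activationDate);
        command.Parameters.AddWithValue("@id", accountId);
        command.Parameters.AddWithValue("@rowVersion", rowVersion);

        connection.Open();

        // zero affected rows means another user got in first
        if (command.ExecuteNonQuery() == 0)
        {
            throw new DBConcurrencyException("The account was changed by another user.");
        }
    }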

Task-based implementation

Using the task-based approach the user would invoke the 'Activate Account' function.  The relevant data is retrieved from the database and displayed.  Once again the user enters the activation date and clicks the 'Save' button.  A command is issued containing the aggregate id and activation date and ends up in the handler for the command where a unit of work is started.  The Account aggregate root is loaded and the command handed to the aggregate to process.  Internally the status and activation date are set.  So only the part that has been altered now needs to be saved.  Should another user have updated the banking details it really doesn't matter since the two tasks do not interfere with one another.
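In code the handler might look something like this (all names invented; the unit of work elided to comments):

    using System;

    public class ActivateAccountCommand
    {
        public Guid AccountId { get; set; }
        public DateTime ActivationDate { get; set; }
    }

    public class Account
    {
        public void Activate(DateTime activationDate)
        {
            // internally sets the status and activation date -- nothing else
        }
    }

    public interface IAccountRepository
    {
        Account Get(Guid id);
        void Save(Account account);
    }

    public class ActivateAccountHandler
    {
        private readonly IAccountRepository _repository;

        public ActivateAccountHandler(IAccountRepository repository)
        {
            _repository = repository;
        }

        public void Handle(ActivateAccountCommand command)
        {
            // ...start the unit of work...
            var account = _repository.Get(command.AccountId);
            account.Activate(command.ActivationDate); // only the status + date change
            _repository.Save(account);
            // ...commit the unit of work...
        }
    }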

Applying policy rules

So why not just persist the command directly, bypassing the aggregate root?

In order to keep the aggregate valid the command is applied to the AR, which can then ensure that it remains valid (or pass the command through some validation mechanism).  The act of applying the command may change the internal state to something that compromises the validity.  So applying the changes externally would be unwise.

Persisting the result

Now the thing is that I don't currently fancy any ORM technology.  An ORM, though, would do the change tracking for your AR and upon committing the unit of work the relevant changes would be persisted.  However, when dealing directly with your own repository implementation I guess we'll require something like an ActivateAccount method on the repository to persist the relevant portions.

Not all conflicts go away

Although focusing on tasks may achieve some manner of isolation (since business folks don't typically perform the same tasks on the same entities) it may still be possible that two tasks alter common data.  In such cases a concurrency-checking scheme such as a version number may still be required.  Greg Young has mentioned something along the lines of saving the commands and then finding those that have been applied against the version number of the command currently being processed.  So if CommandA completed against version 1 then the version number increased to 2.  Now we also started CommandB against version 1 but how do we know whether we have a conflict?  The answer is to find the commands issued against our aggregate with version 1 and ask each if the current command conflicts with it.  Something like:

    // 'Any' requires a using directive for System.Linq
    var conflict = previousCommands.Any(command => thisCommand.ConflictsWith(command));

    if (!conflict)
    {
        // do stuff
    }