Entity vs. Value Object

Thursday, 19 August 2010 08:54 by ebenroux

This is another question that keeps popping up on the Domain-Driven Design group.  It seems to be a difficult distinction to make, and some of the answers seem to complicate matters since they come from a technical context.

How do you use your object?

The same concept may very well be implemented as a value object in one context and as an entity in the next, and depending on the type of object this distinction will be more, or less, pronounced.  So it is important to understand how you intend to use the object.  If you are interested in a very particular instance it is an entity and that's that.  An object like an Employee or an Invoice can immediately be identified as an entity since we are all familiar with these concepts and 'know' that we need to work with specific instances.  So we'll take something a bit more fluid like an Address.

Now when would an Address need to be an entity?  Well, do we care about a specific instance?  Is our application (in our Bounded Context) interested in a particular address?

Example 1: Courier - Delivery Bounded Context

Let's imagine that we are couriers and when we receive a parcel we need to deliver it to a recipient at a particular address.  Since we specialise in same-day business delivery we frequently deliver to office blocks that have the same street address but may house many of our clients.  Here we care about a particular Address and we link recipients to it.  The Address is an Entity.

Example 2: Courier - HR Bounded Context

In our courier company we also have an HR system so we store Employee objects.  Each employee has a home address stored as fields in the employee record in our database.  However, in our object model we have an Address object represented by the Employee.HomeAddress property (this is just for illustration so we won't split hairs as far as software design is concerned).  In this case it seems quite obvious that Address has to be a value object since it is purely a container.

So let's say this same Employee object can have a list of ways to contact the employee and we model a ContactMethod class.  In our data store we will have a one-to-many relationship between our Employee table and the ContactMethod table.  In fact, we go so far as to give ContactMethod an Id so that we can directly update the data in the database (for whatever reason).  ContactMethod would be aggregated with Employee so whenever we save the employee the contact methods are re-populated in the database (deleted and re-inserted).  The ContactMethod is still a value object.  Even though it may have its own life-cycle and identifier we do not care about a specific instance in our application.  We will never, and can never, say "go and fetch me contact method... {what}".  So there is no way to uniquely identify a Value Object using the Ubiquitous Language for our domain even though it may have an identifier that is universally unique.  It is simply a synthetic key used for technical efficiency.
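The difference tends to show up in how equality is defined: an entity is equal by identity, a value object by its data.  A rough sketch (in Python for brevity; the class names here are illustrative, not from any real system) might look like this:

```python
from dataclasses import dataclass

# A value object: equality is defined by its attributes, not by identity.
# Two ContactMethod instances with the same data are interchangeable, even
# if the data store assigns each row a synthetic key behind the scenes.
@dataclass
class ContactMethod:
    kind: str
    value: str

# An entity: equality is defined by identity alone.
class Address:
    def __init__(self, address_id, street, city):
        self.id = address_id
        self.street = street
        self.city = city

    def __eq__(self, other):
        return isinstance(other, Address) and self.id == other.id

a = ContactMethod("email", "someone@example.com")
b = ContactMethod("email", "someone@example.com")
assert a == b  # same data, so interchangeable

x = Address(1, "1 Main Rd", "Cape Town")
y = Address(2, "1 Main Rd", "Cape Town")
assert x != y  # same data, but different addresses as far as the domain cares
```

Two different buildings can share every field of their street address; only the identity tells our courier which one we mean.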

Immutable Value Objects

Some folks are of the opinion that value objects must be immutable.  This is not necessarily so.  As with our first Address example, an immutable value object would mean we need to create a new instance to make simple changes such as fixing a spelling mistake.  It is perfectly acceptable to use immutable objects but there is also no reason why we can't change the properties of the same object instance.  The only time that an immutable value object would be required is when the object instance is shared.  But the only time you would share a value object in this way is when you are implementing the flyweight pattern and those cases are pretty rare.
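Both styles can be sketched side by side (Python for illustration; a hypothetical Address with only two fields):

```python
from dataclasses import dataclass, replace

# Immutable style: fixing a typo means creating a new instance.
@dataclass(frozen=True)
class ImmutableAddress:
    street: str
    city: str

# Mutable style: the same instance is simply corrected in place.
@dataclass
class MutableAddress:
    street: str
    city: str

old = ImmutableAddress("1 Mian Rd", "Cape Town")
fixed = replace(old, street="1 Main Rd")   # a brand new instance
assert fixed is not old
assert fixed.street == "1 Main Rd"

home = MutableAddress("1 Mian Rd", "Cape Town")
home.street = "1 Main Rd"                  # same instance, corrected
assert home.street == "1 Main Rd"
```

Unless the instance is shared (flyweight), either approach keeps the model honest; the choice is a practical one.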



Aggregate Roots vs. Single Responsibility (and other issues)

Tuesday, 27 July 2010 10:34 by ebenroux

It is interesting to note how many questions there are around Aggregate Roots.  Every so often someone will post a question on the Domain-Driven Design group regarding how to structure the Aggregate Roots.  Now, I may be no genius, but there are too many questions for my liking.  It indicates that something is hard to understand; and when something is hard to understand it is probably highlighting a larger problem.

Aggregate vs. Aggregate Root

"Cluster ENTITIES and VALUE OBJECTS into AGGREGATES and define boundaries around each.  Choose one ENTITY to be the root of the AGGREGATE, and control all access to the objects inside the boundary through the root." (Evans, p. 129)

In one of Eric's presentations he touches on this; stating the difference between an Aggregate and an Aggregate Root.  I have mentioned Eric's bunch of grapes example in a previous post so I will not re-hash it here.

The Gang of Four gives two principles of object-oriented design:

  1. "Program to an interface, not an implementation." (Gamma et al. p. 18)
  2. "Favor object composition over class inheritance." (Gamma et al. p. 20)

So what's the point?  At first glance it seems that we are working with composition for both the Aggregate and the Aggregate Root.  On page 128 of Eric's blue book there is a class diagram modelling a car.  The car class is adorned with two stereotypes: <<Entity>> and <<Aggregate Root>>.

Single Responsibility?

So is an Aggregate Root called Car sticking to the Single Responsibility Principle?  It is responsible for Car behaviour; but also for the consistency within the Aggregate since it now represents an Aggregate with the Car entity as the Root.  It seems as though the Car is doing more than it should and I think that this leads to many issues.

Could it be that the Car Aggregate Root concept is a specialisation of the Car entity?  So, following this reasoning it is actually inheritance.  The reason we do not see it is because it is flattened into the Car entity and, therefore, the car no longer adheres to SRP.  My reasoning could be flawed so I am open to persuasion.

Does the Aggregate Root change depending on the use case?

The problem the Aggregate Root concept is trying to solve is consistency.  It groups objects together to ensure that they are used as a unit.  When one looks at the philosophy behind Data-Context-Interaction (DCI) it appears as though the context is pushed into the root entity.  When a different context (use-case) enters the fray that also makes use of the same Aggregate Root it appears as though the Aggregate Root is changing.

There has been some discussion around the issue of the Aggregate Root changing depending on the use case, i.e. a different entity is regarded as the root depending on the use case.  Now some folks state that the Aggregate Root isn't really changing, but the fact that it appears to be changing should be an indication that you are working with different Bounded Contexts.  Now this is probably true; especially since the word context makes an appearance.

A quick note on Bounded Contexts:

Let's stay with the Car example.  Let's say we have a car rental company and we have our own workshop.  Now our car could do something along the lines of Car.ScheduleMaintenance(dependencies) and Car.MakeBooking(dependencies).

This is where issues start creeping into the design.  The maintenance management folks don't give two hoots about rentals and, likewise, the rental folks are not too interested in maintenance; they only want to know whether the car is available for rental.  Enter the Bounded Context (BC).  We have a Maintenance Management BC and a Rental Administration BC.  Of course we would probably also need a Fleet Management BC with e.g. Car.Commission() and Car.Decommission().

The particular Car is the same car in the real world.  Just look at the registration number.  However, the context within which it is used changes.  It is, in OO terms, a specialization of a Car class to incorporate the context.  This inevitably leads to data duplication of sorts since the data for each BC will probably be stored in different databases.
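One way to picture this (a Python sketch, with hypothetical class and method names) is simply to give each Bounded Context its own model of the car, tied together only by the registration number:

```python
# The "same" real-world car is modelled separately per Bounded Context;
# only the registration number links the two models.

class RentalCar:                         # Rental Administration BC
    def __init__(self, registration):
        self.registration = registration
        self.available = True

    def make_booking(self):
        self.available = False

class WorkshopCar:                       # Maintenance Management BC
    def __init__(self, registration):
        self.registration = registration
        self.scheduled = []

    def schedule_maintenance(self, date):
        self.scheduled.append(date)

rental = RentalCar("CA 123-456")
workshop = WorkshopCar("CA 123-456")

rental.make_booking()
workshop.schedule_maintenance("2010-08-01")

# Each context carries only the state it cares about.
assert rental.registration == workshop.registration
assert not rental.available
assert workshop.scheduled == ["2010-08-01"]
```

Each context stores its own slice of the data, which is exactly the duplication-of-sorts mentioned above.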

Assuming we view this as a problem, how could we solve this?  As proposed by the GoF we could try composition.  In DCI terms the context can aggregate the different entities.  I previously blogged about an anaemic use-case model and the context looks an awful lot like a use-case.  I have not played around with how to get these concepts into code but we'll get there.

Repositories return Aggregate Roots?

Now this one is rather weird.  I have no idea where this comes from.  For those that have the Domain-Driven Design book by Eric Evans it is a simple case of opening the front cover and having a look at the diagram printed on the inside where it clearly shows two arrows that point to Repositories.  One comes from Entities and the other from Aggregates (note: not Aggregate Root), like so:

  • [Entities] --- access with --> [Repositories]
  • [Aggregates] --- access with --> [Repositories]

The fact is that a repository always returns an entity whether or not that entity is an aggregate. So if folks want to call it an AR when there is no aggregation it probably will not restrict the design.

"Silo, where are you!"

Thursday, 27 May 2010 09:19 by ebenroux

I just read an interesting article.

The reason I say it is interesting is that silos have been denounced for quite some time now, and they should be.  Yet, here is someone that appears to be a proponent thereof:

"Silos are the only way to manage increasingly complex concepts"

Almost well said.  I would paraphrase and say:

"Bounded Contexts are the only way to manage increasingly complex concepts"

But bounded contexts do not solve complexity on their own.  You definitely need a sprinkling of communication between them.  A lack of communication is what leads to silos.  Communication is a broad term but in this sense it simply means we need a publish / subscribe mechanism between the bounded contexts so that relevant parties are notified when a particularly interesting event takes place.

Without such communication, systems duplicate data that does not directly relate to the core of that system.  In this way systems get bloated, difficult to maintain, and complex to integrate with other systems.  Eventually someone will come up with the grand idea of a single, unified system to represent everything in the organisation.  Enter the über silo.  At this point we can say hello to a 3 to 5 year project and after the first 90% the second 90% can begin.

Don't do it!


Tuesday, 24 November 2009 07:12 by ebenroux

I have been struggling with many aspects of OO development over the last couple of years, all in an effort to improve my software.  I have, along with others, been trapped in the DDD Aggregate Root thinking that appears to be everywhere.  It appears as though there is this opinion that ARs are the centre of the universe and I have come to the conclusion that it must have something to do with the consistency boundary afforded by an AR.  This seems to have become the central theme.  Almost as though it became necessary to define DDD in some structural sense.

So, all-in-all, the idea that the AR changes from use-case to use-case is not true.  The consistency boundary does, however, change.  What contributed to my thinking seems to be the fact that everyone thinks that the only object that may be loaded by a Repository is an AR.  This too is a fallacy.

Any entity may be loaded from a Repository.

So what is an Aggregate Root?

The aggregate concept is not new.  The classic Order->OrderLine example comes to mind.  An OrderLine has no reason to exist without its Order.  However, people tend to confuse ownership with aggregation.  I know I have.  One may manipulate an Order directly, even though it belongs to a Customer.  One would never manipulate an OrderLine directly.  So an AR boils down to how you manipulate your objects.  The consistency is a side effect since an AR only makes sense as a whole in the same way a class only makes sense as a whole.  Both should remain consistent.

So to manipulate an aggregate we nominate an entity within the aggregate to represent the aggregate.  This becomes the Aggregate Root.
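The "manipulate through the root" idea can be sketched roughly like this (Python for illustration; the invariant shown is just an example):

```python
# All access to OrderLine goes through the Order root; callers never
# create or manipulate an OrderLine directly.
class OrderLine:
    def __init__(self, product_code, quantity, price):
        self.product_code = product_code
        self.quantity = quantity
        self.price = price

class Order:
    def __init__(self, order_id):
        self.id = order_id
        self._lines = []          # internals hidden behind the root

    def add_line(self, product_code, quantity, price):
        # the root enforces invariants before mutating its internals
        if quantity <= 0:
            raise ValueError("quantity must be positive")
        self._lines.append(OrderLine(product_code, quantity, price))

    def total(self):
        return sum(line.quantity * line.price for line in self._lines)

order = Order("ORD-1")
order.add_line("SKU-1", 2, 10.0)
order.add_line("SKU-2", 1, 5.0)
assert order.total() == 25.0
```

Because every change funnels through Order, the aggregate can only ever be observed in a consistent state.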

But in many, if not most, cases this entity is only the representative.  Using Eric Evans' example in his 'What I learned...' presentation: an aggregate consisting of a Stem and a collection of Grape objects may have, as the root, the Stem class.  But this does not really represent the GrapeBunch aggregate properly.  In some instances one may probably want a GrapeBunch class and use composition to get to the aggregate.  Mr. Evans mentions that he has no real issue with doing this.  However, I feel too many aggregates end up as abstract concepts in the domain.  It may be that the defined Ubiquitous Language has missed the concept or that the domain experts even missed the aggregation themselves.

Anaemic Use-Case Model

It is my opinion that use-cases (or user stories, or whatever you want to call them) may not receive the necessary attention in our modelling.  Well, mine anyway.  I have been using a 'Tasks' layer but have been trying to move these into my entities and ARs.  This may have been a mistake.

I will be mentioning bits from the use case and sequence diagram Wikipedia articles.

Firstly, we need to see where a use-case fits in.  There are essentially two kinds of 'workflows' in any system: sequential and state-machine.

Now looking at what a sequence diagram does:

"A sequence diagram shows, as parallel vertical lines ("lifelines"), different processes or objects that live simultaneously, and, as horizontal arrows, the messages exchanged between them, in the order in which they occur. This allows the specification of simple runtime scenarios in a graphical manner."

And what a use-case is:

  • "Each use case focuses on describing how to achieve a goal or task."
  • "Use cases should not be confused with the features of the system under consideration."
  • "A use case defines the interactions between external actors and the system under consideration to accomplish a goal."

A use-case appears to fit the idea of a sequential workflow.  It is completed in one step.  So we can call it an operation.  This operation takes place in response to a command from an actor in the system.  It may then publish an event.  These operations resemble transaction script [PoEAA] and this may be why some folks choose to stay away from them, since it is confused with an anaemic domain model.  However, they are, in sooth, operation script (also Fowler).  Now, I have been defining a 'Task' for my operations but since moving to DDD-thinking I have been trying to move these into my domain classes.  It isn't working.  I feel that use-cases need to be made explicit also.  They lie in-between task-based domain interaction and state-machines.  The other thing is that one operation may need to interact with another.

State Machines

The term workflow is most often associated with a state machine.  The finite state machine article on Wikipedia may be referenced for more information.

Workflow does not fit into the typical use-case definition and it is probably the reason why the term process is used quite often in business requirements documentation.


What I find interesting is that in any business domain of reasonable complexity one will find all these concepts hidden away.  Developing a computer-based solution that takes all of these factors into consideration is a monumental effort and in many cases is under-estimated.  What I have seen over the years appears to be a tendency to focus on specific technologies to try to overcome this intricate mass.  Software such as BizTalk has been abused.

At the root of everything, though, is data.  We need data to represent the real world state.  This leads to a group of developers relying only on data manipulation to perform all these specialized areas.  But everything is built up from the data, so:

  • Data Structures
    • Data Manipulation (Procedural Code / Transaction Script)
    • Behaviour (OO Code)
      • Entity-Based Interaction
      • Task-Based Interaction
        • Use-Case Modeling (Operation Script)
          • State Machines (Workflow / Saga)

And to top it all off we will add Service-Orientation.  SOA would wrap all of these.



Domain-Events / ORM (or lack thereof)

Friday, 16 October 2009 22:46 by ebenroux

Now I don't particularly fancy any ORM.  I can, however, see the usefulness of these things but I still don't like them.  One of the useful features is change-tracking.  So as you fiddle with your domain objects so the ORM keeps track and will commit the changes to the data store when called to do so.

Now let's say I create an order with its order lines.  My use-case is such that I have to call Customer.AddOrder(order).  That is OK I guess, but what about now storing the changes?  What were the changes?  For each use case I need to have my repository be aware of what to do.  Maybe my OrderRepository.Add(order) is clever enough to save the changes; well, it had better be if I am the one responsible for making the code work.  But what if adding the order changed some state in the Customer?  Perhaps an ActiveOrdersTotal?

This has bugged me for a while and made me consider an ORM on more than one occasion.  I usually fetch a cup of coffee and by the time I get back to my desk the urge has left me.  But today I read about Udi Dahan's domain events again when someone brought it up on the Domain-Driven Design group.  The particular person was using it to perform persistence.  It immediately made sense and I set about giving it a bash on some of my objects.  And I like it!

You can read Udi's post here.

What I like about it is that I can have something like this in my service bus command handler:

using (var uow = UnitOfWorkFactory.Create())
{
   var customer = CustomerRepository.Get(command.CustomerId);

   customer.ProcessCommand(command);

   uow.Commit();
}



The ProcessCommand could create the new order and add it to the internal active orders collection and also update the ActiveOrdersTotal.  So far nothing is saved and the domain state has changed.  Now the 'magic' happens.  After the internal AddOrder is called an OrderAdded domain event is raised along with an ActiveOrdersTotalChanged domain event.  The domain event handlers for these two events will then use the relevant repositories to persist the data.  Since the events are raised in the unit of work and are, therefore, within the same transaction scope, the changes adhere to the ACID properties.
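The mechanics can be sketched in a few lines (Python for illustration; Udi's actual implementation is .NET-based and the names here are hypothetical):

```python
# A minimal domain-events sketch: the entity raises events, registered
# handlers react, e.g. by persisting through the relevant repository.
_handlers = {}

def subscribe(event_type, handler):
    _handlers.setdefault(event_type, []).append(handler)

def raise_event(event):
    for handler in _handlers.get(type(event), []):
        handler(event)

class OrderAdded:
    def __init__(self, customer_id, order_id):
        self.customer_id = customer_id
        self.order_id = order_id

class Customer:
    def __init__(self, customer_id):
        self.id = customer_id
        self.active_orders_total = 0

    def add_order(self, order_id, amount):
        # change the domain state, then announce what happened
        self.active_orders_total += amount
        raise_event(OrderAdded(self.id, order_id))

persisted = []
subscribe(OrderAdded, lambda e: persisted.append((e.customer_id, e.order_id)))

customer = Customer("C-1")
customer.add_order("ORD-1", 100)
assert persisted == [("C-1", "ORD-1")]
assert customer.active_orders_total == 100
```

The entity stays ignorant of persistence; the handler decides what to do with the event, inside the same unit of work.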

Now I have thought about a situation previously where one may need to send an e-mail.  But what if the transaction fails and the e-mail has already been sent?  Udi did mention this in his post, though.  The simple answer is to always have all operations be transactional.  So rather than sending the mail directly using some SMTP gateway one would use a service bus and send a SendEMailMessage to the e-mail processing node using a transactional queue, for instance.  In this way, if the transaction fails the message will never be sent and the 'problem' is solved.


Order.Total(), CQS [Meyer], and object semantics

Friday, 16 October 2009 22:24 by ebenroux

Right, so I have seen some examples of determining an order total by adding the totals for the order lines, and then applying tax, etc.  Firstly, however, we need to determine whether an order would actually ever do this.  Why?  Because an order represents an agreement of sorts and is a near-immutable object.  One typically starts out with a quote (or shopping cart).  This quote may have items that are on special.  We may even apply a certain discount (based on our client segmentation or a discretionary discount) to the quote.  There will probably never really be any rules around how many quotes a client may have or maximum amounts or the like.

Now once a client accepts a quote it becomes an order.  The quote can, for all intents and purposes, disappear from the transaction store since it is now history.  The order, on the other hand, is now an agreement that needs to be fulfilled.  Some items may be placed on back-order or be removed totally if stock is no longer available.  But the point is that the order now has its own life cycle.  It may even be cancelled in its entirety.  So why can we not 'edit' an order?  The problem lies in determining the validity.  Let's say that yesterday there was a special on lollipops and the client ended up ordering 100.  Today he phones and says that he would rather like to order 200.  What pricing would one use?  The simplest would be to give the client a new quote and create a new order for the next 100 lollipops.  The complete data used to quote the client is now available for this new order.

Now let's get to the Order.Total().  Our order may still need to do this since we may place certain of the items on a BackOrder and need to recalculate the total based on the values stored in the remaining items.  A typical scenario is to run through the order items and add up the totals.  Now this, to me, seems to go against CQS [Meyer] since our query is changing the object state by changing the total.  "But wait!", I hear you say, "The total is a calculated value so it is not state."  This may be true when the order is the Aggregate Root for the use case, but what happens when we have a rule such as "A client may not have active orders totalling more than $5,000"?  Now I know with an ORM the order items may be lazy-loaded.  I do not use an ORM and lazy-loading borders on evil (IMO).  So the Customer class may have an AddOrder method for this use-case.  But how does it get the total?  Well, the total has to be stored as state.

Changing the order total

OK, so we could have a method such as CalculateTotal() on the order.  But that would result in the order not being valid at all times since I could add an order line and call Total() before calling CalculateTotal().  So the answer is perhaps to call CalculateTotal() internally whenever the state changes and to keep the method private.
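That arrangement can be sketched like so (Python for illustration; a deliberately stripped-down Order):

```python
# The total is stored state, recalculated privately on every mutation, so
# Total() is a pure query (CQS) and the order is valid at all times.
class Order:
    def __init__(self):
        self._lines = []
        self._total = 0.0

    def add_line(self, quantity, price):
        self._lines.append((quantity, price))
        self._calculate_total()   # private recalculation after each change

    def _calculate_total(self):
        self._total = sum(q * p for q, p in self._lines)

    def total(self):
        return self._total        # query: no state change

order = Order()
order.add_line(100, 0.5)
assert order.total() == 50.0      # already correct, no explicit recalculation
```

A Customer enforcing a "no more than $5,000 of active orders" rule can now read the stored total without lazy-loading any order items.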

Aggregate Roots with a collection of another AR

Wednesday, 30 September 2009 08:43 by ebenroux

I have been wondering whether an aggregate root should ever contain a collection of another aggregate root.  Back to the Customer.  Is it OK to have a collection of Orders?  One could argue that a customer may have many thousands of orders after some time so it would be impractical.  However, such a collection (in the domain) would mean that the customer only has a collection of the open orders.  So in a real sense this list should never be all that big.  Using CQS one may even remove the completed orders from the domain store.

Even so, what purpose would this collection serve?  It seems, to me anyway, that the only time such a collection of ARs should be considered is when it falls within the consistency boundary of the containing AR.  So, as per DDD, an aggregate root represents a consistency boundary.  For example, an order with its contained order items needs to be consistent; and it seems rather obvious.

Then one comes across issues such as: "A bronze customer may not have more than 2 open orders at any one time and with a total of no more than $10,000".  Similar rules apply to other customer types.  A typical reaction is to have a reference to the customer in an order to get the type (bronze, etc.) and to be able to ask the OrderRepository for the total of all open orders for the customer.  That is one way to do it, I guess.

However, suddenly the consistency boundary has moved.  It is now around the customer.  This means that one would no longer add an order willy-nilly but rather add the order to the customer.  So now we need an Orders collection for the customer.  This does not mean that an Order is no longer an aggregate.  This is also why some people reckon that the aggregate root changes depending on the use case.  It is possible that when applying an 'apply order discount' task that the order stays the AR since it doesn't affect the overall collection of orders within the customer boundary.  It also does not mean that we will not have an OrderRepository.  But it does mean that when applying the CreateOrderCommand in our domain that the customer will be loaded, the newly created order will be added to the customer, the consistency checked and if OK the order will be persisted using the OrderRepository.
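A rough sketch of the customer owning that consistency boundary (Python for illustration; the limits are the bronze rule from above, the rest of the names are hypothetical):

```python
# The customer enforces the invariant over its open orders, so an order is
# added through the customer rather than created willy-nilly.
class Order:
    def __init__(self, order_id, total):
        self.id = order_id
        self.total = total

class Customer:
    MAX_OPEN_ORDERS = {"bronze": 2}
    MAX_OPEN_TOTAL = {"bronze": 10_000}

    def __init__(self, customer_type):
        self.customer_type = customer_type
        self.open_orders = []

    def add_order(self, order):
        if len(self.open_orders) + 1 > self.MAX_OPEN_ORDERS[self.customer_type]:
            raise ValueError("too many open orders")
        open_total = sum(o.total for o in self.open_orders)
        if open_total + order.total > self.MAX_OPEN_TOTAL[self.customer_type]:
            raise ValueError("open-order total exceeded")
        self.open_orders.append(order)

customer = Customer("bronze")
customer.add_order(Order("ORD-1", 6_000))
try:
    customer.add_order(Order("ORD-2", 5_000))   # would exceed $10,000
    assert False, "invariant should have been enforced"
except ValueError:
    pass
```

Once the consistency check passes, the new order can still be persisted through the OrderRepository as described above.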

Domain Task-Based Interaction

Tuesday, 22 September 2009 09:10 by ebenroux

After some interesting to-and-froing on the Domain-Driven Design group the penny dropped on task-based interaction.  Now this is something Udi Dahan seems to have been doing a while and it is actually *very* nifty.  Now I must admit that I thought I actually understood task-based vs. entity-based interaction but it turns out that even having something like 'Activate Account' is not necessarily task-based.  Well, the UI would appear to be task-based to the user but how we implement this 'task' in the domain is where it gets interesting.

For our scenario we will use a disconnected environment as is the case with any real-world software solution.

Entity-based implementation

Now the 'typical' way database interaction takes place, and the way I for one have been doing this, is as follows:  The user invokes the 'Activate Account' function through some menu.  The account is loaded from the table and all the account data is returned to the client.  The user then enters, say, the activation date and clicks the 'Save' button.

Now all this data goes back to the server.  An update is issued and all the account data is overwritten for the relevant row.  "But wait!" come the cries.  What if some other user has updated the banking details?  No problem; we simply add an incrementing RowVersion on the record to indicate whether there is a concurrency violation.  Now here is an interesting bit.  Typically (well I used to do it like this), we simply reload the record and have the user do their action again.  This time the last RowVersion will come along for the ride and hopefully no other user gets to submit their data before we hit the server.  All is well.

Task-based implementation

Using the task-based approach the user would invoke the 'Activate Account' function.  The relevant data is retrieved from the database and displayed.  Once again the user enters the activation date and clicks the 'Save' button.  A command is issued containing the aggregate id and activation date and ends up in the handler for the command where a unit of work is started.  The Account aggregate root is loaded and the command handed to the aggregate to process.  Internally the status and activation date are set.  So only the part that has been altered now needs to be saved.  Should another user have updated the banking details it really doesn't matter since the two tasks do not interfere with one another.
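The flow above can be sketched as follows (Python for illustration; the command, aggregate, and handler names are hypothetical):

```python
# Task-based flow: the command carries only the data the task needs, and
# the aggregate applies it; untouched fields are never rewritten.
class ActivateAccountCommand:
    def __init__(self, account_id, activation_date):
        self.account_id = account_id
        self.activation_date = activation_date

class Account:
    def __init__(self, account_id):
        self.id = account_id
        self.status = "pending"
        self.activation_date = None
        self.banking_details = "original"   # untouched by this task

    def activate(self, activation_date):
        self.status = "active"
        self.activation_date = activation_date

def handle(command, repository):
    account = repository[command.account_id]
    account.activate(command.activation_date)
    # a real handler would persist only the changed portion here,
    # inside a unit of work

repository = {"ACC-1": Account("ACC-1")}
handle(ActivateAccountCommand("ACC-1", "2009-09-22"), repository)

assert repository["ACC-1"].status == "active"
assert repository["ACC-1"].banking_details == "original"
```

Because the task never touches the banking details, a concurrent update to those details simply cannot be clobbered.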

Applying policy rules

So why not just persist the command directly, bypassing the aggregate root?

In order to keep the aggregate valid the command is applied to the AR and it can then ensure that it is valid, or you can pass it through some validation mechanism.  The act of applying the command may change the internal state to something that results in the validity being compromised.  So applying the changes externally would be unwise.

Persisting the result

Now the thing is that I don't currently fancy any ORM technology.  An ORM, though, would do the change tracking for your AR and upon committing the unit of work the relevant changes would be persisted.  However, when dealing directly with your own repository implementation I guess we'll require something like an ActivateAccount method on the repository to persist the relevant portions.

Not all conflicts go away

Although focusing on tasks may achieve some manner of isolation (since business folks don't typically perform the same tasks on the same entities) it may still be possible that two tasks alter common data.  In such cases a concurrency-checking scheme such as a version number may still be required.  Greg Young has mentioned something along the lines of saving the commands and then finding those that have been applied against the version number of the command currently being processed.  So if CommandA completed against version 1 then the version number increased to 2.  Now we also started CommandB against version 1 but how do we know whether we have a conflict?  The answer is to find the commands issued against our aggregate with version 1 and ask each if the current command conflicts with it.  Something like:

var conflict = false;

previousCommands.ForEach(command => { if (thisCommand.ConflictsWith(command)) conflict = true; });

if (!conflict)
{
   // do stuff
}

Many-to-Many Aggregate Roots

Wednesday, 9 September 2009 07:53 by ebenroux

There has been some discussion around many-to-many relationships.  Here is what Udi Dahan has to say: DDD & Many to Many Object Relational Mapping

In his example Udi uses a Job and a JobBoard.  As he states the typical modeling for this situation ends up with the Job having a collection of JobBoards and a JobBoard having a collection of Jobs.  This is, as he states, somewhat problematic.  He then reduces the relationship to a JobBoard having a collection of Jobs and a Job having no relation to JobBoards.  This reduction of relationships is more-or-less what Eric Evans also states in his DDD book.

I would like to take another example: Order to Product.  One order can contain many products and a Product can appear on many orders.  Yet we don't model it anywhere near an Order having a collection of Products or a Product having a collection of Orders.

Relationships represented as collections

In this example it may seem obvious since we have all been modeling Order objects for a hundred years.  Everyone should have the idea of an 'extended' association table since we have extra data that needs to be carried in the form of quantity and price.  So we end up with the OrderItem.  Funny that it is not called OrderProduct, yet the OrderItem refers to a product.  That is because there is a natural boundary around an order.  For some reason, these boundaries are not quite that apparent when working with something like a Job and a JobBoard.

Or am I totally off course here?
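The reduction described above can be sketched quickly (Python for illustration): the many-to-many between Order and Product collapses into an OrderItem that carries the association data and refers to the product only by identifier.

```python
# OrderItem is the 'extended' association: it carries quantity and price
# and holds a product identifier, not a Product object.
class OrderItem:
    def __init__(self, product_id, description, quantity, price):
        self.product_id = product_id
        self.description = description
        self.quantity = quantity
        self.price = price

class Order:
    def __init__(self):
        self.items = []

    def add_item(self, product_id, description, quantity, price):
        self.items.append(OrderItem(product_id, description, quantity, price))

order = Order()
order.add_item("P-1", "Lollipop", 100, 0.5)

assert order.items[0].product_id == "P-1"   # a reference, not a collection
assert order.items[0].quantity == 100
```

There is no Product.Orders collection anywhere, just as Udi removes the Job-to-JobBoard direction.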

Aggregate Roots as self-contained units

Tuesday, 8 September 2009 16:23 by ebenroux

OK, I know Aggregate Roots (AR) are a consistency boundary but I believe that they need to do even more.  As Eric Evans mentions in [DDD] relationships need to be scaled down to simple forms or even eliminated since it leads to simpler software.  I agree.

I think we, as developers, have managed to paint ourselves into a corner over the years and since some of our techniques have been developed over a long period we just don't see it.  It's like the guy sitting on his porch with his dog lying next to him.  The dog is moaning and groaning.  A neighbour walks by and enquires about the dog.  So the chap answers: "There is a nail poking through the floorboard and the dog is lying on top of it."  So the neighbour asks: "Why doesn't the dog just move?"  And the answer: "It doesn't hurt enough."

When philosophizing about some of the software issues that bug me I like to think about a time when there were no computers.  How were these situations handled using manual systems?  Let's take an Order (again).  That piece of paper would contain absolutely all the information required about the order.  There would be no customer defined but maybe a customer number.  We used to have those 'traditional' primary and foreign keys in our databases.  Problem was, whenever we changed the key (for whatever reason) we needed to update the foreign keys, etc.  So how would this work in a manual system?  Probably a little note next to the number saying something like "new number C01-2009".  So no data was actually changed.

There are proponents of CR as opposed to CRUD.  So data is only ever added and read.  However, the same idea can be achieved using CQS [Architecture].  Although there will be updates the entire history will remain so it is essentially CR with a latest snapshot.

Over time, then, we got around to the idea of surrogate keys.  But I think that this has led to the higher coupling I mentioned in a previous post.  One will also often hear arguments along the line of requiring the customer object for discount information, etc. (Gold, Silver, Bronze clients).  But I reckon that *that* information simply serves as defaults.  Once the customer places an order and the customer is a gold customer at that point in time then the discount is assigned to the order and that's that.  Should the customer be downgraded the next day the original data remains intact.

The same can be said of the items being ordered.  The actual products aren't *really* required in the object structure.  The items can be loaded with the relevant description and bar codes and SKU codes and the like and if ever a lookup is required there should be more than enough information to retrieve whatever is required.  Even database joins are possible.
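The snapshot idea in the last two paragraphs can be sketched like this (Python for illustration; the discount table is a made-up example of the 'defaults' mentioned above):

```python
# The order copies the customer's discount at placement time; later changes
# to the customer's segmentation leave the order's data intact.
DISCOUNTS = {"gold": 0.10, "silver": 0.05, "bronze": 0.0}

class Customer:
    def __init__(self, segment):
        self.segment = segment

class Order:
    def __init__(self, customer):
        # snapshot the default rather than keeping a live reference
        self.discount = DISCOUNTS[customer.segment]

customer = Customer("gold")
order = Order(customer)

customer.segment = "bronze"       # downgraded the next day
assert order.discount == 0.10     # the original agreement is untouched
```

The order is self-contained, just like the piece of paper in the manual system: everything needed to honour the agreement travels with it.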