The 'future' of programming

19 December 2016

After watching “Uncle” Bob Martin’s “The Future of Programming” I got to thinking about the software industry again. I got to the video via AGILE HAS FAILED. A PEEK AT THE FUTURE OF PROGRAMMING, which I’m pretty sure came my way via my daily CodeProject Newsletter.

Uncle Bob has been around for a while and has a fair bit of insight into the software world. There are a couple of things in his presentation that caught my attention. The reference to the following statement by Alan Turing was particularly interesting:

“In order to supply the machine with these problems we shall need a great number of mathematicians of ability.” - Alan Turing

Throughout Uncle Bob’s talk I was thinking about how much computing has actually changed. He mentioned how, in the early days, the “programmers” were actually engineers, mathematicians, or scientists. These folks were usually a bit older and didn’t need any management. They knew what they needed to do and got the job done. That may very well be true, but there is also the size factor. Computers really could not do much. Programs were much smaller and the systems were constrained to such a degree that a team of tens or hundreds of folks was rarely necessary. Working on a problem in isolation probably wasn’t uncommon.

My first computer was a Commodore VIC-20. Once you switched it on you had roughly 3.5KB of RAM in which to write your BASIC program. Now if you go ahead and open Microsoft Word and save a blank document you’ll find that the size of the empty document is probably more than that. On my machine the size on disk is 12KB. Things have changed.

Nowadays we have rather large software projects. So much so that one probably would not come across many software projects that require only a single programmer. My very first professional programming job in 1995 was a contract to develop a Soil Conservation System at the Glen Agricultural Development Institute. I was the sole programmer using Visual Basic 3.0 and Microsoft Access 1.0. There may have been some Crystal Reports in there too. The point is that I had very little guidance and completed the project alone. Fast-forward to today and any significant software project is going to require multiple disciplines. This means that there is going to be more communication between people. We need to have various bits interact with each other. This has led to some monstrous siloed implementations over the years where we have a high degree of coupling and we could end up grinding to a halt. Identifying the interactions and defining the overall architecture is something that, if we get it wrong, can have dire consequences for our product.

On mathematicians and ability

Do we really need mathematicians? I don’t think so. There are many aspects to software development that are not necessarily mathematical in nature. Being a mathematical genius probably isn’t going to hurt. I think the more important bit about Turing’s statement is the ability factor.

How do we measure ability? Will university degrees tell us much? How about all the certifications that are available? None of these is a true indication of ability. The noun ability is defined as:

  • possession of the means or skill to do something.
  • talent, skill, or proficiency in a particular area.

Uncle Bob ends his talk by saying that if we, as programmers, do not regulate ourselves then governments will eventually do so.

Perhaps.

Unfortunately it is not possible to regulate, or manage, ability into programmers. You either have it or you don’t. Some programmers strive to improve themselves whilst others are happy to just take home a salary. Some programmers are fine with taking the responsibility for their decisions and actions while others stand back and are content following set tasks without any decision-making involved.

However, there are various skills, and levels of skill, required when developing software. Who gets to decide what the regulations will look like? It is true that business folks do not understand technology to the degree that they can measure outcomes or even compare programmers to each other. This is a problem.

I have come to the conclusion that much of what we do as programmers falls in the realm of tacit knowledge. It is almost impossible to convince another programmer, of any level of ability, that one design or approach may be superior to another. If tacit knowledge is difficult to transfer then measuring it will be equally difficult.

Is agile dead?

Also interesting is that Uncle Bob has conceded that the agile movement is, for the most part, dead.

Something that was created by a group of technical gurus has been annexed by business. When scrum came to the fore it immediately seemed as though it was trying to introduce some structure to agile. I guess that was the beginning of the end.

Regulation

I highly doubt whether regulation will ever be an option. I have seen governance in companies that is a total farce. Any regulation will result in increased costs and a whole new business built around the regulators and assessors and people of that ilk.

This will not improve the quality of software.

We need more people of ability. That is one thing we cannot escape.


JavaScript modules for C# developers

09 July 2016

I have decided to make use of DoneJS for my web development. In turn, DoneJS makes use of a host of other technologies. The one that drew me to this environment was CanJS, as I had spent some three years using Ember.js whilst working for a former employer and didn’t quite take to it. After some investigation I decided on CanJS and that led me to DoneJS.

JavaScript is a rather odd language and the ecosystem is rather huge, as with the .Net environment. There are just so many choices. As a result I have largely ignored anything that did not directly relate to my work. As such I ended up missing the entire JavaScript module train. In the upcoming ECMAScript 6, JavaScript modules are pretty well defined. Now, if you are familiar with JavaScript modules you may as well ignore this post. If you need some background reading you can read this in-depth article.

Dependencies in C#

Although it isn’t strictly required to use dependency inversion when dealing with dependencies in C# I thought that I would take this angle to illustrate some more differences between C# and the JavaScript world.

When adhering to the dependency inversion principle of the SOLID principles one would depend on an abstraction. That abstraction is typically an interface:

public interface IRandomNumberGenerator
{
	int Next(int minimum, int maximum);
}

We could then make use of this abstraction in some class without worrying too much as to how it is going to be implemented:

public class RandomNumberConsumer : IRandomNumberConsumer
{
	private readonly IRandomNumberGenerator _randomNumberGenerator;

	public RandomNumberConsumer(IRandomNumberGenerator randomNumberGenerator)
	{
		_randomNumberGenerator = randomNumberGenerator;
	}

	public bool Probability()
	{
		return _randomNumberGenerator.Next(0, 100) > 50;
	}
}

We have inverted the dependency by not depending on a specific implementation. It is possible to have multiple implementations with each working in a slightly different way. For instance:

public class DefaultRandomNumberGenerator : IRandomNumberGenerator
{
	private static readonly Random _random = new Random(DateTime.Now.Millisecond);

	public int Next(int minimum, int maximum)
	{
		return _random.Next(minimum, maximum);
	}
}

public class DoubleRandomNumberGenerator : IRandomNumberGenerator
{
	private static readonly Random _random = new Random(DateTime.Now.Millisecond);

	public int Next(int minimum, int maximum)
	{
		return (_random.Next(minimum, maximum) + _random.Next(minimum, maximum)) / 2;
	}
}

Making use of an interface for our dependency inversion also means that we can test the interaction by specifying known values that result in a predictable result:

[TestFixture]
public class RandomNumberConsumerFixture
{
	[Test]
	public void Should_return_true_for_values_above_50()
	{
		var generator = new Mock<IRandomNumberGenerator>();

		generator.Setup(m => m.Next(It.IsAny<int>(), It.IsAny<int>())).Returns(60);

		var consumer = new RandomNumberConsumer(generator.Object);

		Assert.IsTrue(consumer.Probability());
	}

	[Test]
	public void Should_return_false_for_values_below_51()
	{
		var generator = new Mock<IRandomNumberGenerator>();

		generator.Setup(m => m.Next(It.IsAny<int>(), It.IsAny<int>())).Returns(30);

		var consumer = new RandomNumberConsumer(generator.Object);

		Assert.IsFalse(consumer.Probability());
	}
}

We could then make use of a dependency injection container to map a specific implementation to the required interface. To make things simpler I have added an IRandomNumberConsumer interface as well, even though most DI containers should be able to register concrete types without an interface. In logical terms we would then register our components in our container as follows:

var container = new MyDependencyContainerOfChoice();

container.WhenAskingForInterfaceType<IRandomNumberGenerator>().ReturnAnInstanceOf<DoubleRandomNumberGenerator>();
container.WhenAskingForInterfaceType<IRandomNumberConsumer>().ReturnAnInstanceOf<RandomNumberConsumer>();

Console.WriteLine(container.GetImplementationOf<IRandomNumberConsumer>().Probability());

The container will see that the RandomNumberConsumer implementation of the IRandomNumberConsumer requires an instance of IRandomNumberGenerator and will locate the type that should be returned and inject it into the RandomNumberConsumer.

This, of course, brings us to how any required instance is created. Typically a container provides a default lifestyle for the instance. In most cases the container will return a singleton so that each request for the implementation will return the same instance. However, we may instruct the container to act a bit like a factory and set the lifestyle to Transient in order for the container to return a new instance of the requested implementation each time we request it.

These dependencies can exist in one or more files within one or more dependencies. When we need a dependency we will simply reference the relevant assembly, either directly or using NuGet.

Dependencies in JavaScript

JavaScript does not work like this. It is a dynamically typed language that is also weakly typed.

This means that, in essence, there can be no dependency inversion. It is not possible to define an abstraction to depend on. You would simply depend on the implementation and, with some luck, the implementation that you selected would implement all the required public interface bits.

There is a certain elegance in the way JavaScript works in that you can override just about everything. Execute the following in the console of your browser:

alert('hello world!');

The result is somewhat predictable. And now this:

window.alert = function() {}

Let’s give that hello world business another go, shall we:

alert('hello world!');

Well, that escalated quickly! Absolutely nothing happens and we have managed to break some pretty fundamental JavaScript functionality.

However, this can come in handy when we really do need to swap out some functionality on an object. In the following code we will reproduce the C# code to a certain degree.

RandomNumberGenerator = function() {
	this.next = function(minimum, maximum) {
		return Math.floor(Math.random()*(maximum-minimum+1)+minimum);
	}
}

RandomNumberConsumer = function(randomNumberGenerator) {
	this._randomNumberGenerator = randomNumberGenerator;
	
	this.probability = function() {
		return this._randomNumberGenerator.next(0,100) > 50;
	}
}

// to test we'll create testing assertions
assertTrue = function(assertion) {
	if (assertion) {
		console.log("OK!")
		return;
	}
	
	throw new Error("Assertion failed.")
}

assertFalse = function(assertion) {
	if (!assertion) {
		console.log("OK!")
		return;
	}
	
	throw new Error("Assertion failed.")
}

// now to test we'll use a "mock"
var mock = { next: function() { return 60; } };
var consumer = new RandomNumberConsumer(mock);

assertTrue(consumer.probability());

mock.next = function() { return 30; };

assertFalse(consumer.probability());

We can achieve the same outcome in JavaScript that we had in C# but it works in a very different way. There is no dependency injection container since there is no interface-to-implementation mapping required, or even possible. In this way, a dynamic language has no need for a dependency injection container and it doesn’t even make sense to have one.

However, we do still have dependencies. There are no assemblies since JavaScript is interpreted. This means that we only ever have source files to work with. Therefore, in order to use a dependency we have to include the source code file containing the code we wish to make use of. This is where we end up with a whole host of <script> tags. It is also important that the script tags be ordered to represent the dependency graph correctly, since we cannot make use of a dependency unless the code for that dependency has already been executed in the JavaScript environment.

If you have ever done any JavaScript development you will know about the <script> tag. In this article I will use the script tag to synchronously execute external JavaScript by using the src attribute. There are some variations on the use of the <script> tag but they are beyond the scope of this article.

Our dependencies are usually added in an html file. Since the scripts are loaded immediately, and block any rendering, they should be placed at the bottom of the page just before the closing </body> tag. On a side note, we need to add any stylesheets to the top of the page in the <head> tag to apply styling to the page while we wait for those <script> tags to load:

<html>
<head>
	<link href='css/bootstrap-responsive.css?{cache-buster}' rel='stylesheet' type='text/css' />
	<link href='css/site.css?{cache-buster}' rel='stylesheet' type='text/css' />
	<link href='css/datepicker.css?{cache-buster}' rel='stylesheet' type='text/css' />
	<link href='css/another-one.css?{cache-buster}' rel='stylesheet' type='text/css' />
</head>
<body>	
	<script src='js/site/configuration.js?{cache-buster}' type='text/javascript'></script>
	<script src='js/external/jQuery/jquery.js?{cache-buster}' type='text/javascript'></script>
	<script src='js/external/Moment/moment.js?{cache-buster}' type='text/javascript'></script>
	<script src='js/external/LawnChair/lawnchair.js?{cache-buster}' type='text/javascript'></script>
	<script src='js/external/LawnChair/plugins/aggregation.js?{cache-buster}' type='text/javascript'></script>
	<script src='js/external/LawnChair/plugins/callbacks.js?{cache-buster}' type='text/javascript'></script>
	<script src='js/external/LawnChair/plugins/pagination.js?{cache-buster}' type='text/javascript'></script>
	<script src='js/external/LawnChair/plugins/query.js?{cache-buster}' type='text/javascript'></script>
	<script src='js/external/i18Next/i18next.js?{cache-buster}' type='text/javascript'></script>
	<script src='js/external/Bootstrap/bootstrap.js?{cache-buster}' type='text/javascript'></script>
	<script src='js/external/Bootstrap/bootstrap-datepicker.js?{cache-buster}' type='text/javascript'></script>
	
	<script src='js/site/app.js?{cache-buster}' type='text/javascript'></script>
</body>	
</html>

For many web applications this would be fine since we have a couple of “global” dependencies that we expect to be available in every bit of JavaScript that we execute. However, eventually we will run into a nightmare w.r.t. dependency ordering and how one goes about bundling all the files. In case you are not aware of it: it is faster to download one bigger resource (script, css, image) than many smaller resources.

This means that we are going to need some mechanism to merge all these files we depend on and then minify them into the least number of files possible. This, in itself, may be somewhat of a challenge.

When dealing with many JavaScript files, such as when one develops a single page application, the number of files can be quite substantial. With Visual Studio one can drag a JavaScript file into another and the following reference entry will be placed at the top of the file that is referencing the other:

/// <reference path="relative-path-to/this-dependency.js" />

var value = UseDependency('value');

It is important to note that this reference entry is purely informational. Visual Studio may be able to use it to provide some IntelliSense but, other than that, there is no tooling that I am aware of that makes use of this entry. I did develop a bundler for a previous project that uses these entries to build up a dependency graph. This means that the entries have to be correct, and circular references are prevented by the tooling.

Why use modules?

As we have seen above, all dependencies are globally scoped. This means that when the files execute, something is defined globally. All variables in JavaScript (ES5 / current as at July 2016) are scoped either globally or by function:

// this declaration...
var shuttle = {};

// ...is the same as this
window.shuttle = {};

Anything, therefore, that is not defined in a function gets attached to the global window object. In contrast, anything defined anywhere (yes, anywhere) in a function is scoped to the function:

window.shuttle = {};

window.shuttle.sayHello = function(to) {
	var localTo = to.toUpperCase();

	alert('hello ' + localTo + '!');
	
	for(var i = 0; i < 2; i++) {
		console.log('variable "i" is hoisted to the top of the function.');
	}
}

You may be wondering why this is relevant. When importing, for example, jQuery using a <script> tag, the library assigns itself to the variable $ which, as we have seen, is equivalent to window.$. Things get interesting when another library decides that it, too, would like to use the $ variable. There is nothing in JavaScript that is going to prevent that. There are ways around this, but modules can help us here since the module, as a whole, is encapsulated.
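As a contrived illustration (both function bodies here are placeholders rather than the real libraries), two scripts fighting over the same global variable:

// script one: a jQuery-like library grabs the global '$'
window.$ = function(selector) {
	console.log('library one handling: ' + selector);
};

// script two, loaded later: a different library overwrites the very same global
window.$ = function(selector) {
	console.log('library two handling: ' + selector);
};

// any code running from this point on gets library two, whether it wanted it or not
$('div');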

A JavaScript module is a singleton and is represented by a file. The module can import other modules and export functionality in a variety of ways. This means that our dependencies are still file-based since we have no type information. The following is an example of importing jQuery as a module:

import $ from 'jquery';

$('div').css('font-weight', 'bold');

You may be thinking: “What’s the big deal?” and if this is all you are going to do then you are definitely not going to find much return on investment.

However, consider the following:

import $old from 'jquery-1.8';
import $new from 'jquery-2.0';

$old('div').css('font-weight', 'bold');
$new('div').css('font-weight', 'bold');

I am using the two jQuery versions as an example since the chances of actually requiring both versions in a real system are rather slim. However, it does demonstrate that this is somewhat simpler than having to use <script> tags. You cannot use this file directly in a browser without having the code transpiled; else you would probably receive an error since today’s browsers do not yet implement this syntax. A JavaScript transpiler takes source that is not pure JavaScript and changes it into pure JavaScript.

Logically (implementations are going to vary), a transpiler may take the above code and inspect it for import statements. It will find the import $old from 'jquery-1.8'; code and check an internal registry hash for the jquery-1.8 identifier and return the module or, if necessary, first load it:

if (!window.MODULES['jquery-1.8']) {
	this._fetchModule('jquery-1.8.js');
}

return window.MODULES['jquery-1.8'];

Now it can assign the functionality returned by the module to the variable you want. The eventual, proper, JavaScript code could resemble this:

var $old = window.moduleLoader.Get('jquery-1.8');
var $new = window.moduleLoader.Get('jquery-2.0');

$old('div').css('font-weight', 'bold');
$new('div').css('font-weight', 'bold');

The beauty of this is that the above JavaScript would be in a function as well. This means that the variables are locally scoped to the function and do not pollute the global (window) namespace.
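To make that concrete, here is a minimal sketch, using the same hypothetical moduleLoader as above, of what the wrapped output might look like:

(function() {
	// everything in here is local to this function
	var $old = window.moduleLoader.Get('jquery-1.8');
	var $new = window.moduleLoader.Get('jquery-2.0');

	$old('div').css('font-weight', 'bold');
	$new('div').css('font-weight', 'bold');
})();

console.log(typeof $old); // "undefined" - nothing leaked onto window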

Module dependencies

It may still not appear that big of a deal but once we get to module dependencies things may start to look a bit clearer.

Traditionally we would either have rather large JavaScript files or we would have to manage the dependencies out-of-band since tooling for this, especially in the Visual Studio / .Net space, is rather lacking.

app.js:

import a from 'dependency-a';

a.doSomethingA();

dependency-a.js:

import b from 'dependency-b';

var a = {
	doSomethingA: function() {
		b.doSomethingB();
	}
}

export default a;

dependency-b.js:

var b = {
	doSomethingB: function() {
		alert('doing... something...');
	}
}

export default b;

Another quick point to remember is that each module that requires another should import that module since, even though modules are singletons, they are no longer global. You have to assign it to a variable and, being singletons, they are only loaded once.
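For instance, a hypothetical dependency-c.js that also needs dependency-b would import it again, yet end up with the same single instance:

dependency-c.js:

import b from 'dependency-b';

var c = {
	doSomethingC: function() {
		// this is the very same 'b' that dependency-a received,
		// because the module is only evaluated once
		b.doSomethingB();
	}
}

export default c;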

From the above it is rather easy, given the correct tooling, to build a dependency graph and then merge and minify the files.
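As a rough sketch of that idea (assuming the simple file layout above, and ignoring path resolution, circular references, and the actual minification), a small Node script could walk the import statements to work out a bundling order:

// build-order.js: a naive walk of "import x from 'y';" statements
var fs = require('fs');

var ordered = [];
var visited = {};

function visit(name) {
	if (visited[name]) {
		return;
	}

	visited[name] = true;

	var source = fs.readFileSync(name + '.js', 'utf8');
	var pattern = /import\s+\w+\s+from\s+['"]([^'"]+)['"]/g;
	var imports = [];
	var match;

	while ((match = pattern.exec(source)) !== null) {
		imports.push(match[1]);
	}

	// depth-first: a module's dependencies are emitted before the module itself
	imports.forEach(visit);

	ordered.push(name);
}

visit('app');

console.log(ordered); // [ 'dependency-b', 'dependency-a', 'app' ]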

Node / NPM

If you are au fait with node/npm you can skip this section.

There has been a major shift in web development tooling towards Node.js along with npm. It probably makes sense given that Node is cross-platform. Npm is a package manager for Node and JavaScript in general. It can be somewhat odd to use npm for front-end JavaScript as all the code is stored in a sub-folder called node_modules. However, not all of these modules are Node.js modules, although packages targeting the tooling side of things most certainly are. The Node and browser JavaScript files are, therefore, mingled. This can take some getting used to.

Npm is somewhat like NuGet and, as such, needs some place to store the package dependencies. When using NuGet the dependencies are stored in the packages.config file and it has the following structure:

<?xml version="1.0" encoding="utf-8"?>
<packages>
  <package id="Shuttle.Esb" version="6.0.0" targetFramework="net45" />
  <package id="Shuttle.Esb.RabbitMQ" version="6.0.1" targetFramework="net45" />
</packages>

You will notice that the packages.config contains only the dependency information. The npm package information is stored in a file called package.json and has the following basic structure:

{
  "name": "nodejs",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC"
}

You can get this basic structure by executing the following in a console and just pressing enter to answer all questions:

npm init

You will realise that there is more to this structure than just dependencies. Let’s add dependencies by running the following in a console:

npm install steal --save
npm install steal-tools --save-dev

The package.json now looks like this:

{
  "name": "nodejs",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "steal": "^0.16.23"
  },
  "devDependencies": {
    "steal-tools": "^0.16.5"
  }
}

The --save argument tells npm to add the dependency to the dependencies hash and keep track of it. The --save-dev keeps track of the dependency in the devDependencies hash. The difference between these two is that the normal dependencies refer to packages that your application requires. This is akin to the packages in NuGet. However, the devDependencies refer to packages you need for tooling. There isn’t really anything in .Net that relates to this although you could think of something like msbuild as a devDependencies candidate. The devDependencies will, therefore, not be distributed/packaged with your application.

The main thing to understand is that the package.json does more than just track dependencies. However, when someone else checks out your repository and runs npm install the package.json file will be used to download the requisite packages.

Note to Windows users: the node_modules folder can become very deep and you may run into path length errors at some stage when using node/npm. There is a node package called flatten-packages that will sort you out. [update]: I have been informed by Matthew Phillips from Bitovi that the long file name issue has been resolved in npm version 3, and that npm version 3 comes with node version 5.

StealJS

This brings me to StealJS:

Futuristic JavaScript dependency loader and builder. Speeds up application load times. Works with ES6, CommonJS, AMD, CSS, LESS and more. Simplifies modular workflows.

As you can tell from the above, StealJS enables loading modules that have been implemented using a variety of approaches. I would suggest using the ECMAScript 6 module format for any future development.

There are two parts to steal:

  • The module loader that is included in your application
  • The bundler (steal-tools) that creates release/production versions of your application and related dependencies

Steal also allows the progressive loading of modules from code at any point. This is useful when you only determine which module to load at runtime.
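For example, something along these lines loads a module only when it is needed (the module name and its render function are made up for illustration, so check the steal documentation for the exact API):

// steal.import returns a promise that resolves to the module's export
steal.import('reports/heavy-chart').then(function(heavyChart) {
	heavyChart.render(document.getElementById('chart'));
});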

You will find all the information you need to use steal on the site and you can get help on the DoneJS forum.


Let's work overtime

31 March 2015

So yesterday (30 March 2015) I went to a large insurance company for an interview. Some high-level technical questions arose. I could tell that the interviewer was burning to get to something. Then he mentioned that he had had to put in 300-hour months and asked whether I would decline a project on that basis.

Well.

Let me think.

So we have seen countless examples of why working overtime does not help. For those still wondering about this simply search for “overtime counterproductive”.

In South Africa it is rather difficult to get hold of decent developers. So instead of employing a productive developer to work “normal” time and actually add value it seems to be better to employ anyone willing to work 300 hour months.

This was one of the shorter interviews in my career.

I really do not get this obsession with overtime. If working odd hours is part of your job that’s something different.

I was not interested in the job as it would leave me with very little liberty.


Windows 8.1 Freezing/Stuttering/Buzzing

21 February 2015

For the last couple of weeks (as at 21 February 2015) my Dell Inspiron 17R SE has been periodically “freezing” for about half a second with a buzzing sound when there is audio.

After some Internet searching I came across this latency checker:

DPC Latency Checker

Every so often a red bar would appear when the machine “froze”, accompanied by the buzzing sound, indicating some latency.

Some more searching led me to the xperf tool and I followed [this post](http://www.sysnative.com/forums/windows-7- -windows-vista-tutorials/5721-how-to-diagnose-and-fix-high-dpc-latency-issues-with-wpa-windows-windows-vista-7-8-a.html):

xperf -on DiagEasy

I then waited for the red bar to appear in the latency checker and then executed this (in my d:\xperf folder):

xperf -d trace.etl

I then double-clicked the trace.etl file in Windows Explorer and identified the Dell Data Vault as the culprit:

Dell Data Vault

Some more searching then brought me to this Dell community page. I tried disabling the Dell Diag Control Device but that didn’t help and then, as indicated further down, I disabled the Dell Data Vault and Dell Data Vault Wizard services.

That did the trick. Quite a frustrating exercise and I would have expected better from Dell :(


ReSharper Open Source

29 October 2014

Yesterday I received my first free ReSharper Open Source license from JetBrains for use with shuttle-esb.

I am totally thrilled by this as I have been using ReSharper for a good number of years and find it absolutely amazing that they assist open source software in this way.

So a big thank you to JetBrains!


NServiceBus V5 Configuration

29 October 2014

Since I am active on StackOverflow I came across this question: How to reconfigure SQLTransport based NServicebus in Asp.net Web API? The answer caught my attention. It seems as though V5 of NServiceBus has been redesigned to make the configuration less static. The reason I find it interesting is that shuttle-esb has been designed like this from the start. Shuttle has also had a Pipeline concept from very early on and NServiceBus seem to be toying with the idea of a “chain of responsibility”. NServiceBus is what got me started on the Shuttle journey and Udi Dahan (creator of NServiceBus...

Read Post

Why we stopped using SignalR

08 October 2014

My current development team has been making use of SignalR for communication since I came on board 1 February 2013. However, we have been replacing the communication infrastructure bit-by-bit as we ran into issues. A while back we upgraded SignalR to version 2.1.1 and it just didn’t seem to work. We rolled back to version 2.0.3 to get things going again. It then dawned on us that we had removed so much of SignalR that we really did not need it at all. It was then removed after consultation with our product owners and the architecture folks. Our main issues:...

Read Post

MSMQ and TransactionScope

07 March 2012

It turns out that when using TransactionScope you should be creating the MessageQueue instance inside the TransactionScope:

using(var scope = new TransactionScope())
using(var queue = new MessageQueue(path, true))
{
	// timeout should be higher for remote queues
	// transaction type should be None for non-transactional queues
	var message = queue.Receive(TimeSpan.FromMilliseconds(0), MessageQueueTransactionType.Automatic);

	// ...normal processing...

	scope.Complete();
}

So MSMQ behaves the same way that a database connection does in that one has to create the database connection inside the scope.

Read Post

SOA is not going away

05 December 2010

Sometimes it is difficult to explain things that appear to be extremely simple. I guess this is where tacit knowledge comes into the picture. It may very well be that service orientation falls into this category. Let’s say that I wanted to pay a bill 50 years ago by sending my creditor a cheque by mail. I also expect a letter in reply stating that my cheque has been received. I firstly place my cheque, along with some relevant information, in an envelope and write the address of the creditor to send the mail to on the front. I could...

Read Post

Bite-Size Chunks

04 November 2010

I came across this article on ZDNet: The bigger the system, the greater the chance of failure Now, I am all for “bite-size chunks” and Roger Sessions does make a lot of sense. However, simply breaking a system into smaller parts is only part of the solution. The greater problem is the degree of coupling within the system. A monolithic system is inevitably highly coupled. This makes the system fragile and complex. The fragility becomes apparent when a “simple” change in one part of the system has high ramifications in another part. The complexity stems from the number of states...

Read Post