On Frameworks

Warning: This post is some kind of ranting stream of consciousness to get some thoughts off my chest. Expect incoherent thoughts, polemical exaggeration, and factual errors. I am the only one to blame.

I keep recommending one "unpublished" book by Chuck Moore to my friends: Programming A Problem-Oriented-Language. Like most good stuff around computers and their operation it is from the '70s, and not everyone shares my enthusiasm about it. It describes the ideas Mr. Moore had (or has) about Forth's Gestalt as a means to express a programmer's intentions in a more natural and terse way.

Besides the ideas about Forth there is an intriguing driving force behind it, which is mentioned as The Basic Principle. Now, you will have heard this a million times before. It is called:

Keep it Simple

The usual reasoning behind it goes like this: humans can only grasp a limited number of things at any given point in time. The more code you write, the more complicated it gets, and the less of it you can keep in your head. And that is where the bugs creep in, cripple your code, and make it slow and awkward to read and maintain.

This has been taught to quite a few generations of programmers by now, so it may well be held up as a True Fact in our consensus narrative: the shared agreement on how we all think software should be written.

The software I face daily, however, always seems to originate from a strange parallel universe: frameworks, software that allows you to interact with or build a specific type of software. Graphical user interfaces, blog engines, OS distributions, platform-independent code, you name it.

And businesses keep building their products on top of these frameworks. Quite a number of programmers are assigned to customize, tend and feed the frameworks. Meanwhile, others work around quirks of particular implementation details or bugs in the 'works. The next room over works on actual features for their solution to an actual problem customers want to pay the company for.

After a while these frameworks tend to grow "rings": abstractions layered around each other like the growth rings of a tree. More and more people are drawn into the maintenance of the code base, until someone triggers a rewrite of the framework, supposedly easier and better adapted to the current requirements and of course less of a chore to maintain... Then the cycle repeats.

Why is that? Well, let's go back to the '70s and Mr. Moore. His basic principle has a corollary:

Do not speculate

This is also still part of the agreed-on best practices for designing software. Write for current needs; don't implement features in anticipation of a future use case that will most likely never happen, at least not in the imagined form. Especially the pointy-haired bosses understand this as a precaution against wasting the company's money. Still, code bases grow beyond repair and understanding. I think this is related to the last corollary of Mr. Moore's basic principle, one I haven't found anywhere else so far:

Do it yourself

Mr. Moore explains it like this:

Now we get down to the nitty-gritty. This is our first clash with
the establishment. The conventional approach, enforced to a greater
or lesser extent, is that you shall use a standard subroutine. I say
that you should write your own subroutines.
Before you can write your own subroutine, you have to know how. This
means, to be practical, that you have written it before; which makes
it difficult to get started. But give it a try. After writing the
same subroutine a dozen times on as many computers and languages,
you'll be pretty good at it.

(As it is from the '70s it is all about subroutines, but you can replace that with APIs, modules, Ruby gems, libraries, or whatever applies to your domain of programming.)

And from my experience in programming I have to agree. If you have not implemented an algorithm all by yourself, you do not grok it, do not understand it in every aspect there is. Of course this applies to every other skill or field of work as well, but in programming there seems to be an aversion at work against implementing algorithms yourself: such implementations are deemed insecure and error-prone, and since they have fewer users they are less tested.

I think the other side of the story is this: programmers don't fully understand the problem, don't have (or get) the time to learn about the problem properly, and so they reuse some code found on the Internet or approved of by someone else.

While there are good reasons to reuse proven, fully debugged code in certain areas, such as cryptography, this reasoning seems to have been overgeneralized: it now applies to every field of programming. Existing (hopefully working) code is preferred over planned (not yet working) code. While this is sound reasoning when you want to minimize development costs, it does come at a price. And its name is abstraction.

To make larger programs manageable, programmers apply the principles of information hiding and abstraction of low-level details. This is done by presenting the user (another programmer) with a model of the inner workings. These models follow a design idea meant to simplify (hopefully) the complexity of the problem this particular piece of software is supposed to solve.

However, the user's model of the problem domain may not be identical to the original author's. In practice this means that the library does not "fit" nicely into the existing program, because either its data model or its control flow follows a different idea about the problem.

So what programmers do next is build yet another abstraction on top, to hide these differences and make the library fit in nicely again. One can easily imagine how the number of abstractions built in for compatibility grows as more and more external components are "reused" or "refitted".
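To make this concrete, here is a minimal sketch of such a compatibility layer, written in Forth to stay in the spirit of the post. The library word lib-display is hypothetical, a stand-in for an external word whose model of a string is an address/length pair, while our own code happens to pass counted strings around:

    \ Stand-in for a hypothetical library word that expects
    \ a string as an address/length pair ( c-addr u ).
    : lib-display ( c-addr u -- )  type cr ;

    \ Our code models strings as counted strings ( c-addr ),
    \ so we wrap the library word in an adapter that converts:
    : display ( c-addr -- )  count lib-display ;

    : greet ( -- )  c" Hello" display ;
    greet    \ prints: Hello

Each such shim is small, but as described above, they pile up with every component that follows yet another model.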

As a consequence, programmers forget, or form a wrong image of, how the underlying code really works. All assumptions about performance and behavior under certain inputs are off. If you are lucky these have been documented, but they rarely are.

This lack of understanding has been the main reason people come to consider a piece of code unmaintainable (that, and changed requirements making it "unfit" as a whole for the new class of problems). This happens mostly when teams change or whole generations of programmers leave a project or the company. The new programmers don't understand the original idea behind the code and tend to work against its original intentions (or design).

So one may say: "Well, while this explains the growth of software, how does it relate to frameworks? Isn't this what everyone does? Programs tend to grow with or without the use of frameworks, right?"

One reason Chuck Moore has been able to put his basic principle to work so successfully is the way Forth itself works: bottom up. A small set of tools, words in Forth lingo, is combined into an application. One could say (and Mr. Moore does) that you create a new programming language tailored to solving problems in your problem domain.

The quality of the program is thus a direct result of the quality of the building blocks used. Programmers who write a lot of systems will reuse their building blocks over and over again, but since the blocks are small, they are easily adapted or rewritten when they don't fit the goal of the current program.
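As a rough illustration of that bottom-up style, here is a hedged sketch in standard Forth; the toy problem (temperature conversion) and all word names are my own invention, not Mr. Moore's:

    \ Small words, each trivial and testable on its own...
    : f>c   ( f -- c )  32 - 5 9 */ ;              \ Fahrenheit to Celsius
    : .degc ( c -- )    . ." degrees Celsius" cr ; \ print a Celsius value

    \ ...composed into a word that reads like the problem domain:
    : report ( f -- )  f>c .degc ;

    212 report    \ prints: 100 degrees Celsius

Each word can be tried interactively at the prompt before the next layer is built on top of it, which is exactly what makes such small blocks cheap to adapt or rewrite.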

This might seem scary to the manager of these programmers: since the building blocks are so small and tiny, everything appears to be rewritten from scratch. This is the point where marching orders calling for "code re-usability" come down the chain of command. For these people, frameworks offer a safe haven to long for.

Frameworks make all the tools used visible and explicit by defining them in understandable chunks or abstractions. As mentioned above, each abstraction is geared towards a certain model imposed on the world and its surroundings. So your program does not quite fit this model? You will have to work around it. Your framework hides the details of a tool it offers, so you cannot adjust the tiny bit that's missing for your needs? Good luck! If it's an open source framework you will be able to adjust the component, or even derive your own, perfectly suited one; but by then you will have become an expert in yet another framework!

So frameworks tend to mutate into a melting pot of different ideas, contributed by programmers fighting different problems, with different needs and different models of their problem domain.

Carefully tending these frameworks might take care of the worst, but it leaves the programmer somewhat vulnerable. When the framework changes in unexpected ways, maybe overnight, will your software still work? How much effort will it take to adapt to the changes?

After all, you will only know for sure when you have done it all by yourself.

Keep it simple. Do not speculate. Do it yourself.

Thank you, Mr. Moore.
