When a painter wants to create a picture (the product), he takes a brush (the tool) and paints it. Clear as day. But what is a painter actually doing when he first takes a pencil and draws a sketch?
The sketch is created in the same way as the final image, and it is an image in itself, but in reality it is a tool that helps the author create the final image. It is a mechanical extension of his imagination.
This ambiguity of tools is common. When a house is being built, scaffolding must be erected first. The scaffolding itself requires construction skill, and it could even perform some of the functions of the building, yet it is torn down before completion. Its only purpose was to let the masons lay bricks higher than they could reach.
Unfortunately, what is visible at first glance in the physical world is much murkier in the world of thought. Wherever there is mental work, it is subconsciously assumed to be work on the final product. And not only in programming. Whether you open a text editor to write a document, an IDE to write an algorithm, or a database to write queries, management assumes the result is intended for consumption by the customer, or is at least part of such a result.
It is often forgotten that on the way to the result, auxiliary constructions must be produced in the mind of the author himself. A text editor is not only used to compose documents, but also to organize and sort the notes that will go into the document. Likewise, a programming language, beyond its primary purpose, helps the author recognize abstractions, and a database helps him discover relationships in the data.
Why am I breaking this down? Because whether we expect the value of working with a tool to go to the author or to the customer affects absolutely everything: which tools we choose, what matters about the result, and what we share, with whom, and how. For a customer document, typography, corporate branding, a change history, and a formal review process matter: simply anything that ensures a professional feel. For the author's working document, on the other hand, high information density, constant collaborative availability for reading and writing, ease of making changes, and annotations matter: simply everything that helps the creation process.
The first set of requirements is best met by Word on a shared repository, the second by a Markdown editor backed by Git. Application servers versus a REPL, or BI systems versus Jupyter notebooks, can be compared in much the same way. The stumbling block is whether, when, and how to switch from one mode to the other. Tools that allowed a smooth transition from initial ideas to a polished result, and back again for the next iteration, would of course be ideal. But anyone who has tried working with Word and its change tracking, or conversely with generating .doc files from Markdown, knows that this is a utopia that can only be approached from one side or the other through various sets of compromises: OneNote, Google apps, AsciiDoc, mind maps…
In real life, the solution to this problem does not lie in technology. Painters do not worry that drawing a sketch is wasted work just because they will not sell it and the final image will be on another canvas. The construction manager does not get mad at the workers because their scaffolding is uninhabitable. Yet the equivalent of such nonsense happens depressingly often in software development. As with any frustration, it is more realistic to change the expectation than the outcome.
Abstraction needs to be discovered, not designed
It can be argued that algorithmization, unlike writing free text, is an engineering activity. So while the parallel between the painting and the sketch may still hold for text editors, it supposedly no longer does for programming languages and environments, because there one should work within the constraints of a clear plan and verified design patterns.
Such an assumption rests on the idea of software creation as an act of design: that the author formulates the problem using abstractions he chooses arbitrarily. But in my opinion, this is the root of most problems with overengineering and other discrepancies between the developed solution and reality. It is much more rewarding to think of software development as an act of discovery, in which we try to uncover the abstractions inherently present in the problem, just as the painter tries to see the essence of the scene's beauty through his sketch.
An exemplary synthesis of all the ideas so far is how we understand the task of refactoring. A naive view may consider it work on the product, aimed at increasing its quality and thus its price. The real meaning, however, was perfectly summed up for me by Kent Beck:
> My code can't be tidier than my thinking. The purpose of my tidying is to clarify my thinking by manipulating the code. The code ends up better, but because I understand more, not because I somehow forced it to be better in spite of my confusion.
>
> Kent Beck (@KentBeck), June 28, 2019
Refactoring is a tool that, through the assisted externalization of internal thought processes, lets us apply our brain's abilities to problems many times beyond its capacity. It is both a sketch and a scaffolding for thinking, in one.
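To make the idea concrete, here is a toy sketch of tidying in Haskell (my own hypothetical example, not Beck's; all names are invented for illustration). The point is not that the code gets shorter, but that manipulating it surfaces an abstraction that was hidden in the duplication:

```haskell
-- "Before": two functions that quietly hide the same idea.
sumSquares :: [Int] -> Int
sumSquares []     = 0
sumSquares (x:xs) = x * x + sumSquares xs

sumCubes :: [Int] -> Int
sumCubes []     = 0
sumCubes (x:xs) = x * x * x + sumCubes xs

-- Tidying: manipulating the code reveals the shared shape (a fold).
-- The behavior is unchanged; what changed is my understanding.
sumBy :: (Int -> Int) -> [Int] -> Int
sumBy f = foldr (\x acc -> f x + acc) 0

sumSquares' :: [Int] -> Int
sumSquares' = sumBy (^ 2)

sumCubes' :: [Int] -> Int
sumCubes' = sumBy (^ 3)

main :: IO ()
main = print (sumSquares' [1, 2, 3], sumCubes' [1, 2, 3])  -- (14,36)
```

The discovery here is that "sum the images of a function over a list" was the abstraction inherently present in both functions all along; the refactoring merely externalized it.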
I only realized the full scope and importance of this concept when I read the [category theory book][book]. My intellect was grossly inadequate for it, and I had no one around to advise me. Over time, I converged on formulating my questions in Haskell, and since Haskell is itself a category, I let its compiler answer me. It either agreed with me or explained what I did not understand.
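As an illustration of what "letting the compiler answer" can look like (a hypothetical example of mine, assuming GHC), a question can be posed as a type signature. If the body is left as the typed hole `_`, GHC replies with the type it expects there; filling the hole in is the answer:

```haskell
-- A question posed as a type: can a pair be distributed over Either?
-- Writing the right-hand side as the typed hole `_` would make GHC
-- report exactly what it expects there, step by step.
distribute :: (a, Either b c) -> Either (a, b) (a, c)
distribute (x, Left y)  = Left (x, y)
distribute (x, Right z) = Right (x, z)

main :: IO ()
main = do
  print (distribute (1 :: Int, Left "b"    :: Either String Bool))
  print (distribute (1 :: Int, Right True  :: Either String Bool))
```

The compiler checks the "proof" the same way the category theory exercises demand, which is exactly the teacher-like dialogue described above.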
It was a strange feeling to change my mental image of a programming language from an unthinking servant into an understanding teacher.