Code generators do our hard work. But we don’t need them anymore.
A few years ago, while working in the industry, I took part in the demo of a programming product in which a *wizard* or *code generator* was used. The presenter boasted about writing less code, leaving the machine to write the text for her, based on the templates she was declaring.
The seductive argument was writing less code. Today we know that this approach is completely wrong.
If we have to tell the machine what the meta code is, it is because we are not being declarative enough.
There are several reasons:
a) We have not yet reached the right level of abstraction, so we have to teach the interpreter to perform syntactic manipulations on our models.
If there is some redundancy, the right approach is to model that redundancy by searching the real world for the abstract concept it should map to.
b) Some code generators rely on the (almost always incorrect) excuse that generated code is faster than more declarative code that introduces an indirection or abstraction. The performance argument almost always collapses against a real benchmark. On the other hand, when reading or debugging a function, the mass of machine-generated code confuses the novice programmer (and makes the experienced one waste a lot of time).
Currently, human time is more valuable and scarce than machine time.
c) Some programming languages impose arbitrary type restrictions that can only be worked around with intermediate code generation, such as Java generics or C++ templates. These are language limitations, not model restrictions.
Non-declarative languages like C++ or Java offer us tools such as templates or generics that look different but serve the same purpose.
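As a hypothetical illustration (the function below is invented for this sketch, not taken from any product demo): in a language without these arbitrary type restrictions, a single declarative definition covers every element type, so no intermediate per-type code needs to be generated the way a C++ template instantiation does.

```python
from typing import Sequence, TypeVar

T = TypeVar("T")

def first(items: Sequence[T]) -> T:
    """One declarative definition that works for any element type.

    No per-type variants are generated behind the scenes, unlike
    C++ template instantiation.
    """
    return items[0]

print(first([1, 2, 3]))        # works for integers
print(first(["a", "b", "c"]))  # and for strings, with the same code
```

The model (take the first element of a sequence) is expressed once; generating one copy per type would only add code to read, debug, and keep in sync.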
d) The interpreter uses meta-programming to resolve these hints. This is discouraged, among other reasons, because it leaves obscure references that are almost impossible to refactor and easy to erase by mistake.
e) The code generated with wizards or meta-programming is much more obscure and much harder to follow, and it violates the fail-fast principle because it usually does not know how to defend itself against invalid constructions.
We can choose not to use it if we want to build good software models.
f) The code generated by wizards, templates, and automatic code generation tools is of very low quality. It is rarely documented, and since it is repetitive cut&paste, it generates coupling. If we find an error in one of the generated fragments, that coupling forces us to correct it in multiple places, violating the DRY principle.
g) The generated code has little added value, and it often encourages bad design practices like automatic *setters and getters* generation.
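To make this concrete, here is a hedged sketch (the class names are invented for illustration): the wizard-style anemic object exposes its state through accessors, while a hand-designed object exposes behavior and protects its invariants.

```python
# Wizard-style output: automatic getters/setters, no behavior,
# any collaborator can leave the object in an invalid state.
class GeneratedAccount:
    def __init__(self):
        self._balance = 0

    def get_balance(self):
        return self._balance

    def set_balance(self, balance):
        self._balance = balance  # nothing stops a negative balance


# Hand-designed alternative: behavior instead of accessors,
# with invariants checked at every step.
class Account:
    def __init__(self, balance):
        if balance < 0:
            raise ValueError("balance cannot be negative")
        self._balance = balance

    def withdraw(self, amount):
        if amount > self._balance:
            raise ValueError("insufficient funds")
        return Account(self._balance - amount)

    def covers(self, amount):
        return self._balance >= amount
```

The generated version happily accepts `set_balance(-100)`; the designed one fails fast, which is exactly the property point e) above asks for.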
h) Finally, if computers can infer the repetition in our design patterns, it is because we are not contributing the creative (and, for now, non-replicable) part, and we will be out of work in the short term.
Given the symptom of finding a pattern that is repeated in our code, the diagnosis is almost always the same:
There is an abstraction or generalization in the domain of the problem that we have not yet discovered.
The trivial solution is to go, with the help of our favorite domain expert, and discover the real-world concept that encompasses the repetition or pattern the wizard or template kept us from seeing.
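A hypothetical example of this discovery (the names here are invented, not from the article): suppose a wizard pasted the same validation snippet next to the phone, fax, and mobile fields of a form. The domain expert tells us the missing concept is simply a phone number, and one small object replaces all the coupled copies.

```python
import re

class PhoneNumber:
    """The real-world abstraction behind the repeated fragments."""
    _PATTERN = re.compile(r"^\+?\d{7,15}$")

    def __init__(self, number: str):
        # Fail fast instead of letting invalid state spread.
        if not self._PATTERN.match(number):
            raise ValueError(f"invalid phone number: {number!r}")
        self.number = number

# One concept replaces the three generated, coupled copies of the check
# that the wizard would have pasted next to each field.
contact = {
    "phone": PhoneNumber("+541145550000"),
    "mobile": PhoneNumber("+5491155550000"),
}
```

A fix to the validation rule now happens in one place, restoring the DRY principle that point f) above showed the generated fragments violating.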
Dynamic code generators were a fad at a time when productivity was measured based on the lines of code generated.
To avoid them, we must stay true to the bijection rule and look for abstractions in the real world of the problem domain.
Part of the objective of this series of articles is to generate spaces for debate and discussion on software design.
We look forward to comments and suggestions on this article.
This article is also available in Spanish here.