Onorio Catenacci

i = i + 1

My father ran a job shop over in Fraser, Michigan, where he had some computer numerical control (CNC) machines. These machines had been purchased from a Swiss firm, and although they worked pretty well, like any machine they needed occasional maintenance. My father, like any smart businessman, was loath to spend money he didn't need to spend, so he'd try to figure out problems with the CNC machines himself before contacting the vendor's expensive tech support service.

He had in his shop one employee who fancied himself a bit knowledgeable on the subject of computers, and this employee volunteered to take a look at the problem and see if he could solve it. I gather from what my dad told me afterward that this machine had its instructions coded in something akin to BASIC. The employee studied the instructions and finally thought he'd found the issue. He pointed to a line in the source reading something like i = i + 1 and told my father that must be the problem, because there was no way that could be correct.

My father related this to me and I, of course, informed him that the line wasn't meant to be read as an algebraic statement; it's meant to be read as an instruction: take the value in i, add one to it, and then store that value back into the memory location where i lives.

Now, having said all that, I can certainly understand why my dad's employee thought it must be the problem. Algebraically the statement would be nonsense. Most of us who write software for a living have become so accustomed to this idiom (or the C/C++/C#/Java idiom of i++;) that seeing it doesn't give us a moment's pause anymore.
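
As a concrete illustration, here is a trivial C sketch (C simply because the article's examples are C-flavored) of the "instruction" reading of the statement:

```c
#include <stdio.h>

int main(void) {
    int i = 3;

    /* Read the current value of i (3), add one, and store the
       result (4) back into the same storage location named i. */
    i = i + 1;

    /* The C-family shorthand for the same operation. */
    i++;

    printf("%d\n", i); /* prints 5 */
    return 0;
}
```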

Why Do We Accept This As Normal?

So to my mind the question naturally arises: when so many of us (and indeed the inventors of most programming languages of note) have a good background in mathematics, why do we accept this sort of aberration of expression as perfectly normal and necessary?

At one point, given limited memory, it likely made a lot of sense to allow this pattern, which I've heard referred to as "destructive assignment". Since I'll use that term a few more times from here on out, I'll abbreviate it as DA. When you've only got 64 KB of memory, conserving memory is a major engineering concern, and the alternative (writing the value to a different memory location and binding it to another name) would quickly become a problem in a tight memory situation. But unless someone is doing embedded software development, I'd think tight memory constraints are no longer a main consideration in how we write software.
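
To make that trade-off concrete, here's a small C sketch of the two approaches: one storage location overwritten in place versus a fresh (here const) binding for every intermediate value. The variable names are just illustrative.

```c
#include <stdio.h>

int main(void) {
    /* Destructive assignment: one storage location, repeatedly overwritten. */
    int total = 0;
    total = total + 5;
    total = total + 7;

    /* The alternative: each intermediate value gets its own binding.
       This costs more storage, but nothing is ever overwritten. */
    const int step0 = 0;
    const int step1 = step0 + 5;
    const int step2 = step1 + 7;

    printf("%d %d\n", total, step2); /* both print 12 */
    return 0;
}
```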

Also, not everyone accepts it as normal. A few programming languages syntactically distinguish assignment from equality testing. Pascal uses := for assignment and = for equality testing. C and the languages derived from it use = for assignment and == for equality testing (a situation that has caused some very tricky bugs in C code over the years). i := i + 1 is a very visible way to show we're not trying to assert equality.
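
The classic C bug alluded to above looks something like this contrived sketch (many compilers will warn about it with -Wall, but it still compiles):

```c
#include <stdio.h>

int main(void) {
    int errorCode = 7;

    /* Intended: compare errorCode to zero (==).
       Written: assign zero to errorCode (=).
       The assignment expression evaluates to 0, so the branch never runs,
       and the original error code has been silently destroyed. */
    if (errorCode = 0) {
        printf("no error\n");
    }

    printf("errorCode is now %d\n", errorCode); /* prints 0 */
    return 0;
}
```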

Even more to the point, DA contradicts our understanding of mathematics. If I have the equation 4 = x + 2 there's only one value that will make the statement true. While I don't know what x is (let's pretend the answer isn't trivial to arrive at), it's an unknown value, not a variable. It doesn't vary at all. In that sense, calling something that supports DA a "variable" is more apt than we may initially realize.

Modeling Reality?

The people I know who are the biggest proponents of OOP usually tell me they prefer it because it allows them to code in a way that mirrors reality. That's an excellent goal, for sure. We certainly want to create software that closely mirrors the problem domain we're tackling, and it's true that this is a benefit of OOP. But in what reality can an unknown take on arbitrary values? For any algebraic equation you can name (or at least any I can think of), there's a limited set of values that make the equation true.

In fairness, I am conflating two ideas. OOP isn't the same as imperative programming; DA is a feature of the imperative paradigm. That said, so much OOP is so closely aligned with imperative programming that the two often seem synonymous. And pretty much any time someone codes a loop in an imperative language, they code a loop counter (or their colleagues give them grief). Absent newer approaches (like LINQ which, coincidentally, is modeled on the functional programming paradigm), if you code a loop that needs to execute a definite number of times without a loop counter, all you're going to do is cause a lot of head-scratching and perhaps anger among your colleagues.
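
To illustrate the contrast, here's a hedged C sketch (standing in for the LINQ-style approach mentioned above; the function names are just illustrative). The first function uses the familiar mutable loop counter and accumulator; the second computes the same sum by recursion, with no destructive assignment at all.

```c
#include <stdio.h>

/* Imperative: a mutable counter and a mutable accumulator. */
static int sum_imperative(const int *values, int count) {
    int total = 0;
    for (int i = 0; i < count; i++) {
        total = total + values[i];
    }
    return total;
}

/* Functional style: no variable is ever reassigned; each recursive
   call just binds new parameter values. */
static int sum_recursive(const int *values, int count) {
    if (count == 0) {
        return 0;
    }
    return values[0] + sum_recursive(values + 1, count - 1);
}

int main(void) {
    int data[] = {1, 2, 3, 4, 5};
    printf("%d %d\n", sum_imperative(data, 5), sum_recursive(data, 5)); /* 15 15 */
    return 0;
}
```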

Now, having done both functional and imperative programming, I can tell you that there are good and bad aspects to both paradigms. But one big source of accidental complexity in the imperative paradigm (at least to my mind) is DA. If a developer uses DA carefully and intentionally, it's no problem. But all of us get under deadline pressure, and carefully checking things is one of the first things we jettison. We assume everything will proceed on the happy path.

And there's another very important accidental complexity associated with DA. Most developers working in software these days will not be aware of all the issues that manual memory management caused for developers in the past, and part of the problem with manual memory management was DA. In the days of manual memory management it was common practice to store memory addresses in variables (C's pointers). When I can easily overwrite one memory address with another, and not only does the compiler not warn me, it's actually expected behavior, the potential for buffer overruns and other hard-to-trace bugs is greatly increased.
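
A contrived C sketch of the kind of thing described here: a pointer is silently re-pointed at a smaller buffer, the compiler accepts it without complaint, and a later write overruns that buffer.

```c
#include <string.h>

int main(void) {
    char big[64];
    char small[4];
    char *dest = big;

    /* Somewhere later, dest is destructively reassigned to a different
       address. The compiler accepts this silently; it's expected behavior. */
    dest = small;

    /* This copy was written with the 64-byte buffer in mind, but dest now
       points at the 4-byte one: a buffer overrun, and undefined behavior. */
    strcpy(dest, "a string far longer than four bytes");

    return 0;
}
```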

And even if you are very careful to ensure you don't use DA unless it's absolutely justified, your colleagues may need to interact with the software artifacts you provide, and they may not be as careful as you are about using DA only when absolutely needed. Why should we rely on developer discipline when we can push the issue to the compiler, where it really belongs? Let the compiler check us so we don't have to worry about having missed something.

It's Past Time To Move On From Destructive Assignment

There's a whole class of errors which can be laid right at the feet of DA. Off-by-one errors, for example, are sadly quite common in languages that allow DA. You can still produce an off-by-one error in a functional language (which lacks destructive assignment), but it's considerably more difficult, because DA simply isn't allowed and the compiler will raise an error if you try it. In fact, running a loop a fixed number of times actually becomes slightly more work in the functional paradigm.
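
For example, here is the classic off-by-one slip that a mutable loop counter makes so easy, sketched in C:

```c
#include <stdio.h>

int main(void) {
    int values[5] = {10, 20, 30, 40, 50};
    int total = 0;

    /* Off by one: <= runs the loop six times, reading one element past
       the end of the array (undefined behavior in C). The correct
       condition is i < 5. */
    for (int i = 0; i <= 5; i++) {
        total = total + values[i];
    }

    printf("%d\n", total);
    return 0;
}
```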

C. A. R. Hoare once referred to null references as his "Billion Dollar Mistake", and no doubt nulls and null checking have added far more complexity than they've removed. But DA is also responsible for a lot of accidental complexity, and there's not really a good reason to allow it going forward. We can't fix all the legacy code in the world that relies on it, but we can learn better ways of coding and remove this accidental complexity.
