Here goes another article about null
<TL;DR>
It seems to me that including nullability in type systems is an obvious technical step forward in operational security and cost effectiveness. The market is either lagging behind or I am overestimating its value. Perhaps I am part of a daring but naive minority that gives too much credit to safer type systems.
</TL;DR>
Null safety has become quite fashionable in mainstream platforms. Languages such as Swift and Kotlin are endorsed by Apple and Google for use on their platforms. Rust is all the hype in systems programming and WebAssembly. C# received non-nullable reference types in its 8th version, and TypeScript is gaining traction as a replacement for JavaScript. The Dart team is working hard to make a similar journey with what they call Non-Null By Default (NNBD).
At the same time, I have encountered development teams where this topic is never spoken of, almost as if the null problem that so many language creators are working on is a non-issue. Many developers go about their Java/C++/JavaScript code bases, adding value to their company as intended, not adopting this null-safe fad. Null-safe languages are rarely in the top 10 when measuring the most used or most popular languages, such as the TIOBE index and the GitHub Octoverse, with the exception of C# (the indexes do not measure the usage of C# 8 and onward) and TypeScript (measured in open source projects).
"We manage to build software very well without null safety, thank you very much." - Mr Straw Man, for the sake of this argument
There are a million reasons not to solve this theoretical issue, and I sympathize with a lot of them. Introducing new technical decisions is always a risk, and there has to be an equal or larger benefit balancing out the risk-reward scale. It might be a good idea to compare this paradigm leap to paradigm shifts of similar magnitude. The object-oriented paradigm brought a nominal type system which has been widely adopted and brings many benefits when statically typed:
// Java
class Cat {
    void meow() {}
}

class Dog {
    void bark() {}
}

Dog doggie = new Dog();
doggie.meow(); // Compiler error
This example might be obvious and intuitive as to why it will not, and should not, work. It is hard to quantify how much value the nominal type system has brought to businesses, but its popularity is statistically clear. Object-oriented languages have won the market in a landslide.
It is harder to reason about, and to explain, why this should compile:
// Java
Dog doggie = null;
doggie.bark(); // Compiles, but throws a NullPointerException at runtime
There is seemingly no good reason for this to be allowed, but it is just the way most popular languages work. It can seem a bit disorderly to a newcomer that a type system made to protect the programmer from common mistakes allows for this "shadow" type system: an extra dimension where you as a programmer have to act as a compiler and find these programmatic landmines. Landmines that plague applications in production every day, since humans are not as competent as compilers at finding type system errors.
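For contrast, here is a minimal sketch of how a null-aware type system such as Kotlin's treats the same situation. The Dog class mirrors the Java example above; the variable names are just for illustration:

// Kotlin
class Dog {
    fun bark() {}
}

fun main() {
    val doggie: Dog? = null // nullability must be declared in the type
    doggie.bark()           // Compiler error: the receiver might be null
    doggie?.bark()          // Compiles: safe call, skipped when doggie is null

    val rex: Dog = Dog()    // non-nullable by default
    rex.bark()              // Compiles: null is impossible here
}

The landmine is defused at compile time instead of detonating in production.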
"But null
is a value and not a type, and should be evaluated at runtime", someone might add. True in a sense, but most type systems does not allow an instance of Cat
to be assigned to a Dog
-typed variable. I would argue the instantiated Cat
is a value, at least when assigned to a Cat
-typed variable.
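A null-aware type system makes this symmetry explicit: assigning null to a non-nullable variable is rejected for the same reason as assigning a Cat. A minimal sketch in Kotlin, with the classes redeclared so the snippet stands on its own:

// Kotlin
class Cat
class Dog

val doggie: Dog = Cat() // Compiler error: type mismatch, expected Dog, found Cat
val moggie: Dog = null  // Compiler error: null can not be a value of non-null type Dog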
Tony Hoare says it is a bad idea, and it is his invention
Most people have heard or read Tony Hoare declaring null references his "billion dollar mistake", and it deserves to be repeated. There is a reason why so many languages deal with this. A weak but colorful analogy would be firearms, which in most cases have what is called a safety catch to somewhat reduce the risk of an accidental discharge. It is questionable whether a safety catch is a good idea when the user needs to fire the weapon as fast as possible in a threatening situation. Still, most, if not all, military and police weapons implement some kind of safety catch. This could seem unproductive if the human factor were not accounted for, but it is indubitably a good idea when considering the users, who are probably stressed and maybe even sleep deprived.
When push comes to shove, we need to compensate for the human factor through the safety of our tools. A classic risk versus consequence analysis has to be made before deciding on the tool to be used by the many.
This leaves me wondering about the rationale behind choosing a programming language without a null safety catch. Is the probability of a programmer forgetting a manual null check negligibly low, or are the eventual consequences considered tolerable?
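The manual check in question is the guard that a null-aware compiler simply refuses to let you forget. A sketch in Kotlin, with an illustrative greet function and the Dog class redeclared for completeness:

// Kotlin
class Dog {
    fun bark() {}
}

fun greet(dog: Dog?) {
    // dog.bark() here would be a compiler error: the receiver might be null
    if (dog != null) {
        dog.bark() // Compiles: dog is smart-cast to non-null Dog inside this branch
    }
}

The check itself looks just like it would in Java; the difference is that leaving it out is a compile-time error instead of a production incident.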
Not all accidents can be prevented
Stray bullets are still fired despite weapon safeties, and applications crash or break even when implemented in type-safe languages. Why do language creators still bother creating these complex compiler heuristics, offering us this implied sense of security?
One could argue it gives the programmer peace of mind while focusing on the main task at hand. Why do cognitive work that can be offloaded to a compiler, says the pragmatist. The compiler can never catch all bugs, so it is better to check it properly yourself, the cynic would argue.
Return on investment (ROI) is a central concept in business, and technical decisions are no exception to this reasoning. If the price of the safety catch quadrupled the price of the gun, an argument could be made to just train the soldier to be more careful. If the cost of development were to rise in disproportion to the gains in safety from choosing a type-safe language, it would be no wonder that Java and C++ are still popular. It is, however, not as easy to measure alternative development costs as it is to measure the costs of different weapon configurations.
Top comments (3)
Tony Hoare not only called it a mistake, but also brilliantly explained why
As you can see, null itself isn't a problem; the issue is that it was not integrated into the type system to ensure that all references are absolutely safe, with checking performed automatically by the compiler. Adding null-safety to the type system is just a logical extension of the original vision.
If you like the idea of having a static type system that automatically does lots of checks, then you have no reason not to also want null-safety integrated in the type system.
If you prefer to use no static type system and write JavaScript, then OK.
I wrote about Tony Hoare here: dev.to/jmfayard/android-s-billion-...
Great article there, and I wholeheartedly agree!
What I fail to understand is the industry's attitude towards this integration of null into the type system. It seems to be apathy at large, which begs the question I asked in the article: does it add anything to businesses? Not that I can answer the question myself; I just have to consider the fact that null-unaware type systems are not being abandoned in any significant hurry.
Kotlin, Swift, TypeScript, GraphQL, F#, Eiffel have it, even Java via the @Nullable annotations. Not a bad start!