A few days ago I had to search to find out what the SOLID Principles were. Did I learn those back in college? Then, I got to the Liskov Substitution Principle and began frantically trying to recall those Computer Science classes all those years ago.
Once the panic and stress faded away, I decided to gather my thoughts and write about what I see as the basic "practical" principles and their application.
This will be followed at some point with a more "code centric" article on Practical Front-End Principles.
When writing code, developers have to take into account: the organization they are working for, the team(s) they work with, as well as the general and specific problem(s) they are trying to solve. No principle or practice is applicable for every situation or scenario.
- How do we write proper code?
- How do we write testable code?
- How do we write clean or good code?
This principle can be applied all throughout our lives. However, it is truly necessary in all but the most trivial programming projects.
It starts early on when the scope of the project is being defined. When the scope is simplified enough, take it to another level (simplify more); scope creep is inevitable, so start as small as possible.
But even after development has begun, keep it simple. Complex code takes longer to design and write, and is more prone to bugs and errors. Additionally, complex code is harder to modify down the road.
Remember that perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away.
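To make the point concrete, here is a minimal, hypothetical sketch (the function names and the free-shipping rule are invented for illustration): both versions implement the same rule, but the simple one states it in a single readable expression.

```typescript
// Overly complicated version: a mutable flag and nested branching
// obscure what is really a one-line business rule.
function qualifiesComplex(total: number, isMember: boolean): boolean {
  let result = false;
  if (isMember) {
    if (total > 0) {
      result = true;
    }
  } else {
    if (total >= 50) {
      result = true;
    }
  }
  return result;
}

// K.I.S.S. version: one expression that states the rule directly.
function qualifiesSimple(total: number, isMember: boolean): boolean {
  return isMember ? total > 0 : total >= 50;
}
```

Both behave identically; only the second can be verified at a glance.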
This is a principle within software development aimed at reducing the duplication of code: repeated patterns are replaced with abstractions, or data normalization is used to avoid redundancy.
The D.R.Y. principle states that "every piece of knowledge must have a single, unambiguous, authoritative representation within a system." When the D.R.Y. principle is applied successfully, a modification of any single element of a system does not require a change in other logically unrelated elements. Additionally, elements that are logically related all change predictably and uniformly, and are thus kept in sync.
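A small, hypothetical sketch of that "single, unambiguous, authoritative representation" (the tax rate and function names are invented for illustration): the tax rule lives in exactly one place, so a rate change touches one line.

```typescript
// Single authoritative representation of the tax rule.
const TAX_RATE = 0.08;

function withTax(amount: number): number {
  return amount * (1 + TAX_RATE);
}

// Both callers reuse the representation instead of repeating 0.08.
function invoiceTotal(subtotal: number): number {
  return withTax(subtotal);
}

function refundAmount(subtotal: number): number {
  return -withTax(subtotal);
}
```

If the rate changes, `invoiceTotal` and `refundAmount` change predictably and uniformly, and stay in sync.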
This means, "don't solve problems you do not have."
The first argument for Y.A.G.N.I. is that while there is an assumed need for a feature, it is likely that we are wrong. After all, the context of Agile methods is accepting that requirements change.
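A hypothetical sketch of the difference (the money-formatting functions and their parameters are invented for illustration): the speculative version anticipates currencies nobody has asked for, while the Y.A.G.N.I. version solves only today's problem.

```typescript
// Speculative: parameters for imagined future needs.
function formatMoneySpeculative(
  amount: number,
  currency = "USD",
  showSymbol = true
): string {
  const symbol = currency === "USD" ? "$" : currency + " ";
  return showSymbol ? `${symbol}${amount.toFixed(2)}` : amount.toFixed(2);
}

// Y.A.G.N.I.: exactly what the current requirement needs, nothing more.
function formatMoney(amount: number): string {
  return `$${amount.toFixed(2)}`;
}
```

When a second currency genuinely arrives, the simple version can be extended with real requirements in hand instead of guesses.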
Most developers love to be clever. In fact, they probably got into the field because they loved being clever. Developers love writing small, streamlined code that runs faster than anyone thought it could while confounding all those who read it.
The issue is that the world of development is changing rapidly. A fundamental shift started happening years ago and has now solidified: the complexity in software has moved from the smaller parts of an application to the design of the application as a whole. As a result, the top priority for almost all code should be how easy it is to read and modify, because the systems it belongs to have become so much more complex.
As developers, the same shift applies to us: we no longer get credit for writing a routine that parses five (5) percent faster, especially not if it makes the code harder to read, longer to write, longer to debug, or harder to test. This type of optimization is (for the most part) a pointless exercise, often performed for nothing more than the developer's ego.
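A classic, hypothetical illustration of the trade-off (the function names are invented): both test whether a number is a power of two, but one trades readability for a micro-optimization.

```typescript
// "Clever": a bit trick that runs fast but confounds most readers.
function isPowerOfTwoClever(n: number): boolean {
  return n > 0 && (n & (n - 1)) === 0;
}

// Clear: a little slower, but it says exactly what it does.
function isPowerOfTwoClear(n: number): boolean {
  if (n < 1) return false;
  while (n % 2 === 0) n /= 2;
  return n === 1;
}
```

Unless profiling proves the clear version is a bottleneck, the clear version is the better default; if the clever one is genuinely needed, a comment explaining the trick is the least it owes its readers.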
Software entities (classes, modules, functions, etc.) should have responsibility over a single part of the software functionality and that responsibility should be entirely encapsulated by the class.
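A minimal, hypothetical sketch of single responsibility (the `Report` and `ReportWriter` classes are invented for illustration, and the in-memory `Map` stands in for real storage): formatting a report and persisting it are separate reasons to change, so they live in separate classes.

```typescript
class Report {
  constructor(private title: string, private body: string) {}

  // Single responsibility: turning the data into text.
  format(): string {
    return `${this.title}\n${"=".repeat(this.title.length)}\n${this.body}`;
  }
}

class ReportWriter {
  // Single responsibility: deciding where formatted text goes.
  // (An in-memory Map stands in for a file or database here.)
  private store = new Map<string, string>();

  save(name: string, report: Report): void {
    this.store.set(name, report.format());
  }

  load(name: string): string | undefined {
    return this.store.get(name);
  }
}
```

A change to the output format touches only `Report`; a change to storage touches only `ReportWriter`.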
Software entities should be "open for extension, but closed for modification"; that is, such an entity can allow its behavior to be extended without modifying its source code.
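A hypothetical sketch of that idea (the `Shape` interface and the shape classes are invented for illustration): `totalArea` is closed for modification, yet the system stays open for extension because new shapes plug in without touching existing code.

```typescript
interface Shape {
  area(): number;
}

class Rectangle implements Shape {
  constructor(private w: number, private h: number) {}
  area(): number { return this.w * this.h; }
}

class Circle implements Shape {
  constructor(private r: number) {}
  area(): number { return Math.PI * this.r * this.r; }
}

// Closed for modification: this never changes when new shapes appear.
function totalArea(shapes: Shape[]): number {
  return shapes.reduce((sum, s) => sum + s.area(), 0);
}

// Open for extension: added later, without editing the code above.
class Triangle implements Shape {
  constructor(private base: number, private height: number) {}
  area(): number { return (this.base * this.height) / 2; }
}
```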
The principle defines that objects of a superclass shall be replaceable with objects of its subclasses without breaking the application. That requires the objects of your subclasses to behave in the same way as the objects of your superclass.
An overridden method of a subclass needs to accept the same input parameter values as the method of the superclass. That means the subclass can implement less restrictive validation rules, but cannot enforce stricter ones. Otherwise, code that calls this method on an object typed as the superclass might trigger an exception when it is handed an object of the subclass.
Similar rules apply to the return value of the method. The return value of a method of the subclass needs to comply with the same rules as the return value of the method of the superclass.
- Method signatures must match (must take the same parameters).
- The preconditions for any method cannot be greater than that of its parent.
- An inherited method should not add conditions that change what the method returns or does, such as throwing an exception where the parent does not.
- Post conditions must be at least equal to that of its parent.
- Inherited methods should return the same type as their parent.
- Exception types must match.
- If a method is designed to throw a specific exception in the event of an error, the same condition in the inherited method must throw the same exception.
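A hypothetical sketch of those rules being followed (the `Account` classes are invented for illustration): the override accepts everything the parent accepts (no stricter precondition) and still honors the parent's postcondition, so the subclass substitutes cleanly.

```typescript
class Account {
  protected balance = 0;

  // Precondition: amount > 0. Postcondition: balance grows by >= amount.
  deposit(amount: number): number {
    if (amount <= 0) throw new Error("amount must be positive");
    this.balance += amount;
    return this.balance;
  }
}

class BonusAccount extends Account {
  // LSP-compliant override: same precondition (nothing stricter),
  // same postcondition (balance still grows by at least amount),
  // same return type and exception behavior.
  deposit(amount: number): number {
    super.deposit(amount);
    this.balance += amount * 0.01; // 1% bonus on top
    return this.balance;
  }
}

// Code written against Account works unchanged with BonusAccount.
function processDeposit(account: Account, amount: number): number {
  return account.deposit(amount);
}
```

The classic violation would be an override that, say, rejects amounts under 10: callers holding an `Account` reference would suddenly see exceptions the superclass contract never promised.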
No client should be forced to depend on methods it does not use. Interfaces that are very large should be split into smaller and more specific ones so that clients will only have to know about the methods that are of interest to them. This principle is intended to keep a system decoupled and thus easier to refactor, change, and redeploy.
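A minimal, hypothetical sketch (the `Printer`/`Scanner` interfaces and `OfficeMachine` are invented for illustration): rather than one fat "machine" interface, each client depends only on the capability it actually uses.

```typescript
interface Printer {
  print(doc: string): string;
}

interface Scanner {
  scan(): string;
}

// A multifunction device implements both small interfaces.
class OfficeMachine implements Printer, Scanner {
  print(doc: string): string { return `printed: ${doc}`; }
  scan(): string { return "scanned page"; }
}

// This client is forced to know about printing only; a change to
// scanning can never ripple into it.
function printReceipt(p: Printer): string {
  return p.print("receipt");
}
```

A print-only device would implement just `Printer` and still satisfy `printReceipt`, which a single fat interface would have prevented.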
When following this principle, the conventional dependency relationships established from high-level, policy-setting modules to low-level, dependency modules are reversed, thus rendering high-level modules independent of the low-level module implementation details. The principle states:
- High-level modules should not depend on low-level modules. Both should depend on abstractions (e.g. interfaces).
- Abstractions should not depend on details. Details (concrete implementations) should depend on abstractions.
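Both statements can be sketched in a small, hypothetical example (the `Logger` abstraction and the classes around it are invented for illustration): the high-level `ReportService` depends only on the `Logger` interface, and the concrete `MemoryLogger` is a detail that depends on that same abstraction.

```typescript
// The abstraction both sides depend on.
interface Logger {
  log(message: string): void;
}

// Low-level detail: depends on (implements) the abstraction.
class MemoryLogger implements Logger {
  messages: string[] = [];
  log(message: string): void { this.messages.push(message); }
}

// High-level module: knows nothing about how logging is implemented,
// receiving the dependency through its constructor.
class ReportService {
  constructor(private logger: Logger) {}

  run(): void {
    this.logger.log("report generated");
  }
}
```

Swapping `MemoryLogger` for a file or network logger requires no change to `ReportService`, which is exactly the reversal the principle describes.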
A design principle for separating a computer program into distinct sections, so that each section addresses a separate concern. A concern is a set of information that affects the code of a computer program. Code that embodies this principle well is called a modular program. Modularity, and hence separation of concerns, is achieved by encapsulating information inside a section of code that has a well-defined interface.
Separation of concerns results in higher degrees of freedom for some aspect of the program's design, deployment, or usage. Common among these is a higher degree of freedom for simplification and maintenance of code. When concerns are well-separated, there are higher degrees of freedom for module reuse as well as independent development and upgrade. Because modules hide the details of their concerns behind interfaces, there is increased freedom to later improve or modify a single concern's section of code without having to know the details of other sections, and without having to make corresponding changes to those sections. Modules can also expose different versions of an interface, which increases the freedom to upgrade a complex system in piecemeal fashion without interim loss of functionality.
Separation of concerns is a form of abstraction. As with most abstractions, interfaces must be added and there is generally more net code to be executed. So despite the many benefits of well separated concerns, there is often an associated execution penalty.
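A small, hypothetical sketch of separated concerns (the price-handling functions are invented for illustration): parsing, validating, and presenting each sit behind their own small, well-defined interface, and a thin coordinator wires them together.

```typescript
// Concern 1: parsing raw input.
function parsePrice(input: string): number {
  return Number.parseFloat(input);
}

// Concern 2: validating the parsed value.
function isValidPrice(price: number): boolean {
  return Number.isFinite(price) && price >= 0;
}

// Concern 3: presentation.
function formatPrice(price: number): string {
  return `$${price.toFixed(2)}`;
}

// Coordinator: composes the concerns without mixing them.
function displayPrice(input: string): string {
  const price = parsePrice(input);
  return isValidPrice(price) ? formatPrice(price) : "invalid";
}
```

Changing the display currency, or tightening validation, each touches exactly one function, which is the "execution penalty is worth it" trade the paragraph above describes.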
The degree of interdependence between software modules; a measure of how closely connected two routines or modules are; the strength of the relationships between modules.
Coupling is usually contrasted with cohesion. Low coupling often correlates with high cohesion, and vice versa. Low coupling is often a sign of a well-structured computer system and a good design, and when combined with high cohesion, supports the general goals of high readability and maintainability.
Tightly coupled systems tend to exhibit the following developmental characteristics, which are often seen as disadvantages:
- A change in one module usually forces a ripple effect of changes in other modules.
- Assembly of modules might require more effort and/or time due to the increased inter-module dependency.
- A particular module might be harder to reuse and/or test because dependent modules must be included.
This refers to the degree to which the elements inside a module belong together. In one sense, it is a measure of the strength of relationship between the methods and data of a class and some unifying purpose or concept served by that class. In another sense, it is a measure of the strength of relationship between the class’s methods and data themselves.
Cohesion is an ordinal type of measurement and is usually described as “high cohesion” or “low cohesion”. Modules with high cohesion tend to be preferable, because high cohesion is associated with several desirable traits of software including robustness, reliability, reusability, and understandability. In contrast, low cohesion is associated with undesirable traits such as being difficult to maintain, test, reuse, or even understand.
Cohesion is often contrasted with coupling, a different concept. High cohesion often correlates with loose coupling, and vice versa.
Cohesion is increased if:
- The functionality embedded in a class, accessed through its methods, has much in common.
- Methods carry out a small number of related activities, by avoiding coarsely grained or unrelated sets of data.
Advantages of high cohesion (or "strong cohesion") are:
- Reduced module complexity (they are simpler, having fewer operations).
- Increased system maintainability, because logical changes in the domain affect fewer modules, and because changes in one module require fewer changes in other modules.
- Increased module reusability, because application developers will find the component they need more easily among the cohesive set of operations provided by the module.
While in principle a module can have perfect cohesion by only consisting of a single, atomic element – having a single function, for example – in practice complex tasks are not expressible by a single, simple element. Thus a single-element module has an element that either is too complicated, in order to accomplish a task, or is too narrow, and thus tightly coupled to other modules. Thus cohesion is balanced with both unit complexity and coupling.
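A tiny, hypothetical sketch of high cohesion (the `Temperature` class is invented for illustration): every method operates on the same data toward the same unifying purpose, with no unrelated responsibilities mixed in.

```typescript
class Temperature {
  constructor(private celsius: number) {}

  // All methods relate to the single concept the class serves.
  toFahrenheit(): number { return this.celsius * 9 / 5 + 32; }
  toKelvin(): number { return this.celsius + 273.15; }
  isFreezing(): boolean { return this.celsius <= 0; }
}
```

A low-cohesion version might also hold, say, a `sendWeatherEmail()` method: that method shares no data or purpose with the conversions, and would be the first candidate to extract.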
Developers are expensive and seem to be in short supply. One of the biggest challenges they have is making sure they are making good use of their time. There is a continuous balancing act taking place when looking at how much time to allocate for performance tuning and optimization.
The performance and scalability of an application is important. Developers need to make sure they are building the right feature set first.
Self-documenting source code follows naming conventions and structured programming conventions that enable use of the system without prior specific knowledge.
Commonly stated objectives for self-documenting systems include:
- Make source code easier to read and understand.
- Minimize the effort required to maintain or extend legacy systems.
- Reduce the need for users and developers of a system to consult secondary documentation sources such as code comments or software manuals.
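A before-and-after sketch of those objectives (both functions are hypothetical and do the same thing): the second version needs no comment or manual because its names carry the specification.

```typescript
// Before: cryptic names force readers to consult secondary
// documentation to learn what d and r mean.
function calc(d: number, r: number): number {
  return d * r;
}

// After: the signature documents itself.
function totalPriceFor(quantity: number, unitPrice: number): number {
  return quantity * unitPrice;
}
```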
While the SOLID principles give direction to development practices, I don't believe it should ever be all we use to ensure we are writing good, clean (as well as flexible and maintainable) code. There is much more we should be aware of and need to pay attention to as developers.
- Nothing Is Set In Stone
- Keep It Simple, Stupid (K.I.S.S.)
- Don't Repeat Yourself (D.R.Y.)
- You Aren't Going to Need It (Y.A.G.N.I.)
- Write Clean Code, not Clever Code
- SOLID Principles
- Separation of Concerns
- Avoid Premature Optimization
- Write Self-Documenting Code
There are certainly other things that could be added to this list, but these are the principles and patterns that I use on a regular basis, even if I have to dig deep to remember the pattern behind the description.