As a field, software development moves fast. New technologies come and go, methodologies become popular and then decline, style and aesthetic preferences change, and user expectations rise as fast as technologists can deliver.
This backdrop can be misleading. Sometimes, things don't change. It's important to take the occasional opportunity to reflect on how we got where we are today. Here are a few general software development practices that, at a fundamental level, have remained constant through the years:
Even as far back as the 1960s, developers at early tech companies like Xerox were pushing for the increased abstraction and modularization that have allowed us to develop the large-scale software systems we have today. This push eventually gave rise to higher-level compiled languages like C, then bytecode languages like Java, and on to dynamically typed, interpreted languages like Python. These languages allow developers to produce more functionality with less fuss over implementation details like memory management and data structures.
The major programming paradigms come into play here as well. Structured programming, object-oriented programming, and functional programming all have roots going back to the 1960s. Structured programming promotes reuse of logic through explicit control flow constructs ('if' statements and 'while' loops) rather than tangles of 'goto's. Object orientation modularizes and encapsulates data and algorithms beneath easy-to-understand, interchangeable interfaces. Functional programming promotes reuse through deterministic behavior and minimization of side effects - a pure function is safe to call anywhere, because nothing beyond its return value is affected, and any part of the result you don't need can simply be thrown away.
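The functional-programming point can be made concrete with a small sketch (the function names here are hypothetical, chosen just for illustration):

```python
# A pure function: its output depends only on its input, and calling it
# has no effect on anything outside its own return value.
def pure_total(prices):
    return sum(prices)

# An impure variant: it mutates external state as a side effect, so
# reusing it (or discarding its result) can change program behavior.
audit_log = []

def logging_total(prices):
    audit_log.append(list(prices))  # side effect: mutates a global
    return sum(prices)
```

Because `pure_total` touches nothing but its arguments, callers can invoke it freely and ignore results they don't need; `logging_total` offers no such guarantee, since every call leaves a trace behind.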
The current trend seems toward polyglot environments - breaking apart software systems into modules written in separate languages and paradigms depending on suitability to domain, but interoperable through standard protocols. The rise of open source is also a good example of code reuse, where basic foundational technologies can be constructed and shared by the larger community, allowing projects to focus more fully on their differentiating value.
The author of The Mythical Man-Month, Fred Brooks, wrote that one of the hardest lessons learned through his experience on the OS/360 project was the importance of conceptual integrity - a single, consistent architectural vision. Lacking one initially, the team found the codebase becoming too difficult to understand and reason about.
Not every project is on the scale of OS/360, but the basic lesson still applies. In general, it's better to have a clear and consistent vision of a piece of software before jumping into coding. It doesn't have to be a formal process resulting in a mountain of documentation, but every change should start with an understanding of its general scope and purpose within the larger system.
Personally, I try to keep SOLID principles in mind before starting any implementation. It doesn't need to be dogmatic, because engineering is all about context and tradeoffs, but it's important to have a common language to use in order to understand, analyze, and discuss design decisions. This also goes a long way toward maintaining quality in the long-term, especially for large codebases and distributed teams.
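As one illustration of what a SOLID principle looks like in practice, here's a minimal sketch of dependency inversion - high-level code depending on an abstraction rather than a concrete implementation. All class and method names here are hypothetical:

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """Abstraction the high-level code depends on."""
    @abstractmethod
    def save(self, key: str, value: str) -> None: ...

class InMemoryStorage(Storage):
    """One concrete implementation; a database-backed one could swap in."""
    def __init__(self):
        self.data = {}

    def save(self, key: str, value: str) -> None:
        self.data[key] = value

class ReportService:
    """Depends only on the Storage interface, not on any concrete class."""
    def __init__(self, storage: Storage):
        self.storage = storage

    def publish(self, name: str, body: str) -> None:
        self.storage.save(name, body)
```

The payoff is exactly the kind of analyzable design decision mentioned above: `ReportService` can be tested with `InMemoryStorage` and deployed against something else, without changing a line of its code.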
Bell Labs was using SCCS to manage their software revision history back in the early 1970s. Proper version control is essential for managing the integration of complex changes - again, especially true for large codebases and distributed teams. It also plays an important role in quality control: the first reported occurrence of an issue can be narrowed down to a range of revisions, which can then be inspected to aid debugging.
Today, Git has become the default version control system. Its distributed model makes it easy to fork existing projects for custom changes, and provides a powerful and convenient mechanism for contributors to request their changes be merged back to the master branch for official distribution. Whatever source control system works for your team, be sure you have a sane way to track how, when, and why your system is changing over time.
Traditionally, QA processes were borrowed from existing engineering disciplines, which had originally developed them with an eye toward physical systems. Large organizations would produce detailed documents outlining specific manual testing procedures, which were carried out by teams of trained, dedicated testers.
Today, there's a definite trend toward minimizing manual QA activities, especially with the rise of developer-driven automated testing. There's still a place for manual QA. For one, it's important that software be tested by someone other than the author. Secondly, it's good to have a non-developer sit down with the software - what's intuitive to a software developer familiar with the project isn't necessarily intuitive to the end user.
Automated testing is generally good practice, but only to the extent that developing and maintaining the tests actually contributes to end-user value, and the same checks couldn't be performed more effectively (if less frequently) by hand. Either way, testing is and will continue to be an essential practice in the field.
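For readers who haven't seen developer-driven automated testing up close, here's a minimal sketch using Python's standard-library unittest module; `slugify` is a hypothetical function under test:

```python
import unittest

def slugify(title: str) -> str:
    """Turn a post title into a URL slug (the code under test)."""
    return "-".join(title.lower().split())

class SlugifyTest(unittest.TestCase):
    def test_basic(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_whitespace(self):
        self.assertEqual(slugify("  A   B "), "a-b")

if __name__ == "__main__":
    unittest.main(exit=False)  # run with: python test_slugify.py
```

Tests like these run on every change at near-zero marginal cost - which is exactly where automation beats manual checking, and why the cost-benefit question above is worth asking per test, not per suite.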
For a variety of reasons, software developers have a tendency to forget this one. That's unfortunate, given its special importance. Ultimately, software ends up in the hands of end users, who are generally using it for things that aren't software development. If you're not talking to your users, you don't know what you're trying to build.
In the early days of software development, users were specialists who worked in-house, making communication relatively easy. With the advent of microcomputers and mass-market software in the 1980s, large companies built teams of customer relations experts to keep programmers informed of end users' needs.
Today, with the dominance of Agile processes and their derivatives, it's common for developers to talk directly to end users. Ideally, apps have multiple built-in feedback mechanisms, and social platforms make it easy for communities to form around a given technology. However you do it, be sure to stay in touch with your users.
Software is a special animal. Its extreme flexibility, ease of distribution, and near-zero marginal cost makes it possible to build and deploy systems so massively complex that no single human can possibly understand them in full. This has been an identified problem since the very early years of the practice - limitations in software development come down to intelligent management of complexity, and abstracting out components to a scale that individual contributors can understand and reason about.
All of these practices - code reuse, design and architecture, source control, testing and quality assurance, and regularly talking to end users - are essential to the production of good, working software. They've been in place since the infancy of software development, and if I'm allowed to speculate, will continue to be practiced for as long as people are writing software.
This post was originally published to CheckGit I/O.