Background
I wrote this as the final report for CS 427 at UIUC, which I took as part of the Master of Computer Science program. This review is based on the 20th Anniversary Edition.
The essay follows a set format: a summary of each section, followed by my personal software engineering experiences. Please enjoy!
Credit: The image was generated by DALL·E 2.
Section 1: A Pragmatic Philosophy
Summary
The authors of The Pragmatic Programmer, David Thomas and Andrew Hunt, begin the book by pointedly stating that this book is about you. They define a pragmatic programmer as someone who takes responsibility for their actions. That definition leads to the first conclusion of the book: you have agency. Having agency means that you can take action. For example, if you work in a bad environment, try to improve it. Or, if you want to work remotely, ask for that perk. It is easy to forget that we can always try to make changes. Those changes will not always succeed, but a pragmatic person will at least try.
Next, Mr. Thomas and Mr. Hunt dive into their second topic, humorously titled "The Cat Ate My Source Code." The key to this topic is that a programmer's job is to provide solutions, not excuses. By discovering problems, reporting findings, and developing solutions, you can earn your team's trust, create better products, and advance your career.
Part of the job of a pragmatic programmer is to avoid what the authors dub software entropy. More widely known as technical debt, software entropy emerges as engineers develop a lax attitude toward the quality of their codebase. Tech debt takes hold once broken windows appear in the code. A broken window is anywhere there is a poor design or mediocre code. As soon as some substandard code emerges, future developers are more likely to feel alright adding more poor-quality code. Keeping technical debt from growing means ensuring it never appears in the first place. More simply put, a culture of high quality perpetuates itself, and a culture of low quality leads to more low-quality software. The parallel suggestion is that software can be just good enough; it does not have to be perfect. Handling every bug or edge case might not be necessary to hit the requirements, and it can add unnecessary complexity and effort to a project.
The last tidbit of information in this chapter is to invest in your knowledge portfolio. Similar to investments in stocks and bonds, the authors advise investing wisely in your technical skills. Maintaining steady investments in new technologies, diversifying your knowledge base, and getting in early (or buying low in investing parlance) are pivotal for advancing your career and becoming a more effective engineer.
Experience
In my final semester of college, I worked on a comprehensive React application for a web development course. The project went end to end, from idea generation to demonstrating a final product. Along the way, my group realized many of our pitched ideas were not feasible as we had imagined them. However, we had already started much of the implementation. To still meet the project requirements, we altered many of our initial ideas. The sudden change of goals left the codebase in a sorry state because we had originally designed it with different use cases in mind. The project ran for one semester, and our group was only four people, so we stuck bandages on our code and limped the repository to the finish line.
I think there are two main reasons why we accumulated so much technical debt. First, we drastically altered what we were trying to do. Shifting requirements are routine and lead to engineering dead ends. Second, we knew this would only be a one-semester project, so we quickly fell prey to what the authors called broken window syndrome. Our focus as college students was to minimize effort and maximize grades. Frankly, that optimization problem does not lend itself well to writing good code.
One perk of working on this React project was that I picked up frontend development skills with an in-demand framework. That was a decisive investment in my knowledge portfolio, and it has paid dividends. I currently work as a developer using React nearly every day. Perhaps my senior-year group did not create a great project, but I grew my knowledge portfolio, and I have drawn on it ever since.
Section 2: A Pragmatic Approach
Summary
The second section of The Pragmatic Programmer dives deeper into the elements of good design and how programmers can bring high-quality design into their daily roles. First, the authors claim that the vital element of good design is that it is easy to change. Being easy to change matters more than any other element because it underpins so many other requirements. Later topics address many more design principles; however, the authors posit that all of them are ultimately motivated by the easy-to-change principle.
Second, the authors reveal their views on one of the most common software engineering principles: DRY, or don't repeat yourself. DRY is important because having a single source of truth makes maintenance easier and bugs easier to find. Many techniques, like functions and object-oriented programming, help in living out DRY principles. A companion concern to DRY is coupling. Highly coupled code causes changes in one location to have effects elsewhere. To avoid coupling, code should be orthogonal, or independent. Ideally, code becomes more reversible when good design, DRY, and orthogonality are combined. Reversibility is desirable because business requirements are fluid, and mistakes happen. If the system is too rigid, it will be difficult to retrace your steps when required.
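To make DRY concrete, here is a minimal sketch of my own (not an example from the book) in Java: a pricing rule that would otherwise be copy-pasted at every call site is pulled into one method, so a change to the rule happens in exactly one place. The class, method, and tax rate are all hypothetical.

```java
public class PricingExample {
    // One source of truth for the tax rule; the rate is a made-up value for illustration.
    private static final double SALES_TAX_RATE = 0.0825;

    static double withSalesTax(double amount) {
        return amount + amount * SALES_TAX_RATE;
    }

    public static void main(String[] args) {
        // Before extracting withSalesTax, this rule was duplicated at each call site,
        // so a rate change meant hunting down every copy.
        double invoiceTotal = withSalesTax(100.00);
        double refundTotal = withSalesTax(25.00);
        System.out.println(invoiceTotal + " " + refundTotal);
    }
}
```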
The last topic in this section is estimating. Estimating time to completion is an essential component of project management in the business world. Estimation is challenging because projects are so complicated and specialized. The authors advise breaking each system into components and estimating the time to completion for each piece. One technique they recommend is PERT, in which each task gets an optimistic, a most likely, and a pessimistic estimate, and those roll up into an overall project estimate. Even with good estimation techniques, it is almost impossible to give accurate dates on the spot, so deferring the question and saying, "I'll get back to you," is often the best approach.
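As a small illustration, the classic PERT expected-value formula weights the most likely estimate most heavily: E = (optimistic + 4 × likely + pessimistic) / 6. The weighting below is the standard PERT formula rather than something quoted from the book, and the task durations are hypothetical.

```java
public class PertEstimate {
    // Classic PERT weighted average: E = (O + 4M + P) / 6.
    static double expected(double optimistic, double likely, double pessimistic) {
        return (optimistic + 4 * likely + pessimistic) / 6.0;
    }

    public static void main(String[] args) {
        // Hypothetical task: 3 days best case, 5 days most likely, 12 days worst case.
        System.out.printf("Expected duration: %.1f days%n", expected(3, 5, 12));
    }
}
```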
Experience
There are many desirable outcomes that good design can strive to achieve. These outcomes include reliability, cost, simplicity, flexibility, and many more. In my experience, flexibility is the number one goal of good software design.
I work on a user-facing application in my job as a software engineer. We must build our applications exactly to comparables from our business partners. The comparables, or comps as we colloquially call them, are designed to fit our brand promise. Our business partners put these comparables through many rounds of review, and they are constantly changing. As the engineer implementing these specifications, I must write code that is flexible and extensible enough to adjust to morphing requirements.
I believe that second to flexibility, simplicity is the next most important trait of quality software design. Simplicity is fundamental to onboarding new employees, scaling applications, adding new features, and ensuring reliability. After all, if an application is too complex, how will anyone know how to maintain it?
Key to the idea of simple software development is the DRY principle. DRY forces simplicity because there is less code overall. One of the main enemies of DRY is copy-paste, which makes it too easy to duplicate code. I find that when I am using Ctrl-C and Ctrl-V, I often violate DRY. I have also found that to avoid repeating myself, I must think harder, and when I think more deeply about a problem, I arrive at better solutions. I interact with the DRY principle every day as a professional software engineer at a large company. Codebases at work easily number in the tens of thousands of lines. To make that scale manageable, and to adhere to DRY, we use the MVCS pattern.
MVCS stands for model, view, controller, and service. Under MVCS, the model represents data stored in an object, the view is the layer users interact with, the controller routes requests, and services handle the business logic. Splitting the codebase into these four distinct layers improves simplicity. Additionally, a well-designed service and model can often be reused by multiple controllers, adhering to the DRY principle. When I first onboarded, this entire paradigm was brand new to me. I had more experience with haphazardly throwing code together (an approach that had worked well enough in college). However, I have come to appreciate working with a formal paradigm. MVCS improves the code quality and makes my job easier.
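Here is a bare-bones sketch of how I think about the layering, not our actual code: every class, method, and value is hypothetical, and a real codebase would use a web framework rather than returning hand-built JSON strings.

```java
public class MvcsSketch {
    // Model: a plain data object.
    record Account(String id, String displayName) {}

    // Service: owns the business logic and data access; reusable by any controller.
    static class AccountService {
        Account findAccount(String id) {
            // A real service would call a repository or database here.
            return new Account(id, "Jane Doe");
        }
    }

    // Controller: routes a request to the service and shapes the response for the view.
    static class AccountController {
        private final AccountService service = new AccountService();

        String handleGetAccount(String id) {
            Account account = service.findAccount(id);
            return "{\"id\":\"" + account.id() + "\",\"name\":\"" + account.displayName() + "\"}";
        }
    }

    public static void main(String[] args) {
        // The view (in my case, a React frontend) would render this response.
        System.out.println(new AccountController().handleGetAccount("acct-1"));
    }
}
```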
Section 3: The Basic Tools
Summary
The third section of The Pragmatic Programmer builds on the idea that good tools extend what the human brain can do. According to the authors, programmers, like any craftsmen, rely on their tools. First among these tools is powerfully simple plain text. The authors' love of plain text comes from its human readability. Any data format that is human readable can be used long after the original application that used it is sunset or the developer who created it is gone. That sort of permanence is the friend of a developer trying to understand and maintain others' code.
The next tool is the old, reliable shell. This section introduces two acronyms: WYSIWYG, or what you see is what you get, and WYSIAYG, or what you see is all you get. The authors explain that GUIs exhibit both WYSIWYG and WYSIAYG; this is simultaneously their greatest advantage and their greatest disadvantage. Shells, on the other hand, are harder to learn but unbelievably powerful. Stitching simple commands together can accomplish a nearly infinite array of tasks. The authors advise never to forget the power of the shell, even in our modern GUI age.
At last, the authors broach one of the most hotly contested topics in programming: the editor. Mr. Thomas and Mr. Hunt instruct that which editor you use does not matter. Instead, focus on becoming efficient with the editor you have chosen. Over time, learning the shortcuts and features of your platform will improve your productivity and let you write more code, faster.
Another crucial component of modern software engineering is version control, or, as the authors describe it, the giant undo key. Version control allows you to track the history of a project, manage multiple versions, and coordinate development across a team. Version control systems often pair with an external repository host like GitHub or Bitbucket, where you can back up versions, share work, and set up automation. For all these reasons, version control should track every project. As an engineer, you will thank yourself for using it.
Experience
For most of my programming history, I have been a Java developer. In the seven years I have been writing Java code, I have used a variety of editors. However, most commonly, I stick to using IntelliJ. IntelliJ is a tremendously powerful integrated development environment. The debugging features, test runners, Git integration, and plugins have saved me a lot of time and drive most of my workflow. However, all that power comes with a lot of complexity. Every feature requires a button, which makes for a cluttered GUI, and the number of keyboard shortcuts rivals the number of stars in the sky. Marching along that learning curve has been arduous.
When I started programming, I considered myself a println warrior. I stuck to adding print statements whenever I needed to debug, but this proved untenable once I started working on larger projects. The IntelliJ debugger was a major skill I needed to dedicate time to understanding. When I started using it, I was not as productive as I had been with my original print-statement method, but my skills have since grown, and my debugging now far outstrips those early days. This experience corresponds with Messrs. Thomas and Hunt's advice to achieve editor fluency. I spent too much time ignoring powerful features lying just a few clicks away, and that purposeful ignorance led to years of lower productivity. Estimating is difficult, but I would confidently guess that achieving editor fluency improved my debugging productivity by an order of magnitude. It was not even hard to learn; it just required a little dedicated effort.
Section 4: Pragmatic Paranoia
Summary
In the fourth section, the authors expound on the idea that no software is perfect. The pragmatic programmer recognizes software's flaws and uses their skills to adjust. One tool for adjusting is design by contract, in which behavior is defined before programming begins. When constructing the contract, the two parties agree on valid scenarios. The key to making contracts that are feasible to implement is strictly defining what inputs you will accept and promising little in return. Contract design should be a give-and-take in which the parties arrive at the minimal requirements necessary to implement the project.
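A minimal sketch of what a contract can look like in everyday Java, assuming nothing more than the standard library: the method states what inputs it accepts (its preconditions) and rejects anything else immediately, while promising only a small result in return. The class, method, and values are hypothetical, not an example from the book.

```java
import java.util.Objects;

public class TransferService {
    /**
     * Illustrative contract:
     *   Preconditions:  accountId is non-null and amountCents is positive.
     *   Postcondition:  returns a confirmation id; nothing more is promised.
     */
    public String transfer(String accountId, long amountCents) {
        Objects.requireNonNull(accountId, "accountId must not be null");
        if (amountCents <= 0) {
            throw new IllegalArgumentException("amountCents must be positive, got " + amountCents);
        }
        // ...perform the transfer with inputs we now know are valid...
        return "confirmation-" + accountId;
    }

    public static void main(String[] args) {
        System.out.println(new TransferService().transfer("acct-42", 1500));
    }
}
```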
Next, recognizing that software is imperfect, the authors lay out their position on fault tolerance. They fall squarely into the "let the program crash" camp. Their strongest argument is that a crashed program does far less harm than a crippled one. A faulty program could do dangerous things like write bad data to the database or needlessly consume expensive computing resources. Additionally, an engineer is apt to notice a crashed program sooner than subtler bugs. Noticing errors early is crucial to speeding up any development cycle.
The final pivotal topic of the chapter is taking small steps. The authors instruct readers to take small steps at both the code and design levels. At the code level, writing a small chunk of code together with a unit test results in higher-quality code with fewer errors. At a higher level, we can only design so far into the future. New technologies, requirements, or black swan events will completely alter the ideal design. By and large, these are unpredictable events, so there is no use expending effort trying to design your way around distant corners.
Experience
When I started learning to drive, my dad advised me to be a defensive driver. Similar to that advice, The Pragmatic Programmer says that pragmatic programmers build defenses against their own mistakes. Programmers must be aware that other programmers are not perfect, just like drivers must always be aware that other drivers are flawed. Assuming others make mistakes, a programmer can develop more fault-tolerant solutions. However, fault tolerance is only valuable up to a limited point.
My job involves handling sensitive user information. For legal and business reasons, we must be extremely careful and accurate when collecting and moving that data. A security vulnerability or corrupt user information in our systems would be a terrible outcome for the business and our customers. We have built a measure of fault tolerance into our systems. However, when that fails, we show our users a technical difficulty page that simply says, "We are sorry. Please try again later." Showing the technical difficulty page is the equivalent of the program throwing up its arms and saying, "I cannot process this." I like to think of it as a graceful crash. By showing a sorry page, we are being forthright with our users and protecting the business.
Section 5: Bend, or Break
Summary
In Chapter 5, the book explores writing flexible code. The world is rapidly changing, and software systems must be malleable enough to adjust. The first way to add flexibility is through decoupling at the code level. Decoupling is similar to the previously mentioned orthogonality. Ideally, code should stay local and only affect its own internal structures. Decoupled code is easier to change in the future.
Next, the authors write about events. An event is newly available information: a mouse click, a keyboard press, or a Google search are all events. The authors first describe using finite state machines to model events and what should happen in response to each one. Then they describe various event-handling patterns like observer, publish/subscribe, streams, and reactive programming. These techniques are all useful for managing and processing events.
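As a small illustration of the observer idea (my own sketch, not code from the book), here is a bare-bones event source in Java: observers register their interest, and each published event is pushed to all of them. The event type and observer behavior are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class ObserverExample {
    // A tiny event source: observers subscribe, and every event is pushed to each of them.
    static class KeyPresses {
        private final List<Consumer<Character>> observers = new ArrayList<>();

        void subscribe(Consumer<Character> observer) { observers.add(observer); }

        void publish(char key) { observers.forEach(o -> o.accept(key)); }
    }

    public static void main(String[] args) {
        KeyPresses keys = new KeyPresses();
        keys.subscribe(k -> System.out.println("logger saw key: " + k));
        keys.subscribe(k -> System.out.println("autocomplete saw key: " + k));
        keys.publish('a'); // both observers react to the same event
    }
}
```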
The last topic in this section is configuration. Programs typically have values that are required but subject to change. For example, feature flags, logging levels, API keys, and ports are all common configuration targets. Whenever values can be abstracted out, it is worthwhile to put them into separate configuration files. Config files are written in dedicated formats like YAML or JSON. They can also be hosted externally, which avoids redeploying or restarting the application. Rather than requiring a code change to update simple values, configuration files host that data centrally and make it easy to change, thus improving program flexibility.
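For illustration, a hypothetical YAML configuration file covering the targets mentioned above might look like this; every key and value here is made up, and the placeholder for the secret is just a convention for pulling it from the environment rather than hard-coding it.

```yaml
# application.yml (hypothetical values for illustration)
logging:
  level: INFO
server:
  port: 8080
features:
  new-checkout-flow: false      # feature flag toggled without a code change
payments:
  api-key: ${PAYMENTS_API_KEY}  # secret resolved from the environment at startup
```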
Experience
I have often thought that flexibility is the antithesis of code. Computers are crude instruments. They are only capable of storing data and rapidly performing simple calculations. To do anything, computers must be explicitly programmed down to the tiniest command. Since their nature is so rigid, any flexibility must be engineered in, which is a demanding problem. Two ways I have experienced designing for flexibility are message queues and configuration servers.
At work, I rely heavily on RabbitMQ. We use RabbitMQ as a message broker, or in pragmatic-programming terms, for event management. When a user completes nearly any action, it goes into a RabbitMQ queue. The queue is then processed by other services whenever they have resources available. For example, when a user completes a form, that information is published to a queue. Then all the subscribers can consume from that queue and use the data at their leisure. If any service ever goes down (and they often do, as evidenced by the repeated paging of my phone in the middle of the night), the queue still collects the information, and it can be drained whenever the service comes back online. Maintaining these queues supplies a good balance of flexibility and performance, and they support many of the systems I regularly interact with.
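To show the shape of this, here is a minimal sketch of publishing an event with the standard RabbitMQ Java client. It is not our production code: the queue name, payload, and localhost broker are all hypothetical, and a real publisher would handle retries and connection pooling.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import java.nio.charset.StandardCharsets;

public class FormSubmissionPublisher {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumes a broker running locally

        try (Connection connection = factory.newConnection();
             Channel channel = connection.createChannel()) {
            // Durable queue so messages survive a broker restart (hypothetical queue name).
            channel.queueDeclare("form.submissions", true, false, false, null);

            String event = "{\"formId\": 123, \"status\": \"COMPLETED\"}";
            channel.basicPublish("", "form.submissions", null,
                    event.getBytes(StandardCharsets.UTF_8));
            // Consumers can drain this queue whenever they have capacity,
            // even if they were offline when the event was published.
        }
    }
}
```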
Configuration files are another common part of my job. We use a microservices architecture with dozens of separate programs providing the overall functionality. If configurations changed, it would be a pain to manually update each service. Instead, we have another program humbly called the configuration server. The config server maintains the configurations for every service we have. Whenever these values change, we update them in just one place, and the configuration server makes sure every program has the most up-to-date configuration. The configuration server is a massive time saver.
Section 6: Concurrency
Summary
A majority of modern software runs concurrently at some level. Concurrency is when two or more pieces of code act as if they run at the same time, while parallelism is when they actually do run at the same time. Concurrency and parallelism are common ways of increasing program performance and making better use of a computer's resources. The Pragmatic Programmer says analyzing workflows is a must for improving concurrency. One form of analysis is drawing out the activities involved, which makes it possible to see where opportunities for concurrency exist.
Once concurrency comes into play, shared data becomes an open question. The authors strongly advise against sharing any state between pieces of concurrent code. They give the example of ordering a slice of pie: if one customer orders the last slice, all the other customers will be disappointed. It would be better to have a separate piece of pie available for every customer. Similarly, data can be replicated or put in a queue. Concurrent code should never directly share data; doing so leads to random failures and race conditions.
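A minimal Java sketch of the queue alternative (my own example, not the book's): instead of two threads mutating shared state, a producer hands work to a consumer through a blocking queue, and the queue owns the handoff. The thread names and messages are hypothetical.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class NoSharedStateExample {
    public static void main(String[] args) throws InterruptedException {
        // The producer and consumer never touch the same mutable variable directly.
        BlockingQueue<String> orders = new ArrayBlockingQueue<>(16);

        Thread producer = new Thread(() -> {
            try {
                orders.put("slice-of-pie-order-1");
                orders.put("slice-of-pie-order-2");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 2; i++) {
                    System.out.println("served: " + orders.take()); // blocks until an order arrives
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
```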
Experience
In my first semester of graduate school, I took a networking course in which I had to build a router from scratch. The assignment was graded based on how well the solution scaled on large networks, so performance was critical. I used an efficient implementation of Dijkstra's algorithm for pathfinding, kept the code simple, and wrote it in C. However, none of that gave me quite the performance required.
I ended up using five threads to handle all of the tasks. Unfortunately, multithreading this assignment was a minefield of bugs. Threads execute at varying speeds depending on the whims of the kernel, so the order of execution is not deterministic. That indeterminism leads to the punishing aspect of debugging multiple threads: bugs happen randomly. Many of my bugs arose from sharing data. The authors' wisdom to avoid sharing data is prescient. It is all too easy to mismanage shared data because the human brain was not built for parallelism. This project was my deepest foray into parallel programming. In future endeavors, I would be wise to remember The Pragmatic Programmer's advice against sharing data. It will save me countless hours of hair-pulling.
Section 7: While You Are Coding
Summary
The seventh section of the book lays out an overview of considerations to remember while coding. First and foremost, the authors advise against programming through coincidence. If you write code that unexpectedly works, do not continue happily onward. Stop to ensure you understand why that code is working. Understanding code is key to building larger systems and crucial to debugging. The way to avoid accidental programming is to think beforehand. While staring at a blank editor is scary, there is nothing wrong with organizing your thoughts first. Building a plan and a mental model will lead to better quality and a deeper understanding.
One skill that many developers forget is estimating algorithm speed. Big-O notation describes how an algorithm's running time grows as the size of its input grows. Naive solutions to many problems are commonly not the most efficient. Depending on the application, using Big-O analysis to find a better solution can be appropriate.
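A small illustration of my own (not from the book): checking a list for duplicates by comparing every pair is O(n²), while tracking what has already been seen in a hash set brings it down to O(n) on average.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DuplicateCheck {
    // Naive approach: compare every pair of elements, O(n^2).
    static boolean hasDuplicateNaive(List<String> items) {
        for (int i = 0; i < items.size(); i++) {
            for (int j = i + 1; j < items.size(); j++) {
                if (items.get(i).equals(items.get(j))) return true;
            }
        }
        return false;
    }

    // Better approach: remember what we have seen in a hash set, O(n) on average.
    static boolean hasDuplicate(List<String> items) {
        Set<String> seen = new HashSet<>();
        for (String item : items) {
            if (!seen.add(item)) return true; // add() returns false if the item was already present
        }
        return false;
    }

    public static void main(String[] args) {
        List<String> ids = List.of("a", "b", "c", "b");
        System.out.println(hasDuplicateNaive(ids) + " " + hasDuplicate(ids));
    }
}
```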
Perhaps more widely applicable are refactoring and testing. The authors define refactoring as restructuring existing code. Testing and refactoring go hand in hand because testing reveals what code needs refactoring and simplification. Refactoring should not change the behavior of the code; instead, it should improve the code to ease future maintenance. One aspect of refactoring is renaming. Renaming is necessary because the codebase drifts over time, and eventually the names no longer align with the actual use case.
Finally, this section touches on five principles for improving program security: minimizing attack surface area, least privilege, secure defaults, encryption, and security updates. Categorically, surface area and security updates are about preventing bad actors from accessing the program in the first place. Least privilege and secure defaults limit an attacker's reach once inside. Encryption minimizes the damage done by any breach.
Experience
As a programmer, I have spent countless hours sitting at a keyboard coding away. It is one of my favorite activities. However, my enthusiasm to dive into code has led me into traps more times than I care to admit. I do not think this is unique to me; most programmers prefer to dive straight in. Diving straight into coding goes against the authors' advice to program on purpose. Two conscious programming techniques I have been exploring are TDD and BDD.
TDD, or test-driven development, involves writing tests before writing the implementation. The test defines the smallest unit of work and gives insight into the minimal implementation required. Since the test comes first, it will initially fail. TDD helps me stop and think about the code I am writing. I will admit TDD is a tough ideal to strive for because writing tests is not my favorite activity; tests are boring and often tedious. Despite the monotony that comes with the testing territory, I still think it is an important part of my thinking process.
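To sketch the rhythm, here is a made-up example using JUnit 5: the test is written first and fails until the class below it exists, and the implementation is the smallest thing that makes the test pass. The Cart API is hypothetical, something I would be designing through the test itself.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class CartTest {
    // Written first, so it fails until Cart is implemented.
    @Test
    void totalSumsItemPrices() {
        Cart cart = new Cart();
        cart.addItem(1999); // prices in cents; hypothetical API designed via the test
        cart.addItem(500);
        assertEquals(2499, cart.total());
    }
}

// The smallest implementation that makes the test pass.
class Cart {
    private int totalCents = 0;

    void addItem(int priceCents) { totalCents += priceCents; }

    int total() { return totalCents; }
}
```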
BDD, or behavior-driven development, focuses on connecting the requirements of the business to code. In my job, I implement BDD with Cucumber. Cucumber tests read like English sentences and are some of my favorite pieces of code to write. I like writing Cucumber tests because they feel conversational and mimic how users interact with the program. Combining TDD and BDD forces me to think harder and leads to better code.
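For a sense of what that looks like, here is a hypothetical Cucumber step-definition class in Java, not one of my work tests. The Gherkin steps in the comments, the scenario, and the tiny TestApp stand-in for a real test harness are all made up for illustration.

```java
// Matches a hypothetical Gherkin scenario such as:
//   Given a logged-in user
//   When they submit the contact form
//   Then a confirmation message is shown
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.jupiter.api.Assertions.assertTrue;

public class ContactFormSteps {
    private final TestApp app = new TestApp(); // hypothetical test harness

    @Given("a logged-in user")
    public void aLoggedInUser() { app.logIn("test-user"); }

    @When("they submit the contact form")
    public void theySubmitTheContactForm() { app.submitContactForm("Hello!"); }

    @Then("a confirmation message is shown")
    public void aConfirmationMessageIsShown() { assertTrue(app.pageContains("Thanks for reaching out")); }
}

// Minimal stand-in for a real test harness, just so the sketch is self-contained.
class TestApp {
    private final StringBuilder page = new StringBuilder();
    void logIn(String user) { page.append("user:").append(user).append(';'); }
    void submitContactForm(String message) { page.append("Thanks for reaching out"); }
    boolean pageContains(String text) { return page.toString().contains(text); }
}
```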
Section 8: Before the Project
Summary
In the eighth section, The Pragmatic Programmer describes the start of a project. The authors strongly dislike what they call the requirements myth. They view the world as too messy and complex to define complete requirements at the start. According to the book, requirements are learned in a feedback loop between programmers and their partners.
A key aspect of that feedback loop is interaction. The book focuses on two types of interactions: developers with users and developers with other developers. Developers must interact with their users to know what those users want. In parallel, developers must interact with other developers to solve tricky problems.
Common techniques for developer collaboration are pair programming and mob programming. Pair programming occurs when two developers work together. One engineer has their hands on the keyboard typing out code and focuses on nitty-gritty details like syntax. The other developer watches and contributes higher-level ideas. The book emphasizes this approach, and it has become popular, because it puts more mental power into solving the problem.
The second collaboration approach is mob programming, an extension of pair programming. The general thought process is that if two developers can solve problems better together, why not scale that up to N stakeholders? The book says to think of mob programming as tight collaboration with live coding involved.
Experience
In every project-based course I took in school, I heard about the requirements gathering phase. I completed a user interface design course, a web development course, and a software engineering course where requirements gathering was treated as a dedicated field of study. Many of the projects in these courses mandated a requirements step where groups defined what they would build. Since I learned about requirements gathering in so many classes, it felt like fact. The approach seemed reasonable because planning was feasible given the defined rubrics and limited durations.
The Pragmatic Programmer takes a contrarian approach to requirements gathering. It declares that requirements are a myth because no one knows exactly what they want. Unlike school, with its many one-off assignments, work consists of long-running projects that can last for years or longer. Knowing all of the requirements beforehand is just not going to happen. The book's opinion matches what I experience in the working world. The business associates I regularly interact with have an idea of what they want, and they provide comparables to describe the UI and UX. However, these often change, sometimes in response to seeing the implementation. After seeing requirements in the real world, I wholeheartedly agree with the authors' approach. School teaches requirements gathering too rigidly. Designing a project is about experimentation and feeling out what works.
Section 9: Pragmatic Projects
Summary
Finally, in the last chapter, the authors dive into the relationship between project management and software engineering teams. Messrs. Thomas and Hunt do not seem to adhere to a particular team-management framework like Agile. Instead, they opt for a higher-level view of how teams should work together. In large brush strokes, they paint the picture of small teams complete with a range of skills, from software engineering to quality assurance to databases and operations. When engineering teams contain a breadth of skills, they become fully functional and deliver finished work independently. Ideally, a team would be long-standing with a portfolio of collective knowledge. Long-standing teammates are better communicators and can refine their processes, thus becoming more efficient over time.
The Pragmatic Programmer concludes with a holistic picture of the importance of being pragmatic. Automation, version control, continuous delivery, testing, and more are not ends in themselves; they are means to the goal of delivering more code. Even delivering more code is not the true end. The end goal of all engineering efficiency is to delight users. Software engineers can delight people because their fundamental purpose is to solve users' problems. Any engineer solving problems for people should take pride in that and own their accomplishments. Ultimately, solving problems for people is what makes a pragmatic programmer.
Experience
I am only a recent graduate, so the corpus of my work remains limited. However, I appreciate the authors' flexibility in what can be a dogmatic industry. The software engineering community likes to throw its entire weight behind trends, and at every company I have worked for, I constantly hear about Agile. David Thomas and Andrew Hunt are more freewheeling than that. They recognize that engineering is about improving lives. I appreciate their refreshing attitude because I got into software engineering to help people. Becoming a pragmatic programmer is one way to get better at that.
Originally published at https://blog.seancoughlin.me.