What conventional wisdom in software is wrong?

Ben Halpern ・1 min read

And was it always wrong or has it become wrong due to a change in the ecosystem?



That two similar-looking pieces of code must always be abstracted out, because DRY.


The mistake here: people assume they need to DRY up repeated code. This is wrong. They need to DRY up repeated knowledge in the software.
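A hypothetical sketch of the distinction (all names and rules here are made up for illustration):

```python
# Two rules that *look* identical today but encode different knowledge.
# Merging them because the code matches would couple unrelated policies.

def validate_username(name: str) -> bool:
    # Knowledge: usernames must be 3-20 characters (a product decision).
    return 3 <= len(name) <= 20

def validate_team_name(name: str) -> bool:
    # Knowledge: team names must be 3-20 characters (a coincidence, today).
    return 3 <= len(name) <= 20

# If the product team later allows 50-character team names, the two rules
# diverge. Had we "DRYed" them into one shared helper, we'd now have to
# split it back apart, or worse, change username validation by accident.
```

The repeated *code* is harmless; it's repeated *knowledge* (the same business rule written in two places) that must be deduplicated.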


This seems profound and could be highly useful. Could you maybe expand on how to identify the difference between knowledge and repeated code?


I usually do feel like I want to DRY things up once I hit 3 or 4 times though, at least if the block of code is at least like 10 lines long


I think a useful heuristic is 3. If you see the same thing implemented 3 times, it's worth considering, at least, if it's time to refactor and DRY it up


That you always have to use extremely verbose and meaningful variable names. In general, yes!

But it depends on the scope of the variable.

There is nothing wrong with using variable names like 'el' or 'num' in lambda functions, e.g. foreach(el => ...).


It all depends on your language's conventions. I've often seen these be standard, many coming from mathematics itself:

  • i, j: indices, loop control variables (no lifetime beyond context).
  • x, y, n: temporary variables in algorithm (no lifetime beyond algorithm)
  • n, num: temporary "number", such as an accumulator or the current value when iterating over numbers (no lifetime beyond context).
  • v, val: temporary value, usually from iteration (no lifetime beyond context)
  • x, y, z: coordinates (meaning from context).
  • len: length
  • iter, it: iterator

Beyond that, you should probably write real names.
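A quick sketch of that rule of thumb in Python (the variable names here are illustrative, not prescriptive):

```python
# Short names are fine when the scope is a single expression or tight loop:
squares = [x * x for x in range(10)]            # x lives only inside the brackets
total = sum(n for n in squares if n % 2 == 0)   # n lives only inside sum()

# Beyond that, spend the extra second on a real name:
revenue_by_region = {"EU": 100, "US": 250}      # not 'd' or 'data'
for region, revenue in revenue_by_region.items():
    # 'region' and 'revenue' may survive several lines, so they get names
    print(region, revenue)
```

The scope of the name, not the name itself, is what determines how descriptive it needs to be.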


That depends on what the loop, temp var, etc. does. Sometimes having a meaningless name like that is perfectly fine; sometimes you need something more descriptive, as it may help you read what's being done more easily (I just coded a case like that this morning).


Yes, though descriptive naming is mostly for other programmers' benefit. When you are making a program on your own, don't be afraid to put in the worst names possible, like Var1. As long as you know what it means, that's good.


I agree with the above point. If I'm running users.each do |u| in Ruby, I think that u is pretty communicative, even if it's shorthand.

But I'd personally avoid var1 or meaningless x, y, z, even if it's just your own thing. That one second to come up with a somewhat meaningful variable still seems useful even for your own train of thought.

Yeah, when it comes to big projects, good variable names are useful.

I use x only as an argument for lambda when mapping over number array, seems kinda OK for that.


But it also increases mental complexity,
because you always have to think about the future of your project.
"Will this project become bigger in the future?"
"Will there be some new team members in the future?"
"Will I make it public in the future?"

And will the loop or lambda body be extended to become larger and more complex?

I had to refactor code like that in the past few months. It's no fun having to backtrack to see where variables e or p come from and what they mean.


I've cursed myself so many times for doing that, that I don't do it anymore.


Why not? I'd rather see a meaningful variable name that describes its purpose rather than something generic that I have to figure out on my own.


That your programming skills outweigh the importance of your communication and collaboration skills.


This. So much this.

Communication and collaboration skills are the difference between being an asset to a team, and being a hazard.


"Never comment your code."

The truth is, you should never restate your implementation in comments. That, in fact, is almost always what proponents of this "wisdom" will cite!

However, while your implementation and naming should "self-comment" what the code is doing, it cannot explain why you did it, and more often than not, it fails to explain the abstract goal (or business logic). That's what comments are for: describe WHY, not WHAT.
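A minimal sketch of a "why" comment in practice (the retry policy and function names here are hypothetical):

```python
import time

def fetch_with_retry(fetch, retries=3):
    # WHY: the upstream service (assumed) occasionally times out under load;
    # the team agreed three attempts is an acceptable latency trade-off.
    for attempt in range(retries):
        try:
            return fetch()
        except TimeoutError:
            # WHY: exponential backoff avoids hammering a struggling service.
            time.sleep(2 ** attempt * 0.01)
    raise TimeoutError(f"gave up after {retries} attempts")
```

Note that neither comment restates what the code does; both would survive a rewrite of the loop, because they record the reasoning, not the mechanics.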


I completely agree. I work for a company that makes software for Dutch healthcare providers. A lot of financial processing which we automate is based on legal standards. When implementing these rules we always add a comment pointing to the legal documents that it implements. This saves a lot of "why did you do this" questions.


That's the kind of comment I write these days. I'll go the extra mile if a weird hack is needed (e.g., to circumvent a bug in a framework or library, stuff like that). Throw in the best names I can come up with for classes, methods, properties, variables, etc.


One of our team's accepted code review standards is that comments should not need to be refactored when code is refactored. Keeps comments focused on the 'why', not 'what'.


Not a bad rule of thumb (keeps it language agnostic), although I don't know that it's altogether avoidable. In any case, if the two ever do fall out of sync, both should be carefully reviewed, as that's usually a signal there's a logic bug therein.


One of my favourite things in Go is the enforcement of good comments as documentation by the language.


That an academic degree in computer science is needed for software developers. Worst: that said degree makes one a better developer (or a developer at all, for that matter).


Most CS degrees aren't solely focused on software development. They are rooted in theory, data structures & algorithms (and the mathematics beneath them), and whatever electives are offered by the institution (today a lot of them are AI/Data Science intro courses). In my program, we have two classes where we build big projects, and in the end, there's a capstone project, but I have no idea how developers function in the wild. I feel like a lot of CS students live under a rock (at least I do :D )


Academia produces really only two things: academics and good worker bees. Only life can produce educated, thoughtful, and well-informed people. As one of the comments mentioned AI and data science: that's what industry wants, not necessarily what you want. If you're going to spend money on a degree, spend it on the classes you actually want, then force yourself to do the minimum requirements for the degree, but only because it's handy for getting a job, not because it has anything to do with education.


This is probably a hot take, but when people say "choose the best tool for the job". There are so many tools and libraries out there that you aren't gaining much by sifting through all of them to find "the best" one out there.

More often than not, I see engineers get into "analysis paralysis" and end up taking a long time to make a decision when several of those options would be more than suitable.

Sure there are cases where the options aren't so plentiful, but this is definitely becoming the exception rather than the rule.


I think what's often meant is "don't listen to fad-based admonishments to use X or avoid Y." Of course, that's quite lost in "choose the best tool".

I tend to adjust that aphorism myself:

Every tool has a valid use case.

Use what works, popular opinion be darned.


I see this very frequently in common everyday engineering decisions. A recent one I saw was "Which config management tool should we use between Salt, Puppet, and Chef?". Others I see, "which static code analyzer...", "which framework...", "which data store...", "which log aggregation tool...", "which apm...".

There's definitely an element of "avoid the fad" in the conversation, but where I mostly see people get stuck is deciding between the more time-tested and stable options. Each option might be valid, but they get stuck on "find the best".

At that point, I like to ask them "Well, which one do(es) you(r team) know best?"

If that fails, and if the basic feature lists of each provides no insight, then flip a coin.

Sorry to reply twice, but I have to wonder...

How many people, by saying "use the best tools for the job", are actually only saying "use my favorite tools for the job"? (After all, "my favorite tools are always the best.")

I think this happens a lot and it definitely hurts the conversation.

I like your comment about flipping a coin :) It bothers some folks since it doesn't seem like a rigorous argument, but at that point you don't need one.


That titles mean anything. Junior, Mid-level, Senior. I’ve met Seniors who don’t know basic things and I’ve met Juniors who blow me away. It all comes down to the person’s ability to solve problems, learn and know when to ask for help if they’re stuck.


It's exactly the title-based recruiting that companies tend to do that's keeping me from getting a new job at the moment. "You're a junior, but we're looking for a mid-level / senior type of person". Okay, but you have no idea what I know and what I can do... you can't simply base that on the number of years someone has been doing it. You've got to look into what this person can actually do, not how much time they've been sitting behind a desk at company X.


I feel you there. I’ve been dealing with that for a while now. Luckily I’ve been interviewing with a company that is focusing on my work and not titles. Final interview is next Tuesday! Just keep building things and keep pushing! You’ll find something eventually.

That's amazing! I hope you get the job! Yeah, I've decided to start doing more stuff outside of my current job to enrich my portfolio. Probably do some freelance stuff so it's not just for a potential future employer.

Yeah that’s your best bet in my experience. Build so much stuff that they can’t ignore you! Freelance will definitely help you gain more credibility.


Java Slogan: Write once, run anywhere.


I think it was (that's from my memory way back in 1994 or so):

"Write once, run everywhere"

Which quickly turned into:

"Write once, test everywhere" because of all the different VM implementations. That was back in the Java 1.0.1 days.


Write once, run anywhere... as long as the client has Java installed and the versions are up to date... and the client has 50,000 GB of RAM.


That $mylanguage is better than $yourlanguage.
Programming languages are tools that are meant for specific purposes. One language might be suited better than another in this scenario, but maybe not in another.


That's why I like to say "this is my personal favorite language" because I'm not trying to make claims about it being objectively better, but I enjoy working in it and with the scenarios it's best suited for.


The _ convention should depend on your language's scoping rules. In Python, where there is no formal private scope, the _ convention is reasonable. However, in C++ or Java, where such scope is explicitly declared, the _ basically becomes noise, in the same manner as Systems Hungarian notation.
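A small illustration of why the convention carries real weight in Python specifically (class and attribute names are made up):

```python
class Account:
    def __init__(self, balance):
        # Python has no enforced 'private' scope; the leading underscore is
        # the convention that signals "internal, don't touch" to readers
        # (and it's respected by tools, e.g. 'from module import *' skips
        # underscore-prefixed module-level names).
        self._balance = balance

    def deposit(self, amount):
        self._balance += amount

    @property
    def balance(self):
        # Public, read-only view of the internal state.
        return self._balance
```

In C++ or Java the compiler already enforces `private`, so the prefix adds no information; in Python it is doing actual work.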


"It's industry-standard". This is often used to try and end debate, but the "industry-standard" for most things seems to have a shorter and shorter lifecycle as new and emergent technologies make them obsolete.


If it's a phrase you have to use to explain a solution, then you're doing it wrong. Even if it's an "industry standard" for a good reason, the underlying reason for it matters more than the fact it's an "industry standard".


Devs think that it's good practice to share and follow good practices.

But in fact you shouldn't copy a solution from $bigCompany unless you are sure that you have exactly the same problem.

And you usually don't have the same problems as Facebook.

So here is my advice for advice givers:

I know that you have good intentions, want to help and are excited by the solution you have found to your problem.

But do realize that your readers may have totally different problems, so first explain your context and the problem you needed to solve.

That will help your readers a lot to understand whether your advice is meant for them and to see what the trade-offs are.

Dan Abramov did this really well


"umm, let's see, that'll only take a week"...

I've never, in 30 years, seen a pre-determined due date happen.
Even Agile cannot predict due dates for software. Yet everybody keeps doing it.


Yes and no. A pre-determined due date from a project manager, definitely not. From an experienced dev, some of the time.


Experienced developers can do it, but only in two week cycles.


Does "we tried this already" count as conventional wisdom? 😝


Yes, it's a problem, because what they don't say is when they tried it. This is the same fallacy as 'we've always done it this way'.


This one hurts 😂 😭

Especially when there's no documentation to review what was "tried" or what the results were.


The concept of "soft-skills" versus "hard-skills".

Inter-personal communication is the most important skill any developer/engineer/manager could have. Being able to effectively and efficiently translate the needs of clients into technical requirements is crucial to team and product success. You can write the most elegant and clever code, but if no one can read it, or benefit from it, it's useless.


One of the more pervasive issues with modern software development is domain coupling. Most tools for software development encourage baking the domain directly into source code. This encourages us to develop software which is less flexible than it needs to be.

The original driver for this pattern was databases and their inflexible structures. Database schemas, along with object-oriented software, encouraged data structures to be modeled in code. This became accepted orthodoxy in terms of how to build software. The ER diagram was often the first artifact of development, and everything else flowed from it.

Several years ago I rejected the orthodoxy and began writing business applications based on schemaless data storage and zero domain coupling to software. This has made the software far more flexible and adaptable for a wide variety of applications.

The downside is that many development tools and libraries are built around fixed schemas and domain binding. For example, GraphQL libraries assume a static schema once deployed, so I had to develop a mechanism to dynamically update the schema at runtime. I no longer use object-relational mapping, as there are no domain objects. There are in fact still data objects, but they are not tied to the domain of the application.

The real benefit of this approach is that it gives power back to the users, so they are in the driver's seat. In the majority of cases, domain binding is premature: an unintentional acceptance of domain binding, made without thought, because modern tools assume it.


Very interesting. Do you have an article (or anything like that) with more detailed description of the approach?


In 2013 I started a project code named 'Gravity'. It is a process automation system. In 2015 it was released as open source under the GPL.


At another client I found that it was also useful, despite the use case being radically different to the original use case. We began making modifications to the code, but in a private repository, not the public one. Since then it has undergone major changes, but I have not yet been able to get it released.

I have made some videos about it.

I do not suggest trying to install Gravity because the dependencies are hideous. The current version is light years ahead so I'm trying to get the current client to release the modifications back to the public repo. There is a demo system on the devcentre.org site.

I know this is a bit of a sob story and not terribly useful. Once I get it released I will be making some detailed demonstrations and tutorials.

All this said, the basic principle of domain decoupling can be applied anywhere. It doesn't mean not having a domain, just expelling it from code and into runtime configuration. Thus data structures, views, filtering, screen designs, widget configuration, and data transformations all become runtime configs.
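A minimal sketch of domain-as-runtime-config (all names and the schema format here are hypothetical, not Gravity's actual design): the "schema" is ordinary data, so changing the domain needs no code change or redeploy.

```python
# Map of schema type names to the Python types they accept.
FIELD_TYPES = {"text": str, "number": (int, float)}

def validate(record: dict, schema: dict) -> list:
    """Check a schemaless record against a schema loaded at runtime."""
    errors = []
    for field, spec in schema.items():
        if spec.get("required") and field not in record:
            errors.append(f"missing required field: {field}")
        elif field in record and not isinstance(record[field], FIELD_TYPES[spec["type"]]):
            errors.append(f"wrong type for field: {field}")
    return errors

# The schema itself would live in the database and be editable by users:
invoice_schema = {
    "customer": {"type": "text", "required": True},
    "amount":   {"type": "number", "required": True},
}
```

The same `validate` function serves any domain; only the schema data differs per 'application', which is what lets identical servers handle requests for totally different domains.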

What this means is that you can have multiple 'applications' dealing with totally different domains, but running on the same servers. Rather than having domain-tied microservices, all servers are the same and can serve any request.

Use a back-end database that is schemaless, such as MongoDB. When I began I used the JackRabbit JCR, which was initially fine, but it forced me into some pretty nasty workarounds for various implications of using the JCR. MongoDB has been better in a variety of ways, but the code on the public repo is still using JackRabbit.

Sorry I can't do better in terms of doing a real show and tell.


Same here, I'm curious to see what it looks like in practice.


A whole bunch:

1) OO vs Functional vs Imperative vs ...etc.

Why it's wrong: because in the real world, where you are paid by a company to write or fix code to tighten the bottom line, which solution you employ is tertiary so long as the job can be done a) under budget and b) under time... everything after that is a free-choice variable.

It's always been wrong.

2) Monolithic vs Services ...

Why it's wrong: because in the real world there are classes of problems that are best served using one architectural approach versus the other. This choice is weakly bounded by 1) above, by the way.

It's been wrong since computers became powerful and distributed enough that small units of redundant code could be executed per process as a solution option for various types of programming problems. Yes, services can address a host of scalability-under-load conditions that a non-parallel, monolithic architectural approach did not optimally address, but not all problems are amenable to that approach, and often 1) gets in the way of implementing them.

3) Knowing patterns means knowing how to code...

Why it's wrong: some patterns are highly language-specific, which makes them paradigm-specific, which makes them not fully generalizable. Even within a given language and paradigm, it's possible to code properly without knowing what someone else calls the pattern you are applying to solve the problem. I call this the Good Will Hunting position: it's possible to be very adept at engineering without being an acolyte of the subjective proselytizing of names given to the "ways of programming". Conversely, it's possible to know all the names and not be able to identify where in your design you should be using them (that's where the art of programming comes in as well).

It's always been wrong.

4) waterfall vs agile vs ....etc.

Why it's wrong: pretty much the same reasons described in 1) above. How you manage the building of code will be constrained by your problem scope, time, and cost limitations. It additionally has the legacy-code versus novel-code constraint. I assure you that if Jeff Dean and Sanjay Ghemawat had been doing daily stand-ups, filling out task cards, and holding iteration planning meetings while they were inventing the MapReduce algorithm, it would have obliterated their ability to actually innovate. Thus R&D code building is much better served by a non-agile approach like waterfall.

It's always been wrong.


Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems.

Regex is an amazing tool and like any other tool it can be abused or used to help you.
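A sketch of the "used well" side: a small, anchored pattern with one clear job (the function name and version format are just for illustration):

```python
import re

# Good use: anchored, narrow, and doing one well-defined job.
SEMVER = re.compile(r"^(\d+)\.(\d+)\.(\d+)$")

def parse_version(s):
    """Return (major, minor, patch) as ints, or None if s isn't a version."""
    m = SEMVER.match(s)
    return tuple(map(int, m.groups())) if m else None
```

The "two problems" quip tends to apply when a regex tries to parse a grammar (HTML, nested brackets) that regular expressions fundamentally can't handle; scoped pattern-matching like the above is exactly what the tool is for.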


That waterfall is always wrong and agile is always right.

Waterfall is a fantastic methodology for big, long-lasting projects. Waterfall dictates that you gather requirements first. Then you design. Then you implement. Then you test. Then you maintain.

And you do it sequentially.

Waterfall is how cities are planned, bridges are constructed, and buildings are ... built.

A big criticism of waterfall is that it doesn't work for software because software changes too fast. "Clients don't know their requirements, but they want to start work right away". Great... try building your house that way.

Medium to large companies don't swap out software every 2 years. I've seen several Fortune 500 companies hold on to a piece of software for ten years because it was so critical to their business. I worked with two companies that had *twenty-year* contracts with a software provider.

Things that need to last a long time need to be planned in advance. Developing as you discover requirements introduces fragility that many on the Fortune 500 and 100 list aren't willing to tolerate.

Big companies don't like to throw out their CMS every time they redesign their site(s). So a waterfall model makes a lot of sense for the CMS.

But waterfall for the website itself? That's nuts! That website will look stale in 2 years, easily. There'll be entirely new frameworks in 18 months' time and there'll be a new version of ECMAScript in a year.

So Agile works well with rapidly changing technologies that need to meet external market demands.

What's become a problem is that a CMS should be waterfall, a website should be agile, and when you're launching a brand new platform, trying to make one be the other causes problems. You either get a garbage CMS that has to be thrown out, or a website that was outdated by the time it made it out the door.


That ids and classes are CSS features. They're HTML features, for describing the semantics of the page content. It wasn't always wrong. Their purpose got subverted when web page construction got split between content authors and designers, and designers found that CSS provided no good hooks to hang their styles off.


Can't they be both? CSS is pretty tightly coupled with HTML; is it meaningful/useful to make a distinction about which language they "belong" to? For that matter, what use are classes strictly within HTML, with no reference to CSS or JS? While ids are used by ARIA attributes and in some special cases like <label for="id-of-input-field">, afaik classes don't serve any such purpose. But I love to learn, so am I wrong?


A few years ago I went through the HTML5 (then draft) spec and counted the uses of the id attribute. I found 11 (actually I found 12, but 1 was subsequently removed), only one of which was for use in coupling DOM elements to CSS. As well as Labels and Aria, there's coupling form controls to their forms, fragment identifiers in URLs, connecting table cells to their headers, and so on.

Classes indeed have fewer uses. But even so, besides CSS coupling, they are used in JavaScript via getElementsByClassName() and querySelectorAll(). They're also used in microformats.

You've probably noticed that I refer to coupling to CSS. That's because I dispute the idea that HTML and CSS are tightly coupled. They're coupled entirely through Selectors (bar some special case behaviours in the overflow and background properties). But most relevant here is the use of id and classes in Selectors. And this is the point - if you look at recent Selectors specs, e.g. the Level 4 spec, they're not called "CSS Selectors", they're just "Selectors".

Selectors are designed to be the binding between DOM elements (for both HTML and XML documents) and other languages, including both JavaScript and CSS. The Selectors spec even includes the snapshot profile, which is specifically for non-CSS uses. Only that non-CSS profile contains the :has() pseudo-class.

So it is my contention that HTML and CSS declaration blocks are separate. Selectors provide the binding between the semantic world of HTML and the styling world of CSS, helping enforce the separation of concerns that inspired the invention of CSS, replacing the styling elements and attributes that were creeping into HTML. Selectors use ids and classes, but so do other aspects of the web ecosystem. CSS uses Selectors, but so does JavaScript.


That teams make decisions based on technical information.

Our decision making is more tribal than technical. I think the problem is that we can forget that when we talk to each other. We say, "We chose [technology] because we value [these qualities]." I think the truth is more often that you were "raised" on [technology], or you think your career improves if you move to [that] tribe.


"Don't reinvent the wheel"

You need to always be in control of your system. If you're using a framework, always make sure it is not in control of your code, make sure you separate things tied to a framework from your business rules. If you can't, rewrite parts of the framework that you need, and always be in the position of being able to ditch the framework if you have to.

How long will the framework support last? How long will your product last? Those are the questions that should always be on your mind.


That code will be easier to understand if it has many short classes/methods/functions (as opposed to fewer, longer ones).

Not true!

Merely breaking up a problem arbitrarily doesn't make it any easier to reason about, in fact, probably the reverse.

Breaking up a problem in ways that make it easier to solve is what we really want! 🙂

This might lead to many short modules, or it might lead to some short and some long modules. It depends on the nature of the problem.

But simply trying to reduce the number of lines of code, the number of methods, etc. is completely missing the point!


Servers are long-running pieces of hardware that need constant maintenance. With modern-day cloud computing, that premise is now dead. Servers are ephemeral, and systems should be designed to account for that.


That software is permanent. If it solves a problem well, variations will be asked for and paid for. If it does not solve the problem well, it will be replaced.

A historical example: Lotus 1-2-3 was an awesome chunk of code. It got replaced/obsoleted when Excel came out, because Excel solved the problem of "I need a spreadsheet" better. It wasn't Microsoft's marketing muscle; it was that Excel, particularly its macro language and functions, was better than 1-2-3.


I guess that there's a solution to any problem, if only the developer is good enough. Maybe they need to be 11x. I bet your former employee could have done it.

Some problems (generally to do with UX) don't have a solution, and that's ok. The way forward is to move backwards up the decision tree and try a completely different approach. We all do this every day in our lives as developers on a small scale, but the concept goes out the window when it's a big project feature that's had input from a lot of people.

I definitely prefer this-> in C++, every time.


"Good, Fast, Cheap... pick two"

Paying a lot of money does not mean your app will be better or faster


That throwing more and more software at human problems will solve those problems.

Staff not filling in their end-of-year reports on time? Automated email reminder! Staff still not filling them in? Add a second reminder for staff who haven't done it! Admin forgot to update the deadline date for the staff to fill in their reports? Add a reminder to the admin to set the deadline for the reminder! Admin was off sick when the reminder was sent so didn't set it again? Add a fall-back email address in case no-one picked up the first reminder! Fall-back email wasn't filled in correctly, so the email reminding them that the reminder hadn't been picked up never arrived? Add another......

Soon enough you've got meetings with 50 stakeholders and 10,000 lines of code with 1000 different states, paths and flags all to do what could have been done by someone just checking 'who hasn't filled out their report yet? Ok, email Bob and Jane'.


Anything absolute really, meaning any statement containing the word "always" or "never".

As an example:
Sure, global variables are generally bad. But if you just pick one or two that make sense for your application and they provide a lot of convenience, then maybe just roll with it!

An event bus in Vue, a service container resolver (IoC) on the backend, etc.
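A sketch of that kind of deliberate, well-known global: a single shared event bus at module level (the names here are hypothetical). One documented global can beat threading the same object through every call.

```python
# Module-level global: the one shared event bus for the application.
# It's global on purpose; the trade-off is convenience over purity.
_subscribers = {}

def subscribe(event, handler):
    """Register a handler to be called whenever `event` is published."""
    _subscribers.setdefault(event, []).append(handler)

def publish(event, payload):
    """Call every handler registered for `event` with `payload`."""
    for handler in _subscribers.get(event, []):
        handler(payload)
```

The "global state is bad" advice is really about *uncontrolled* globals; one or two intentional, well-named ones are a different animal.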


Frameworks slow development down in the long run.


The right tool for the job. People mostly choose what is allowed within the confines of the workplace, or what they already know, not what is actually best.


Pull requests in non open source projects are code quality theatre.


That the perfect code/way to code exists.


That functional programming is hard/magic/mysterious.

That you have to have tons of open source experience to succeed, both in your career and in open source projects.


That you need a CS degree. 😀


It is, in ways we'll recognize and decry in 10 years