Ben Halpern

Which principle or saying is wrong and/or misused?

Software development gets passed down as an oral and written history of mistakes and learnings, and we wind up with a lot of "rules of thumb". Some of them are not as universally useful as people make them out to be. What are they?

Top comments (26)

Ben Halpern

I'll start by saying that DRY (don't repeat yourself) is not entirely untrue, but it's an over-simplification to the point of being harmful as a principle.

Two ideas that conflict with DRY are the law of leaky abstractions and the rule of three, both of which encourage skepticism of mismanaged attempts at DRY.

I think DRY means well, but IMO it's often used harmfully as an idea.
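
As a rough sketch of what I mean (the names here are made up, not from any real codebase): two validations that merely look alike today get "DRYed" into one shared helper, which then sprouts flags once the requirements diverge.

```typescript
// Two functions that happen to look similar today, but change for different reasons.
function isValidUsername(name: string): boolean {
  return /^[a-z0-9_]{3,20}$/.test(name);
}

function isValidSlug(slug: string): boolean {
  return /^[a-z0-9-]{3,60}$/.test(slug);
}

// The "DRY" merge: one helper now serves two masters, and every new
// requirement for either caller shows up here as another option.
function isValidIdentifier(
  value: string,
  opts: { kind: "username" | "slug"; allowUppercase?: boolean }
): boolean {
  const chars = opts.kind === "username" ? "a-z0-9_" : "a-z0-9-";
  const max = opts.kind === "username" ? 20 : 60;
  const flags = opts.allowUppercase ? "i" : "";
  return new RegExp(`^[${chars}]{3,${max}}$`, flags).test(value);
}
```

The small repetition up top is cheap; the shared helper couples things that change for different reasons, which is where the rule of three earns its keep.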

peerreynders

"Every piece of knowledge must have a single, unambiguous, authoritative representation within a system."

Ben Lovy

Repetition is always better than the wrong abstraction.

Mac Siri

It took me a while to realize this, but 100%.

Sherry Day

YAGNI is a good idea to have in mind, but I see it being used by grumpy programmers who want to win an argument.

"You aren't gonna need it" (YAGNI) is a principle which arose from extreme programming (XP) that states a programmer should not add functionality until deemed necessary. XP co-founder Ron Jeffries has written: "Always implement things when you actually need them, never when you just foresee that you need them." Other forms of the phrase include "You aren't going to need it" (YAGTNI) and "You ain't gonna need it" (YAGNI).

You can't just pull from a rule of thumb to win an argument against your PM. Have a real conversation.

Etienne Burdet

DRY has already been mentioned, so here's my second pick: "When you have a hammer, everything is a nail".

Some new tech actually needs an exploratory phase where, yes, you have to treat everything as a nail until you figure out what is and what isn't. It's easy to come along 10 years after everything has been figured out and say "You never needed React/Blockchain/SaaS etc.".

But without prior knowledge, you need to hammer blindly at some point. Who would have thought we'd have email in the browser for sending large files? Well, have fun with your FTP then…

Ben Halpern

That's a really interesting take. I'd say some of the problem arises when so many people try to profit too early during the exploratory phase. A lot of hammer salesmen selling into all the wrong markets and seeking a quick markup on their hammer investment.

Etienne Burdet

Yeah, I agree, some of this marketing BS is tiring indeed. But a lot of stuff gets done because people randomly try things for no reason. The hammer was most likely invented before the nail… :p

Nicholas Stimpson

Maybe "Avoid Premature Optimisation". Like all these principles, they're well meaning and well founded but traps lurk within. It's easy to reach a stage where retrofitting the optimisation by the time that it's proved that it's actually needed is WAY harder than if it had just been planned in from the start.

leob

Good point ... I think you need to think about it and plan for it, but not always implement it right away.
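
To make that concrete with a small, hypothetical sketch (none of these names come from a real codebase): plan for the optimisation by hiding the lookup behind an interface, so the naive version can be swapped for an indexed one later without touching callers.

```typescript
interface User {
  id: number;
  email: string;
}

// Callers depend on this seam, not on a particular strategy.
interface UserLookup {
  findByEmail(email: string): User | undefined;
}

// Day one: a linear scan is perfectly fine.
class ScanLookup implements UserLookup {
  constructor(private users: User[]) {}
  findByEmail(email: string): User | undefined {
    return this.users.find((u) => u.email === email);
  }
}

// Later, if profiling shows it's needed, an indexed version slots in
// behind the same interface instead of requiring a painful retrofit.
class IndexedLookup implements UserLookup {
  private byEmail = new Map<string, User>();
  constructor(users: User[]) {
    for (const u of users) this.byEmail.set(u.email, u);
  }
  findByEmail(email: string): User | undefined {
    return this.byEmail.get(email);
  }
}
```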

Savvas Stephanides

"Clean code".

People assume that "clean code" means the code should be clean from the moment you start trying to make it work until the end. No. The very principle of clean code is: make it work, even if the code is crap. Then, once it works as you'd expect, change it to make it clean.
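
A tiny, made-up example of that order of operations: first a version that just works, then a cleanup pass once the behaviour is pinned down (ideally with tests in between).

```typescript
// Step 1: make it work, even if it's crap.
function totalOwedDraft(items: { price: number; qty: number }[]): number {
  let t = 0;
  for (let i = 0; i < items.length; i++) {
    t = t + items[i].price * items[i].qty;
  }
  if (t > 100) {
    t = t - t * 0.1; // bulk discount
  }
  return t;
}

// Step 2: once it demonstrably works, clean it up without changing behaviour.
const BULK_DISCOUNT_THRESHOLD = 100;
const BULK_DISCOUNT_RATE = 0.1;

function totalOwed(items: { price: number; qty: number }[]): number {
  const subtotal = items.reduce((sum, item) => sum + item.price * item.qty, 0);
  const discount =
    subtotal > BULK_DISCOUNT_THRESHOLD ? subtotal * BULK_DISCOUNT_RATE : 0;
  return subtotal - discount;
}
```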

Bobo Brussels

This literally doesn't answer the question, but a really tremendous principle I was thinking about recently is "Principle of least surprise" — it's not prescriptive enough to be overbearing, but really has empathy for other developers and/or users baked in.

Jeremy Friesen

That is one of my go-to principles. As I'm reading code, if I'm surprised, I make a note and maybe do a quick refactor.

The following is an example of a recent "eliminate surprise" refactor.

Re-arranging method to be less surprising #17128

What type of PR is this? (check all applicable)

  • [x] Refactor

Description

Prior to this commit we had an unless code block with a return, and then we set an instance variable.

On a quick scan I didn't notice the return but saw the render followed later by the instance variable.

This change is logically the same; my hope is that it's just a bit more legible.

Related Tickets & Documents

Only because I was in the neighborhood for

  • forem/forem#17119
  • forem/forem#17076

QA Instructions, Screenshots, Recordings

None.

UI accessibility concerns?

None.

Added/updated tests?

  • [x] No, and this is why: this was a legibility change over a functional change.

[Forem core team only] How will this change be communicated?

  • [x] I will share this change internally with the appropriate teams
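
For readers who can't click through: this isn't the actual forem code, but a generic sketch (in TypeScript, with invented names) of the kind of rearrangement described above. When an early return is buried inside a conditional block and more work follows it, a quick scan can read the two as always happening together; making the branches explicitly exclusive removes the surprise.

```typescript
interface Article { id: string; title: string }

// Minimal stubs so the sketch stands alone; none of these are real forem APIs.
const repo = { find: (_id: string): Article | undefined => undefined };
let current: Article | undefined;
const renderNotFound = (): void => console.log("404");
const render = (a: Article): void => console.log(a.title);

// "Surprising" shape: the return inside the block is easy to miss, so a skim
// suggests renderNotFound() and the assignment below both always run.
function showBefore(id: string): void {
  const article = repo.find(id);
  if (!article) {
    renderNotFound();
    return;
  }
  current = article;
  render(article);
}

// Logically the same, but the mutually exclusive branches are explicit.
function showAfter(id: string): void {
  const article = repo.find(id);
  if (!article) {
    renderNotFound();
  } else {
    current = article;
    render(article);
  }
}
```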
peerreynders

What an audience finds astonishing relates to their background and general familiarity. So the principle only works relative to an implied audience, which makes it somewhat subjective.

Jon Randy 🎖️

"Code should be written so that the most junior developer can understand it."

What utter BS

peerreynders

In the extreme it's important in environments where "developer fungibility" is valued.

In the best case it's motivated by the reasonable desire to minimize the bus factor; in the worst case it's a sign of a culture of assembly-line coding.

That said, if you have trouble understanding code you wrote three months ago perhaps it's time to dial things down a bit - it can be a tricky balance.

Daniel Watkins

The saying is utterly dependent on the quality of your most junior developer :-)

Kasey Speakman • Edited

All of them. As part of the human learning process, we all tend to take something that worked out well in one scenario and try it on everything. In the small and the large. That's when you see posts extolling only the virtues of a new (to the author) tech or strategy. Examples: DRY, microservices. Then many people try it and are plagued by undiscovered downsides. Then they post articles condemning it. Eventually we gain a cultural understanding of where it fits and where it doesn't. That's what the Gartner hype cycle is meant to measure. And often the corpus of articles on a given topic indicates where we are with it.

peerreynders

Single Responsibility Principle (SRP):

I do not think it means what you think it means

"Gather together those things that change for the same reason, and separate those things that change for different reasons… a subsystem, module, class, or even a function, should not have more than one reason to change."

Kevlin Henney Commentary
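
A hedged illustration of that reading (entirely invented names): a report class that both computes the numbers and formats them has two reasons to change; splitting it means finance-rule changes and layout changes stop colliding.

```typescript
// Before: one class, two reasons to change (accounting rules AND presentation).
class QuarterlyReportBefore {
  constructor(private revenues: number[]) {}
  total(): number {
    return this.revenues.reduce((sum, r) => sum + r, 0);
  }
  toHtml(): string {
    return `<h1>Quarterly total: ${this.total().toFixed(2)}</h1>`;
  }
}

// After: each piece changes for exactly one reason.
class QuarterlyReport {
  constructor(private revenues: number[]) {}
  total(): number {
    return this.revenues.reduce((sum, r) => sum + r, 0);
  }
}

class HtmlReportFormatter {
  format(report: QuarterlyReport): string {
    return `<h1>Quarterly total: ${report.total().toFixed(2)}</h1>`;
  }
}
```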

Liviu Lupei • Edited

The End-to-End (E2E) Testing term is used incorrectly.

Technically, that process involves testing from the perspective of a real user.
For example, automating a scenario where a user clicks on buttons and writes text in inputs.

That's why all the components get tested in that process (from the UI to the database).

If you're using a hack, it's no longer E2E Testing, because a real user would not do that.

A common example is when you're testing a scenario that involves multiple browser tabs (e.g. SSO Login scenario).

There are some libraries out there that cannot test in multiple browser tabs (such as Cypress), so in order to automate that scenario, you would have to pass the credentials in the header or remove the target="_blank" attribute from the element that you're clicking.

That involves a hack, and that means your test no longer mimics the exact behavior of a real user.
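
For context, the workaround usually looks roughly like this (a sketch based on the commonly documented Cypress pattern; the selector and URLs are invented). Stripping target="_blank" keeps the flow in one tab that Cypress can control, which is exactly the part a real user would never do.

```typescript
// Cypress test sketch; selector and URLs are hypothetical.
describe("SSO login", () => {
  it("follows the SSO link without opening a new tab", () => {
    cy.visit("https://example.com/login");

    // A real user gets a second tab here; the test removes target="_blank"
    // so the login page opens in the same tab instead.
    cy.get("a.sso-login").invoke("removeAttr", "target").click();

    cy.url().should("include", "/sso");
  });
});
```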

Another one from the testing world: Accessibility Testing
Most folks think that involves checking if your elements have the title attribute (for screen readers) and if the fonts and colors are friendly for users with visual deficiencies.

But Accessibility Testing actually just means making sure that your web application works for as many users as possible.

The major mistake here is that folks forget to include cross-browser testing in this category.

So, you might have 0.01% of users who need screen readers, but you actually have 20% of users on Safari, 15% on Firefox, and maybe even some on Internet Explorer.

Phil Ashby

For completeness, I'll add: Unit Testing

Commonly misunderstood as proving that a part of a system works, and thus that the system will work and can be deployed without that difficult E2E stuff! The problem here is similar to hacking E2E tests: the isolated unit under test is unlikely to experience the same stimuli as it would in reality / as part of the whole. IMO, unit testing is entirely for team members to gain assurance / confidence that they haven't obviously broken something while making changes, without having to run a full E2E suite locally.

IMO, prefer Consumer Contract Tests: a set of tests, defined by the consumer of a component, that express the behaviour they expect from it. They're popular among more autonomous development teams, especially in a microservices environment that permits independent deployment of services/components.
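
A deliberately plain sketch of the idea, using no particular contract-testing library (the endpoint and field names are invented): the consumer team writes down the response shape it relies on, and the provider runs that check against its own deployment in CI.

```typescript
// Hand-rolled consumer contract check using only Node built-ins.
import assert from "node:assert/strict";

// The shape this consumer actually depends on; extra provider fields are fine.
interface UserSummaryContract {
  id: number;
  displayName: string;
}

export async function checkUserSummaryContract(baseUrl: string): Promise<void> {
  const res = await fetch(`${baseUrl}/api/users/1`); // hypothetical endpoint
  assert.equal(res.status, 200);

  const body = (await res.json()) as Partial<UserSummaryContract>;
  assert.equal(typeof body.id, "number");
  assert.equal(typeof body.displayName, "string");
}

// The provider team would run something like:
// await checkUserSummaryContract("http://localhost:3000");
```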

MiguelMJ

Don't reinvent the wheel.
I know there's probably a package or library out there that does it better and faster, and that it's tested and maintained... but what if I don't want a new dependency? What if the library introduces more bloat than I want to accept? What if I'm trying to learn?
I think it's acceptable to reinvent the wheel when you don't like the wheels you find.

perpetual . education

Yes. Pretty sure that if "the wheel" is "websites", we need to be reinvestigating them a bit.

Jean-Michel Plourde

The Dunning-Kruger effect is often misinterpreted and not well understood. The results of the original study have been criticised for flaws in their calculation/interpretation, and the subsequent buzz and all the citations it generated helped solidify the myth.

This McGill article is a great read.

webbureaucrat

"There's always a catch" / "There are always technical tradeoffs" / "Faster, better, cheaper: pick two" This is true most of the time, but it's important to understand that in technology occasionally someone really does just build a better mousetrap, and it's really important to look for times when that happens because when it happens it means the other options are dead-end technologies.

I'm in discussions at work like, "should we keep putting dozens of apps on one managed dedicated instance or should we adopt containers?" There's really no serious conversation to be had there.

perpetual . education

HTML Validation and Lighthouse scores and all of the accessibility best practices don't mean that your site is usable.