
Flexible code considered harmful



🧠 The biggest mistake programmers make is writing flexible and abstract code. Some of us believe that writing flexible and abstract code helps the system evolve fast. We write interfaces, abstract classes, frameworks, and platforms, assuming that they help us fulfill future requirements faster.
β €
The Open-Closed Principle suggests that we should be able to extend the behavior of a system without having to modify that system. It's the most dangerous and widely misunderstood programming principle I am aware of.

πŸ”₯ In theory, it's a good idea. But there is a caveat. All those extension points introduce extra complexity. Complexity makes the system harder to understand and harder to change. What's worse, our abstractions are usually wrong, because often we design them up-front, before the actual flexibility is needed. According to Sandi Metz:

Duplication is better than the wrong abstraction.

There is a paradox in software design called the "Use-Reuse Paradox":

What's easy to use is hard to reuse. What's easy to reuse is hard to use.

Flexible, abstract code is hard to use and also hard to understand. It slows us down. Keep in mind that speed is achieved by writing simple and direct code with as few abstractions as possible.
β €
πŸ’‘ Resist the temptation to write flexible code. Write dumb and straightforward code by default. Add flexibility only when necessary.
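For illustration, here is a hedged sketch in TypeScript (the report-export scenario and all names are hypothetical, not from a real project). The "flexible" version hides a one-line job behind an interface and a factory; the direct version just does the job and can still be generalized later if a second format ever appears.

```typescript
// The speculative "flexible" version: an interface plus a factory
// for exactly one implementation that exists today.
interface ReportExporter {
  export(rows: string[][]): string;
}

class CsvExporter implements ReportExporter {
  export(rows: string[][]): string {
    return rows.map(row => row.join(",")).join("\n");
  }
}

class ExporterFactory {
  private exporters = new Map<string, ReportExporter>([["csv", new CsvExporter()]]);

  create(format: string): ReportExporter {
    const exporter = this.exporters.get(format);
    if (!exporter) throw new Error(`Unknown format: ${format}`);
    return exporter;
  }
}

// The dumb, direct version: same behavior, far less to read,
// and easy to generalize later if a second format actually shows up.
function exportCsv(rows: string[][]): string {
  return rows.map(row => row.join(",")).join("\n");
}
```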

Agree/disagree?


Coding Unicorn πŸ¦„

@codingunicorn

Hi!✌(β—•β€Ώ-)✌ My name is Julia (also known as coding_unicorn on Instagram). I'm a full-stack developer specializing in Java and JavaScript.

Discussion

 

You have a false premise. Sometimes making code extensible increases complexity, sometimes it reduces complexity.

It comes down to design. For example, if you ask me to write a calendar to display the holidays in 2019, I could hard-code images for each month with all the holidays in the code. My code would not be extensible, and it would suddenly become incredibly complex as soon as you asked for the most minor modification, such as displaying different holidays per country.

Or I could write really good code for displaying a single month and a list of holidays for that month. I then just pass in parameters for each month of 2019. It turns out my implementation of the original requirements is simpler and extending it for new requirements does not overly complicate things.
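Roughly, a sketch of that parameterized approach in TypeScript (the names and types here are illustrative, not the original code):

```typescript
// One month's holidays are plain data passed in as a parameter.
interface Holiday {
  day: number;   // day of the month
  name: string;
}

// Render a single month given its year, zero-based month index, and holidays.
function renderMonth(year: number, month: number, holidays: Holiday[]): string {
  const daysInMonth = new Date(year, month + 1, 0).getDate();
  const lines = [`${year}-${String(month + 1).padStart(2, "0")}`];
  for (let day = 1; day <= daysInMonth; day++) {
    const holiday = holidays.find(h => h.day === day);
    lines.push(holiday ? `${day}: ${holiday.name}` : `${day}`);
  }
  return lines.join("\n");
}

// The 2019 calendar is just twelve calls with different data; a 2020 calendar
// or another country's holidays needs no change to renderMonth itself.
const january2019 = renderMonth(2019, 0, [{ day: 1, name: "New Year's Day" }]);
```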

So your premise that making code extensible is always more complicated is simply false. The truth is that some things will make the code overly complicated, and some will actually make it simpler.

 

I took it as a caution against early optimization. Unless you're building something that explicitly calls for reuse, spending time on accounting for unknown future requirements can actually cause more work when you have to undo it later.

There was a point made about dealing with extensibility once requirements are known: if you have to use the same calendar in two places, you already have those requirements.

 

With experience comes the intuition to know when abstraction is needed and at what level, without first having that 3rd or 4th requirement that will inevitably come.
I worked with so many people who insisted on not "future-proofing" their code, only to end up eating everyone's time to completely rewrite their horrible, single-use code or hack it horribly to make it work for more than 1 scenario.
Anyone blindly spouting YAGNI is a mere programmer in my book, not an engineer.

That comes down to the intent of the code. With experience come habits that lead to easily extensible design that can be readily modified to meet new requirements. But to that end, at what point are you over-engineering your code? Should systems be written to be infinitely modifiable to meet any requirement, or should you write rapid code that meets the needs of the now and can be easily replaced with a more extensible design? Personally I fall into the second camp. When you write a POC or the first lines of a new system, code quality, while important, takes a back seat. That's not to say you shouldn't write quality code and use lessons learned, but when you are spending hours trying to design a highly extensible and robust system when you could have banged it out in thirty minutes, you are wasting time over-optimizing for what-ifs.

It boils down to a single postulate: just make it work, then refactor.

Experience gives you insight into what not to do, and that can be invaluable in simply not writing yourself into a corner. You don't even need to guess at what the future holds if you aren't locking yourself into early rigidity.

Again, if the requirements aren't to write a reusable module, you probably shouldn't be doing it quite yet. If you think you should be, you likely need to renegotiate the requirements to include it.

A POC is throwaway code, and should never be used as the basis of a real product by modifying it to add features. It's like the paper and wood model that an architect builds. The real building includes numerous engineering and design considerations that are not in the POC model.

It's the same with good software. Quality and consideration for potential future use cases must be baked in. Why? Software tends to live much longer than expected and be used in unexpected ways, and you're probably not going to be around (or remember what you did and why you did it) when users come back asking for enhancements. Every single codebase I've ever come across where the devs only coded for current requirements without considering the effect of time has been an absolute mess to deal with, and impossible to alter for new/changed requirements.

I would give this a thousand thumbs up if I could.

Unfortunately, I have seen too many POCs ending up in production, too many devs saying "we will never have to touch this again" -- and boom! It's back on MY desk and I have to deal with the mess. Nothing lasts longer than a provisional solution.

So yes, I'd rather err on the side of SOLID and all these principles. If once in a while I over-engineer code slightly because it is actually never touched (extended, fixed) again, then so be it. The vast majority of cases go the other way.

 

A hard-coded calendar isn’t very extensible but is the easiest to understand. All the months are written right there instead of importing objects and classes with month name data saved elsewhere. There is always complexity in abstraction. I took her point to mean that if you start extending before you completely understand what you will need your program to do, you have added needless complexity, which is self-defeating at best.

 

An overly straightforward approach is almost always too verbose and hence hard to read.

When it comes to over-abstracting, I guess the devil is in the details. Some level of future-proofing is almost always needed. The clients do not expect that adding a feature after initial development can take so much time (when your first version is very rigid).

 

100% agree; this article sounds more like the butt-hurt of a junior programmer who thought they were all that and found out they really weren't. Clearly there's more a lack of exposure to enterprise development than real experience shown here.

 

I've been a software engineer for more than 25 years and have consistently found the worst code to work in to be code that has been over-engineered, over-abstracted or unnecessarily complicated. If @codingunicorn is a junior (which I doubt from her insights) then she has already learned an important lesson that it can take years to learn. The point isn't to never abstract; it is to not abstract too early.

 

It is the kind of absolute statement typical of youth. It takes decades for most developers to gain enough experience to know when they should simply go for the simplest solution they can think of, and when they should give deep thought to how the code can be made extensible, and whether it should be...

The bits of extensible code that always seem to work out are the ones where you notice you are coding the same thing multiple times and decide to instead just create a util or helper method. The kind that is really a bad idea is when you are doing a quick POC and spend lots of time solving problems for a 5% use case and building a framework for the next part of the project. The point of a POC is to try things and learn, and then later design based on what you learned, much the same way a mechanical engineer designs a prototype to learn how it will fail.

Most software projects are somewhere in between. There are bits that should be flexible and bits where you just want anything that works.

 

That's because it's written by a marketer's fake persona; I would bet money the pic is a stock photo / model. Just look at the whole-ass blog.

Would be an awesome stunt, though πŸ˜‚

 

Firstly, embedding data is a quick and dirty fix which immediately limits the useful lifetime of the code. Also, if you are tasked with building a 2019 calendar, you will be asked to produce a 2020 update. It would be so much easier and more elegant to just replace a control/data file than to rewrite the sucker. If the "black box" is well designed and documented, there is no reason to ever open it! For instance, I wrote a UPS, FedEx, etc. billing parser/accounting-info extractor 25 years ago that is still running strong. It has been recompiled for new platforms but never reengineered. The control file is tweaked every so often to keep up with input and output changes.

 

I have a simple tactic that keeps things really stupid-simple!

It's a cousin of dependency injection.

In your calendar example, when I need to get a specific month's holidays, I would call a not-yet-existing method like getMonthHolidays(1).

Then I would ponder where this method has to come from. Is it a private method OR a first-order method?

That means that I am always writing some stupid-simple code at each level of abstraction.

This also leads to code that is easy to understand, maintain, and extend.

I had a CS curriculum and that wasn't even taught as a quick win. That's sad!
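A rough sketch of the tactic in TypeScript (getMonthHolidays and the Holiday type are made-up names for illustration):

```typescript
// Step 1: write the call site you wish you had; getMonthHolidays
// does not exist yet when this line is first written.
function renderJanuary(): string {
  const holidays = getMonthHolidays(1);
  return holidays.map(h => h.name).join(", ");
}

// Step 2: decide where that method should live (a private helper here,
// or a first-order function in a calendar module) and implement the
// simplest version that satisfies the call site.
interface Holiday {
  day: number;
  name: string;
}

function getMonthHolidays(month: number): Holiday[] {
  const byMonth: Record<number, Holiday[]> = {
    1: [{ day: 1, name: "New Year's Day" }],
  };
  return byMonth[month] ?? [];
}
```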

 

I think you'd still use standard OO principles to code the calendar. You just might forgo adding interfaces to every utility, creating a factory to determine which type of calendar (Gregorian, Hebrew, Klingon, etc.), creating an adapter to wrap your UI components, a service bus to notify for click messages, etc.

 

This, exactly.

 

Hello. I'm a solutions architect with 20 years in the industry. If designed correctly, abstraction is very useful and powerful. With that being said, I have found that most developers don't understand abstraction. That 80% of developers are unable to think abstractly and if they try to design/code abstractly they'll achieve what you outlined in your post. Final point: abstraction is not bad; it just shouldn't be done by developers who don't know how to do it.

 

Good point. One should not take "code writers" for developers, or developers for software engineers. While abstraction works for everything, it's not actually for everyone. Even more, abstract design becomes a pain when done by code writers or developers lacking real-life implementation experience.

 

Abstraction and indirection are different. Abstractions simplify. Indirection adds complexity. Most of the indirection I'm seeing is useless. Always 1 implementation in the factory. Why have the factory? Well, it's good design. We "could" switch out the ORM with anything now! Most of the time it's not needed and even when it is, actually making the switch still requires changes to many other parts of the code. Delaying optimization, in my experience, has usually been a much cleaner approach. When you implement the needed indirection, you know the whole problem you need to solve.
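A hedged sketch of that smell, with hypothetical names (no real codebase implied): the factory adds a level of indirection even though only one implementation exists.

```typescript
interface OrderRepository {
  save(orderId: string): void;
}

class SqlOrderRepository implements OrderRepository {
  save(orderId: string): void {
    console.log(`saving ${orderId} to SQL`);
  }
}

// The indirection: a factory that can only ever return the one implementation.
class OrderRepositoryFactory {
  static create(): OrderRepository {
    return new SqlOrderRepository(); // the only branch that exists
  }
}

// The direct alternative: construct (or inject) it where it's needed, and
// introduce the factory only when a second implementation actually appears.
const repo: OrderRepository = new SqlOrderRepository();
repo.save("order-42");
```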

 
[deleted]
 

I think you are reading things into my comment that are not there. You may want to reread it.

Really? I think you were very clear. "I have found that most developers don't understand abstraction. That 80% of developers are unable to think abstractly and if they try to design/code abstractly they'll achieve what you outlined in your post."

In other words, you're smarter than 80% of developers.

Consider that our job is to make the user and subsequent developers feel smart. They should be able to understand our UI and our code without having to be experts at anything.

First off, you are reading things into my comment. Never did I propose that I am smarter than 80%.

Your assumption that all abstractions are hard to follow is wrong. If designed and implemented correctly, they are easy to follow and use, which was my main point. If this was not the case we would not see so many frameworks in our industry.

I'm sorry you're taking this personally. Your primary point seemed silly and arrogant to me. Don't defend it. Edit it.

And yes, I agree that abstractions can be useful, but mostly they're abstract. Hence the word. I've been programming professionally for about 40 years and have mentored lots and lots of developers. I don't think abstractions, in general, make for better software. Or even less software. Programmers who love them tend to rubber-stamp them all over the place, causing themselves and others to write lots of extra code around the abstractions to make them useful.

Human language is already OO - tactile, tangible nouns and verbs. If you listen to the business, not the tech, you'll hear a highly refined already refactored business model with users, stories, nouns and verbs already well defined. Any code we write that doesn't reflect that model is going to be unintuitive and is ultimately added complexity. Our job is to solve business problems, not create them...aka add as little complexity as possible.

As for "If this was not the case we would not see so many frameworks in our industry": that is a function of creativity, not evidence of a cultural commitment to simplicity. Yes, most engineers preach about simplicity, but don't actually produce it. We have so many frameworks because software people are creative and love to produce "solutions" for imaginary problems, not because they're committed to less code.

For example, what makes a good musician? A good musician is someone with a large capacity for music. When we have a capacity for something we tend to have more of it. We enjoy it. And we often create it. Technical people have a capacity for complexity. We enjoy it. And we often can't help but create it. And so, we have lots of frameworks that go in and out of vogue every year and will for the foreseeable future. This is evidence of our creativity and capacity for complexity, not of a commitment to simplicity.

 

Totally agree.

There are developers who think things are complex when they are just incapable of thinking beyond the lowest level of coding.

 

To me, the right answer is simply to use the right style of programming where warranted. Saying the Open/Closed principle is problematic discounts 25 years of proving it a wonderful principle where needed.

 

Exactly. Any principle not fully understood or without greater context will be wrong some amount of time. And so the next generation comes, misunderstands, makes some mistakes, adds some bits, then arrives back at the same conclusion. The age-old cycle.

 

So true; take Angular or React. We use them every day but few of us know the internals. None of us knew them at all until we spent substantial time learning them. Any good reusable code takes time to learn.

Ironically, that's exactly a major reason I prefer React: it's easier to understand, as it isn't as prescriptive. There are very simple look-alikes that are easy. React itself is a bit harder, but not really harder than just the RxJS library Angular uses by itself.

Agreed, RxJS takes a long time to grasp, but it is worth the pain because asynchronous push notifications are almost always better than pull constructs, even asynchronous pull requests.

I prefer redux/flux or graphql myself.

 

Open/closed is often difficult to grasp. I like to sum it up as: it's backwards compatible and not limited for new features. However, sometimes no amount of planning will ever let you know what future features you'll need, and this is why you need to follow Robert C. Martin's rule that "classes should be small; the second rule is they should be smaller than that".

 

C# made open/closed constructs simple: just write a static method with the this keyword on the first parameter (an extension method). JavaScript and TypeScript both support extension via the prototype.
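A small TypeScript sketch of the prototype-extension flavor of that idea (isWeekend is a hypothetical example, not a real built-in):

```typescript
// Declare the new method so the compiler knows about it.
declare global {
  interface Date {
    isWeekend(): boolean;
  }
}

// Extend existing behavior without modifying the original class --
// the open/closed idea described above.
Date.prototype.isWeekend = function (this: Date): boolean {
  const day = this.getDay();
  return day === 0 || day === 6;
};

export {}; // make this file a module so `declare global` applies

// Usage: new Date(2024, 5, 8).isWeekend() // true (a Saturday)
```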

The power of doing this is all good. Some naysayers claim that finding the source of these methods is a problem. To that argument, following convention over configuration is all that's needed: the convention of putting all extension methods in a unique namespace is all we need. IntelliSense and Go to Definition take us to the source.

 

Totally agree! This is an easy one for us devs to trip over. I still find myself doing "speculative programming" after years of being a professional dev.
Two major things I've noticed when this happens.

  1. The requirements/concepts of the feature are usually not complete or fully understood.
  2. The work that is done is usually redone because my assumptions were incorrect as you had mentioned in your article.

This is certainly a tricky aspect of development and takes real thought and discipline to avoid "gold plating" code that will look beautiful but never be used. This code is equivalent to writing features that were never requested.

To avoid doing this I personally try to balance the desire to build the overly flexible system against the YAGNI principle.

 
 
 

I agree. The balance is key. Systems are complex and designing for future-proofing often requires a qualified definition of what that means for the projects in scope and the constraints, technical and business, in play. I often ask myself how I can manage the technical debt of something I design rather than if there will be any debt at all.

 

I like to always start simple when it's a new problem and coalesce similar functionality as time passes. This idea really only works when you're willing to spend time refactoring the same code over and over (which I don't see as a waste of time when done correctly).
That said when an abstraction is obvious I absolutely begin with it.

 

I have a coworker who is a big fan of the rule of three. It's a good general rule of thumb to follow, and I find it helps summarize the same point I think you made here.

 

It's about finding a sweet spot, and not pursuing extremes

 

I've spent way too much time trying to manage others' poorly written, inflexible code. A minor change to requirements, say a change to a service account, breaks everything, taking hours or days to fix. I usually end up rewriting and simplifying the poor design by improving flexibility.

Bad code design is ok for inexperienced developers. Hopefully these developers move beyond this. If not, they call developers like myself to rewrite code properly. I’ve spent decades mentoring inexperienced developers on avoiding bad habits and developing good design.

You can do the project quick and cheap, or you can do the job right. There's never a lack of work building and documenting flexible code.

 

The problem is that people prematurely optimize, and premature optimization is the root of all evil. When you optimize prematurely you create leaky abstractions, which require you to dive into the internals. At that point you have lost the advantage of creating the abstraction.

 

Really good abstractions hold strong for most use cases. Remember the last time you had to deal with the quirks of document.createElement in the browser?
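For instance, a tiny sketch of the kind of wrapper that hides those quirks (the el helper below is hypothetical):

```typescript
// A small abstraction over document.createElement: set attributes and
// append children in one call instead of repeating the boilerplate.
function el(
  tag: string,
  attrs: Record<string, string> = {},
  children: (Node | string)[] = []
): HTMLElement {
  const node = document.createElement(tag);
  for (const [key, value] of Object.entries(attrs)) {
    node.setAttribute(key, value);
  }
  for (const child of children) {
    node.append(child); // append accepts both Nodes and strings
  }
  return node;
}

// Callers never touch createElement/setAttribute/append directly:
const link = el("a", { href: "https://dev.to" }, ["dev.to"]);
```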

 

I definitely agree that it's hard to get the right abstraction.

Here's a great talk from Sebastian Markbage from the React team

The cover for the video says it all: "No abstraction > wrong abstraction".

And here's the great blog post about the wrong abstraction by Sandi Metz which you referred to.

 

I find this assertion misleading.
It's also true that no code is better than wrong code. It's always better to do nothing than to do evil, the same way that zero is always larger than any negative number. But where does that get us? Zero is not enough. We need to do good, positive work. We need good abstractions that allow our code to be flexible. Having inflexible code where every tiny change has an enormous cost is useless. We live in a fast-moving and dynamic world. The only constant is change.

 

For sure, we need to write code, ship things, and iterate on them. The main premise is that finding the right abstraction is difficult, or can be, so don't just create abstractions because they make things DRY.

I agree. DRY is a very good principle, but it should not be followed blindly. Sometimes it's not really DRY: it looks the same, but it serves two completely different purposes, and if we leave it alone, the two so-called "duplicates" will evolve along two completely different paths and change for completely different reasons. Sometimes the cost of creating a dependency to share the code outweighs the benefit of reusing it. Usually I follow the WET principle instead: Write Everything Twice, but not three times.

 

Thanks for the video!

 

This is something I have been noticing a lot, particularly with interfaces: a tendency to write an interface "because we want to implement against an interface".

My answer as a software architect is always the same: Until you need it, the best interface is the implementation you already wrote. Particularly with fast-changing applications, you'll quickly find that you cannot - and must not - anticipate use cases for next month. You'll almost always have to redesign either way, and it is much easier to do when you know - rather than guess - the new requirements.

 

Depending on your language you may need to always develop against interfaces so you can use DI to unit test your code.

 

I can't tell you how many times I've listened to arguments like this from people too inexperienced to see how their code will evolve. It's gotten to the point where I view YAGNI as a bad word. Even though I completely agree with the concept in principle, it is often overapplied by conservative developers.

Sure, for new and inexperienced developers, overgeneralization is risky. They don't have a full grasp on the patterns and concepts that they need to properly design simple and flexible code.

However, once a developer has some modicum of skill, domain expertise, and business understanding, they should always be expected to develop flexible, scalable code.

Occasionally, experienced developers will find themselves on newer, research-oriented projects. This path may necessitate writing less-flexible code - but the purpose here is not to go to production. It is to learn and gain insight into new technologies and patterns.

If we all held ourselves to the standards described in this article, we'd have no shortage of jobs, but we'd also have no shortage of technical issues with the software we write (and use).

Always strive to write the most clear and flexible code you can, unless you are attempting to learn some new technologies or prototype within time constraints. Your teammates and successors will thank you.

 

Your assumption that flexible code must be complex is wrong. Good abstractions are easier to grasp than zero-abstraction spaghetti code.
It is true that finding the right abstraction is hard. The solution is not giving up on abstractions; the solution is learning how to abstract correctly.
At some point, changing inflexible code is so hard that the project grinds to a halt and all the devs beg to throw the codebase in the trash and start everything anew. The only way to keep large codebases alive is making them flexible.
That's why there is 'soft' in software.

 

Complexity makes the system harder to understand and harder to change. What's worse, our abstractions are usually wrong, because often we design them up-front, before the actual flexibility is needed.

This goes way beyond programming; it's a problem in engineering, architecture, law and politics too. Systems are created by dedicated, committed people with a clear vision of what is needed, but over time maintenance and further development are delegated to others without the same commitment or skills. The result is always the same; a steady degradation of the system. We eventually reach the point where not even the original builder is able to rectify the mistakes made; the complexity has become too great. Every participant in the project leaves their own flavor of change, with no explanation of where, how or why it differs from the original master plan.

We can argue forever about method and about which currently-fashionable magic bullet will save the day but the root cause remains, as do its effects. It's quite simple; a system will always break down once its level of complexity exceeds that at which it can be understood by those tasked with its maintenance. This is not a problem of software; it's a problem of being human.

If there is a solution - and I'm not saying there is - it's to design for the lowest common denominator. Or to accept that systems will always need to be rebuilt from the ground up, way before they reach their intended lifespan.

 

God this is so wrong. I'm getting tired of this new wave of Gen Z'ers learning to write s**t procedural PHP/etc code and thinking they've discovered some new secret to "good" programming. Sorry, but no. Just stop, please.

 

Haha it's so true. When you write code you're always thinking "how could I make this more generic so I could potentially re-use it" but in reality you're just evolving an ever-more complex code-base (guilty!).

 

This just more or less sounds like a problem for novice programmers. The proliferation of languages and technologies has not changed the fact that knowing how to write syntax and knowing how to program are two separate skills.

Advocating the limitation or elimination of reusable code sounds like the battle cry of the under-experienced.

(this is not meant to sound harsh or judgmental - just an observation)

 

Refactor early and often, but not before necessary. An abstraction can wrap a concrete implementation after a similar but unique use case develops. That's the time to add the complexity. Leveraging internal tools, documentation, and style standards, as well as familiarizing yourself with common abstraction patterns, will help fill in the complexity gaps.

 

I think the most important thing to keep in mind is that there are no absolutes in software development.

Computer science is the art of trade-offs.

How many people are working in the codebase? One, a dozen or thousands of Engineers?

Are you working on a long-lived, super complex SaaS product with thousands of features and hundreds or thousands of developers or a one-off project for a small shop?

Abstractions can be great if they are well thought out and make code easier to change.

I think the problem being talked about here is trying to over-solve relatively simple problems. Big, long-lived, complex systems need abstractions to make it easier to manage change.

Keep things simple if you can and solve for the current problem at hand without impacting the ability to manage change in the future.

But at a certain level of scale and complexity, you will need abstractions to manage the system.

Abstractions should make it easier to reason about code by encapsulating commonly used boilerplate so you can focus on writing clean, easy to understand business logic.

 

Computer science is simply the science of computation. It is a mathematical discipline that has little to do with actual programming. Computer programming is an engineering discipline and should be treated as such.

 

The science of computation is literally balancing tradeoffs of space and time complexity. Memory usage or speed. I don't understand this comment at all.

Computer science has little to do with programming for low-scale, simple projects. You don't need computer science to update the CSS on a website.

Computer science takes a backseat with simple systems because hardware has advanced to a point that efficiency doesn't matter much if you just have a couple users.

However, Big O notation, correct use of data structures and an understanding of core CS fundamentals is important if you work on systems that have millions or billions of users and require high-throughput.

For the first 7 or so years of my development career, I shared the same sentiment that my CS degree was a waste of time. Then you work on a really complex system that has a lot of throughput and extremely demanding SLAs and you realize that CS does have a part to play in modern software engineering.

I'm saying that CS is the why but engineering is the how. I've been at this for 20 years and have an MS in EE.

 

Flexible and modular code solves two big problems:

1) The first problem it solves is probably the most obvious. Putting some more time in upfront will undoubtedly save much more time down the road if you know more development will come. Nothing is worse than needing to change a whole system that is already running in a production environment while needing to maintain the same data contract. Designing your code upfront to be modular will let you add to it or refactor it much more easily when the time comes.

2) The second problem is probably the most forgotten, and that is testing. It really bothers me to not hear testing talked about more. Modular code will make testing a much more pleasant experience. Following concepts like inversion of control and utilizing interfaces will make testing simple, as sketched below. You will be able to test the parts of the code you care about and not have to worry about how you are going to test your 3rd-party integrations. You would instead be able to mock or stub out code that doesn't need to be tested.

In the end, it is always good to have some sort of code modularity and flexibility to save you, or another developer, headaches down the road. Not only will you be able to test, or test better, you will make adding new features much easier when the time comes.
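As a hedged sketch of the testing point (TypeScript; PaymentGateway and CheckoutService are illustrative names, not from any particular project):

```typescript
// Depend on an interface so the third-party integration can be stubbed out.
interface PaymentGateway {
  charge(amountCents: number): Promise<boolean>;
}

class CheckoutService {
  constructor(private readonly gateway: PaymentGateway) {}

  async placeOrder(amountCents: number): Promise<string> {
    const ok = await this.gateway.charge(amountCents);
    return ok ? "confirmed" : "payment-failed";
  }
}

// In production, inject the real integration; in tests, inject a stub:
const fakeGateway: PaymentGateway = {
  charge: async () => true, // no network call, no third-party account needed
};

const service = new CheckoutService(fakeGateway);
service.placeOrder(1999).then(status => console.log(status)); // "confirmed"
```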

 

Have to agree... I fight a lot against introducing levels of complexity and patterns that aren't needed, or at least not yet.

You can create abstractions that make life easier over the life of the project. Abstract complexity behind an easy-to-use interface. Though sometimes you're better off doing it "the hard way".

 

YAGNI is a principle for programmers, not engineers, and arguments for it are usually invalid strawman arguments. Good abstractions are hard, yes. But that's your job as an engineer: to THINK hard about how your designs hold up over time, not to take the easy way out and create horrible tech debt that others who come after you have to clean up.

 

An engineer needs to know what to include, but shouldn't consider what not to include? That doesn't add up.

YAGNI isn't YOLO, and calling out programmers according to your arbitrary developer hierarchy doesn't actually help you or anyone else. Every acronym making the rounds in tech blogs is going to be adopted by the reader at their own level of experience, and applied to their own situation differently.

Yes, an engineer does in fact have to know when not to use things. No, it's not a horrible thing to abstract, as and where needed, but not for the sake of abstraction itself.

The entire premise of an MVP is inherently accepting tech debt, that can still be minimized (again based on experience), in favor of moving quickly towards a given end, in which case your job is going to depend on knowing which corners you can cut and which you need to support at all costs. In so doing, you can accomplish what no amount of extra planning will get you, if your highest priority is fast delivery of a specific product.

 

Made an account here just to post on this. I don't understand how some people who are supposedly "20 years in the industry" don't inherently see the value in this kind of post. Perhaps they were working on simple systems for those years.

Nothing complex can be made well without this principle, imo (I'm a game engine programmer, I guess, if you want to judge me that way). Programmers are the only engineers who think you can redesign a system purely in its external fashion, while changing its internals, without incurring cost if you "just abstract super good!". In reality, complexity adds cost, it adds bugs, and it adds confusion. Imagine if a building architect told you that he wanted to change how the internals of a skyscraper's foundation worked, such that it could support any building on top of it, yet it would still maintain the same external interface. I can tell you right now, that would be an impressive feat, but even if it could be done (I kind of doubt it), it would require the external world (the building) to bend to the need of its foundation.

Equally, our code ends up bending to the needs of our poor systems too often. The cost of that is development time, but maybe equally importantly, actual performance loss. How much software is slow because of our elegant abstractions (which maybe aren't that elegant)? imo -- copy pasting is the business.

Also having a Pornhub/Github shirt is :chefs-kiss:

 

This post title smells awfully like clickbait. Flexibility is great and it's what allows any non-trivial system to evolve without requiring a rewrite. Over-engineering is the bad thing you're looking for here. But those two things should not be confused.

 

I think there is a balance to be found. As a developer who works closely with the product team, I have found that it is possible to find the balance between flexibility and ease of use (code-wise). If it's something you may need by next quarter, you might as well bake it in now and document it well so you can get started right away when you need it. If it's going to be over 6 months out, or not in the plans yet, then don't do it. That being said, there are a few caveats to consider. First, I think that works well with smaller teams that do not change members often. Second, the documentation is absolutely crucial; heck, I need a refresher on what I was doing after a long weekend... Third, you MUST watch out for over-engineering: you may have a good balance in mind but get excited and abandon it without realizing it. Fourth, and just as important, testing is absolutely crucial: in complex systems, if you don't have automated testing, it becomes easy to make a change that breaks something else.

 

Strongly disagree. Abstracting to leverage the intended process is the core principle of programming itself. Dumb, straightforward code has its benefits in some specific use cases and emergencies that call for a quick response, but good code is always flexible and simple to understand.

Flexibility isn't the opposite of simplicity.

 

Almost every commenter on your post is wrong, and they have mostly failed to see what's actually happening.

You are correctly identifying a problem, but your view of the problem is blurry. Yes, the abstractions you talk about do increase complexity, and this complexity also reduces development speed, but it is not the main culprit. You think that the "flexibility" trade-off isn't worth it, but this is not the full explanation.

Believe it or not, the "abstractions" you are talking about are actually less flexible. They reduce flexibility and increase complexity. There is no trade-off; it is fundamentally bad.

A common experience people have is that they will create a seemingly flexible design in their code, only to find that an additional feature request has rendered their modular code completely inflexible. They end up doing a huge refactor or a huge hack to incorporate the feature. This happens time and time again and is the antithesis of modular code. Many people go on making all these "flexible" abstractions and attribute the result to just the nature of programming and intrinsic deficiencies in "design."

While most people think that this blurry area of "design" is where the problem lies, in actuality, the problem you are describing has a concrete and almost scientific/logical reason for existing. It has very little to do with your "design" choices. In fact, if you don't realize what is going on, all your "designs" will always have the potential to have this issue with complexity and lack of modularity regardless of your skill/experience.

Again, no blurry segues into "bad design examples": the reason is highly concrete, and I will try to make this as clear as I possibly can.

It appears to you that abstractions are the culprit, and this is true to a certain extent. However, it is not the root of the issue. The root of the issue is the kinds of abstractions that people use. You specialize in Java, and therefore must use OOP and OOP design patterns. This is the heart of the problem. The very nature of OOP makes it fundamentally less modular than most other forms of programming. So, if the root paradigm is less modular, then, by extension, the design patterns and all the abstractions that sit on top of OOP are also less modular.

Why is OOP less modular? The explanation is simple: because OOP forces the unionization of two smaller primitives, function and state, into a larger primitive called an object. Smaller primitives are more flexible than larger primitives. Thus OOP is less modular by its core nature.

A program is made out of pipes and fluid. Pipes are functions, data is fluid. You must keep these components separate and small like lego pieces in order for your program to have maximum modularity. You then build your program by composing these primitives together like lego pieces. Small pipes fit together to form larger compositions until you eventually build a network of pipes that data flows through like fluid.

With objects the analogy changes: pipes are glued together with other pipes to form primitives, and the data (aka fluid) is glued to the bundled pipes themselves as a union called a "pipe network" (this unionization is the object in OOP). Methods mutate state and objects themselves can flow through pipes. So essentially, following the analogy, you have a "pipe network" where the state of the pipes is always mutating (for example, pipes that are constantly changing in diameter) and you have other (pipe networks) flowing through the "pipe network" to augment the overall "pipe network" with new additions of other (pipe networks).

Other words for (pipe networks) flowing through "pipe networks" are "dependency injection" or "object composition." Most design patterns are some variation of the previously mentioned patterns and therefore all suffer from the same issues.

Needless to say, nobody builds pipe networks like an object-oriented program, because the complexity is unnecessarily high and such a pipe network is not "flexible" to modification. For maximum flexibility I need access to the smallest pipe primitives, but in OOP I only have access to a bundle of pipes that are constantly mutating (an object). Objects are the primitive unit of composition in OOP, when in actuality the primitive unit should be the smallest pipe segment. Think about it: you don't build lego models with bricks glued together in arbitrary shapes. You build walls by composing bricks. The bricks are your unit of composition, and the wall, while composable, is not a primitive and therefore can be decomposed back into bricks or half-walls... meanwhile an object cannot be decomposed; it is forever a mashup of lego bricks glued together, because in OOP that object/lego wall is your primitive.

What is the programming primitive analogous to the smallest pipe segment or singular lego brick? The function.

How do I combine functions like lego bricks? Read on:

The object was created with a certain goal, but the creators of OOP did not anticipate the cost. What the object does is reduce complexity by allowing you to think of multiple concepts as a single object. However, objects reduce complexity at the cost of modularity/flexibility.

This trade-off can be completely avoided by using functions as legos. The abstraction that "composes" functions is called "function composition." This is how functions can combine like lego bricks. You can look it up on Google.

The paradigm that forces you to use "function composition" is called "functional programming." In functional programming your functions are pure and can never change state, and your data flows through a composition of functions chained together, with one taking an input and feeding its output to the next segment of the pipe. Fitting fixed pipe segments together to form an unchanging pipe network is a one-to-one analogy to functional programming and composition.
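A small sketch of that pipe idea in TypeScript (the pipe helper and the string functions are illustrative):

```typescript
// Compose small, pure functions left to right: the output of one becomes
// the input of the next, like fixed pipe segments fitted together.
const pipe = <T>(...fns: Array<(x: T) => T>) =>
  (input: T): T => fns.reduce((acc, fn) => fn(acc), input);

const trim = (s: string) => s.trim();
const lower = (s: string) => s.toLowerCase();
const slugify = (s: string) => s.replace(/\s+/g, "-");

const toSlug = pipe(trim, lower, slugify);

console.log(toSlug("  Flexible Code Considered Harmful  "));
// "flexible-code-considered-harmful"
```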

Mind you, the style isn't perfect. You can still send "pipes" through your pipe network with first-class functions, and the paradigm breaks down at IO. You can also have functions return functions, or essentially pipes that spawn new and varying pipe segments to add to the overall pipe network. Treating functions as data creates an isomorphism that is identical to unionizing data and functions into objects, and therefore creates many of the same issues that exist by default in OOP... so use the techniques of first-class functions in functional programming sparingly.

You will note that I said the FP paradigm breaks down at IO. This is exactly where much of the complexity with React and Redux arises: the IO loop. If your web page were just a single static render then all of your React components could be pure functions and your app would be perfectly modular and elegant... but because the user must interact with the page and change its state, this forms an IO loop which pure functions do not fit well with. Hence the development of awkward patterns like FRP and Redux to deal with these issues. The benefits (and weaknesses) of FP are hard to see with React, Redux and JS, as the front end is not the best application of FP. You really need to dive into Haskell to learn more about what it's all about.

Also note that you are not the only person to notice this issue with abstractions. A very popular language was created by the famous computer scientist Rob Pike with your very philosophy of writing simple and straightforward programs in mind. That language is called Go, and Rob Pike eliminated classes/objects from the language to follow exactly the idea in your post. Essentially Go is going back from Java to C-style programs where abstractions weren't as prominent. Rob Pike correctly identified the core problem as OOP. You sort of saw the same thing he did, but in a hazier way.

However, I don't entirely agree with Rob Pike. While fewer abstractions are better than Java's, I believe that Java is bad not because of too much abstraction, but because it is the WRONG abstraction. Rob is a genius, but he has had little experience with functional programs, and while Go is better than Java, the way forward is not to eliminate abstractions but to use the RIGHT abstractions. I believe functional programming is the right and also the best possible abstraction. However, I do not believe functional programming is the "perfect" abstraction; such perfection may not even exist.

Also, a segue on the commenters who responded to your post:

What most of these commenters are arguing is that you should have picked the right objects for your design at inception. They blame "bad design" and say you should have glued all your lego bricks into objects that are "flexible" enough to handle all future feature requests.

This is impossible unless you can see the future.

Rather, to handle the future, you just don't glue the bricks together. Build the walls, but allow for the ability to decompose the wall into a smaller wall to form other primitives. If you realize that part of your lego wall can be reused to build some other concept, you can always break it down to the fraction of a wall that fits what you need; you don't have to break it down all the way back to a single lego. This is maximum flexibility, maximum code reuse, with none of the over-complexity and lack of modularity of object-oriented programming.

Wow I wrote a lot. Hopefully that was educational. I'm going to save this for a repost as a future blog entry.

 

Flexibility isn't boolean. f(i) is flexible and abstract - I can change i, and the behavior is abstracted behind f(). I've never worked on a project whose team agreed to follow this "Open-Closed Principle" so I can only guess what flexibility and abstractions you find troubling. A code example would help illustrate the mental overhead and maintenance costs that are frequently overlooked or discounted.

 

Please study the open-closed principle again more carefully. It applies to software entities (primarily classes), not systems in general. It is primarily about encouraging subclassing in the OOP sense as a means of extending functionality, in preference to modifying functionality.

 

Still doesn't make much sense to me.

Modifying functionality is OK. Providing extension points comes with a cost. Be it "simple" subclassing or system-level design.

 

You're missing the point. This principle is all about lower-level OOP, class-based design. Don't attempt to overgeneralize it. In class-based designs, you don't need to do anything special to extend existing functionality; just create a subclass with a method overridden. The point of this principle is that OOP by its nature allows you to do that without impinging on the behavior of the system.
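A minimal class-based sketch of that reading of the principle (hypothetical names):

```typescript
class ReportFormatter {
  format(lines: string[]): string {
    return lines.join("\n");
  }
}

// A new requirement is handled by overriding in a subclass,
// without touching ReportFormatter itself.
class NumberedReportFormatter extends ReportFormatter {
  override format(lines: string[]): string {
    return lines.map((line, i) => `${i + 1}. ${line}`).join("\n");
  }
}

const formatter: ReportFormatter = new NumberedReportFormatter();
console.log(formatter.format(["alpha", "beta"])); // "1. alpha\n2. beta"
```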

 

I am one to believe that you should start with the simplest possible solution and work from there; however, as soon as you need to test anything you will need abstraction. Given that you should be writing your tests as you write your code, abstractions will become useful right away. So no, I don’t believe abstractions are harmful. I believe that smelly code is harmful.

 

The premise of this article boils down to sophistication. If you find that writing dumb code works for you, then go for it, but please don't try to tell me or others not to write elegant and flexible code. In doing so, you and others are projecting your own limitations upon those of us who can and do write good flexible code the first time. If you want to really grow, then challenge yourself to understand why you can't rise to the occasion. After all, you call yourself a coding unicorn; are you a coding unicorn?

 

It's a difficult balance. My rule of thumb is to stay simple until complexity is needed; however, I also leave some room for future enhancements if that doesn't require too much upfront effort.
One consideration is the people who will maintain the code in the future: over the years, with people coming and going, we've learned to write the code at a mid-level developer skill level, which means it should be very easy to hire someone new to start using and maintaining the code, instead of newcomers wanting to rewrite everything or create a competing complex framework because the old one was too complex. If we have to hire someone super smart with twenty years of experience to maintain some proprietary framework, then I have failed as an architect.