Andrew Bastin
Architectural decisions that you regret...

Hi,

First post on dev.to...

Hate to make the first one a #discuss, but I couldn't resist asking about architectural decisions you made (perhaps long ago) on your projects that you now regret...

So, anything you'd like to share?

Top comments (14)

Hudson Burgess

Two come to mind:

  1. Not using enough 'dumb' components (specifically in Angular, but the idea is generalizable; see the sketch after this list) -- having the same dependencies all over the place is obnoxious, and it makes tests slower to both write and run. Heavy components probably violate the single responsibility principle too.

  2. Using technologies you don't need, or abusing otherwise good technologies -- currently for me, ngrx. It's a fantastic library, but wrapping every single service call / HTTP request in an effect and 3 actions is needlessly heavy-handed, especially considering the testing / maintenance burden.
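To make the first point concrete outside Angular, here is a minimal C# sketch (the class and member names are invented for illustration): a "dumb" component only receives data and raises events, so a test can construct it directly with no mocked dependencies.

using System;
using System.Collections.Generic;

// a "dumb" presentational component: no injected services, no I/O;
// data comes in through the constructor and an event goes out
public class OrderSummaryView
{
    public IReadOnlyList<string> Lines { get; }
    public event Action<string>? LineSelected;

    public OrderSummaryView(IReadOnlyList<string> lines) => Lines = lines;

    public void Select(string line) => LineSelected?.Invoke(line);
}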

Kasey Speakman

These are the two largest regrets I've had in making architecture over the years.

Regret: making "magic" architecture

For instance, opting into being part of the architecture by implementing an interface or abstract class.

// the infrastructure runs some reflection code on startup
// to find all IHandleCommands classes and make sure
// messages are delivered to Handle methods inside them
public class SomeHandler : IHandleCommands
{
    public void Handle(SomeCommand command)
    {
        // ... handle the command
    }
}

The reflection code is not shown so your eyes don't glaze over.

This is nice and clever, because you don't have to write wiring code. But it can be really hard for developers to accept "Don't worry about how it works. Just put this interface on it and it will work." Magic. It is great as long as you don't run into any special cases which break it.
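For reference, here is a minimal sketch of what that omitted startup wiring could look like -- an assumption about the typical shape of such code, not the original infrastructure:

using System;
using System.Reflection;

// marker interface the infrastructure scans for (assumed empty here)
public interface IHandleCommands { }

public static class Dispatcher
{
    // find every concrete IHandleCommands class and invoke a Handle
    // overload matching the message's runtime type
    public static void Dispatch(object message)
    {
        foreach (var type in Assembly.GetExecutingAssembly().GetTypes())
        {
            if (!type.IsClass || type.IsAbstract ||
                !typeof(IHandleCommands).IsAssignableFrom(type))
                continue;

            var method = type.GetMethod("Handle", new[] { message.GetType() });
            if (method == null) continue;

            var handler = Activator.CreateInstance(type);
            method.Invoke(handler, new[] { message });
        }
    }
}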

This is the same reason I avoid attributes (aka annotations) where possible. I'll use them for some compiler optimizations or purely for extra information like [Obsolete], but I avoid using them to control logic. Logic around these is not directly called from the code it adorns, so it's hard to track down when things go wrong. And it's not obvious how things work. Recently I looked around for a while in ASP.NET Core's source code for the exact code that is run by the [Authorize] attribute. I never found it. I found code that I suspect is called, but I can't prove it because I was not able to trace a direct call chain into that code.
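As a small illustration of that indirection (the controller and action names are invented; only the attributes are real ASP.NET Core API):

using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Mvc;

public class OrdersController : Controller
{
    // nothing in this body shows where or when the role check runs;
    // the enforcing logic lives in framework middleware, far from here
    [Authorize(Roles = "Admin")]
    public IActionResult Delete(int id)
    {
        return NoContent();
    }
}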

Regret: making opinionated abstractions required

Changes to architecture are very costly since arch is typically used by a lot of different feature code. Any required architectural abstractions should be as coarse-grained as possible.

I've made the mistake of thinking that I'm going to make it super easy to just plug in new feature code and my arch framework will handle all the infrastructure details. Usually this involves requiring feature code to take on my arch abstractions. That works fine until next month, when the customer requests a different kind of feature, like exporting to CSV, where the feature needs to handle some of the infrastructure itself, such as writing directly to the response stream. Otherwise it can run out of memory reading a large data set. So I have to back up and rethink my whole architectural abstraction, and probably change every place where it was already used. And that's just the beginning of the fights with the required abstraction. You'll have to keep going back and refactoring to add handling for all the various cases you run into.

Instead, it's best to keep the architecture as coarse-grained and simple as possible. In most web frameworks you are given the Request and Response objects (although unfortunately most of the time they are just DTOs with getters and setters), sometimes packaged together in a Context object. This is a good example of a coarse-grained interface. Let feature code handle what it wants at nearly this level. Then if there are common cases which use the same steps (e.g. a Load-Edit-Save workflow), make a helper abstraction to simplify that. Then feature code can choose to opt into the helper if it fits what it is doing, or handle everything itself. That way it should be really rare to need to change the architecture code in a way that breaks feature code. But you still have the opportunity to write very little code for really common features by using helpers.
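One possible shape for that opt-in arrangement, as a hedged C# sketch (the interface and helper names are invented, not from a real framework):

using System;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

// coarse-grained contract: every feature gets the raw context
public interface IFeature
{
    Task Run(HttpContext context);
}

// optional helper for the common Load-Edit-Save shape; a feature that
// needs to stream (e.g. a CSV export) simply doesn't use it
public static class LoadEditSave
{
    public static async Task Run<T>(
        HttpContext context,
        Func<Task<T>> load,
        Func<T, T> edit,
        Func<T, Task> save)
    {
        var entity = await load();
        var edited = edit(entity);
        await save(edited);
        context.Response.StatusCode = StatusCodes.Status204NoContent;
    }
}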

HTH

Jan van Brügge

I absolutely agree with "magic". That's the main reason why I'm using Cycle.js instead of Angular for my projects. Everything is explicit and traceable.

Eljay-Adobe

Mine is super-controversial.

I regret using multi-threaded programming in C++ (mid-1990s). C and C++ were not designed for multi-threaded programming, and to use them that way requires (in my opinion) super-human discipline. Even today.

Back in the 1990s, what alternatives existed that had a solid multi-threading programming paradigm? I'm not sure. Ada, I suppose. I'm not sure if OCaml was mature enough back then.

The platform was DEC Alpha 64-bit Unix. Ada, OCaml, or whatever may not have been readily available.

If I could use today's available languages, I'd choose D. 20/20 hindsight.

bernadusedwin

Maybe this is not architectural. More of a programming concept.

Too much refactoring.

Too many helper methods. Too many libraries. On my new project I tried fewer methods and copy-pasting everywhere.

And it works well. Obsessing over a small line count will kill your development productivity.

Ben Halpern

Perhaps not "architecture" per se, but one regret that comes to mind from a previous project is getting too domain-specific with some of the model names. It became really hard to communicate, or even justify, why something was called what it was. Domain specificity is nice on some level, but conventions are really practical and powerful.

Eljay-Adobe

Robert Martin's Clean Code has a really good section on naming advice.

Because the 2 hard problems in computer science are:

  • cache invalidation
  • naming things
  • off-by-one errors

Oh, and also...

  • exception handling
  • race conditions
  • asynchronous operations
  • multi-threading
  • floating point number algorithms to minimize units-in-last-place fidelity loss
  • Wirth's law
Harold Combs

"Bless me father, for I have sinnned..."

  • HATEOAS "because it's the right thing."
  • Using anything more than DNS for service discovery. A central "service registry" is a designed-in single point of failure.
  • Discarding a working ACID-compliant datastore (RDBMS) in favor of NoSQL because "_____ is web scale"
  • CORBA
  • Inventing a security protocol
Matteo Joliveau

Not using Kotlin in a Java component that handles A LOT of nullable elements in a tree data structure. It would have made my life much easier, not to mention my code more compact, by removing all the obnoxious if-not-null/else checks I had to throw in it.
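The pain he describes is not Kotlin-specific; as a rough analogue in C# (the types are invented for illustration), null-safe operators collapse exactly those if-not-null/else chains:

// a hypothetical tree node with nullable links
public class Node
{
    public Node? Left;
    public string? Label;
}

public static class TreeLabels
{
    // without null-safe operators: the obnoxious if-not-null chain
    public static string Verbose(Node? root)
    {
        if (root != null)
        {
            var left = root.Left;
            if (left != null && left.Left != null && left.Left.Label != null)
                return left.Left.Label;
        }
        return "(none)";
    }

    // with them (as with Kotlin's ?. and ?:): one expression
    public static string Compact(Node? root) =>
        root?.Left?.Left?.Label ?? "(none)";
}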

Andrew Bastin

Same... I rewrote an old Java project of mine in Kotlin and boyyyy, the null safety is epic!!!!

Thorsten Hirsch

Not writing a guide for my architecture.

So I developed a serialization/deserialization framework for our deployment tool, which worked like this:

1.) You write a class that registers for the serialization of an artefact type and implement the serialize() function.
2.) You write a class that registers for the deserialization of the same artefact type and implement the deserialize() function.

And as examples I implemented a "file" type and a "table_row" type. Then I let my coworkers implement all the special cases (a file that needs post-processing after deployment for example).

I thought my architecture was great, because it was simple to understand (write 2 classes for each type) and it was closed for changes (no need to change "file" or "table_row"), but open for additions (copy "file" to "special_file" and change "special_file").

Or so I thought. I really should have explained my intentions better, because what the others implemented was:

1.) You write a class that serializes an artefact type into a "file" or a "table_row".
2.) You change "file" or "table_row" so that it can handle the new type.

So now I have long deserialization functions with lots of if/then/else blocks. Well... in hindsight a better approach would have been:

1.) You write a class that registers for the serialization AND deserialization of an artefact type and implement the serialize() and deserialize() functions.
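In C#, that lesson-learned shape might look something like this (the interface and class names are invented; Thorsten's framework may differ):

using System.Text;

// one registration owns BOTH directions for a single artefact type,
// so a new type gets its own codec instead of new if/then/else
// branches inside "file" or "table_row"
public interface IArtefactCodec
{
    string ArtefactType { get; }
    byte[] Serialize(object artefact);
    object Deserialize(byte[] data);
}

public class FileCodec : IArtefactCodec
{
    public string ArtefactType => "file";
    public byte[] Serialize(object artefact) => Encoding.UTF8.GetBytes((string)artefact);
    public object Deserialize(byte[] data) => Encoding.UTF8.GetString(data);
}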
Bauke Regnerus

Huge monolithic multi-functional components full of two-way bindings, computed properties and similar "magic". Nearly impossible to extend and hard to test.

MalnormaloooOOOooolo

Synchronous network calls as part of a batch calculation job.

I work on an enterprise app which runs a number of calc jobs periodically, and for the first time we had to write a job which published its results to a web service and then processed the response. We implemented this as a synchronous HTTP call mediated through our ESB infrastructure (for monitoring, broadcasting to other endpoints, etc.). Not only was this mediation very hard to implement, but it meant that a calculation job is now dependent on 1) unreliable network IO, 2) the internal state of a third-party system, and 3) data not received until partway through the calculation. Calculations became non-repeatable by nature.

In hindsight (and I do hope to do this refactoring at some point), pushing it out to the ESB and then forgetting it would have been way better. If the feedback is really needed, it can be pushed back to the app with another one-way message which initiates a totally separate process.
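A hedged sketch of that fire-and-forget shape in C# (the IBus interface is invented to stand in for whatever the ESB client actually provides):

using System.Threading.Tasks;

// stand-in for the ESB client
public interface IBus
{
    Task Publish(string topic, byte[] payload);
}

public class CalcJob
{
    private readonly IBus bus;

    public CalcJob(IBus bus) => this.bus = bus;

    public async Task Run(byte[] results)
    {
        // publish and finish: the job no longer waits on the network or
        // on a third-party system's state, so results stay repeatable;
        // any feedback arrives later as a separate one-way message that
        // starts its own process
        await bus.Publish("calc.results", results);
    }
}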