DEV Community

Sergiy Yevtushenko


Why is software development so conservative?

It might sound unexpected, but software development is extremely conservative and slow to accept new ideas. A few examples:

Agile Methods

What can be considered a prototype of the traditional agile team was introduced almost 50 years ago - the Chief Programmer Team. And in 1974 Ernest Edmonds published the paper "A process for the development of software for non-technical users as an adaptive system" (General Systems, vol. 19, pp. 215–217, 1974), which describes an iterative approach to software development - the core of agile methods.

Wider acceptance of these methods began another 30 years later, but even today you can see "scrum experience ... is a plus" in job descriptions, meaning that it's something relatively new for the hiring company.

OOP

The whole history of software development is a fight against complexity. And most complexity in software comes from internal dependencies. One of the first and most successful attempts to fight this complexity was the introduction of structured programming. Basically, it introduced encapsulation for code - every function/method is treated as a black box with a defined interface. From this point of view, OOP is a natural extension of structured programming - it adds encapsulation for data. So no one should be surprised that OOP was born in roughly the same time frame as structured programming - around the second half of the 60s.
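To make the step concrete: structured programming already gives you the black-box function, and the OO step is hiding the data behind the same kind of interface. A minimal sketch in C (a hypothetical counter module, not from any real codebase):

```c
/* counter.h - the public interface: callers see only an opaque handle */
#include <stdlib.h>

typedef struct Counter Counter;   /* data layout hidden from callers */

Counter *counter_new(void);
void     counter_increment(Counter *c);
int      counter_value(const Counter *c);
void     counter_free(Counter *c);

/* counter.c - the encapsulated implementation */
struct Counter { int value; };

Counter *counter_new(void)               { return calloc(1, sizeof(Counter)); }
void     counter_increment(Counter *c)   { c->value++; }
int      counter_value(const Counter *c) { return c->value; }
void     counter_free(Counter *c)        { free(c); }
```

Callers can only go through the `counter_*` functions, so the layout of `struct Counter` can change without touching any caller - the same guarantee a class with private fields gives.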

The wide switch to OOP started somewhere in the 90s, and there are still areas (bare-metal embedded software development) where many developers consider OOP bad and harmful.


The list above contains only the most prominent examples, which show how conservative software development actually is. The only thing I still don't understand is - why...

Top comments (12)

Ghost

About Agile, my impression is that it's not the final solution to everything, as it's sometimes presented; it works under a very specific set of conditions. First of all, the new version has to be easy to deploy. For most embedded work it's not applicable: you have to deliver the entire system, and you generally can't add code later, at least not easily, so any update is a huge annoyance.

I suspect that 20 years ago agile didn't catch on because delivering small updates on CDs, and before that on diskettes, is far from ideal. OOP has an overhead; today, with modern PLs, it's smaller and computers are more powerful, so the overhead is less relevant, but in the old days every extra byte and Hz counted.

This is where I get creative - this is a big stretch, but I think that the UNIX philosophy of "do one thing" is a kind of old OOP. In those days you didn't write a file_manager_doit_all program; you wrote ls, rm, mv, cp, etc. They naturally chopped the programs as small as possible and used pipes, conditionals, the entire Bash language as the glue. If you think about it, you had a bunch of "objects" - ls, cp, etc. - and a "connector" language, Bash, to use those objects; the arguments of the commands were the public methods.

Maybe that's why that UNIX rule was sacred, and today not so much - now we get the encapsulation inside each program. All of this is just a huge oversimplification on my part, based mostly on speculation and coffee, but it kinda makes sense to me :D

And I don't think OOP is a "modern" or "best" way to do it - it's a way. It adjusts very well to some kinds of problems and very poorly to others; it's the experience, knowledge and general awesomeness of the programmer that tells when and where. I think...

Sergiy Yevtushenko

OOP might or might not have an overhead. It depends on the implementation and on the particular way OOP is used.

Ghost

Oh, yes, I gave the impression that it has some inherent overhead, but I think that is mostly because of the tools we had; C++ was much clunkier decades ago.

But I wonder, though, if OOP actually has some inherent overhead. Is there some systems PL in pure OOP? OOP for embedded is almost unheard of. In something like an OS or an embedded device, being 10% or even 5% slower or bigger is a big deal. Maybe the OOP overhead exists and is just irrelevant for most applications.

Sergiy Yevtushenko

Unless something changed recently, Linux has hardware drivers designed in a purely OO way, although everything is implemented in plain C.

If we're talking about C++, there is only one thing which has inherent overhead - virtual methods. They add one more indirection to every method call. But for embedded systems there are not that many cases where virtual methods are necessary. Moreover, in the cases where they are necessary, you would need some similar mechanism with similar overhead anyway. In other words, there are no technical reasons not to use OOP and C++ for embedded software. On the other hand, C++ has a much more useful feature for such systems - templates. C++11 and up enable writing extremely powerful and flexible template libraries. Applications written with these libraries can be the same size or smaller, and work as fast or faster, than similar code written in plain C. If you're interested, take a look at, for example, modm.
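The "OO in plain C" pattern that Linux drivers use boils down to a struct of function pointers acting as a vtable. A simplified, hypothetical sketch (not actual kernel code) - the call through `d->ops->write` is the same single extra indirection a C++ virtual call pays:

```c
#include <stdio.h>

/* A table of function pointers plays the role of a C++ vtable. */
struct device_ops {
    int (*write)(void *dev, const char *buf, int len);
};

/* A "device" carries a pointer to its ops table, just like an
 * object carries a pointer to its class's vtable. */
struct device {
    const struct device_ops *ops;
    const char *name;
};

/* One concrete "driver": a console device that echoes writes. */
static int console_write(void *dev, const char *buf, int len) {
    struct device *d = dev;
    printf("%s: %.*s\n", d->name, len, buf);
    return len;
}

static const struct device_ops console_ops = { console_write };

/* Generic code dispatches through the pointer - the "virtual"
 * call, with exactly one extra indirection. */
static int dev_write(struct device *d, const char *buf, int len) {
    return d->ops->write(d, buf, len);
}
```

Adding another driver means writing another ops table; the generic `dev_write` never changes - which is exactly the polymorphism virtual methods give you, at the same cost.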

Ghost

Nice, very nice. No ESP listed, but I have some STM32s around to play with - thank you for the tip. :) So many fun things to do, so little time... and to think of all the fun we can have with just a few bucks.

Sergiy Yevtushenko

I've added a separate post describing why agile methods are the better approach in most cases.

Juan Carlos

Because it kinda sux.

Agile usually just gets time-consuming, with endless scrums: pre-scrums, post-scrums, scrums-of-scrums, retrospective scrums - the famous "this meeting should have been an email" meme.

OOP depends largely on the lang, but if it's inheritance over composition, it can get boilerplaty, and it usually means global mutable state, unless the lang has good support for immutability of variables and objects. All in all, composition is better and still OOP - just not the Java kind of OOP. And too much OOP can hurt performance for critical stuff; functional OOP, with composition and immutability, is way better.


Sergiy Yevtushenko

Every idea can be taken to the point of absurdity, and agile is no exception. The problem is that all the other ways to build software work even worse.

As for OOP and FP: OOP does not require language support - for example, one can easily write OO code in plain C. FP is another matter; basic FP support requires at least the ability to handle functions as data. Java has supported this since Java 8 and therefore can be used to write functional code. Moreover, an approach similar to what you call "Functional OOP" can be found across most posts in my blog. And all the code is pure plain Java.
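Both halves of that claim can be sketched even in plain C: function pointers are a crude form of "functions as data" (Java 8 lambdas are the ergonomic equivalent). A hypothetical example:

```c
#include <stddef.h>

/* A function pointer type: functions become ordinary values
 * that can be stored and passed around. */
typedef int (*int_fn)(int);

static int square(int x) { return x * x; }

/* A higher-order function: takes another function as an argument
 * and applies it to every element in place. */
static void map_in_place(int *xs, size_t n, int_fn fn) {
    for (size_t i = 0; i < n; i++)
        xs[i] = fn(xs[i]);
}
```

The difference is ergonomics: C can pass `square` around, but it cannot create an anonymous function at the call site the way a Java lambda can.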

Curtis Fenner

Software development has stubbornly clung to systems which we know are difficult to wield and difficult to implement (and thus slow, buggy, and left with few available contributors), and which made tradeoffs that are increasingly irrelevant because the software and hardware landscape has changed so much. They make error handling hard, optimize for hardware that is no longer slow, force programmers to do work a compiler could do, etc. In other words, they're expensive to use.

In particular, the longevity of SQL and C/C++, and even much of POSIX, and the refusal to fund efforts to make suitable replacements for them.

Similarly, we haven't made development environments significantly better than in, say, the 90s. Most programmers still work in languages where something as simple as mishandling null is a scary runtime surprise - let alone actually analyzing more interesting properties of programs (are the keys valid, is this guaranteed to be non-empty, is this number always positive, ...) or datasets. We can't tell if a change is safe, because there are no tools that can tell us what behaviors it could possibly affect, etc.

Also relevantly, we can't apply tools to audit our dependencies! Simple innovations, that have existed for decades, like capability-based-security or even simply monadic IO, would essentially completely mitigate attacks on dependencies like the NPM event-stream incident.

However, it's not economical to build better tools for languages like Java, C++, JavaScript, Python, ... because they are not designed to be analyzed. I think slowly the software development community is realizing (with tools like TypeScript, Rust's borrow checking, and the slowly creeping popularity of tools like F*) that the amount of time humans spend (mostly incorrectly) analyzing code is a giant unsafe waste.

TL;DR: if there's anything that software development is conservative about, it's switching to better tools. And the tools we currently have are pretty old and showing their age.

Sergiy Yevtushenko

As you correctly point out, there are approaches which can mitigate many existing issues. I'd add that in many cases these approaches enable compiler-supported analysis of such issues even in existing languages, without introducing any additional tools/IDEs/etc.

If you take a look at other articles in my blog, you might notice that for some time I've been advocating an approach which I call "Pragmatic Functional Java". Just by using ideas heavily borrowed from FP (especially monads), it is possible to prevent a huge number of well-known issues. They simply don't pass compilation.

Unfortunately, this approach requires a significant change in the programmers' mindset, and that does not happen easily.

Jesse Phillips

I think that it moves so fast that it can't change quickly. Consider what is happening in web dev - that's where we see the language/development design churn you mention. Do you switch to Rails, React, Angular, TypeScript, Dart, Silverlight?

Consider your point about agile: it calls for a focus on delivering changes quickly, yet many don't know how to do it. So many times you hear "agile-ish".

As all these things get explored in different settings, everyone is busy building with what was known at the time. And as we know well from other stories, rewriting is not the appropriate choice most of the time.

fredsteffen

I think it depends on where you work. In general, I think the tendency is to push the bleeding edge too aggressively, without enough thought about why. Microservices implemented in applications with 3 developers and 12 users - maybe 3,000 requests a day across all endpoints - and no business need to scale further. Functional programming applied to problems that are easily solved with OOP, etc.