A broad layman's definition of a psychosis is roughly: "The belief in something that has been proven to be wrong". The definition of a mass psychosis is when a lot of people share this false belief.
History is full of mass psychotic incidents, and today is no different in any way. My favourite mass psychotic incident from history is "the dancing plague" from 500 years ago in Germany, when literally thousands of people danced themselves to death from exhaustion without any apparent reason. Some 30 years ago, Russian authorities had to close down primary schools in several districts to contain their "laughing plague" that started spreading at one primary school amongst teenagers and kids, resulting in children laughing so hard they could barely breathe. One small group of children had started laughing in one classroom, and before the end of the week the thing had spread to dozens of schools, with hundreds of hysterical kids incapable of stopping their hysterical laughter. Psychologists were called in, the area was put into a "temporary state of health emergency", and children were isolated at home to try to prevent them from literally laughing themselves to death by choking ...
For 30+ years, software developers have been taught a mantra that over time has turned into more or less the declaration of faith you are required to profess in order to land a job as a software developer. The mantra is as follows.
Object Oriented Programming is a good thing
When in fact 30+ years of history has taught us the exact opposite. For instance ...
- Implementing encapsulation in OOP results in unnecessary complexity and virtually impossible-to-understand code
- Polymorphism results in chaos and extremely hard-to-track-down bugs
- Coupling data (fields) with logic (methods) is a recipe for disaster, and entangles your logic and data in ways that are arguably the very definition of madness
- Single responsibility results in 1,000+ classes for something that could have been done with 5 functions in FP.
I could go on and mention hundreds of such issues with OOP. However, the proof is in design patterns, clean architecture, and SOLID. If OOP was a solution to anything really, we wouldn't need design patterns, clean architecture, or SOLID design principles. OOP itself would be enough for us. The fact that OOP needs crutches to stand upright is by itself enough proof for us.
I once heard a LISP developer prove how 19 of the 23 original design patterns from the GoF book make absolutely no sense whatsoever in LISP. In fact, you can quantify a programming language's amount of "psychosis" according to the number of design patterns required to correctly use the language. If you do, you will realise that 99% of all design patterns are simply "hacks" around OOP's inadequacies.
Of course, for a developer who just started out coding, having read all the marketing gibberish for OOP, separating truth from fiction becomes incredibly hard. It's therefore the responsibility of the senior developer to stop this psychosis, and explain the advantages of more functional programming languages to junior developers, such that we can hopefully collectively discard this paradigm that originally came out of Simula 67 in Oslo more than 50 years ago - OOP that is, of course ...
AA has a 12-step program for healing yourself from addiction. Paradoxically, the same steps can be applied to almost anything in this world, and the first step is always as follows ...
Step 1, realise you have a problem
The problem of course is OOP, and the realisation point is the point in time when you utter this out loud, not afraid of the consequences - admitting that OO is in fact a psychosis, and not a "brilliant software development paradigm" in any way whatsoever. For crying out loud, every single computer process consists of more or less the exact same parts: input + process results in output. Exchange "process" in the former sentence with a verb, and you've effectively described every single (successful) computer process that was ever created.
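To make that concrete, here is a minimal C# sketch of the idea (the names are purely illustrative): input goes in, a verb is applied, output comes out.

using System;

class Program
{
    // Input + verb ("Transform") = output. That is the whole "process".
    static string Transform(string input) => input.ToUpperInvariant();

    static void Main()
    {
        Console.WriteLine(Transform("hello world")); // prints "HELLO WORLD"
    }
}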
When it comes to verbs, functional programming is simply superior to OOP with its "subjects", in every regard. So say it after me ...
OOP is a psychosis! It is not a solution, it is the problem!
For the record, if you want to work in a sane programming language, there exist dozens of nice languages out there, such as for instance ...
- LISP
- GoLang (yup, no OO here)
- F#
- Or my personal favourite (shameless plug) of course Hyperlambda
When it comes to OOP, ask yourself the following: "What would Ronald Reagan and Nancy Reagan do?" - The answer of course is simple ...
Just say no!
Top comments (157)
Very nice article. I think you argue in a really good way, but I disagree with you on almost all points. You said that OOP is not the unique solution, and this is totally right. Depending on the problem, you should apply the right solution. I mean, OOP is a way of structuring code and can be a solution for some problems. However, people don't always apply OOP in the proper way, and don't always apply other solutions in the right way either. That's why OOP seems to be a bad solution, but the same could happen with other approaches.
A broomstick is perfect because it doesn't need an instruction manual. Everything that's brilliant is intuitively understood without requiring further explanation. When I started coding (40 years ago), my very first creation was as follows.
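10 PRINT "THOMAS IS COOL"
20 GOTO 10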
Reproducing the above easily understood program in (correct) OO would probably require an OutputFactory class, an OutputFactoryMarshaler class, a Main class, another OutputFactoryMarshalerFactory, with a couple of interface implementations to make sure it's adequately abstracted, another DoWork class, definitely inheriting from (at least) 2 or 3 distinct interfaces to play by the rules of "SOLID", separating the implementation from the interface, allowing maintainers to implement alternatives through their IoC container, to prepare for scenarios that would highly likely never occur - and at the end of the day, I would have increased the cognitive energy required to maintain it by several orders of magnitude, effectively creating "unmaintainable code", impossible to understand, debug, or extend in any way whatsoever - paradoxically, because I wanted something that was extendible, easily maintained, and easily understood, with proper encapsulation.

I'm not sure who said this, I think it was attributed to Leonardo da Vinci though, and it goes as follows ...
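"Simplicity is the ultimate sophistication."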
You can use OO to create great code, but the paradigm implicitly makes it much, much harder - because the paradigm is fundamentally wrong. With OO, assuming you obey SOLID, with the aim of creating "great code", it is fundamentally impossible to create even the simplest "Hello World" without ending up with a "class hierarchy from the depths of Mordor" ...
Don't believe me? Port my above BASIC program (2 lines of code) to any OO language of your choice, and in the process make sure you obey all the OO design principles, such as SOLID, clean architecture, etc, applying the adequate design patterns where they make sense ... :/
Computing is a process of transformation. A process of transformation takes input, applies a verb to your input, and produces a result. The nature of our brains, and the natural laws of the universe you might argue, is much better geared towards using "verbs" as the mechanism to transform such data. Verbs are fundamentally better described with "functions", and not "subjects" (classes and types) ...
That said, I do a lot of work in OOP, for no other reason than the fact that most others produce OOP code - I just try to avoid it every time I can ...
Notice, I liked your comment ;)
Reality:
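class Program
{
    static void Main()
    {
        while (true) System.Console.WriteLine("Thomas is cool");
    }
}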
Anti OOPs Ranting:
Even if that was true, the above is 7 lines of code. That is 3.5 times as many LOC as my 2 liner. Science shows us that the amount of resources required to maintain code is proportional to the LOC count. Your example is hence 3.5 times more demanding in both initial resources to create it and resources required to maintain it. One of OOP's sales pitches was "that it makes it easier to maintain your code". You just scientifically proved it wrong ...
The ranting version you did was, as I said, full of classes none of which were required:
@polterguy you're repeating a very common fallacy - less code is better. In fact, the amount of code is much less important (and in most cases just irrelevant). Also, this example does not show nor prove anything because it has zero useful functionality.
LOC is always proportional to the resource requirements for maintaining the code. Resource requirements are "the price". The goal is to reduce the price, without reducing quality or deliverability ...
The very first statement is the root of the issue, because it is just plain wrong. A "mechanical" metric like LOC does not consider the business (task) context. A simple counter-example to this statement: there is "write-only" code (for example, regular expressions), which is very concise, but very hard to support and maintain.
In fact, reduction of the LOC makes sense only as long as business context is preserved. Once reduction is done by dropping part of context (for example, by relying on "implicit" knowledge or "defaults"), reduction of LOC causes more harm than good.
Well, my statement is still accurate. Don't like it, blame the scientists. It's been proven over and over again, as far back as the 1960s. I even think Brooks wrote about it in his "Mythical Man-Month". I first saw it in the book called "The Art of CISC x86 Assembly Programming". The author used it as an argument that assembly was only 25% "slower" in time-to-market compared to C.
Whether it causes harm or does good is of course another subject, but the amount of resources required to maintain a snippet of code is directly proportional to the LOC count, regardless of language ...
Since the 60s, many things have changed, including the tools we're using to read, write and manage code. That assessment is not accurate anymore. You might be interested to take a look: dev.to/siy/we-should-write-java-co...
It's very Java-oriented, but you might find similarities in other languages as well.
This math comes from information theory. The more complex the encoder and decoder, the smaller the code. However, it's possible to transmit very simple code with lots of repetition, which can be produced/understood by an equally simple encoder/decoder. Still, we consider lower-entropy code to be better, because in general we are reducing local entropy - that's the programmer's job, and I believe life's in general.
You expressed a general view, but, as it often happens, the devil is buried in the details.
Information which needs to be stored in the code consists of three main components: the business logic (the requirements in hand), the language syntax, and the implicit information (context, defaults, conventions).
Let's assume that the incoming business logic is fixed (it is defined by the task in hand). So, if the total amount of information is fixed and preserved during encoding/decoding, then we can reduce the amount of code only by shifting the balance between syntax and implicit information. The implicit information must be present in both encoder and decoder to enable them to perform their functions and to avoid information loss.
Since it is the developer who actually handles the "encoder/decoder" task, reducing syntax by increasing the amount of implicit information results in growing mental overhead. This, in turn, means that a smaller amount of code is enough to hit the "complexity barrier" of the project. You can see this effect in real-life observations; for example, it is a well-known fact that strongly typed languages (i.e. ones which are inherently more verbose) are far better suited for large, complex, long-living projects.
As I've assumed above, information is preserved during encoding. But for many languages this is not the case. This results in a different "impedance" between writing and reading code. Perhaps the best illustration of this loss is regular expressions. They are rather easy to write, but very hard to read, because information which exists during encoding is lost. Worse, the most valuable part of the information - the business logic/requirements - gets lost.
With all of the above, it is easy to see that "less code is better" idea is too simplistic and does not consider real-life implications like loss of the information during encoding or need to keep in mind huge amounts of defaults/implicit information.
If "more code" was better, nobody would purchase off the shelf products, such as iOS, ClickUp, use GitHub for that matter, etc - Facts are; Less code is always a blessing ...
I see no reason why my point of view should be skewed and pushed to the absurd, like you're trying to do.
I reiterate my real point of view: less code is better only if there is no loss of context (in the terms of my previous answer - business logic/requirements remain preserved during encoding). The "fact" you're pushing does not work in real life. Otherwise, APL would be one of the most widely used languages, but it collects dust somewhere in the IT history closet.
Code is the very definition of technical debt. Pointing that out isn't pushing anything to the extreme in any way. If one million LOC requires 3 people to maintain it, then two million LOC requires (at least) 6 people. The resource requirements probably also grow exponentially and not linearly. The less LOC, the less technical debt. Whether the company is able to operate with zero LOC or not is of course questionable - however if it can, without compromising business functions, zero LOC is the goal ...
OK, I reiterate: your argumentation is applicable only if we're talking about same language and same code base. It does not provide any basis for comparison of different languages or different code bases.
"complexity barrier" - is not property of a project. It's property of a developer.
If we can't handle decoding that does not mean the code is bad.
"Complexity barrier" is the property of the combination of the developer AND the language.
If we can't handle decoding, that does mean the code is unmaintainable. Of course, this does not mean that the code is bad. It's neither good nor bad. It's useless.
Actually, the language is literally irrelevant, something demonstrated by several peer-reviewed scientific reports on the subject, many times in fact.
If the code cannot be easily understood in 20 minutes by an experienced software developer, it's garbage, and the only hope that exists is to initiate a "SHIFT+DELETE" refactoring project ... :/
I've maintained dozens, if not hundreds of "complex projects" in my 20+ years of experience. They've all got a lot of common traits: they're unmaintainable, there are 10x as many developers working on the project(s) as you'd need if the project was easily understood, and they've had to obtain 100x as much hardware to run the thing on, because it leaks like the Titanic, and the software runs like freakin' syrup and consumes 10x as much resources, time and bandwidth as would be necessary if it was nicely architected. The most fascinating part of these projects is that the CEO, having paid millions of euros to assemble this garbage, literally believes that "the thing is worth a lot of money". I wouldn't accept any of these codebases today if I was paid money to take them for free.
Typically, they're millions of LOC, outdated, impossible to update or fix bugs in, and performance is so degraded that the company as a consequence is bleeding money.
A FinTech company I was working for had a payment API that would literally reject 25% of all attempts to pay, because their API backend was so slow that the payment provider gave up sending us notifications, resulting in timeouts from the payment provider, and in us literally losing 25% of all payments. The exception log in this thing was accumulating 2,000 unhandled exceptions on a daily basis. We had roughly 700 daily users - 3 unhandled exceptions for each user. Today the company no longer exists.
Facts are, literally every single project I've worked on during my professional life resembles this junkyard of software ... :/
Or the software developer is not as experienced as they think they are.
P.S. Your story proves literally nothing. I have seen different projects in my career, and not all of them were like the ones you describe.
This is how a modern application works, in .NET for example. Developers just wrap everything in try {} catch (Exception exception) { _logger.LogException(exception); } and run the code until someone complains. Then they try to investigate the logs.
This is how some modern applications are working. Not all of them.
I have worked as a professional software developer for 22 years - in the US, in Norway, in Cyprus, and remotely for companies all over the world. In addition, I've worked as a consultant due to my experience as an architect and advisor. I have never seen anything not resembling garbage.
I have worked on software that was installed in some of the largest hospitals in the world. I have worked on software licensed by all major banks in an (unnamed) EU country. I have worked on software used by hundreds of thousands of traders on a daily basis. I have worked on software used by some of the largest streaming service providers in the world. I can guarantee you with 100% certainty that it's all garbage, and for the same reasons too. Overcomplicated, over-engineered astronaut architecture, created by people capable of describing DDD and SOLID until normal people "crack" and simply leave the room from cognitive overload, resulting in a complexity resembling some creature from John's Revelations, with configuration files (Pulsar?) containing tens of thousands of lines of YAML, just to create a "bare bones" (basic) installation.
I swear to (unnamed deity) if somebody suggests DDD, SOLID, or Micro Service architecture for me once more, based upon message brokers, event sourcing, sagas and CQRS, I'll end up having to go to prison for manslaughter ... :/
My favourite system was a partner administration system using GUIDs as "authentication tokens", automatically injecting these as GET query parameters upon invocations to the backend. The GUIDs were the primary keys for the user records in the database, BTW. That thing was built on a frontend framework that was literally abandoned by its (only) developer 15 years earlier, scattered with jQuery all over the place, 15 years after jQuery was arguably obsolete may I add. It was creating 15 different "queues" in Solace, required 4 weeks of configuration to simply get it up and running, and consisted of a monster codebase, with 25+ "micro services" dependencies, just to get a single 200 OK HTTP response from its backend. I practically begged my manager to do the big rewrite. His response was "it works". The system has since been replaced and tossed in the garbage as far as I know ...
The above system was using Durandal as an "MVC framework" to "increase code quality". One of my views had 6,000 LOC - one single JavaScript file. I was going mental from having "project team lead" responsibilities over the thing, and suffered a 2-year-long non-stop headache because of all the attempts to "increase code quality" ... :/
If you see things "differently", then either I'm the one with "a problem", or maybe something else is wrong here ...
Just sayin' ... :/
My condolences. Perhaps I'm more lucky, or have just been working in the industry a little bit longer (about 35 years), but I have seen projects which didn't look like the ones you're describing. By the way, I share your skepticism regarding microservices (you can find many related articles in my blog).
Could you, please, show a project which you like or you think it's nearly fine?
The vast majority of the projects I was working on are enterprise ones. Obviously, they are not available publicly. Nevertheless, my current project is open source, and you can take a look here. It's definitely far from perfect, but it's not even close to the tragedy mentioned in the comment I replied to.
Of course, there are also my personal projects, but they are too small to serve as an example.
You are obviously highly skilled and know what you're doing, but you're also (kind of) proving the point with things such as this - where we're 9 folders in, with a class name consisting of 5 words and 30+ characters. But as I started out with, you're highly skilled, and I don't mean to put you down as a person - you're still arguably proving the initial statement of my article ... :/
The number of words in the class name does not matter; it's a "mechanical" metric, just like LOC. The purpose of the name is to provide as much of the business context as necessary, and this particular name does exactly that. You might notice that it does not contain any design pattern names or other useless stuff. The importance of preserving business context I've already stressed.
Good point. However, if it was written in a functional context its name would be a single verb, such as "Create", "Run", "Transfer", etc ...
The whole point of my OP was how FP results in more readable code. More readable code translates to more maintainable code. More maintainable code, again, results in less cost and less technical debt ...
If there's only one person maintaining the code, of course the above is irrelevant. The problem doesn't become a problem before somebody else needs to understand the code ...
Yet again, I want to emphasise I don't intend to pick on you in particular - You're obviously highly skilled, and a pride to your employer ...
If you take a look at my blog, you'll discover that I'm a big proponent of FP. I also believe that there is no point in confronting OOP with FP. In fact, they complement each other perfectly fine, and quite often try to achieve the same goal, just using slightly different (complementary) views on the same things.
The class which you've pointed to actually demonstrates this approach in action: the class serves as a holder of the "context" (the hashing algorithm), and its methods are basically nothing else than partially applied functions in FP. This class utilizes OOP to achieve additional goals:
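To make that concrete, a minimal C# sketch of the idea might look as follows (the names here are hypothetical, not the actual class being discussed):

using System.Security.Cryptography;

public sealed class Hasher
{
    // The captured "context": which hashing algorithm to use.
    private readonly HashAlgorithm _algorithm;

    public Hasher(HashAlgorithm algorithm) => _algorithm = algorithm;

    // Behaves like a function hash(algorithm, data) with the first
    // argument already applied - a partially applied function.
    public byte[] Hash(byte[] data) => _algorithm.ComputeHash(data);
}

Usage would be something like new Hasher(SHA256.Create()).Hash(bytes).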
Would like to see one.
Just in case anyone is looking for something relevant: the study which shows quite low relevance of LOC as a metric across different languages. I was really impressed to find that LOC is most relevant for COBOL :) (but the relevance is still too low to be useful).
"Quite low" is not the same as "irrelevant" - However, the interesting question isn't productivity, the interesting question is "how much technical debt will I be taking on". As in the cost to maintain the thing ...
Measuring a developer's productivity according to LOC is (of course) madness! Measuring a software project's complexity and amount of technical debt according to the same metric is probably quite a good idea ...
Hence, paying developers according to LOC (which was Bill Gates' joke) becomes absurd, because you're paying them for (technical) debt ...
Which was the famous IBM quote, where Bill used the jumbo jet analogy: paying for the weight of the plane as a metric ...
Should we consider all factors? For example, the amount of caffeine in the developers' coffee? Or display diagonal and resolution? All these also impact technical debt and the cost to maintain.
Recently I did a huge refactoring, which increased the amount of code (for the refactored part) by about ~25%. At the same time, the refactored code is now readable by every team member, not just by the author of the code. And no, this is not the first time I've observed such an effect. Yet another illustration that technical debt and LOC are not related to each other.
Segregating related parts into separate components is a good idea, for different reasons, since it allows developers to focus on one problem at a time. However, even though you increased the LOC count by 25%, you probably separated the thing into multiple (smaller) components and modules. Whether you increased or decreased the LOC is hence actually "debatable", regardless of the hard numbers you provide ... ;)
Over-exaggeration to prove one's point. LOC is important, but so is common sense.
A developer who finds 2 extra lines of code difficult to understand, and is ready to sacrifice modularity and every other aspect of programming, is definitely not making a strong point.
You're right, but so is (still) the science about LOC. According to the science, LOC is the factor determining the resources required to maintain code. Sometimes it helps to add a few additional lines of code to increase readability or modularity, but the science is still sound, and proclaims a one-to-one proportionality between LOC and the resources required for maintenance ...
Notice, I don't disagree with you ...
The LOC metric as a factor determining resources makes sense only under identical circumstances. Once you change the language, code style or even formatting, comparison of LOC becomes meaningless.
A simple example: make a license header mandatory in each source file, and LOC will immediately grow, but maintenance efforts will barely change, because processing is automated and folding in the IDE preserves the user experience.
Actually, this was Randy's exact point - that it doesn't matter - and he used it as an argument to show that "assembly programming is only 25% more resource intensive than C". So no, it doesn't matter actually ...
I didn't say it doesn't matter. I did say that comparison makes sense only in same conditions.
And are you cool Thomas?
I think so
Hahaha :D
I was 8 years old ...
Did you get the code from a magazine? I was told that's how most kids got to write BASIC.
Oric 1 User was the magazine's name. This was in 1982 though ...
I really enjoyed reading this article but I can't help feeling that someone has beaten you with a broomstick far too many times... Then blamed it on OOP.
If you see code that looks like that, it doesn't matter what paradigm you claim to be following, you are doing it wrong!
Thank you, and yes, you’re probably right. The problem is COMPLEXITY. But I’d still argue that the solution is NOT OOP 😉
I'm joining this thread because the rest got flattened and is difficult to read.
I think that's a non-sequitur which doesn't do anything for your argument.
I'd say more like 4. You can see it's more verbose, you don't need to start counting braces to add to the argument!
Let's try in one of the original object-oriented languages. :)
Hehe, Simula ...?
Smalltalk :D
Smalltalk is one of those languages I generally respect, although I've never really dived into it ...
Alan was a smart guy ;)
"The above system was using Durandal as an "MVC framework" to "increase code quality"."
Actually, enforcing MVC in a framework is bad architecture. It applies to any MVC framework. Frameworks should focus on core business tasks like user handling, authentication, and authorization. Assigning the responsibility of handling views to a framework may cause modularization and componentization efforts to fail, potentially resulting in spaghetti code, regardless of whether OOP or FP is used.
A more effective approach involves a modular framework implementing the hexagonal pattern, complemented by a separate module responsible for input and output. In the context of web applications, this can appear as a CMS module, but this principle can apply broadly across various application types.
Routing handled by a framework can be a warning sign, and plain wrong in applications where different UI themes or outputs might necessitate rendering different components for the same URL or input.
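A minimal C# sketch of that kind of separation might look as follows (all names hypothetical):

public interface IOutputPort
{
    // Port owned by the core; adapter modules implement it.
    void Render(string content);
}

public sealed class ConsoleAdapter : IOutputPort
{
    // The separate I/O module: the only place that touches the console.
    public void Render(string content) => System.Console.WriteLine(content);
}

public sealed class GreetingService
{
    // Core business logic depends only on the port, never on concrete I/O.
    private readonly IOutputPort _output;
    public GreetingService(IOutputPort output) => _output = output;
    public void Greet(string name) => _output.Render("Hello, " + name + "!");
}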
When considering maintainability, the coding approach (OOP or FP) or the number of lines of code (LOC) aren't the main focus. The key lies in the application of best practices and adherence to proper principles.
while (true) Console.WriteLine("Thomas is wrong");
Any paradigm can be pushed to extreme (absurd). OOP is not an exception. FP either.
P.S. You're mixing into OOP all kinds of things which are basically not OOP: patterns, SOLID, clean architecture and other stuff.
No, I am using design patterns and SOLID as proof that OOP is sub-optimal.
Try to apply the same approach to FP, and you'll see the same issues. Because there is no optimal solution.
Probably why Tetris and games like Angry Birds made millions.
I guess you forgot one of the most important principles: KISS.
Your article about OO being overcomplicated can be ported to anything, really.
If we followed every FP principle, stuff would also be very hard to maintain, e.g. if you need to write something to STDOUT, you would have to create a monad or something like it to avoid the logging side effect, since it's I/O.
Just keep the code simple and get the best parts of OOP and FP.
You're right, and obviously that was my point. History has taught me that the statistical probability that an OO project turns into an "astronaut architecture project from the depths of Mordor" is orders of magnitude higher than the probability of the equivalent FP project ending in the same result ...
Psst, here's a functional example of a code snippet taken from our website (Hyperlambda code). Try implementing the same in C#, Java or (sigh!) C++ ... :/
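fork
   http.get:"https://servergardens.com"
fork
   http.get:"https://gaiasoul.com"
join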
I assume everyone reading the above code can instantly understand what it does ...
The above is 5 lines of code. Its C# or Java equivalent would probably require hundreds of lines of code, and at least half a dozen classes, in addition to (consuming) some 25 to 50 different existing classes ... :/
Actually, your code does nothing useful besides generating useless traffic (the results of the operation are ignored) :)
In the Java framework I'm working on, your code will look like this:
Wish I have more time to work on this framework...
Beautiful, but of course creating libraries that simplify things is possible in all languages. When I was doing my LOC count, I considered the bare bones implementation using HttpClient from .NET ...

Najs code though :)
Edit:

I didn't see this one before now, but no, my example does not ignore the result of the operation. That's the purpose of the [join] keyword in Hyperlambda. It waits for the [fork] invocations to finish, and returns the result of the invocations to the caller. Accessing the content of the first one, for instance, would be as easy as follows.
Everyone can understand what it does. Try implementing it in Hyperlambda, which will require at least 100s of lines of code to implement this awesome functionality. And hence by your logic it can be deduced that Hyperlambda is a psychosis.
No, I showed three of the fundamental building blocks in Hyperlambda (3 slots), which are similar to "functions" in structure, and how these 3 fundamentals solve an actual problem, being retrieving 2 HTML documents in parallel, and waiting for both documents to download.
I have demonstrated a better example of how to run any app with just one line of code. It does all you say and do in just one line.

If reducing the number of lines is the yardstick.
Reducing the LOC is always a "yardstick", yes.
One of the yardsticks yes, but definitely not the most important of all. A sane programmer would and should always choose modularity over LOC for a non-trivial program. A few extra lines which give better extensibility and readability are instead good, and promoted as good programming etiquette.
strcpy is a bunch of lines if you don't have it in your standard library. Everything in this example is hiding complexity somewhere, just like all code ever written.

I like "less code" as a principle, but I'll add more SLOC if I think it makes what I'm trying to do clearer:
Not in C#
Simply wiring up your HttpClient, ensuring you're disposing your ContentResponse object, and correctly parametrising it, would require 10x as many lines of code as the Hyperlambda 5-liner above. Then do it twice (two invocations), add the required boilerplate code for threading, and ensure you wait for the code to return before you move beyond the join invocation, and you're probably looking at a lot more code, at least 10x as much as the Hyperlambda example, probably even more than that. If you disagree, feel free to prove me wrong. You've got the Hyperlambda code up there - by all means, port it to C# and let's see what produces the most code and complexity ...

I'm not sure what the point of this article is besides being inflammatory. I am both a heavy user of functional programming and more traditional OOP languages.
My definition of OOP
There are many paradigms that get grouped under the name "OOP".
I think of OOP as:
None of these are inherently bad, and they surface in different forms in functional programming languages too.
Task join/fork example in C++
You give the example of the fork/join Hyperlambda snippet above.
In a C++ framework I wrote (and I tend to be on the heavy-handed verbose side of the spectrum), I would write:
If I wanted to push it, I could easily boil it down to
which compiles to the exact same memory layout and binary code.
Because my HTTP factory implements an abstract virtual class, I can easily swap out different factories, say if I am running unit tests or decide to replace HTTP calls with some local IPC. I also get a concrete task object that I can pass around and introspect / monitor / listen to / cancel.
You might say that http.get : fun string -> bytes is more elegant than the C++ version, but it is missing the point.
The value is that I get a clearly documented, easily extendable data structure that encapsulates what is needed to create thunks that ultimately return bytes. I can look at the class of my HTTPFactory instance and easily see that it contains, say, a TLS certificate. I can also add a POST method, and make it obvious that both GET and POST belong together and reuse some of the same code and data. These things are more opaque in a functional context.
Which is the best? None, they both work just fine.
OOP the strawman
Since you don't provide a concrete OOP API, other than bringing up the strawman of needing an OutputFactoryMarshalerFactory to print out a single line of output, it is hard not to assume that you are arguing in bad faith. I can write
just as well. Bringing up LISP (all caps?), while Common Lisp has probably the most expansive approach to OOP of all the languages I know, strikes me as a bit odd as well.
Using C++ as an example - which is probably the language I most often use to write what could be called "traditional OO" - a class without virtual methods is arguably just a shortcut for a series of functions that take the same data structure as their first argument. Once you introduce virtual dispatch, a class is just a way to combine a virtual dispatch mechanism with a data structure, give that group of dispatched functions a name, and make reuse easier. That's very similar to what many languages call "traits" - arguably less flexible, but flexibility can often be a bad thing (say, if working in a big team with different skill levels).
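To illustrate with a minimal sketch (C# here for brevity; names purely illustrative):

using System;

public struct Point
{
    public double X, Y;

    // Method form: "this" is the implicit first argument.
    public double Length() => Math.Sqrt(X * X + Y * Y);
}

public static class PointOps
{
    // Free-function form: the same data structure passed explicitly.
    public static double Length(Point p) => Math.Sqrt(p.X * p.X + p.Y * p.Y);
}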
Data+Code centric patterns are useful
There's value in taking a data-centric approach, especially in more resource constrained environments. With function-centric code, I wouldn't easily know how much memory is allocated, when, where, by whom, and who owns it. In my C++, I know that I have 1 instance of Task_Join, 2 instances of Task_HTTPGet, and I can go look up the class to see that Task_Join keeps two pointers, and Task_HTTPGet a byte buffer, a state variable and a file descriptor. I know when they are created, I know when they are freed.
That data-centric OOP style might be less flexible than function-centric code when composing functions, but it's much simpler in many other regards. As with all code, it's about tradeoffs.
Interestingly, the following code snippet you provided ...
... is arguably more in the style of FP than in the style of OO. However, you didn't provide the creation of your m_Factories instance, you didn't provide the wiring of your IoC container (assuming it was somehow dependency injected), etc. But nice code - you're obviously skilled :)

Thx for the snippet. It took me a couple of days to realise I don't need to prove anything at all, since your C++ example basically is all the proof I needed ...
You can write heinously bad code in any language and with any design pattern. There is no approach or pattern that saves you from bad code.
Yes, but the general rule of thumb that seems to be valid "all over the place" is KISS, as in Keep It Stupid Simple - and if you follow KISS, you're much less likely to produce bad code. OO is fundamentally incompatible with KISS. You're writing code for human beings; the fact that it compiles is just "a lucky side effect", one could argue. The simpler your code is, the more easily understood it is. OO results in complex code, FP results in simple code ...
Every word in this reply is wrong and I am embarrassed I didn’t catch it sooner.
But I’ll reiterate, FP isn’t going to save you from bad code by being inherently simple. For every statement that says it does, you can probably find 10 repos that use FP with hilariously bad results.
It’s never the language or the style, it’s always the person using it.
Then why did even React use inheritance?
The entire .NET framework is OOP, as is Java. Neither of those failed.
This is just a JavaScript-centric OOP flame article. Problem is, even JavaScript supports OOP now.
NPM has messy JavaScript code everywhere. It's a virtual garbage dump, brought about by what JavaScript allows and what people think are best practices.
It didn't.

React.createClass(): "Create a component given a specification."

Component Specifications: "When creating a component class by invoking React.createClass(), you should provide a specification object that contains a render method and can optionally contain other lifecycle methods described here."

Clearly the React team wasn't immune to fashion influences, as that factory function would have been more appropriately called React.createComponent(), especially as, for all intents and purposes, the component instance's this.props and this.state were owned and managed by React - not the object instance itself, as one would expect with standard class-based object orientation.

This style was advocated by Douglas Crockford as class-free object orientation at least as far back as 2008 (JavaScript: The Good Parts).
The component specification simply contained what was unique about that particular component.
It was the later alignment with the ES2015 class template for creating objects that brought in extends React.Component.

For years this was common, dude. Or are you too new?
class Car extends React.Component {
  render() {
    return <div>Hi, I am a Car!</div>;
  }
}
JavaScript has always had OOP, going as far back as the 1990s. It just didn't have class-based OOP, but rather prototype-based OOP.
Which I even prefer over class-based OOP, just to feel great by attaching my standalone function to a class's proto to stay DRY, without messing with inheritance.
And npm itself has no influence on JS as a language. The same issues can happen in other languages' registries too.
Very good point about dynamic features yes ... :)
Its prototype OOP was sucky, and still is.
All OOP is bad, but the ability to dynamically attach functions to an object has its use cases.
True. But all OOP is not bad - that's just your opinion.
There is no silver bullet ... OOP and FP are not exclusive. I love FP'ing in my application code and agree with you on many points, but when I design a library, or use a 3rd party one, I prefer to have well OOP-organized packages/modules, classes and methods rather than a bunch of functions.
Very interesting article, but it is unforgivably wrong. The problem is not in OOP itself, but in the fact that all OO languages (and developers using them) come from imperative programming in the form of C. OOP is just as declarative as FP is. A lot of good concepts like immutability are not specific to FP only, and they do not appear in its definition. The correct statement is "OOP is done wrong", not "OOP is wrong".
OK, I agree with that - It's still a problem ...
However, even though I blame OOP languages themselves in the header, I'm not really doing that in my arguments. Interestingly, most languages, including C#, are easily used in an "FP style" ...
It might be interesting for you to look at EOLANG, an experimental OO language which is pure OO: everything is lazy, no flow-control statements (if and for), no classes, no methods, only objects and their composition. It feels very similar to FP, because every object has a specific method "@", which is considered its primary method.
Interesting ...
It would be great to have some real examples, or some side-by-side comparisons, instead of hyperbole like "virtually impossible to understand code", and "arguably the very definition of madness" which are not quantifiable, and "results in 1,000+ classes for something that could have been done with 5 functions in FP", which is not even true.
This article seems to be written for people who have already decided that they prefer FP. By writing stuff like the above, you don't win over any OO programmers, you just allow them to dismiss your whole article because you lead with things that are demonstrably false.
Thx Aidan, and yes, I have thought that exact same thought myself - as in, provide for instance the "textbook example" in C# doing the same thing as my 5 lines of Hyperlambda code (in another comment), and then compare these side by side, to illustrate the difference.
Thx for the tip though, I'll definitely do that at some point in time when I've got the time ...
Wow! I find this whole mass psychosis thing super interesting, and really enjoy how you tied it to the fallacy that OOP is infallible.
But yeah, I just had to look more into this mass psychosis thing...
From a wikipedia entry on "List of mass hysteria cases":
Thoughty2 created a YouTube video about it. A must see ;)
Thomas, it looks like your recommendation of programming languages is based on the lack of OOP rather than the functional features. Would you recommend against mixing both paradigms? I actually don't see a problem in it as long as one keeps the OOP very lightweight.
I guess I'm just trying to balance the debate here. I'm not "against" OO, I just believe one should be a bit cautious with it. The principles of SOLID, DDD and (overused) design patterns rapidly result in a codebase nightmare, impossible to navigate or maintain ...
The incorrect information in this article starts with the "30+" years. OOP is about as old as procedural programming (i.e. 50+ years). And it is just the logical next step: while procedural programming focuses on encapsulation of code, OOP adds encapsulation of data. All these are necessary steps to reduce software complexity.
I really like FP (and use it every day), but I'm pretty certain that real working approach is the hybrid of OOP and FP.
P.S. There are very few efficient pure FP algorithms for many everyday tasks, for example sorting.
Pure functional and functional programming aren't (necessarily) the same thing. OOP was "invented" in Simula 67, but it never went truly mainstream before the 90s ...
Procedural programming was "invented" around the same time, but there were hot debates about it well into the mid-80s. The basics of agile methodologies were created at the beginning of the 70s, but agile went mainstream only in the mid-2000s. And so on and so forth. This is just an illustration of how conservative software development is.