Civil engineering builds on principles refined and perfected over the course of 2500+ years, but it still fails sometimes. Civil engineers in the early years were just called architects, I believe.
That is the nature of engineering in general (software as well): we try our best to build on all the best known practices, but it's always a struggle between three naturally opposing forces: budget, time, and quality.
Any structure in the modern age (at least in developed countries) is subject to strict laws that are supposed to double- and triple-check each and every structure before it is built. Those checks and balances are mostly in place because an error can potentially result in the loss of life. No engineer is allowed to start building if the plans don't pass the required check pipelines (permits from state and/or national engineering agencies).
State permits to build based on proposed plans are sort of like code reviews in software engineering (but code reviews are not always enforced).
In general, most critical structural dimensions will include a safety factor of 3 (perhaps more in more sensitive structures). For example, a critical steel rod will be made at least 3x bigger than the actual calculated/needed size just to guard against any unforeseen factors.
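As a toy illustration of that idea (the rod numbers and the `required_area` helper are mine, not from any real design code):

```python
# Toy sketch of applying a safety factor to a member size.
# Illustrative only; not a real structural calculation.

def required_area(load_n: float, yield_strength_pa: float,
                  safety_factor: float = 3.0) -> float:
    """Cross-sectional area (m^2) needed to carry `load_n` with a margin."""
    return safety_factor * load_n / yield_strength_pa

# A rod carrying 100 kN in 250 MPa steel nominally needs 0.0004 m^2;
# the factor of 3 triples that.
bare = 100_000 / 250e6
with_margin = required_area(100_000, 250e6)
print(bare, with_margin)  # 0.0004 0.0012
```

The point is just that the margin is multiplied in up front, before anything unforeseen happens.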
Best-practice calculations have come into place after many, many years of building structures and doing controlled experiments on materials. To this day, substantial research is conducted to improve materials and adjust practices, mostly by universities' civil engineering departments with big grants/budgets to back it. The results are published in peer-reviewed scientific journals and might take years before they are widely adopted as "best practice".
Software engineering, on the other hand, is more than ever trying to squeeze budget and time, since most software is not life-threatening if it fails. NASA, for example, probably has much stricter checks on when software is good enough for production.
Software engineering has not been around for as long as civil engineering has. We are evolving as an industry: we share our successes and failures by blogging about them so others can learn, but there is no need for permits before a company releases a new version, thus it's OK to have 97% uptime.
What you're talking about is called "safety-critical systems".
That is the type of software used in ships, planes, nuclear power plants, cars, and anything whose failure would cause loss of life or the equivalent of more than 1 million USD in damages.
We actually have that standard in place.
But it comes down to whether you can afford to build that piece of software: the higher the safety rating you want, the more it costs. You can easily go bankrupt building safety-critical software due to compliance and rigorous testing.
You can take a look at this if you're interested in it.
This is a super niche industry; unless you're planning to work for Tesla, SpaceX, or NASA, it won't be applicable to the majority of the industry.
Even my university professor who teaches this standard has industry partners coming in to listen to the lectures on that particular standard.
Survivorship bias: for every hundred year old structure that exists today, how many have previously collapsed, rotted away, or burned down?
The surviving structures are generally also well and actively maintained to fix the issues that surface.
When we maintain software (by fixing issues), people often complain that software is not as stable as buildings and bridges.
Yes, good point.
Type systems (in pure FP), category theory and formal methods.
You can just restart a buggy program, so there's not much of a business case for proof systems. I do see their value myself, but common perception is against me on this one.
Foolproof software would probably cost 3x as much.
FP type systems have saved me a great deal of time in the past, so I'd say "citation needed". It takes some time to get used to them, so people wrongly assume they're a time sink.
If you look at the situation in practice, the tools we have are still underdeveloped and underused. A lot more could be done for near-0-cost security/certainty.
Well, 3x was more for the sake of the argument; I don't have a citation, and I do agree that over the last few years a lot of improvement has been made in testing and best practices (CI/CD is easier than ever). But even that will take time before it is adopted properly by all teams developing software.
We are still a young industry compared to Civil Engineering field and we are still evolving and learning.
I have heard praise of FP, so I'd better get my hands wet with the subject.
If you have to run tests to verify your program (not everything can be proven, after all), then the cost would indeed skyrocket. For instance, you'd need to account for every combination of inputs (a potentially infinite set), and behavior may depend on some hidden unknown variables.
You may be interested in one of my old posts about the Curry-Howard isomorphism; the comments are pretty good too. I should also point out, though, that even pure FP languages don't really exploit this property: division by zero, for instance, but also good old overflow problems.
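To make the division-by-zero point concrete, here's a Python sketch of the Maybe/Optional pattern that FP type systems use to push partiality into the type (the `safe_div` name is mine, not from the linked post):

```python
from typing import Optional

def safe_div(a: float, b: float) -> Optional[float]:
    """Encode the partiality of division in the return type
    instead of raising at runtime."""
    return None if b == 0 else a / b

# The Optional return type forces callers to consider the None case:
result = safe_div(10, 0)
print("undefined" if result is None else result)  # undefined
```

A language that actually enforced this at the type level would rule out the unchecked division-by-zero crash entirely; most mainstream languages, pure FP included, don't go that far by default.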
If you've ever looked closely at licensing terms on software you might notice a phrase similar to: "The Programs are not intended for use in any nuclear, aviation, mass transit, medical, or other inherently dangerous applications."
Software is built to different standards, and if lives are at stake then the standards are correspondingly higher.
I'm sure that different standards also apply in civil engineering.
Software still needs to run on physical hardware. Code can absolutely be 100 percent perfect for the job it is intended to do. Its logical structure could be flawless, but the underlying hardware may fail in an unexpected way. This doesn't mean total system failure either; it could be as simple and small as a single bit flip in a critical section of RAM, and that bit flip could be caused by an external source.
Computers are more complex than civil engineering projects, because even though the physical system is smaller, there are significantly more pieces and people involved in making it happen. Just take a look at the number of contributors to whichever OS you're using, then the application stack on top of that.
Actually, it's the opposite: there is a predictive model for hardware service life, called the bathtub model, which is based on the hardware's service period. It says that hardware requires more maintenance at the beginning and the end of its operational life to keep it at an optimal safety standard.
This is the reason that, during the space race, Soviet space equipment was mostly manually operated: hardware service life is predictable in a way that mattered for safety.
From my understanding, some industry standards, like IEC 61508, use the same model to justify the failure risk of the hardware your software is installed on.
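A rough sketch of the bathtub shape (the function and all its numbers are illustrative, not taken from IEC 61508 or any real reliability data):

```python
import math

def failure_rate(t: float) -> float:
    """Toy bathtub curve over a 10-unit service life: high failure rate
    early (infant mortality), flat in midlife, rising again at wear-out."""
    infant = math.exp(-5 * t)         # burn-in failures fade quickly
    base = 0.1                        # constant random-failure floor
    wearout = 0.5 * math.exp(t - 10)  # wear-out failures grow near end of life
    return infant + base + wearout

# High at the start, low in midlife, high again near the end:
print(failure_rate(0.0), failure_rate(5.0), failure_rate(10.0))
```

The high ends of the curve are where the extra maintenance mentioned above is supposed to go.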
There is another school of thought that a mathematical predictive model for software is also possible, but it is up to the interpretation of safety and compliance in a specific standard, which differs between industries and the governments adopting that particular standard.
My safety-systems professor, who taught me these standards, always jokes that he will never ride in a self-driving car, even for a million dollars, since by safety standards it is not safe by any account due to the unpredictability of software.
There is a lot we could learn from mature engineering disciplines, I think. However, I would look at a closer one: electronics. There is more logic involved, as in software engineering.
I've written an article on how we could take over from electronics the art of composing things together, creating higher-level abstractions. We call this principle the Integration Operation Segregation Principle. This only works well in the end if the composed modules do not know each other, a point which is impossible as long as subroutine calls are used for composing. We must get rid of subroutine calls, at least for composing software modules. We call this the Principle of Mutual Oblivion.
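My reading of the idea, as a toy Python sketch (the functions and the pipeline are my own invented example, not from the article): "operation" functions do the work and never call each other, while an "integration" function only wires outputs to inputs, so the parts stay mutually oblivious.

```python
def parse(raw: str) -> list[int]:      # operation: knows nothing of the others
    return [int(x) for x in raw.split(",")]

def total(numbers: list[int]) -> int:  # operation
    return sum(numbers)

def report(value: int) -> str:         # operation
    return f"total = {value}"

def pipeline(raw: str) -> str:         # integration: wiring only, no logic of its own
    return report(total(parse(raw)))

print(pipeline("1,2,3"))  # total = 6
```

Note the integration layer here still uses ordinary function calls, which is exactly the compromise the article argues against.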
For what it's worth everything in civil engineering structurally focuses around factors of safety.
Dead loads get an additional 40% and live loads an additional 60%.
Maybe there is something to learn about building in safety to your code or infrastructure from the outset.
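Those factors can be written down directly (the percentages are the ones quoted above; real design codes and load combinations vary):

```python
def factored_load(dead: float, live: float) -> float:
    """Design load with the margins quoted above: dead +40%, live +60%."""
    return 1.4 * dead + 1.6 * live

# 100 units of dead load and 50 of live load are designed for as 220:
print(factored_load(100.0, 50.0))  # 220.0
```

The software analogue would be budgeting capacity or error margins up front rather than sizing everything to the nominal case.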
I'm a civil engineer and have been for 10 years, I just got into developing and my view is you should unfollow him.
Hey... This is Kiev, isn't it?
This bridge (the "Подольско-Воскресенский мост", i.e. the Podilsko-Voskresensky Bridge) has been under construction since... OMG, since 2003! They still can't finish it >.<
I don't know, to be honest. I just googled "bridge construction" in Google images, using a filter to only take images that were licensed for reuse. But I think there might be an analogy here with time estimates in software development.
Well, yeah, it is.
Here is how it looked in 2017:
And it is still there :-)
They write that it might(!) be finished in 2020, i.e. after 17 years of construction.
I am a Civil Engineer in Informatica (12 semesters, i.e. 6 years).
And I can build bridges (I learned it when I worked with other disciplines).
btw. F*CK 2D ARCHITECTS! The world is 3D!