Geoff Stevens for Software.com

What's the longest build time you've experienced?

The most successful engineering teams typically keep their workflow durations between five and ten minutes on average, according to CircleCI's 2022 State of Software Delivery report. The median is 3.7 minutes.

Looking at our team's slowest GitHub Actions workflows with our GitHub App, our longest workflow is just under 13 minutes. This excludes builds and workflows run locally.

[Image: Software.com GitHub Actions]

But it's not unusual to see build times measured in hours, depending on the stack you're using.

LinkedIn's engineering org previously wrote about the steps they took to decrease build times from 60 minutes to 15 minutes for their largest microservice, improving productivity and team happiness.

What's the longest build or workflow time you've ever encountered? What tools were you using, and were you able to fix it?

Top comments (29)

ecyrbe • Edited

Oh boy!
Back in the day, I had a C++ pipeline taking more than 30 minutes to build...

But the builds were happening on the developer's machine.

It was coffee time all the time :)

There were no GitHub Actions or CircleCI services out there... so we built our own with Buildbot.

Builds still took time, but they were no longer happening on the developer's PC.

Happily, nowadays build servers are the norm and builds no longer disrupt the developer workflow...

Sad part: fewer coffee times 😁

Abhinav Kulshreshtha

I do miss the build (coffee) time. We had a large build+commit every Friday; at the time, it was a big monolithic ASP.NET + SQL Server application. We were a 7-person startup, so our boss would take us all out to a restaurant for drinks and a feast.

Aaron Gong

VC++ in the 2000s.

Mine was 90 minutes to 2 hours, and it was a network build.

The header files were global, so even a small change meant the long build time all over again, utilizing the CPUs of other machines on the network.

Literally paid to do nothing...

JoelBonetR 🥇 • Edited

First of all, there's a BIG difference between the build and deploy stages.

I'm not convinced that using build times to measure "successful engineering teams" is a fair metric.

Having a build that takes a subjectively long time is not necessarily bad.
For example, you simply can't build and deploy a business-ready, production Next.js app in less than 4-5 minutes. But even if the build takes an hour, it's not an issue: you can deploy it when it's ready, and the deploy itself takes less than 2 minutes or so.

Things you can do to optimize the whole release process:

  • Cache dependencies.
  • Split build and deploy stages.

As the project grows and you add more dependencies, there's more code to evaluate, minify, and bundle, so the build takes longer. That's OK; nothing is wrong here.

On the other hand, a deploy stage that takes more than 2 minutes is something you need to look at carefully and optimize to reduce downtime.

Optimizing the build stage and the deploy stage are two different worlds.
Apart from caching dependencies to speed up the build stage, about the only other thing you can do is:

  • Check the dependencies and reduce them to the minimum necessary (that's one of the reasons not to begin a project with `npx create-react-app` or similar).
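To make the dependency-caching idea concrete, here's a minimal sketch of what a CI cache step does under the hood, assuming a Node.js project whose dependencies are pinned in a `package-lock.json`. The paths and layout are just illustrative; in practice you'd use your CI's built-in cache (e.g. `actions/cache` on GitHub Actions) rather than roll your own:

```python
# Minimal sketch of CI dependency caching (illustrative paths, not a real runner).
# Idea: key the cache on the lockfile hash, so it's invalidated exactly when
# dependencies change, and skip `npm ci` entirely on a cache hit.
import hashlib
import shutil
import subprocess
from pathlib import Path

CACHE_ROOT = Path("/tmp/ci-cache")      # hypothetical cache location on the build agent
LOCKFILE = Path("package-lock.json")
DEPS_DIR = Path("node_modules")


def cache_key() -> str:
    """Hash the lockfile contents; any dependency change produces a new key."""
    return hashlib.sha256(LOCKFILE.read_bytes()).hexdigest()


def restore_or_install() -> None:
    entry = CACHE_ROOT / cache_key()
    if entry.exists():
        print("cache hit: restoring node_modules")
        shutil.copytree(entry, DEPS_DIR, dirs_exist_ok=True)
    else:
        print("cache miss: installing and saving to cache")
        subprocess.run(["npm", "ci"], check=True)
        CACHE_ROOT.mkdir(parents=True, exist_ok=True)
        shutil.copytree(DEPS_DIR, entry)


if __name__ == "__main__":
    restore_or_install()
```

The important design choice is the cache key: tying it to the lockfile means a cache hit is always safe to reuse, and any dependency bump naturally busts the cache.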

And on the deployment side you can:

  • Check your deployment method. Can you use a lighter Docker image? Are you using a private server and deploying over FTP or SFTP? Wouldn't it be better to use rsync instead?
  • Check the steps and log the amount of time spent on each (usually most of the time is spent copying files); see the sketch below.
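To illustrate that last point, here's a minimal sketch that times each deploy step so the slow one stands out. The step names and commands (Docker build, rsync copy, service restart) are hypothetical placeholders, not anyone's actual pipeline:

```python
# Minimal sketch: run each deploy step and log how long it took.
# The commands below are hypothetical placeholders for a real pipeline.
import subprocess
import time

STEPS = [
    ("build image", ["docker", "build", "-t", "myapp:latest", "."]),
    ("copy files", ["rsync", "-az", "--delete", "dist/", "deploy@example.com:/var/www/myapp/"]),
    ("restart service", ["ssh", "deploy@example.com", "sudo systemctl restart myapp"]),
]


def run_timed(name: str, cmd: list[str]) -> float:
    """Run one step, fail fast on error, and report its wall-clock duration."""
    start = time.monotonic()
    subprocess.run(cmd, check=True)
    elapsed = time.monotonic() - start
    print(f"{name}: {elapsed:.1f}s")
    return elapsed


if __name__ == "__main__":
    total = sum(run_timed(name, cmd) for name, cmd in STEPS)
    print(f"total deploy time: {total:.1f}s")
```

Even a crude breakdown like this usually confirms the point above: copying files dominates, which is where switching from FTP to rsync (which only transfers what changed) pays off.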

Disclaimer: I'm not much into infrastructure, and this comment is driven by my observations as a lead dev and conversations with the infra and DevOps folks. So if I missed something or I'm wrong about anything, please tell me so I can learn more. 😊

Geoff Stevens

That's a great distinction to point out between the build and deploy stages. They can be optimized in different ways + have different expectations for how they'll run.

I used to work on a decently sized Gatsby site. Cutting out some dependencies and adding better caching took the final build from ~30 minutes to less than 10. They also added incremental builds to speed things up on smaller changes. The deploy step was always pretty fast -- unless it failed for some unknown reason...

JoelBonetR 🥇


ROLLBACK, ROLLBACK!

😂

Pascal Thormeier

> I'm not convinced that using build times to measure "successful engineering teams" is a fair metric.

This. I've seen enough insanely large Drupal projects with several gigs of database init dump imports, cache warming, config and default content imports, XML imports and whatnot, that took up to 45 minutes until they were considered built. The choice of framework and all the code is actually sound; it just takes ages because they do a ton.

Phil Ashby • Edited

The longest I ever had to wait for: 3 weeks (yes, really!), mostly single-threaded, nested scripting that used make very, very badly, as a result of aggregating work from many teams into the final product (this was a vendor-supplied build system for escrow purposes). I'm pleased to say that after analysing the build log and rewriting the entire build process as a series of make fragments ingested by a top-level Makefile (another 3 weeks of work!), it was down to 35 minutes on a 128-core system... or considerably less if only a few files had changed (thanks, dependency graphs!). This finally made iteration possible. Full disclosure: my build did not create the exact same binaries (due to using fewer, more consistent compile options), but it was close enough, passing acceptance testing.

Christian Engel

I was working for a client once where the build time in Jenkins would be about 40-60 minutes. The thing is, they would tear down and recreate their whole fleet of services (which actually ran on one single machine). When you committed a line of JavaScript for a web form, EVERY Java backend service would be rebuilt and deployed.

The build server was constantly running. Bonus points: if ANYONE made a mistake that made the build crash, the process would be stopped. Everyone's right to commit was revoked. An analysis would be made by a dedicated person to determine who was to blame for the build crash. Then THIS PERSON alone was granted commit rights to fix the build. When it was running again, everyone would be granted the right to commit again. There were dozens of people working on the same repository (all code of all projects in one proprietary Microsoft repo; using Git was forbidden).

It was so much fun. 😐

At one point I built a small app that displayed a dashboard on a dedicated computer screen we placed in our office, showing:

  • Is the build green?
  • Can we commit at all?
  • When will the build approximately finish, and when will the next run start?
Keff

Well, not exactly build time, but test execution time (though it counted towards the build time, as we ran tests before building, of course). For us it was around 4 days: there was some issue with the test setup and it was taking a lot more time than it should... That was wild... Other than that, nothing out of the ordinary!

Thibaut Andrieu

I used to work on a C++ project, wrapped in Java and .NET. The full build+test took about 10 hours, depending on the machine.

One day, we were pissed off at fixing bugs in the same part of the code again and again, so we wrote a test covering all the possible combinations of values (only nominal ones, not even limit or stress cases). After a few hours of running, which seemed quite long, we had a look at the progress to estimate how long it would take.

This test would require more than 300 years to run... Yeah, combinatorial explosion...

We ended up continuing to fix bugs case by case and adding a test case whenever a user reported a bug...

Karan Gandhi

Around 2 or 3 days for a Chromium build on my old computer.
12-18 hours for an Android 4.3 or 4.4 build.

Getting faster hardware reduced the build time (SSD + a half-decent processor so you can allocate more threads).
ccache reduces the time to recompile updates.

Kai Walter

Back then, at the end of the '80s on an IBM mainframe, changes to CPG (a German RPG derivative for "transaction programming") CICS programs always needed 45-60 minutes of "compilation". No syntax checking in the editor, so you had to be very thorough. Many years later, when switching to Visual Basic for small applications, my development style really degraded quickly into this hit-F5-and-see.

Grant Magdanz

I worked on an application that was deployed as an entire virtual machine on top of cloud providers or, more often, ESX. There was no local development environment. We would spin up virtual machines to test and do QA on. The build itself took 10-15 minutes.

However, before being able to merge a PR we needed to run a set of precommit tests, many of which couldn't be run locally because they were testing other architectures. These tests ran on a locally hosted Jenkins instance. Even with them parallelized, the test suite took 90 minutes and sometimes over 2 hours.

I've since worked in more modern development environments and couldn't go back. The cycles were absolutely brutal. We spent a ton of time making sure the test suites were stable, but we'd still have flaky tests or issues with infrastructure. It would sometimes take a full day to get a clean run.

Tobias Nickel

About 40 minutes was my longest. It was a Hyperledger Fabric blockchain project, and we automated it all the way through: generating crypto material, starting a complete network of the needed services, plus extra Node and Golang services, then running and deploying the actual application on the blockchain and executing some test transactions.

Adam Crockett 🌀

A year, if you count that at a certain previous employer we would come in to work, build it, and it would fail. It took maybe an hour each time, and this would happen for 9 out of 10 builds. We did this for over a year. The whole process was known as "The Build", and it had an external agency maintaining it... which is why we could never fix it, try as we might.

taijidude

Roughly 1.5 days, but it was not fully automated. I wrote a PowerShell script for it, which now runs on a Jenkins server. Now it takes about an hour.

taijidude

Right now I have to debug a Java application that reads from a database, transforms the data, and writes it into a different table in the same database. And it takes forever... Damn! I really need to think it through, but I don't have time for it.

Abhinav Kulshreshtha

4 hours. I do have to mention that it was the final release and deployment of a project, so a little more than a "build" was happening.

I used to work for a then-startup as a college fresher, sharing the roles of QA and customer support. My boss had sent a notification email about the downtime, but downplayed the time it would take. He mentioned around one hour, but it took a little over 4 hours, plus the time it took to bring the network back online. Which meant it was my worst night on customer support.

This was back in 2013. Now they have moved from a monolithic .NET codebase to Golang microservices. Their staff consists of Ivy League students, and they haven't had a noticeable downtime in 5 years.

Paul DuBois

I am still waiting on my build from January 6th, 1977 on my TRS-80 Model I (4K of memory). I wrote code to calculate "the answer to life, the universe, and everything".
Although I heard there was some book about this with the answer, I am going to wait for my code, or the Vogons', whichever comes first. ;)

garyk2015

Oh wow, well, back in the day (a long, long time ago!) I was doing 6800 assembler on Motorola's Disk Operating System. It was a 2-pass macro assembler, so it needed to resolve the labels and macros, then compile and link, so about 20 minutes each time.

That said, even now I see plenty of Jenkins pipelines that take 15-20 minutes to run.