Jake Marsh

Optimizing for Iteration: Best Practices for Automating Everything

As we’ve discussed in a previous blog post, product development is hypothesis testing. This is especially true in the early stages of a company when you need to confirm or reject your hypothesis as quickly as possible.
This process is then repeated until you (hopefully) reach product-market fit. To get there, your team needs to be able to work and build at a pace that allows for this constant and rapid iteration.


📝 Document what needs to be documented

Documentation is a perhaps non-obvious form of automation: anything that's well documented is something a future employee won't have to waste time figuring out on their own. However, creating and maintaining documentation takes a fair amount of the team's time.

A happy medium we've found here at Monolist is to focus our documentation efforts only on what falls into the category of one-time processes.

One-Time Process Documentation

When a new engineer joins the team, there are likely a large number of tools, services, and permissions to configure manually before they can truly get to work. This can be a long and painful process, especially for a new employee excited to start building.
To avoid this, document as much of the process as possible. This ensures future engineers have a guidebook to reference and don't have to waste time finding solutions for themselves.

Similar to the onboarding process, there are likely other processes that occur rarely enough that full automation is not worth the effort. One example is deploying an internal service that rarely changes. For cases like these,
it’s good to maintain documentation so that anyone on the team is able to execute the process in the future.

⚙ Automate all of your tests

Testing should be a priority from day one, and your test suite should be run often (ideally on every commit). With those two requirements in mind, it doesn’t make sense to have to run any part of your test suite manually.
If it’s hard to do and you have to do it often, someone will inevitably end up skipping the testing step.

To ensure it’s easy and fast to both write and maintain your automated tests, here are a couple of things to keep in mind.

Balancing unit tests and integration tests

One of the perennial struggles in software testing is maintaining a balance between unit tests and integration tests.
Although both aim to cover the various code paths in your application, their opposing approaches result in different tradeoffs.

Unit tests cover the more granular functionality of individual modules. This makes them great at testing the smaller details and relatively easy to maintain, but they're prone to missing issues in the gaps between your modules.
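
To make that concrete, here's a minimal sketch of a unit test using Jest as the test runner (an assumption on my part; the pricing module and its discount logic are invented for illustration):

```javascript
// pricing.js: a hypothetical module with small, granular logic
function applyDiscount(price, percent) {
  const discounted = price * (1 - percent / 100);
  return Math.max(0, discounted);
}
module.exports = { applyDiscount };

// pricing.test.js: Jest unit tests, fast and isolated, asserting only on outputs
const { applyDiscount } = require("./pricing");

test("reduces the price by the given percentage", () => {
  expect(applyDiscount(100, 25)).toBe(75);
});

test("never returns a negative price", () => {
  expect(applyDiscount(10, 150)).toBe(0);
});
```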

Integration tests, on the other hand, exercise your code paths from top to bottom with little or no mocking, so every invoked module actually runs. These tests are more fragile and harder to write and maintain,
but they offer more reassurance that your app is functioning as expected.

Because unit tests are relatively easy to write, you should aim to unit test most of your codebase: any new or modified code should generally come with an associated test or test update. Integration tests should be used more sparingly,
to confirm the expected behavior of your most critical code paths. Your onboarding flow, for example, is a crucial part of the new user experience and so is likely worth integration testing.
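
For instance, here's a sketch of what an integration test of a signup/onboarding flow might look like in Cypress (recommended below); the routes, selectors, and copy are hypothetical:

```javascript
// cypress/e2e/signup.cy.js: an end-to-end sketch of an onboarding flow.
// Routes, selectors, and copy are placeholders for your real app.
describe("signup flow", () => {
  it("lets a new user create an account and reach the dashboard", () => {
    cy.visit("/signup");
    cy.get("input[name=email]").type("new-user@example.com");
    cy.get("input[name=password]").type("a-strong-password");
    cy.contains("button", "Create account").click();

    // Nothing is mocked: the browser, the API, and the database all run,
    // which is what gives this test its reassurance (and its fragility).
    cy.url().should("include", "/dashboard");
    cy.contains("Welcome").should("be.visible");
  });
});
```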

Writing implementation-independent tests

Lastly, a key point to remember when writing automated tests is to keep them as implementation-independent as possible. Whenever you can,
test a module by asserting on its inputs and outputs rather than on its internal behavior or the calls it makes.

The reason is that a team moving quickly and iterating often will be refactoring large swaths of the codebase fairly frequently. The less surrounding code (in this case, your tests) has to change alongside each refactor,
the quicker you'll be able to make those changes.
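
To illustrate, here's a small sketch using react-testing-library (recommended below) with a made-up Counter component. The test interacts with the component the way a user would and asserts only on the rendered output, so an internal refactor (say, from useState to useReducer) wouldn't break it:

```javascript
// Counter.test.jsx: assert on rendered output, not on internal state or calls.
import React, { useState } from "react";
import { render, screen, fireEvent } from "@testing-library/react";

// A made-up component; in a real suite it would live in its own file.
function Counter() {
  const [count, setCount] = useState(0);
  return (
    <button onClick={() => setCount(count + 1)}>Clicked {count} times</button>
  );
}

test("increments the visible count when clicked", () => {
  render(<Counter />);
  const button = screen.getByRole("button");

  fireEvent.click(button);

  // Only the visible output is asserted, so rewriting the component's
  // internals won't require rewriting this test.
  expect(button.textContent).toBe("Clicked 1 times");
});
```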

🙌 Our Recommendations
  • react-testing-library: Although this specific library is for React, the same testing-library org has created similar libraries for many other frameworks. They aim to eliminate implementation-specific tests by forcing you to interact with your components the same way a user would.
  • Cypress: Another JavaScript library, but for end-to-end (integration) testing of your application as a whole. This is the most painless experience I've ever had writing resilient integration tests.

💨 Streamline code review

Code review should be a central part of your development process. Peer reviews ensure the team is aligned on and aware of any changes, while also providing an opportunity to catch bugs or issues.
Because code review is so important, it's also a very frequent activity for the average engineer. When reviewing pull requests happens that often, it's worth optimizing and automating the process as much as possible.

Automate discovery of pull requests and their updates

Today, most engineers become aware of a new pull request to review through email or Slack. Although these notifications are instant, they're easy to miss and immediately out of date.
The result is engineers unaware of the pull requests they're blocking, and teammates blocked from making further progress.

As much as possible, your team should streamline the process of discovering pull requests and any updates or comments on them. This prevents engineers from blocking each other and ensures everyone is working with the same context.
It may take iteration to discover what works best for your team, as every approach has its drawbacks.
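
One low-effort starting point is a small script, run on a schedule, against your code host's API. The sketch below uses GitHub's search API to list pull requests still waiting on your review; the username is a placeholder, and where you surface the output (a dashboard, a recurring Slack message, a tool like Monolist, recommended below) is up to your team:

```javascript
// review-queue.js: a rough sketch that lists open pull requests still
// waiting on your review, via GitHub's search API (Node 18+ for global fetch).
const GITHUB_USER = "your-username"; // placeholder
const GITHUB_TOKEN = process.env.GITHUB_TOKEN;

async function pullRequestsAwaitingReview() {
  const query = encodeURIComponent(
    `is:open is:pr review-requested:${GITHUB_USER} archived:false`
  );
  const res = await fetch(`https://api.github.com/search/issues?q=${query}`, {
    headers: { Authorization: `Bearer ${GITHUB_TOKEN}` },
  });
  const { items } = await res.json();
  return items.map((pr) => `${pr.title} (${pr.html_url})`);
}

// Run this on a schedule and post the result wherever your team will see it.
pullRequestsAwaitingReview().then((lines) => console.log(lines.join("\n")));
```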

Eliminate discussion about non-crucial details

When reviewing a teammate's code, it's easy to fall into the trap of commenting on the obvious things: style and formatting. These comments spark tangential discussions that not only
require no real comprehension of the PR, but also waste the time of the engineers involved.

Whenever possible, these decisions and discussions should be automated away. One common practice is using linters: automated checks that compare your codebase and any
changes against a set of rules you define. A newer approach that takes this a step further is auto-formatting all code in a pre-commit hook, completely eliminating any further thought or discussion about things like formatting.
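
As a rough sketch of the pre-commit approach, here's a Node script, wired up as a Git pre-commit hook (for example via a tool like husky), that runs Prettier (recommended below) over the staged files and re-stages them. A dedicated tool such as lint-staged handles edge cases like partially staged files more carefully; this only shows the idea:

```javascript
#!/usr/bin/env node
// pre-commit: format staged files with Prettier before every commit,
// so style never needs to come up in code review.
const { execSync } = require("child_process");

const staged = execSync("git diff --cached --name-only --diff-filter=ACM", {
  encoding: "utf8",
})
  .split("\n")
  .filter((file) => /\.(js|jsx|ts|tsx|json|css|md)$/.test(file));

if (staged.length > 0) {
  // Rewrite the files in place, then re-stage them so the formatted
  // versions are what actually gets committed.
  execSync(`npx prettier --write ${staged.join(" ")}`, { stdio: "inherit" });
  execSync(`git add ${staged.join(" ")}`);
}
```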

🙌 Our Recommendations
  • Monolist: Designed to be an engineer's command center, Monolist synchronizes in real time to help engineers discover their pull requests, messages, and tasks.
  • Prettier: As someone who's rather opinionated about code style, I was reluctant to try Prettier. We've now fully adopted it, and I'll never go back to manual formatting or style debates in pull requests.

🚢 Ship easily and often

As mentioned earlier, your goal as an early-stage startup should be to iterate and ship as often and as quickly as possible to maximize learning. This means that you should be able to deploy a new version across all platforms (API, web, mobile) multiple times a day.
The only way to achieve this, without dedicating a large amount of manual time and effort, is through complete automation of your deployment process.

CI/CD

Continuous integration and continuous delivery (CI/CD) are how you achieve this full automation of the deployment process. By automating everything from commit to testing to building to deploying,
you eliminate the need to ever do it manually and enable your team to merge and deploy often.

🙌 Our Recommendations
  • GitLab: GitLab’s all-in-one approach to code management and simple CI pipelines makes it extremely easy to get started with CI and integrate it gradually across your existing process or codebase.

🔥 Catch fires automatically

Naturally, the last thing we'll talk about automating is catching, handling, and triaging any errors or anomalies once the app is out in the wild. With a small, fast-moving team, it's inevitable that you'll ship a bug here or there.
What's important is recognizing and addressing the issue quickly so your users don't experience sustained problems.

Monitoring

The first half of keeping your app reliable is accurate monitoring. In the early stages, this means monitoring server health (CPU usage, RAM usage) and general app health (can a user load the app? Are async jobs being processed?). Any exceptions triggered in the app, whether in the API or the client, should also be tracked.
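
For exception tracking specifically, wiring up an error-reporting SDK is usually only a few lines. Here's a sketch using Sentry's React SDK (recommended below); the DSN is the placeholder value from Sentry's docs, and riskyOperation stands in for your own application code:

```javascript
// Client-side exception tracking with Sentry. Unhandled exceptions are
// reported automatically once init() runs; captureException is for errors
// you catch and handle yourself but still want visibility into.
import * as Sentry from "@sentry/react";

Sentry.init({
  dsn: "https://examplePublicKey@o0.ingest.sentry.io/0", // placeholder DSN
});

try {
  riskyOperation(); // stand-in for real application code
} catch (error) {
  Sentry.captureException(error);
  throw error;
}
```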

Although it can be valuable to have metrics around actual user statistics (posts liked, comments created), these should not be relied upon for any alerting as they’re likely to be volatile in the early stages.

Alerting

Once you have sufficient monitoring in place, you can define thresholds for the various metrics you're tracking. When a threshold is violated, the team (or the responsible engineer) should be notified automatically so the issue gets investigated and, if necessary, addressed.

🙌 Our Recommendations
  • Sentry: Sentry provides automatic exception reporting for a large variety of languages and frameworks. We use it for both Rails and React to capture and centralize our exceptions for triaging.
  • Prometheus: A powerful, customizable open-source tool for monitoring your services. It can be configured to handle alerts in many ways, e.g. by sending them to PagerDuty.

❗️ Are you a software engineer?

At Monolist, we're following these tips to rapidly ship new features that help software engineers
be their most productive. If you want to try it for free, just click here.

Top comments (1)

Johannes Lichtenberger

At work, we have a dashboard showing whether Jenkins jobs have failed (unit or integration tests across various repositories).

Other than that, when I check code in to my GitHub repository and a test fails in the automatic Travis run, I get an email :-)