Heloisa Moraes for Codacy

How to boost your Engineering Speed & Quality with the right Metrics

On March 24th, we hosted a webinar called How to boost your Engineering Speed & Quality with the right Metrics.

In case you missed it live, don’t worry: you can (re)watch the recording of the webinar here or below 👇

https://www.youtube.com/watch?v=EoRzatLI9UI

How much value are you really creating in your software development team?

In our webinar, we shared how our vision at Codacy has been influenced by the DORA research program and the subsequent State of DevOps reports. The research identified 4 key metrics that are common traits of high-performing software teams.

Before diving into the metrics, we need to understand the problem tech teams have typically faced: their work was viewed as a cost center, so they were expected to justify their costs and return on investment upfront.

But it’s now more apparent that an elite-performing tech team is a value driver and an innovation engine. Companies that fail to leverage this competitive advantage risk being surpassed by others. When we view tech as a cost center, we measure our performance as such, and we begin to measure things like:

  • How many lines of code do my engineers or engineering teams write per day?
  • How many commits or PRs do I make per day?

However, these are vanity metrics, and they correlate poorly with the actual success of your business. Keep reading to learn which metrics you can use instead.

The 4 Key Metrics

The 4 key metrics have been shown to identify engineering teams that deliver value to their users quickly and consistently. They can be split into two categories: Speed and Quality.

  • Deployment frequency (speed): How often your organization completes a deployment to production or releases code to end-users.
  • Lead time for changes (speed): How long it takes a commit to get into production.
  • Time to recover (quality): How long it takes your organization to recover from a failure in production.
  • Change failure rate (quality): Percentage of deployments causing a failure in production.
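
To make these definitions concrete, here’s a minimal Python sketch of how the four metrics could be computed over a time window. The deployment and incident records below are made up for illustration; they don’t come from Pulse, Four Keys, or any real system.

```python
from datetime import datetime
from statistics import median

# Hypothetical deployment records for a 7-day window: when each deploy shipped,
# the commit timestamps it contained, and whether it caused a failure in production.
deployments = [
    {"deployed_at": datetime(2022, 3, 1, 10), "commits": [datetime(2022, 2, 28, 9)], "caused_failure": False},
    {"deployed_at": datetime(2022, 3, 2, 15), "commits": [datetime(2022, 3, 1, 11), datetime(2022, 3, 2, 9)], "caused_failure": True},
    {"deployed_at": datetime(2022, 3, 4, 12), "commits": [datetime(2022, 3, 3, 16)], "caused_failure": False},
]

# Hypothetical production incidents: when they started and when they were resolved.
incidents = [
    {"started": datetime(2022, 3, 2, 16), "resolved": datetime(2022, 3, 2, 19)},
]

period_days = 7

# Deployment frequency (speed): deployments per day over the window.
deployment_frequency = len(deployments) / period_days

# Lead time for changes (speed): median time from commit to production.
lead_times = [d["deployed_at"] - c for d in deployments for c in d["commits"]]
lead_time_for_changes = median(lead_times)

# Time to recover (quality): median time from incident start to resolution.
time_to_recover = median(i["resolved"] - i["started"] for i in incidents)

# Change failure rate (quality): share of deployments that caused a failure.
change_failure_rate = sum(d["caused_failure"] for d in deployments) / len(deployments)

print(f"Deployment frequency: {deployment_frequency:.2f} deploys/day")
print(f"Lead time for changes: {lead_time_for_changes}")
print(f"Time to recover: {time_to_recover}")
print(f"Change failure rate: {change_failure_rate:.0%}")
```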

How do these metrics help you?

There is a strong correlation between performance level and the percentage of time spent on rework or unplanned work, such as break/fix work and emergency software deployments and patches.

While there is always some unexpected work to be done, catching errors early and having fast feedback loops help to minimize this for high-performance teams.

Unnecessary rework avoided

These high performers reported spending approximately 50% of their time on new work, such as design, new features, and new patch deployments, vs. 38% for low performers. So the benefits are clear: by avoiding unnecessary rework, your team can use their time to ship more value to your customers.

How to track your metrics

Four Keys project, by Google

One solution we would recommend is the Four Keys project by Google.

It is an open-source project that is free to use. It’s a great way to get your foot in the door and start measuring the Accelerate key metrics. If approvals are an issue at your company, you should be able to integrate it yourself without requiring authorization to install GitHub or Jira apps.

However, it will be up to you to instrument your workflow and decide where to capture each event. This quickly becomes a problem if you have hundreds or thousands of repos.
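
As an example of what that instrumentation can look like, here is a hedged Python sketch of a step you might call at the end of a CI/CD deployment job to report the event. The endpoint URL, environment variables, and payload fields are placeholders of our own, not the actual Four Keys event schema.

```python
import json
import os
import urllib.request
from datetime import datetime, timezone

def report_deployment(event_handler_url: str) -> None:
    """Send a deployment event to an event-handler endpoint (placeholder payload shape)."""
    payload = {
        "event_type": "deployment",
        "repo": os.environ.get("REPO_NAME", "example/repo"),     # hypothetical env var
        "commit_sha": os.environ.get("COMMIT_SHA", "unknown"),    # hypothetical env var
        "deployed_at": datetime.now(timezone.utc).isoformat(),
    }
    request = urllib.request.Request(
        event_handler_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        print(f"Event handler responded with HTTP {response.status}")

if __name__ == "__main__":
    # Placeholder URL; a real setup would point at your own event-handler service.
    report_deployment("https://example.com/events")
```

Multiply a step like this by every pipeline, repo, and event type you care about, and the maintenance burden becomes clear.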

You’re also not going to get historical data. If you need a 3-month snapshot, you need to wait 3 months. Plus, you’ll be limited to just the 4 key metrics, without any way to filter by team or time. Finally, you need to factor in the infrastructure and maintenance costs.

Pulse, by Codacy

And this is where we come to Pulse, Codacy’s DevOps intelligence product.

We’ve been working with multiple customers to understand how we can best help them. It’s very common to speak with people who are either trying to measure this for the first time or attempting to build an in-house solution without success.

How Loft uses Pulse to measure Engineering health

What you get with Pulse:

  • Peace of mind when collecting engineering metrics: we do the work for you, ensuring reliable metrics and continuous tracking.
  • No need to choose what to measure: avoid getting caught up in output or vanity metrics that harm your team in the long run; we do the research for you.
  • Historical data: immediately see your performance over the last 90 days, and up to 1 year.
  • Easily filter repositories, teams, and time periods.
  • More metrics beyond the Accelerate ones, to explain how they are evolving.
  • Our team and community can help you on the journey of continuous improvement.

See Pulse in action

Answering your questions

After the presentation and the introduction to Pulse, we opened the floor to all the questions the audience might have. You can check the detailed answers on our video recording of the webinar — we even give you the specific time frames! But we’ve summarized the answers for you to read 🤓

Does the list of teams come from Jira, GitHub, or both? (00:39:16)

The list of teams you see on the Accelerate dashboard comes from GitHub. We don’t have a team filter on the Lead & Cycle time dashboard yet, but we’re working on it! For now, you can filter by project or by Jira boards.

Do you have SSO and integration with the existing Codacy platform, so I do not need to reconfigure these? (00:39:57)

Since we wanted to bring Pulse to market as soon as possible, we started it with a user base built from scratch, so there is currently no integration between Codacy and Pulse. As such, user management and accounts are separate from Codacy, but we’re working on this unification so that you only need one account and one integration.

What integrations are available at this time, and which are planned? (00:40:42) Do you support BitBucket? (00:43:37)

Currently, Pulse has integrations with GitHub, Jira, and PagerDuty.

In terms of future integrations, we’re planning to expand the source code management tools to include BitBucket and GitLab. If you use anything different from PagerDuty, let us know so that we can prioritize that integration as well.

Is a Git repo the only identifier for a project, or can another source of truth be used? (00:42:11)

With the automatic GitHub integration, we associate the identifier with the repository. However, you can use our CLI to specify the identifier you want.

Is there a plan to add group_by so I can compare projects/teams for the different metrics? (00:42:57)

Yes! It is one of the most requested features, and it should be coming out very soon! Our idea is to show you how the different teams compare to each other and which are the lowest-performing teams so that you can focus on helping them.

I have a repo; how can I connect it with Pulse, and what changes do I need to make in my code so Codacy will give me profiling info? (00:43:57)

We are still not using any information from Codacy, but that’s on our roadmap. As for your repo, if you are using GitHub, you just need to sign up for Pulse and click Install GitHub, and you are good to go! It’s a one-click integration.

Get started with Pulse

Check out our documentation if you are not using GitHub but still want to try Pulse. You can add a script to your deployment pipeline that does everything for you 😉

How do you foresee access to these graphs and the underlying data? The examples shown are hard to analyze due to outliers. The ability to swap between median and mean would be needed, as would logarithmic axes. (00:44:58)

We’re looking into two options to deal with outliers in the data: one way is to add more complexity to the analysis with median, average, and other metrics. Another way is to automate the translation of data into actionable insights so that you don’t need to interpret the chart - we would give you a few sentences with all the information you need.
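
As a small illustration of why that first option matters, here’s a Python snippet (with made-up lead times, in hours) showing how a single outlier drags the mean up while the median stays close to the typical change:

```python
from statistics import mean, median

# Lead times for changes, in hours; one stalled change dominates the mean.
# The values are made up for illustration only.
lead_times_hours = [4, 6, 5, 7, 8, 5, 6, 240]

print(f"Mean:   {mean(lead_times_hours):.1f} h")    # ~35.1 h, dragged up by the outlier
print(f"Median: {median(lead_times_hours):.1f} h")  # 6.0 h, closer to the typical change
```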

How do you correlate the Lead time to the production Deployment date/time? It appears that you’re just measuring the pull request process. (00:46:02)

We have different levels of granularity, depending on the precision you need. By default, the GitHub integration is connected to the PR process - this is the option requiring the least effort. If you’re using semantic version tagging, Pulse also detects it and uses it for greater precision. Plus, if you’re willing to use the CLI / API, it works together with the GitHub integration.

References and further reading

We selected a few interesting resources to help you continue on this Engineering performance journey. 

Thank you to everyone who joined live and those who watched the recording later on! Let’s keep the conversation going in our community.
