Tech leads, team leads, software architects, engineering managers — as any developer already knows, naming is hard. Throughout the industry, these roles are as fuzzy as their responsibilities.
For instance, at some companies, tech leads are responsible for mentoring or coaching developers, whereas others introduce the Team Lead role for that purpose. Before going through the list of tech lead responsibilities, let me briefly explain what I think a tech lead does.
Tech Leads are responsible for managing the technical aspects of the software development flow in a specific context or team. It’s crucial for a good tech lead to ensure the success of delivered solutions. In other words, tech leads are software engineers who enable the team to work with quality.
It’s part of the job to plan, design, lead, and execute technical solutions and improvements. Tech leads are deeply technical and thought leaders among their peers; otherwise, they won’t be able to promote collaboration or sound solutions.
The role has a considerable intersection with architects, who, in many companies, are responsible for the whole system rather than single applications. The term architect is also used for the person who focuses on the entire life cycle of the system, including evolution, configuration, and risk management.
Team Lead responsibilities also intersect with those of the tech lead. Many companies adopt this role, and software engineers who want to migrate to management are a great fit for it. In short, team leads are people managers. They mentor (or coach) other software developers in specific technologies, languages, and frameworks. They also help team members develop their soft skills, such as leadership or conflict management.
As stated, depending on the company, Tech Leads may also be responsible for guiding developers in improving their hard and soft skills. Because of that, I chose only technical responsibilities for this list. Let’s get to it!
Quality is subjective: what it means varies with the context. So, the first job of a tech lead is to understand the current state of the code and then find the best way to improve it. In this sense, metrics are essential to move past subjectiveness and opinions. Here are some metrics that can help:
- Number of Style Guide mismatches: Before collecting this metric, it’s vital to adopt a style guide, and the team must be aware of it. After that, it’s time to configure the linters’ config files to match those conventions in each repository. Then the number of mismatches tends to be accurate, and you can use it as an indicator.
- Number of Issues found by linters: some linters do more than just style guide checks. They can look for security issues, TODO or FIXME mentions, and bad practices, such as methods or functions declared with too many arguments.
- % of Improvement: this is a nice metric that tech leads can extract periodically, weekly or biweekly for instance. To calculate it, you may use the formula: (number_of_current_week_issues - number_of_last_week_issues) / number_of_last_week_issues * -1.
- Test Code Coverage: Test Coverage is another crucial indicator of quality. It’s essential not only to keep quality from dropping but also to measure whether a campaign to increase the coverage is working. As with the metric above, you can calculate the variation between any two periods.
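As a sketch, the % of Improvement formula above can be turned into a few lines of code. The function name and the sample numbers below are my own, just for illustration:

```python
def percent_improvement(last_week_issues: int, current_week_issues: int) -> float:
    """Fraction of improvement between two periods.

    Positive values mean the number of issues dropped (an improvement);
    negative values mean it grew. Mirrors the formula:
    (current - last) / last * -1
    """
    if last_week_issues == 0:
        raise ValueError("no issues in the baseline period")
    return (current_week_issues - last_week_issues) / last_week_issues * -1

# Hypothetical numbers: 120 linter issues last week, 90 this week.
print(percent_improvement(120, 90))  # 0.25, i.e. a 25% improvement
```

The same function works for Test Code Coverage or any other counter you track period over period.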
Where are the hottest spots for refactoring? Is the current solution enough? Is this class following SOLID principles?
These are some of the questions tech leads should continuously ask the team and themselves. To answer them, it’s crucial to have metrics; they guide the team toward confident decisions.
- Code Churn: Code that is rewritten or deleted shortly after being merged may indicate hotspots with design issues. Knowing the level of churn is crucial to making data-driven decisions on this concern.
- Number of Code Smells: Like the Number of Issues or Style Guide mismatches, it’s practically impossible to bring this number to zero. Business is dynamic and forces teams to postpone refactors and rewrites. However, tracking this number as an indicator is critical to keeping it under control.
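A toy model of code churn can make the idea concrete. The sketch below assumes you have extracted (file, commit date) pairs, for example from `git log --name-only`; the function name and window size are my own choices:

```python
from datetime import date

def churned_files(changes: list[tuple[str, date]], window_days: int = 14) -> set[str]:
    """Files changed again within `window_days` of a previous change.

    `changes` is a list of (path, commit_date) pairs. Files that show up
    twice inside the window are flagged as churn hotspots.
    """
    last_seen: dict[str, date] = {}
    churned: set[str] = set()
    for path, day in sorted(changes, key=lambda change: change[1]):
        if path in last_seen and (day - last_seen[path]).days <= window_days:
            churned.add(path)
        last_seen[path] = day
    return churned

changes = [
    ("billing.py", date(2020, 5, 1)),
    ("billing.py", date(2020, 5, 6)),  # rewritten 5 days later -> churn
    ("readme.md", date(2020, 5, 1)),
]
print(churned_files(changes))  # {'billing.py'}
```

Real churn tooling works on lines rather than whole files, but the window-based idea is the same.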
As I said, zeroing technical debt is very hard. In modern software development, shipping features faster is a business strategy. When that happens, teams must trade quality for time, and the amount of technical debt rises. Here are some metrics that can help tech leads keep it healthy:
- Number of technical debts: a simple count of the technical debt items currently in the backlog can be convenient. It gives an idea of how much effort the team needs to fix them. Another metric, though, is crucial. I present it below.
- New:Paid Ratio: better than the total amount is periodically measuring the pace of its evolution. For a specific period, sum all the technical debts inserted into the code base, then compare it with the number of paid debts. If the number of paid debts is higher than the number of inserted ones, then you’re at a good pace.
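The New:Paid Ratio boils down to one division. A minimal sketch, with a made-up sprint as input (the function name is my own):

```python
def new_paid_ratio(new_debts: int, paid_debts: int) -> float:
    """New:Paid ratio for a period.

    Below 1.0 means the team paid more debt than it created,
    which is the good pace described above.
    """
    if paid_debts == 0:
        # Nothing was paid: any new debt means the pile only grows.
        return float("inf") if new_debts else 0.0
    return new_debts / paid_debts

# Hypothetical sprint: 4 debt items added, 6 paid off.
print(new_paid_ratio(4, 6) < 1.0)  # True -> good pace
```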
Code Review is the core practice of modern software development. It promotes collaboration, spreads knowledge, reduces bugs, and glues the team together. A robust code review process is crucial for any organization.
One of the jobs of a Tech Lead is to review lots of code, so it’s unlikely they are unaware of what’s going on. However, making decisions based on guesswork doesn’t work. Here are some numbers that can support Tech Leads’ actions:
- Time to Review: how much time does it take from opening a pull request to merging it? A first answer would be the mean over the latest pull requests, but I recommend using the median here: averages hide too much information. Another tip: measure it in days. That reduces timezone-related problems and smooths forecasting projections.
- Time to First Comment: This metric tells how much time it takes for the team to comment on a pull request. Unlike Time to Review, I recommend measuring this metric in hours. If the value is too high, Tech Leads can investigate what’s going on.
- Pull Requests Size: The weight of a pull request can be expressed in two ways: sum_of_lines_added + sum_of_lines_removed or number_of_changed_files. Both metrics are useful for finding whether pull requests are massive or not. Extensive pull requests are evil. Developers usually don’t thoroughly review them, which may end up pushing low-quality code forward.
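The two review metrics above can be sketched in a few lines. The dictionary field names below are illustrative, not from any specific code review API:

```python
from datetime import datetime
from statistics import median

def time_to_review_days(pull_requests: list[dict]) -> float:
    """Median time, in days, from opening a pull request to merging it.

    Uses the median rather than the mean, since averages hide outliers.
    """
    durations = [
        (pr["merged_at"] - pr["opened_at"]).total_seconds() / 86400
        for pr in pull_requests
    ]
    return median(durations)

def pull_request_size(lines_added: int, lines_removed: int) -> int:
    """One of the two size measures mentioned above."""
    return lines_added + lines_removed

prs = [
    {"opened_at": datetime(2020, 6, 1), "merged_at": datetime(2020, 6, 2)},  # 1 day
    {"opened_at": datetime(2020, 6, 1), "merged_at": datetime(2020, 6, 4)},  # 3 days
    {"opened_at": datetime(2020, 6, 1), "merged_at": datetime(2020, 6, 8)},  # 7 days
]
print(time_to_review_days(prs))  # 3.0 -- the 7-day outlier barely moves the median
```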
Collaboration and engagement are at the core of Code Review. There’s a lot of subjectiveness involved, and turning it into quantitative measurements is vital to promoting these behaviors adequately. Here are some examples of metrics that tech leads can use:
- Number of Collaborators by Pull Request: this metric is simple to get: find the mean number of collaborators per pull request. By collaborator, I mean every person who commented on the pull request. People who approved or declined but didn’t join the discussion are left out; they are counted in the Number of Approves and Declines.
- Number of Comments by Pull Request: My tip is to find the mean and analyze it together with the Number of Collaborators by Pull Request. They are excellent indicators of how your team collaborates.
- Number of Approves and Declines: most code review SaaS products have an approve/decline feature. It’s common for members to approve a pull request and leave no comment. Sometimes that’s ok, but it can’t be the majority’s behavior. So, the sum of approves and declines should be close to the Number of Collaborators by Pull Request. Otherwise, the review quality may be in question.
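The three collaboration metrics can be computed together from per-pull-request data. The field names and sample data below are illustrative:

```python
from statistics import mean

def collaboration_metrics(pull_requests: list[dict]) -> dict:
    """Mean collaborators and comments per pull request, plus the
    total of approves/declines across all pull requests."""
    return {
        "mean_collaborators": mean(len(pr["commenters"]) for pr in pull_requests),
        "mean_comments": mean(pr["comment_count"] for pr in pull_requests),
        "approves_and_declines": sum(
            pr["approvals"] + pr["declines"] for pr in pull_requests
        ),
    }

prs = [
    {"commenters": {"ana", "bob"}, "comment_count": 5, "approvals": 2, "declines": 0},
    {"commenters": {"ana"}, "comment_count": 1, "approvals": 1, "declines": 1},
]
print(collaboration_metrics(prs))
# {'mean_collaborators': 1.5, 'mean_comments': 3, 'approves_and_declines': 4}
```

Comparing `mean_collaborators` with `approves_and_declines` per pull request is how you spot silent approvals.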
Keeping the quality of deliveries is one of the essential responsibilities of a tech lead.
- Deployment frequency: Features add new capabilities to the software, which increases the value perceived by end-users. Deployment is the final step of the development flow, and that’s why it’s so crucial. The more often teams deploy, the more value they add. You can track the number of deploys per day or week, calculating a mean or just summing them up. Most importantly, compare the metric across periods and keep a healthy pace.
- Deployment size: You can measure the size of a deploy by looking at the number of commits, lines or files changed, or the number of work items in it, for instance. It’s crucial to keep this number low, so the frequency tends to increase. Lowering the size also reduces the risk of failure during deploys or rollbacks. Small deploys are also easier to test.
- Bug Detection Rate: How many bugs are found in production? Anyone can find an escaped bug: end-users, Quality Assurance personnel, or software engineers. Assuming there’s a process to register the bug in a proper tool, it’s easy to find the rate. To calculate the Bug Detection Rate, sum the number of bugs created in a given period and divide it by the number of weeks or months.
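Deployment frequency and the Bug Detection Rate described above are both simple divisions. A sketch with made-up numbers (the function names are my own):

```python
def bug_detection_rate(bugs_created: int, period_weeks: int) -> float:
    """Bugs found in production per week: the count of registered
    bugs in a period divided by the period's length."""
    return bugs_created / period_weeks

def deployment_frequency(deploys_per_day: list[int]) -> float:
    """Mean number of deploys per day over the observed days."""
    return sum(deploys_per_day) / len(deploys_per_day)

# Hypothetical numbers: 12 bugs registered over 4 weeks,
# and deploy counts for one work week.
print(bug_detection_rate(12, 4))              # 3.0 bugs per week
print(deployment_frequency([2, 0, 3, 1, 4]))  # 2.0 deploys per day
```

As with the other metrics, the absolute values matter less than how they evolve between periods.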
The five tech lead responsibilities presented in this article focus on the technical aspect of the role. They don’t cover all the responsibilities, not even all the technical ones. Being a tech lead is hard and requires many skills that intersect with many different areas.
Bringing metrics to the table helps professionals visualize the big picture and gain more control over the process. In other words, I think measuring makes the job far more manageable. That’s why I presented some ideas of what to measure.
Depending on the company, some of the responsibilities or metrics presented here may be unnecessary. However, I think they are useful for the majority of companies.
The post 5 responsibilities of a Tech Lead and 17 metrics to track their performance appeared first on SourceLevel.