Pete King

API's From Dev to Production - Part 7 - Code Coverage

Series Introduction

Welcome to Part 7 of this blog series, which goes from the most basic example of a .NET 5 Web API in C# through the journey from development to production with a shift-left mindset. We will use Azure, Docker, GitHub, GitHub Actions for CI/CD, and Infrastructure as Code using Pulumi.

In this post we will be looking at:

  • Code coverage

TL;DR

We learned that code coverage is an important metric, but one that shouldn't be looked at in isolation.

Achieving a high percentage of code coverage is a great goal to shoot for, but it should be paired with having a robust, quality, high-value test suite.

We utilise Coverlet to output code coverage files (lcov format), set up and configure Codecov, update our Dockerfile and GitHub Actions workflow for code coverage, and finally take a brief look at the Codecov settings file to set a target.

Codecov.io was a great solution and easy to integrate; I can highly recommend the product. I set it up in less than 10 minutes!


GitHub Repository

GitHub logo peteking / Samples.WeatherForecast-Part-7

This repository is part of the blog post series, API's from Dev to Production - Part 7, on dev.to. Based on the standard .NET Weather API sample.


Introduction

We would all like to write perfect, bug-free code, but we know that in reality this will not happen; we are human after all. So we write tests in which we arrange, act, and assert a scenario, exercising the code that will effectively be running in production, servicing the needs of our customers. A good metric to track is code coverage: it can help assess the quality of our test suite by measuring how much of our code (lines, branches, and more) is actually executed by our tests.

Code coverage is a measurement used to express which lines of code were executed by tests. Three primary terms are typically used to describe each line:

  • Hit indicates that the source code was executed by a test.
  • Partial indicates that the source code was not fully executed by a test; there are remaining branches that were not executed.
  • Miss indicates that the source code was not executed by tests.
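To make these terms concrete, here is a small, hypothetical C# example (not from the sample project) showing how a single test that calls `Classify(20)` would be reported:

```csharp
public static class TemperatureClassifier
{
    public static string Classify(int celsius)
    {
        if (celsius > 30)        // Partial: only the false branch is taken by the test
        {
            return "Hot";        // Miss: never executed by the test
        }

        return "Mild";           // Hit: executed by the test
    }
}
```

A second test calling `Classify(35)` would turn the partial and miss into hits.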

Many products offer code coverage solutions; here we are going to look into one of them. It's free for open source, and of course it has paid plans too.

What percentage of coverage should I aim for?

I'm afraid I have some bad news... There is no magic formula or silver bullet. A very high percentage of coverage could still be problematic if certain critical parts of the application are not covered, or if the tests are not robust enough to capture failure when they execute; either locally or as part of CI.

Having said that, the general view is that around 80% is a good goal. However, be aware that it is a target, not something to achieve immediately or to cut corners for. Some product teams may feel under pressure to meet targets and create broad, shallow tests to do so. Don't fall into this trap: rather than laser-focusing on hitting every line of code, focus on the business requirements of your application.
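If you do adopt a percentage target, Coverlet can enforce it at test time and fail the run when coverage drops below it. A sketch using coverlet.msbuild's threshold properties (the 80% value is illustrative; adjust it to your own target):

```
dotnet test \
    -p:CollectCoverage=true \
    -p:Threshold=80 \
    -p:ThresholdType=line \
    -p:ThresholdStat=total
```

`ThresholdType` can also be `branch` or `method`, and `ThresholdStat` controls whether the threshold applies to the total, average, or minimum per-assembly coverage.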

Code coverage reports

As you'll see in this post, code coverage reports provide a way to identify critical misses in your testing. Your API/application will have many tests and can at times be hard to navigate; without coverage data, all you know is which tests are failing. Code coverage reports let you dig into the details and find actions to take: find out what is not tested.

Code coverage does not equal good tests

Remember, just because code is covered with tests, it does not mean everything is all good. If your tests are not the right tests, i.e. the test does not test the right elements in the right way, it means your tests are of low quality and value. However, your code coverage is high... so what? You see, code coverage is a metric that must be viewed in context with other metrics and other practices.

Achieving a high percentage of code coverage is a great goal to shoot for, but it should be paired with having a robust, quality, high-value test suite.


Requirements

We will be picking-up where we left off in Part 6, which means you’ll need the end-result from GitHub Repo - Part 6 to start with.

If you have followed this series all the way through, great (and I would encourage you to do so), but it isn't necessary if the previous posts cover ground you already know.


Coverlet

Coverlet is the number one way (in my view) to add cross platform code coverage to .NET, with support for line, branch and method coverage. It works with .NET Framework on Windows and .NET Core on all supported platforms. In short, it's great!

Let's set it up in our project.

Step 1

To add Coverlet to our project, we need to add the coverlet.msbuild package to our unit test project.

Open your terminal, cd to the folder where your unit test project is located, and run the following command:

dotnet add package coverlet.msbuild

If you're using Visual Studio you can simply add a new reference using the UI.

This command will use NuGet and update your .csproj file like so:

    <PackageReference Include="coverlet.msbuild" Version="3.0.2">
      <IncludeAssets>runtime; build; native; contentfiles; analyzers; buildtransitive</IncludeAssets>
      <PrivateAssets>all</PrivateAssets>
    </PackageReference>

Step 2

Update our dotnet test command to collect code coverage metrics.

-p:CollectCoverage=true \
-p:CoverletOutput="TestResults/coverage.info" \
-p:CoverletOutputFormat=lcov

Dockerfile - [Unit test runner] section

This is what our modified Unit test runner section looks like now with the additional parameters for code coverage.

# Unit test runner
FROM build AS unit-test
WORKDIR /code/test/Samples.WeatherForecast.Api.UnitTest
ENTRYPOINT dotnet test \
    -c Release \
    --runtime linux-musl-x64 \
    --no-restore \
    --no-build \
    --logger "trx;LogFileName=test_results_unit_test.trx" \
    -p:CollectCoverage=true \
    -p:CoverletOutput="TestResults/coverage.info" \
    -p:CoverletOutputFormat=lcov
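If you are following the Docker-based flow from the earlier parts, the unit-test stage can be built and run locally along these lines. This is a sketch: the image tag is my choice, and the mount path assumes the WORKDIR shown in the Dockerfile above, so check both against your own unit-test.ps1:

```
# Build only up to the unit-test stage of the multi-stage Dockerfile
docker build --target unit-test -t weatherforecast-unittest .

# Run the tests; mount a local folder to collect the TRX and lcov output
docker run --rm \
    -v "$(pwd)/TestResults:/code/test/Samples.WeatherForecast.Api.UnitTest/TestResults" \
    weatherforecast-unittest
```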

We have chosen the lcov format simply because the tools we are going to use next require this format.

You can output other formats and even multiple formats at the same time.

As a quick example, if you wished to output lcov and opencover formats, you could modify the last two lines like so:

-p:CoverletOutput="TestResults/" \
-p:CoverletOutputFormat=\"lcov,opencover\"

Step 3

We now need to build locally. We have our .\unit-test.ps1 script; run that and we should see output from Coverlet on our code coverage. Let's double-check just to make sure.

Look in your /TestResults directory; you should have an additional file called coverage.info.
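As a quick sanity check, the lcov file is plain text: each per-file record ends with LF: (lines found) and LH: (lines hit). You can compute the overall line coverage yourself with a small shell one-liner (assuming the file is at TestResults/coverage.info):

```shell
awk -F: '/^LH:/ {hit += $2} /^LF:/ {found += $2} END {printf "Line coverage: %.0f%%\n", 100 * hit / found}' TestResults/coverage.info
```

This sums the hit and found counts across every source file in the report and prints the overall percentage.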



Codecov

If you are looking for a nice elegant solution, Codecov has got you covered; no pun intended... or was there... :P

Let's setup Codecov.

Navigate to Codecov, and sign up

I've used my GitHub account.

Once connected you'll see you have no repositories setup like the screenshot below:



Select Add repository

For this example, I'm going to select the Samples.WeatherForecast-Part-7 repository.



New step

We need to add a new step to upload our coverage results to Codecov. As you've seen previously, this is the power of GitHub Actions: there is a pre-built action from Codecov to do it for us!

For more information, please see, GitHub Action for Codecov.

Immediately after the Unit test [publish] step, please add a new step with the following code:

- name: Code coverage [codecov]
  uses: codecov/codecov-action@v1.2.1
  with:
    files: ${{ github.workspace }}/path/to/artifacts/testresults/coverage.info
    verbose: true
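For a public repository this is enough. For a private repository you would also pass the upload token that Codecov shows on the repository's settings page, stored as a GitHub secret. A hedged sketch (the secret name CODECOV_TOKEN is my choice, not a requirement):

```
- name: Code coverage [codecov]
  uses: codecov/codecov-action@v1.2.1
  with:
    token: ${{ secrets.CODECOV_TOKEN }}
    files: ${{ github.workspace }}/path/to/artifacts/testresults/coverage.info
    verbose: true
```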

Commit time for Codecov

If you've carried out this exercise locally like me, and ensured coverage.info is generated by running .\unit-test.ps1, it's time to commit your changes. Your build should kick off, and Codecov should know all about it.

Once your build job has executed the Codecov action, your results should be available to see in the Codecov platform.



If we dive into the breakdown of the code, you should see something similar to the below:


Looking at our code coverage, it isn't that fantastic; overall we have achieved 36%, but at least we can take some action. Our sample unit test is incredibly basic, and given we have no tests covering Program.cs and Startup.cs, our overall coverage percentage suffers.

Code coverage badge

Everyone likes a badge on their readme.md, so let's add our Codecov badge.

Add the following code to your readme.md file; of course, yours will be slightly different from mine.

[![codecov](https://codecov.io/gh/peteking/Samples.WeatherForecast-Part-7/branch/main/graph/badge.svg?token=KZW5MORPPY)](https://codecov.io/gh/peteking/Samples.WeatherForecast-Part-7)

Where can you get the code you need? Well, that's easy...

Navigate to your project in Codecov

Click Settings (top-right)

In the sidebar, click Badges



Codecov Repository Configuration

First of all, install the Codecov Bot in GitHub using the following link:

GitHub App - Codecov Bot



You can configure Codecov in various different ways; for more information, please see About the Codecov yaml.

Here we have set a target coverage of 30%. This is quite low, as we know, but it's just an example to show how it works.

Commit your code and wait for your results to show up; it should all pass fine.

coverage:
  status:
    project:
      default:
        target: 30%    # the required coverage value
        threshold: 1%  # the leniency in hitting the target
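The same yaml can also gate the coverage of the changed lines in a pull request via the patch status. One possible extension, with an illustrative value of my choosing:

```
coverage:
  status:
    project:
      default:
        target: 30%    # the required coverage value
        threshold: 1%  # the leniency in hitting the target
    patch:
      default:
        target: 50%    # minimum coverage required on changed lines
```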

We will take advantage of the above settings in the next blog post :)


What have we learned?

We have learned how to add code coverage, in particular Codecov.io, to our API. In order to achieve these results, we have seen how to add Coverlet and output our coverage files (specifically lcov). We've adapted our Dockerfile and GitHub Actions workflow, configured a code coverage target (albeit a low one), and even added a badge to our GitHub repository readme file.

📄 UPDATE - Codecov.io
We end up removing Codecov.io from our API in Part 9; however, it's still worthwhile going through this process. Even so, Codecov.io could be OK for you, don't let me stop you! 👍


Next up

Part 8 in this series will be about:

  • More code coverage - We will set a realistic code coverage target.
  • GitHub status checks - How to protect your code.
