
Jim Borden

The Couchbase Lite .NET Pipeline

I'm fairly proud of the pipeline I've managed to set up for myself as the developer of Couchbase Lite .NET. The more automation you have, the easier your life is, and I'm always looking for ways to expand this, so if anyone has suggestions please let me know. I'm considering Gerrit as well, but the normal flow seems awfully verbose for a single developer; it would be nice to have commit validation, though! Hopefully someone can take something away from the setup I've made as well!

Here is a diagram of the start to end flow at a high level:

Pipeline Diagram

So, let's start with what everyone knows: I have just made a commit to the repo.

GitHub commit

Our official builds of Couchbase Lite .NET are based on a manifest containing the repos and branches needed and where to check them out. This manifest format comes from the Google tool repo. We have a script that detects changes in key repos inside the manifest and, if there are any, triggers a new source scrape. It runs as a Jenkins job (as do all the other downstream jobs). The source scrape pulls down all of the repositories needed at the correct commit and compresses them into a GZipped tarball for archiving (similar to the "download zip" feature of GitHub, except it works across multiple repos). This source is then uploaded to a NAS (network-attached storage) machine and made available on an internal HTTP server.

Server listing
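
To give a rough idea of the change-detection part, here is a minimal sketch of what such a script could look like, assuming a repo-style XML manifest. The project names, file paths, and the choice of Python are all placeholders for illustration, not the actual internal script.

```python
# Hypothetical sketch: compare the pinned revisions of key projects between
# two repo-style manifest snapshots and decide whether to kick off a scrape.
import xml.etree.ElementTree as ET

KEY_PROJECTS = {"couchbase-lite-net", "couchbase-lite-core"}  # assumed names


def revisions(manifest_path):
    """Map project name -> pinned revision for the projects we care about."""
    root = ET.parse(manifest_path).getroot()
    return {p.get("name"): p.get("revision")
            for p in root.iter("project") if p.get("name") in KEY_PROJECTS}


if __name__ == "__main__":
    # Paths are placeholders for wherever the last-built manifest is archived
    if revisions("last-built-manifest.xml") != revisions("current-manifest.xml"):
        print("Key repo changed -- triggering a new source scrape")
```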

And so this commit has resulted in a source scrape for build number 129 of version 2.1.0. The source scraping job then triggers a downstream job to build. The build job pulls the newly created source and runs the build script on it to produce a NuGet package. This is done once for the Enterprise Edition and once for the Community Edition. These packages are then uploaded to our internal feed, hosted by a ProGet server installation.

ProGet package
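
For reference, that pack-and-publish step boils down to something like the following. This is only a sketch built on the standard dotnet CLI; the real build script, project path, feed URL, and versioning scheme are internal details, and the values below are placeholders.

```python
# Placeholder sketch of packing a build and pushing it to the internal feed.
import subprocess

INTERNAL_FEED = "http://nuget.internal.example.com/nuget/ci/"  # placeholder URL


def pack_and_push(project_path, version):
    # Produce the .nupkg (dotnet pack names it <PackageId>.<version>.nupkg)
    subprocess.run(["dotnet", "pack", project_path, "-c", "Release",
                    f"/p:Version={version}", "-o", "out"], check=True)
    # Upload it to the internal feed so the packaging itself gets exercised
    subprocess.run(["dotnet", "nuget", "push",
                    f"out/Couchbase.Lite.{version}.nupkg",
                    "--source", INTERNAL_FEED, "--api-key", "REDACTED"],
                   check=True)


# e.g. pack_and_push("src/Couchbase.Lite/Couchbase.Lite.csproj", "2.1.0-b0129")
```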

Should this build be the lucky one, this is exactly what will be shipped; there are no further changes to the assembly whatsoever. However, the process is far from over. The reason I upload the packages to the NuGet feed at this point is that I want to detect any packaging mistakes and ensure that the packages that go out are usable. So the packaging itself is, in essence, also being tested.

Once the upload is finished, the next downstream job is triggered: the unit testing job. The term "unit test" is used horrendously loosely here, as I don't have any standards for making actual units. It's more a collection of things I want to make sure keep working (not sure what to call it), plus regression tests for issues that were reported before. The tests are run on .NET Core Windows, .NET Core macOS, .NET Core Ubuntu, UWP, the iOS simulator, and the Android emulator. Each suite is run twice: the first run builds a debug build of the Community Edition from source and runs the tests against that. The debug builds have lots of extra asserts that detect failures more quickly. If those pass, a release build of the unit tests is made using the package that was just pushed to the feed. On .NET Core Windows only, an extra run is done to gather code coverage data.
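
Roughly, the two passes on one of the desktop platforms amount to something like this (a hypothetical sketch: the project path is a placeholder, the mobile platforms need their own drivers, and how the release pass is pointed at the freshly pushed package is an internal detail):

```python
# Hypothetical sketch of the two-pass unit test run on a desktop platform.
import subprocess

TEST_PROJECT = "src/Couchbase.Lite.Tests.NetCore"  # placeholder project path


def run_unit_tests():
    # Pass 1: Debug build of the Community Edition from source -- the extra
    # Debug-only asserts surface failures more quickly.
    subprocess.run(["dotnet", "test", TEST_PROJECT, "-c", "Debug"], check=True)
    # Pass 2: Release build of the same tests, this time restoring the
    # Couchbase.Lite package that was just pushed to the internal feed.
    subprocess.run(["dotnet", "test", TEST_PROJECT, "-c", "Release"], check=True)
```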

If this passes, the packages are all promoted to a separate feed. This feed is reserved for packages that have passed the first round of automated testing, so our QE team can be confident that packages pulled from it are sane. The next step is triggered after that: a build of the QE test server application for .NET Core, UWP, Xamarin Android, and Xamarin iOS. The test server application is an internal tool that accepts commands over HTTP so that it can be orchestrated alongside separate processes and programs. This is done in preparation for the next step.
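
To give a flavor of that interface, a command exchange with the test server looks roughly like this. The host, port, endpoint name, and payload shape below are made up for illustration; the real protocol is internal.

```python
# Purely illustrative client for the HTTP command interface of the test server.
import json
import urllib.request

TEST_SERVER = "http://testserver.local:8080"  # placeholder host and port


def send_command(name, **args):
    """POST a JSON payload to a named command endpoint and return the reply."""
    req = urllib.request.Request(f"{TEST_SERVER}/{name}",
                                 data=json.dumps(args).encode("utf-8"),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


# e.g. send_command("database_create", name="db")  # hypothetical command name
```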

The final step in the pipeline is a set of functional tests. This means we need to start up a program (or several) running Couchbase Lite, orchestrate instances of Sync Gateway and Couchbase Server, and hook them all up against each other to test replication scenarios. A typical run looks like this (carried out by a Python client running pytest on a machine separate from the ones below; a sketch of how such a run might be driven follows the list):

  • Install Couchbase Server (machine 1 / one time)
  • Install Sync Gateway (machine 2 / one time)
  • Ensure Sync Gateway and Couchbase Server are shut down
  • Start Couchbase Server
  • Clear Couchbase Server bucket data
  • (optional) Prepopulate Couchbase Server bucket data
  • Start Sync Gateway with a config pointing to the Couchbase Server bucket
  • Download and install the Test Server app (machine 3)
  • Start the test server app
  • Confirm everything is listening on the correct port
  • Begin issuing commands to both set up scenarios and confirm correct results
  • Stop the test server app
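
Here is a minimal pytest-style sketch of how a run like the above might be driven. Everything specific in it, including the hostnames, the shell scripts the fixture calls, the Sync Gateway config path, and the test server command names, is a hypothetical stand-in for the internal functional test framework.

```python
# Hypothetical pytest sketch of one functional test run; all names are stand-ins.
import json
import subprocess
import urllib.request

import pytest

SG_HOST = "syncgateway.local"                  # machine 2 (placeholder)
TEST_SERVER = "http://testserver.local:8080"   # machine 3 (placeholder)


@pytest.fixture(scope="session")
def cluster():
    # Reset Couchbase Server and start Sync Gateway pointed at its bucket
    # (the scripts here are placeholders for the real provisioning steps).
    subprocess.run(["./reset_couchbase_server.sh"], check=True)
    subprocess.run(["./start_sync_gateway.sh", SG_HOST, "configs/basic.json"], check=True)
    yield
    subprocess.run(["./stop_sync_gateway.sh", SG_HOST], check=True)


def command(name, **args):
    # Same idea as the HTTP command helper sketched earlier
    req = urllib.request.Request(f"{TEST_SERVER}/{name}",
                                 data=json.dumps(args).encode("utf-8"),
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))


def test_basic_replication(cluster):
    command("database_create", name="db")                          # hypothetical
    command("replicator_start", target=f"ws://{SG_HOST}:4984/db")  # hypothetical
    status = command("replicator_status")                          # hypothetical
    assert status["error"] is None
```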

This is very intensive, and even simple scenarios can take minutes per test (as opposed to milliseconds for unit testing). After this point a report is generated and saved covering the pass/fail results and what went wrong.

When it comes time to release a DB (developer build) to our prerelease feed, it's almost as simple as moving a package from our internal feed to our external one. There is one step in between, which unzips the package, changes the NuGet version, and then rezips it so that it shows up with the proper identifier on a NuGet feed (e.g. 2.1.0-db001). When GA time comes, in addition to other testing kicked off manually by the QE team, the process is the same except that instead of moving the package to the prerelease feed, it gets moved to nuget.org and its version gets changed to a non-prerelease version (e.g. 2.1.0).
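
Conceptually, that in-between step comes down to the fact that a .nupkg is just a zip archive whose .nuspec carries the package version. A rough sketch (file names are illustrative; the real release tooling is internal):

```python
# Sketch of re-stamping the version inside a .nupkg by rewriting its .nuspec.
import re
import zipfile


def restamp(nupkg_in, nupkg_out, new_version):
    with zipfile.ZipFile(nupkg_in) as src, \
         zipfile.ZipFile(nupkg_out, "w", zipfile.ZIP_DEFLATED) as dst:
        for item in src.namelist():
            data = src.read(item)
            if item.endswith(".nuspec"):
                data = re.sub(rb"<version>[^<]+</version>",
                              f"<version>{new_version}</version>".encode("utf-8"),
                              data)
            dst.writestr(item, data)


# e.g. restamp("Couchbase.Lite.2.1.0-b0129.nupkg",
#              "Couchbase.Lite.2.1.0-db001.nupkg", "2.1.0-db001")
```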

Automation is very nice: it saves a lot of time and helps you catch things more quickly. I hope I can get even more automated in the future!
