An App's Single Source of Truth: Making the case for all resources in one repo

When talking to engineers, you'll often hear about working with a "Single Source of Truth" (SSoT): a single point of entry for accessing a particular data model. It saves you from searching across your organization's wikis for hostnames or connection strings, or tracking down code repos for any possible reference to a database or data flow diagram. Sure, you could call that detective work, but save the detective work for fixing bugs.


And it doesn't have to be a single point for EVERYTHING; it can be split up based on the data model. When I was a developer in an IT department, this meant Active Directory for User Details, an Asset Management database for Workstation and Network device details along with workstation assignments, and so on. Each of these databases is a "Domain", a data model representing something specific.

You'll also recognize this in your code if you follow SOLID principles, especially the "Single-responsibility Principle":

"There should never be more than one reason to change" or "every class should have only one responsibility"

In other words, a piece of logic should be responsible for only one thing/domain. If the same logic lives in more than one place, break it out into its own module or function. This is also the core concept behind the DRY principle.

This type of "Single Responsibility" thinking is at the heart of SSoT and resolves many issues, but the two biggest ones in my view are:

  • Duplication of effort: maintaining and changing the same thing in several places costs every kind of resource (person-time, hosting, documentation, money, services, etc.).
  • Human Error: it's inevitable. Reducing the number of places where human interaction occurs reduces the number of chances for data to become inaccurate. For data models, this can be the model's structure or data entry. For SOLID principles, this can be features or bug fixes that mistakenly get implemented in only one place, or inconsistently across all of them.

How do Apps fit into this?

Take a look at an example of an app getting deployed. Of course all apps are different and have different structures, so I'll keep this simple.

We have a feature that's to be released and we need to get it deployed. The steps we would follow include:

  1. Get the feature developed and prepare a PR into a Release Branch or Trunk. This includes unit tests and API/Structure documentation for the feature.
  2. The feature requires a new resource in your infrastructure, so we request a change to the Infrastructure as Code (IaC) implementation. This goes through its own Code Review process and is deployed to each environment, with coordination between the Dev and DevOps teams.
  3. The Code Review pushes the build through the Dev and QA stages until it's ready for UAT. We can now go into our Wiki (Confluence, Git* Pages, etc.) and update the usage documentation to cover the new feature.
  4. Code Review is accepted and the change is pushed to Production.

You could say that the sources of truth are:

  • Code Repo (for source code),
  • Wiki (for documentation),
  • CI/CD Pipeline (for unit testing, builds and deployment), and
  • IaC (for resource provisioning).

The issue with this understanding is that a change in one "source" depends on a change in another source. This brings back the issues identified above:

  • Duplication of effort, as steps need to be taken in multiple places by the same team or by multiple teams.
  • Human error is introduced, as any one of these "sources" can be forgotten or deemed not relevant to a feature.

These errors creep in because we've missed the point of the "Single Source of Truth": we've focused on the "Source" rather than the "Source for the Domain". In this case, the domain is the app.

Centralize App/Domain Logic


Now let's see what happens when we centralize our App/Domain details into a single source:

  1. Get the feature developed and prepare a PR into the Release Branch or Trunk. This includes unit tests, IaC changes, API/Structure documentation for the feature, and Wiki documentation written as AsciiDoctor/Markdown/RST.
  2. The Pipeline picks up the PR and runs checks against the feature code (unit tests), the IaC, and the documentation syntax (see the pipeline sketch below).
  3. Code Review pushes the feature through Dev and QA. Each step applies the IaC to the environment and deploys the feature build to the updated environment. Members of Dev, DevOps, SecOps, etc. can be part of the Code Review process.
  4. At the UAT/Production stage, the documentation is pushed to the Wiki using the tool of choice (most documentation formats have exporters for the major wiki providers; for Confluence there's Mark for Markdown, the official AsciiDoctor exporter, and an RST exporter), or rendered into a DocBook/eBook/PDF for publishing.
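
To make steps 2 and 3 a bit more concrete, here's a minimal sketch of what `pipeline/main.yml` could look like, assuming GitLab CI and a Python/Terraform/AsciiDoctor toolchain. The stage layout, images, file paths, and the `scripts/deploy.sh` wrapper are illustrative assumptions, not a prescription:

```yaml
# Minimal sketch of pipeline/main.yml, assuming GitLab CI.
# Images, paths, and commands are illustrative assumptions.
stages:
  - validate
  - test
  - deploy

lint_docs:
  stage: validate
  image: asciidoctor/docker-asciidoctor
  script:
    - asciidoctor -D build/docs doc/*.ad   # fails the job on invalid syntax

validate_iac:
  stage: validate
  image:
    name: hashicorp/terraform:latest
    entrypoint: [""]
  script:
    - cd infra && terraform init -backend=false && terraform validate

unit_tests:
  stage: test
  image: python:3.12
  script:
    - pip install -r requirements.txt       # assumed dependency manifest
    - pytest project/

deploy_dev:
  stage: deploy
  script:
    - cd infra && terraform apply -auto-approve   # provision/update resources
    - ./scripts/deploy.sh dev                     # hypothetical deploy wrapper
  environment: dev
```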

One change to note between these two processes is the reduction in human interaction and "go to system" steps. The Pipeline handles all of the update tasks. If we look into the repository, all of the files that need updating live in the repo:

```
project
 |- doc/
     |- index.ad
     |- usage.ad
 |- infra/
     |- app.yml (ansible playbook)
     |- main.hcl (terraform project)
 |- pipeline/
     |- main.yml
     |- env_template.yml
 |- project/
     |- lib/
         |- ...
     |- project.(py|ts|...)
 |- CONTRIBUTORS.md
 |- CHANGELOG.md
 |- README.ad
 |- USAGE.ad
 ...
```

The other big change is how we've implemented a number of Stage Gates within our pipeline to apply non-human checks to validate the quality of our code. This means fewer things to consider during the Code Review and faster deployment to Production.
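
As an example of a human Stage Gate (again assuming GitLab CI; other providers have equivalent approval mechanisms), the final promotion can be a manual job so that everything before it runs automatically and the pipeline simply waits for a person to pull the trigger:

```yaml
# Sketch of a human stage gate: all earlier stages run automatically,
# and promotion to Production waits for an explicit manual approval.
deploy_production:
  stage: deploy
  script:
    - cd infra && terraform apply -auto-approve
    - ./scripts/deploy.sh production   # hypothetical deploy wrapper
  environment: production
  when: manual          # pipeline pauses here until someone triggers the job
  allow_failure: false  # keep the pipeline "blocked" instead of "passed" while it waits
```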

Considerations

Of course, there's always going to be a downside to anything we do, so it's important to understand what those downsides are. This approach is no different, so let's get that out of the way.

  1. More up-front investment: For this to work well, a lot of work needs to be done to prepare our pipelines to take on this responsibility. I call this an investment, though, because the time spent up-front means less time spent later repeating tasks. This means:
    • creating service accounts/tokens for writing content to our wikis or integrating with our IaC provisioner
    • writing pipelines that implement the checks and unit tests
    • implementing pipeline tasks that collect information to simplify stage gates
    • leveraging secrets for handling connection strings and access permissions used by our pipelines (see the sketch after this list)
    • designing your test suite to run not just your application tests, but every other kind of test as well.
  2. Less flexibility: the more tightly integrated this process becomes, the less flexibility you'll have in terms of prototyping, deployments, and documentation.
  3. no "fast-fixes": As we've now implemented this huge pipeline process, creating a single-line change to fix a typo is not as simple as "build and deploy". It'll have to run through all of the same checks as if you've implemented a new feature.

In response to these considerations:

  1. Thorough Code Reviews: the pipeline can implement a number of tools and established practices that automate parts of the Code Review. This means Seniors and Leads can focus on the things they should be focusing on. The up-front work can also be done in stages and built up slowly as the deployment process is captured in the pipeline.
  2. Faster Time to Production: As we've automated many of the tasks of a Code Review, we can run them at any time and only request a human-based Stage Gate when necessary. This could potentially speed up a feature rollout from weeks to days or even hours.
  3. "fast-fixes" are "possible": I'm adding that in quotes because fast-fixes can still be done. The pipeline is still going to be thorough, but we can roll-out a "fast-fix" based on how fast the pipeline can run through it. Another note is if the pipeline is doing it's job correctly, we should get to a point where "fast-fixes" are a thing of the past.

Let me know what you think. We all have our differences, and I'm sure there's a consideration I didn't think of here. Having tried various processes and worked under wildly different expectations, I've found this is a process that (given time) ends up very much appreciated and welcomed.
