Thomas Hansen

Posted on • Originally published at ainiro.io

Stop using Git for Everything

When you use Git, you've got to maintain multiple environments. Even if you simplify things down to the bare minimum, you end up with your development machine acting as your development environment, plus a production environment where you deploy.

Having multiple environments implies you've got to create CI/CD pipelines to deploy. This results in more moving parts and adds latency to your ability to deliver working code. The more latency you've got, the more frustration you accumulate waiting for your systems to deploy. In addition, you create distance in time and space between writing code and QA testing that same code. The end result is that you move slowly and take on additional risk of deploying buggy code, because two environments are rarely perfectly in sync. When you've got two different environments you usually have to:

  1. Maintain two different environments
  2. Synchronise your configurations between your two environments
  3. Synchronise your databases between your two environments
  4. Create and maintain pipelines
  5. Babysit deployments
  6. QA test twice, once in dev and another time in prod
  7. Add unit tests because of the above differences
  8. Etc, etc, etc
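
To make point 2 above concrete, here is a minimal sketch, in Python, of the kind of throwaway tooling two environments force you to write and keep working. The file names `dev.json` and `prod.json` are hypothetical, purely for illustration.

```python
import json

def config_drift(dev_path, prod_path):
    """Return the configuration keys whose values differ between two environments."""
    with open(dev_path) as dev_file, open(prod_path) as prod_file:
        dev = json.load(dev_file)
        prod = json.load(prod_file)
    # Keys missing from either side, or present in both with different values.
    return {
        key
        for key in dev.keys() | prod.keys()
        if dev.get(key) != prod.get(key)
    }

if __name__ == "__main__":
    for key in sorted(config_drift("dev.json", "prod.json")):
        print(f"Out of sync: {key}")
```

Every script like this is one more moving part that only exists because there are two environments to keep in sync.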

In general, you reduce your velocity and increase your latency by a factor of 10 by using Git.

You don't always need Git

There's nothing magical about Git that says you always need it. I've got 30 clients I'm working for, and most of them have fairly simple code, maybe some 100 to 300 lines of code in total. In addition, I've got Magic Cloud, which fuses my development environment and my runtime environment into one. This allows me to work straight in production and edit the production code directly, with no latency between developing a new feature and production having access to it. This allows me to:

  1. Use the same database for development and production
  2. Use the same configuration for development and production
  3. Immediately test my code in the prod environment after saving my file
  4. Completely forget about pipelines and CI/CD
  5. Completely drop unit testing
  6. Etc, etc, etc
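
To illustrate what a fused environment means in practice, and not as a description of Magic's actual internals, here is a minimal Python sketch of a server that executes whatever is on disk at request time; the file name `endpoint.py` is hypothetical. Saving the file is the deployment, and there is no pipeline in between.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

SCRIPT = "endpoint.py"  # hypothetical script file, edited directly on the server

class LiveHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        scope = {}
        with open(SCRIPT) as script:
            exec(script.read(), scope)  # runs whatever is on disk right now
        body = scope.get("response", "no response defined").encode()
        self.send_response(200)
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), LiveHandler).serve_forever()
```

The point of the sketch is simply that an interpreted, file-based runtime collapses "deploy" into "save", which is what working directly in production feels like.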

In general, I can increase my velocity by a factor of 10 by dropping Git. To understand the value proposition, let me show you how I work, and as you're watching the video, try to answer the question: "What additional benefits would Git provide to my process?"

I need to emphasise that I do use Git, I just don't use it for everything. Magic has 8,600 commits, for instance, but my client code rarely uses Git. Git is a tool, and sometimes we're better off using a different tool. This is true for everything in life, and getting stuck in one tool just because everybody else is using it for everything is the very definition of being crazy and delusional.

Git is an amazing tool, and when it came out everybody went crazy about it because of its quality. But just because it's an amazing tool doesn't mean we should use it for everything, and there's nothing magical about code that says it always needs to be versioned. By adding Git to your project you end up with two environments, and when you've got two environments you're easily doubling your workload, sometimes quadrupling it.

KISS is the only axiom you should never compromise on, and when you follow KISS, you realise that sometimes Git is not only unnecessary but in fact counterproductive.

Top comments (3)

David Sugar

There are several different and interesting points here.

As for the CI cycle, yes, it means you end up trying to figure out and debug build issues on an entirely remote system with very limited access. It feels like the blind men trying to understand the elephant, and all you can do is create more commits to try to fix your CI build when it breaks.

For this reason, I prefer to do all my production / release work locally, directly on a dev workstation. It's far easier to resolve broken builds, especially if the number of people who actually do releases is small. It also means you have a better idea of how to set up local dev environments for everyone else correctly, and anyone with a correct / complete setup can then do production releases, too. We have all these local resources that are often far faster as well as far more accessible, whether for running lint, running pre-release tests, etc.

Where I find CI handy is for accepting code from arbitrary external people submitting public merge requests, as a kind of minimal pre-verification, since they probably don't know how I set up production or what my expectations are in advance. Strangely, for a long time this one obvious use case (integrating CI with merge requests) was actually rather poorly supported by CI tooling.

Verify every commit? Rather useless and a big waste of resources; if people have problems testing code locally before submitting, make sure it's enforced in their commit workflow. Run deploy on tags? The horrors of the elephant again, because production workflows may only run on release tags, and so are rarely exercised and usually break with product changes. Then you find yourself hiring DevOps engineers and other workers you didn't need, because the workflow creates labor needs you wouldn't otherwise have had, and all those CI resources also have to be maintained. Yet it is these useless workflows CI was originally optimized for supporting. Full employment for all skill levels guaranteed...

Unit tests have their own issues. For old-fashioned, 20th-century, well-defined linked APIs, they can be very useful to validate behavior and prevent API regressions. For many real-world things made today that interface over networks, or involve interacting components that have no purely isolated operation, they are often rather useless. Proper integration and release-level product testing is what you most often want instead.
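
As an illustration of the first case, the kind of isolated, well-defined API where a unit test genuinely earns its keep, here is a small sketch; the function and values are invented for the example rather than taken from any real project.

```python
import unittest

def parse_version(text):
    """Parse a 'major.minor.patch' string into a tuple of ints."""
    major, minor, patch = text.split(".")
    return int(major), int(minor), int(patch)

class ParseVersionTest(unittest.TestCase):
    def test_round_trip(self):
        self.assertEqual(parse_version("1.12.3"), (1, 12, 3))

    def test_rejects_garbage(self):
        with self.assertRaises(ValueError):
            parse_version("not-a-version")

if __name__ == "__main__":
    unittest.main()
```

The moment the code under test needs a network, a database, or another running component, this style of test stops paying off and the integration and release-level testing described above takes over.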

Git itself is great in part because storage is cheap, networking is often fast, and it is easy to move entire repositories locally. This makes disconnected operation efficient too, oddly enough. SVN, and many other older version control systems, often had a huge separation between what was on the remote repository host and what you had locally, and kept very narrow views in local storage. Storage was expensive and networks were slow back then...

But does Git work for all use cases? No. It has certain behaviors that make it hard to work with very large local directory trees, like, for example, the Alpine build repository (aports), where every package build has its own subdirectory and everything is maintained in a single Git repo. Updating the local clone just to make a single-file change for a package in a merge request can take minutes of waiting, even on a fast machine with an SSD.

SVN had some interesting properties that were never properly exploited, such as being able to form malleable subviews of a large repository (through relative refs) and the ability to check out even just a single subdirectory from a large repo. Perhaps better tooling might have kept it in use longer.

Thomas Hansen

An amazingly interesting comment; you should create your own article based on it. I need to emphasise that I use Git for a lot of stuff, I'm just not a believer in the idea that "because it's code, it belongs in Git".

For instance, in Magic I can create tasks that are persisted into a database. These tasks contain Hyperlambda code, and are therefore by the very definition of the term "code". However, putting this code into Git creates more trouble than it solves.

David Sugar

I decided to take your suggestion and run with it ;)