For the past few weeks, I have been using a crazy mix of the following, all within a git repository (infrastructure as code), as we migrate most of our infrastructure and workloads from one provider to another:
- kubernetes yaml
With lots and lots of json and yaml configuration files, mostly generated by one system to be piped into another (about 10k lines worth).
Terraform, which is commonly sold as a single solution for everything (it isn't), rapidly broke apart for us once we started doing deployments outside the usual AWS / GCP, not helped by its poor support for kubernetes (which we use heavily).
So we started patching up the missing orchestration with nodejs + bash scripts.
And now we have a giant soup of scripts updating other scripts' configuration files and applying them. Not exactly the most "elegant" solution.
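To make the "scripts updating other scripts' configuration" pattern concrete, here is a minimal sketch of what one link in that chain looks like: a template gets per-environment values substituted in, and the output is handed to the next tool. The `renderConfig` function and the template/value names are hypothetical, for illustration only.

```javascript
// One link in the config soup: render a kubernetes-style yaml template
// by substituting {{key}} placeholders with per-environment values.
function renderConfig(template, values) {
  return template.replace(/\{\{(\w+)\}\}/g, (_, key) => {
    if (!(key in values)) throw new Error(`missing value: ${key}`);
    return values[key];
  });
}

const template = [
  'apiVersion: apps/v1',
  'kind: Deployment',
  'metadata:',
  '  name: {{name}}',
  'spec:',
  '  replicas: {{replicas}}',
].join('\n');

const rendered = renderConfig(template, { name: 'ui-runner', replicas: '3' });
console.log(rendered);
// In the real soup, this output is written to disk and then fed to
// something like `kubectl apply -f`, often by yet another script.
```

Multiply this by dozens of templates and a handful of providers, and you get the 10k lines of generated configuration mentioned above.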
So, wondering out loud, for those who do really large scale deployments, especially with a small team:
Is it normal to always throw in the towel at the end and code up a custom configuration management script to handle all this chaos? If so, do you normally do it in bash? In a custom application (Java, say)? Or in some other CLI scripting language?
Alternatively, is it normal to just grow a really large sysadmin team, with each member managing a subset of the system?
It feels like I am reinventing the wheel here, yet I also feel like there should already be a solution out there for this.
Sidetrack: a large part of me just feels like redoing terraform in nodejs out of frustration, to support my use cases.
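For what it's worth, the core of what I'd be redoing is terraform's declare-then-diff loop: compare desired resources against current state and emit a plan of creates/updates/deletes. A purely hypothetical sketch of that core in nodejs (the `plan` function, resource ids, and specs are all made up, not a real library):

```javascript
// Hypothetical core of a "terraform in nodejs": diff desired resources
// against current state and produce a list of actions to apply.
function plan(desired, current) {
  const actions = [];
  for (const [id, spec] of Object.entries(desired)) {
    if (!(id in current)) {
      actions.push({ op: 'create', id, spec });
    } else if (JSON.stringify(current[id]) !== JSON.stringify(spec)) {
      actions.push({ op: 'update', id, spec });
    }
  }
  for (const id of Object.keys(current)) {
    if (!(id in desired)) actions.push({ op: 'delete', id });
  }
  return actions;
}

// Desired state declared as plain objects; current state fetched from
// each provider's API in a real implementation.
const desired = { 'droplet.web': { region: 'sgp1', size: 's-1vcpu-1gb' } };
const current = { 'droplet.old': { region: 'sgp1', size: 's-1vcpu-1gb' } };

const actions = plan(desired, current);
console.log(actions);
```

The hard part, of course, is not this diff loop but writing and maintaining the per-provider apply logic, which is exactly where terraform's coverage fell short for us.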
I do believe there are multiple cloud-specific offerings out there; the reason we do not use any of them is that we currently run on the following list of providers:
- digital ocean
- bare metal on-premise stuff
Covering the following regions:
- +3 other data centers
The complex web of providers comes in part from the need to support regions where another provider either does not exist, or does poorly performance wise.
All to run UI tests at https://uilicious.com !??!