TL;DR notes from articles I read today.
- The Kubernetes standard is the rolling deployment: pods running the previous version are replaced with pods running the new one, with no cluster downtime. Kubernetes probes new pods for readiness before scaling down old ones, so you can abort a deployment without bringing down the cluster.
- In a recreate deployment, all old pods are killed at once and replaced with new ones.
- A blue/green (or red/black) deployment runs the old and new versions side by side: users have access only to the blue (current) version while your QA team runs test automation against the green (new) version. Once green passes, the service switches traffic over and scales down the blue version.
- Canary deployments are similar to blue/green but use a controlled progressive approach, typically when you want to test new functionality on the backend or with a limited subset of users before a full rollout.
- Dark deployments or A/B testing are similar to canary deployment but used for front-end rather than backend features.
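The progressive rollout behind a canary deployment can be sketched as a traffic-splitting rule. This is a minimal illustration (the function names and the 5% fraction are assumptions, not from the articles): a stable hash of the user ID sends a small, consistent slice of users to the new version, so each user keeps seeing the same version throughout the rollout.

```python
import hashlib

CANARY_FRACTION = 0.05  # roughly 5% of users hit the new version

def pick_version(user_id: str) -> str:
    """Return 'canary' for a stable ~5% slice of users, else 'stable'."""
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = digest[0] / 255.0  # map the first hash byte to [0, 1]
    return "canary" if bucket < CANARY_FRACTION else "stable"

if __name__ == "__main__":
    routed = [pick_version(f"user-{i}") for i in range(10_000)]
    print(routed.count("canary") / len(routed))  # roughly 0.05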
Full post here (5 min read)
- Proper instrumentation of microservices ensures faster pinpointing and troubleshooting of problems.
- These include metrics for availability, metrics for capacity planning and detecting resource saturation, and metrics that expose the internal state of each instance of a microservice.
- You need horizontal monitoring to monitor communication between microservices and their availability to each other.
- Load balancing depends on several instances of each microservice communicating with several instances of others. It is therefore useful to have each microservice monitor the quality of its own inbound and outbound calls to other services, and to have smart gateways in the service mesh report on traffic entering and leaving it.
- Logs are a good place to keep per-job metrics for ETL jobs; they are cheaper than a metrics system in which every series is labeled by job ID.
- While metrics count all events crossing a particular checkpoint over time, traces follow each event as it travels through the entire microservices chain. Traces are especially helpful for monitoring flows through the product.
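The idea of each service monitoring the quality of its own calls can be sketched in a few lines. This is an illustrative toy, not a real instrumentation library (the class and function names are assumptions): it wraps outbound calls to record per-peer call counts, error counts, and latencies, which horizontal monitoring could then aggregate across instances.

```python
import time
from collections import defaultdict

class CallMetrics:
    """Per-peer counters and latencies for one service instance."""
    def __init__(self):
        self.calls = defaultdict(int)
        self.errors = defaultdict(int)
        self.latencies = defaultdict(list)

    def record(self, peer: str, seconds: float, ok: bool) -> None:
        self.calls[peer] += 1
        if not ok:
            self.errors[peer] += 1
        self.latencies[peer].append(seconds)

    def error_rate(self, peer: str) -> float:
        return self.errors[peer] / self.calls[peer] if self.calls[peer] else 0.0

metrics = CallMetrics()

def call_peer(peer: str, fn, *args):
    """Wrap an outbound call, timing it and recording success or failure."""
    start = time.perf_counter()
    try:
        result = fn(*args)
        metrics.record(peer, time.perf_counter() - start, ok=True)
        return result
    except Exception:
        metrics.record(peer, time.perf_counter() - start, ok=False)
        raise
```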
Full post here (8 min read)
- If you have a functional monolith, you cannot afford to throw out all its current value and rebuild it from scratch.
- Instead, you should do a cost/benefit analysis and then rebuild parts of it as microservices accordingly.
- To refactor a monolith into a microservices architecture, break it into single responsibilities (services) incrementally, to limit risk: replace local function calls with remote calls (such as REST operations) that integrate each new microservice, then remove the legacy code, slimming down the monolith.
- Most of the time, splitting a monolith also means splitting the database it consumes. Ideally, avoid sharing a database among multiple applications, even if that means accepting some data duplication.
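The "replace a local function call with a remote call" step can be sketched as follows. This is a hypothetical example (the pricing service, its URL, and the `/prices/{id}` endpoint are invented for illustration): the legacy in-process lookup and its REST replacement share the same signature, so call sites can be switched over one at a time.

```python
import json
import urllib.request

def get_price_local(item_id: str) -> float:
    """Legacy in-process call, removed once the extracted service is stable."""
    catalog = {"book": 12.5, "pen": 1.2}
    return catalog[item_id]

def get_price_remote(item_id: str, fetch=None) -> float:
    """Same responsibility, now a REST call to the extracted microservice.

    `fetch` is injectable so the transport can be faked in tests; by default
    it performs a real HTTP GET against the (hypothetical) pricing service.
    """
    fetch = fetch or (lambda url: urllib.request.urlopen(url).read())
    data = json.loads(fetch(f"http://pricing-svc/prices/{item_id}"))
    return float(data["price"])
```

Keeping the two functions signature-compatible makes the incremental migration low-risk: each call site flips from local to remote independently, and can be flipped back if the service misbehaves.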
Full post here (10 min read)