Cristtopher Quintana Toledo
Use these 3 pillars to break the Monolith

To break the monolith the right way, I have three things in mind: DDD, hexagonal architecture and Go microservices, which we will use as pillars in the process.

So, to start with, we will design a Go scaffolding for these microservices that complies with best practices.
Best practices:

  • Domain Driven Design
  • Database per Microservice
  • CI/CD
  • Observability
  • Hexagonal Architecture
  • KISS, DRY, SoC
  • SOLID principles

Ensuring a Smooth Migration

Identifying the processes that can be decoupled into multiple, independently deployable and scalable microservices is a good first step in designing and planning a phased solution. But to do it as smoothly and correctly as possible, you must first identify the domains, subdomains, bounded contexts, domain models, entities, value objects and more.

A guide to this is coming soon :)
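To give a quick taste of two of those building blocks, here is a minimal, hypothetical sketch of an entity and a value object in Go. The catalog package, the Product entity and the Money value object are illustrative names only, not part of the scaffolding below.

package catalog

// Money is a value object: it has no identity of its own, it is treated as
// immutable, and two Money values are equal when all their fields are equal.
type Money struct {
    Amount   int64  // minor units (e.g. cents)
    Currency string
}

// Product is an entity: it is identified by its ID rather than by its
// attributes, and those attributes may change over its lifetime.
type Product struct {
    ID    int
    Sku   string
    Name  string
    Price Money
}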

After that, we will first migrate the processes as cron jobs that read data from the monolith's MySQL database to generate business intelligence data.

Let’s put our hands on the keyboard and get started.

Project Structure

|-- micro-scaffolding
    |-- build: Build strategy for the service, separated by environment.
      |-- docker
        |-- dev
          |-- Dockerfile: For local environment.
        |-- drone
          |-- Dockerfile: For deployed environments.
    |-- cmd: Entry points for the different layers of the service.
      |-- worker: The aggregate's abstraction.
        |-- job.go: Main file that initializes dependencies & calls the job service.
    |-- pkg: Go source organized by package.
      |-- job: Aggregate containing the use cases.
        |-- service.go: The aggregate's use cases.
        |-- repository.go
        |-- model.go
      |-- persistence: Data storage layer.
        |-- mongodb
        |-- mysql
    |-- .dockerignore
    |-- .drone.yml
    |-- .gitignore
    |-- go.mod
    |-- go.sum
    |-- Makefile
    |-- README.md

Architecture

In this migration process we need the flexibility to change frameworks, swap data sources and, above all, to switch the exposure of the data from REST to gRPC to improve communication once we have dozens of microservices. But let's calm down and start little by little so we don't go crazy.
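To see why the hexagonal ports make that swap cheap, here is a minimal, hypothetical driving adapter: an HTTP handler that calls the same job.Worker port we will define below for the cron worker. The rest package and its handler are illustrative only; moving to gRPC later would mean replacing just this adapter, never pkg/job.

package rest

import (
    "net/http"
    "strconv"

    "bitbucket.org/cristtopher/micro-scaffolding/pkg/job"
)

// Handler only translates HTTP into a call to the Worker port,
// so the business logic in pkg/job never knows how it was invoked.
type Handler struct {
    service job.Worker
}

func NewHandler(service job.Worker) *Handler {
    return &Handler{service: service}
}

func (h *Handler) RunJob(w http.ResponseWriter, r *http.Request) {
    offset, _ := strconv.Atoi(r.URL.Query().Get("offset"))
    limit, _ := strconv.Atoi(r.URL.Query().Get("limit"))

    file := job.File{Name: "data.csv", Path: "./data.csv", Folder: "bi-csv/"}
    if err := h.service.Run(&offset, &limit, &file); err != nil {
        http.Error(w, err.Error(), http.StatusInternalServerError)
        return
    }
    w.WriteHeader(http.StatusAccepted)
}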

Our diagram to build will be as follows:

[Architecture diagram]

Job, as the aggregate, represents the abstraction of the process that will be executed. In this way we isolate all business logic from the framework and the database.

Application Layer

Our worker service will have 2 use cases, run and generate, and 3 repositories, for MySQL, MongoDB and AWS, represented by their respective functions to insert, find and upload.

In summary, we will query data, export it to CSV and upload it to Amazon S3.

Run: Apply the microservice logic using the available use cases and repositories.
Generate: Export the retrieved data to CSV.

Insert: Create a document in the microservice's MongoDB.
Find: Query the data from MySQL.
Upload: Send the generated CSV file to the S3 bucket.

Let’s take a look at the service:

package job

import (
    "encoding/csv"
    "fmt"
    "os"
    "strconv"
    "time"
)

// Worker represents the interface that exposes the package's use cases.
// Call Run() from main file.
type Worker interface {
    Run(*int, *int, *File) error
    generate(string, *[]Data) error
}

// Dependency injection
type port struct {
    mongo MongoRepo
    sql   SqlRepo
    aws   AwsRepo
}

// NewService returns a new service instance.
func NewService(mongo MongoRepo, sql SqlRepo, aws AwsRepo) Worker {
    // Receives & returns the necessary dependencies to satisfy the use cases.
    return &port{mongo, sql, aws}
}

func (port *port) Run(offset, limit *int, file *File) error {
    start := time.Now()

    // Object to insert to mongo.
    log := Log{
        Date: time.Now(),
    }

    // Call find method from sql persistence.
    data, err := port.sql.Find(offset, limit)
    if err != nil {
        return err
    }

    if err := port.generate(file.Path, data); err != nil {
        return err
    }

    fmt.Println(time.Since(start).String())

    if err := port.mongo.Insert(log); err != nil {
        return err
    }

    if err := port.aws.Upload(file.Folder, file.Name); err != nil {
        return err
    }

    return nil
}

func (port *port) generate(path string, data *[]Data) error {
    f, err := os.Create(path)
    if err != nil {
        return err
    }
    defer f.Close()

    writer := csv.NewWriter(f)
    defer writer.Flush()

    header := []string{"Id", "Code", "Name"}
    body := [][]string{}
    body = append(body, header)

    for _, d := range *data {
        row := []string{strconv.Itoa(d.Id), strconv.Itoa(d.Code), d.Name}
        body = append(body, row)
    }

    for _, row := range body {
        _ = writer.Write(row)
    }

    return nil
}

The example model to be used is very simple:

type Data struct {
    Id   int
    Code int
    Name string
}

type File struct {
    Name, Path, Folder string
}

To expose the repositories to the service we can declare them as follows:

package job

/* Each repo represents a port through an interface.
 * It's responsible for database or other-microservice related work.
 *
 * No business logic is implemented here; the repos
 * must receive and return primitive data types.
 */
type SqlRepo interface {
    // Exposes the find method of mysql to job service.
    Find(*int, *int) (*[]Data, error)
}

type MongoRepo interface {
    // Exposes the insert method of mongo to job service.
    Insert(Log) error
}

type AwsRepo interface {
    Upload(folder, file string) error
}

Persistence Layer

[Persistence layer diagram]

MySQL: Query data.
MongoDB: Insert data.
AWS: Upload files.

Each repository contains the interface for getting, creating and manipulating entities. It keeps the list of methods used to communicate with the data source and returns a single entity or a list of entities.

Let's look at them one by one.

package mysql

import (
    "database/sql"
    "fmt"

    "bitbucket.org/cristtopher/micro-scaffolding/pkg/job"
)

type storage struct {
    db *sql.DB
}

type Data = job.Data

func NewRepository(db *sql.DB) job.SqlRepo {
    return &storage{db}
}

func (stge *storage) Find(offset, limit *int) (*[]Data, error) {
    var (
        array = make([]Data, 0)
        data  Data
    )

    query := "SELECT id, code, name FROM Product"
    if *limit > 0 {
        query = query + fmt.Sprintf(" LIMIT %d, %d", *offset, *limit)
    }

    rows, err := stge.db.Query(query)
    if err != nil {
        return nil, err
    }

    defer rows.Close()

    for rows.Next() {
        err = rows.Scan(
            &data.Id,
            &data.Code,
            &data.Name,
        )
        if err != nil {
            return nil, err
        }
        array = append(array, data)
    }

    // Surface any error encountered during iteration.
    if err := rows.Err(); err != nil {
        return nil, err
    }

    return &array, nil
}

package mongodb

import (
    "context"

    "bitbucket.org/cristtopher/micro-scaffolding/pkg/job"
    "go.mongodb.org/mongo-driver/mongo"
)

type storage struct {
    db *mongo.Database
}

func NewRepository(db *mongo.Database) job.MongoRepo {
    return &storage{db}
}

func (storage *storage) Insert(log job.Log) error {
    collection := storage.db.Collection("log")

    _, err := collection.InsertOne(context.Background(), log)
    if err != nil {
        return err
    }
    return nil
}

package aws

import (
    "bitbucket.org/private/go-utils/pkg/aws/s3"
    "bitbucket.org/private/go-utils/pkg/logger"
    "bitbucket.org/cristtopher/micro-scaffolding/pkg/job"
    "go.uber.org/zap"
)

type storage struct {
    s3 s3.S3
}

func NewRepository(s3 s3.S3) job.AwsRepo {
    return &storage{s3}
}

func (storage *storage) Upload(folder, file string) error {
    if _, err := storage.s3.UploadFile(folder, file); err != nil {
        logger.Error("Error uploading csv to s3", err)
        return err
    }

    logger.Info("Csv uploaded to s3", zap.String("file", folder+file))
    return nil
}

The latter imports a private S3 package for uploading the document to the bucket, which improves security and lets us reuse it in other microservices.
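That package stays private, so it is not part of this post; purely as a sketch, a wrapper with the same surface (AwsS3Config, AwsS3Service, UploadFile) could be built on aws-sdk-go roughly as below. Treat every name and detail here as an assumption.

package s3

import (
    "os"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3/s3manager"
)

// AwsS3Config mirrors the fields the scaffolding reads from the environment.
type AwsS3Config struct {
    AwsAccessKey, AwsSecretKey, Region, Bucket string
}

// S3 wraps an s3manager uploader bound to a single bucket.
type S3 struct {
    uploader *s3manager.Uploader
    bucket   string
}

// AwsS3Service builds the wrapper from static credentials.
func AwsS3Service(cfg AwsS3Config) S3 {
    sess := session.Must(session.NewSession(&aws.Config{
        Region:      aws.String(cfg.Region),
        Credentials: credentials.NewStaticCredentials(cfg.AwsAccessKey, cfg.AwsSecretKey, ""),
    }))
    return S3{uploader: s3manager.NewUploader(sess), bucket: cfg.Bucket}
}

// UploadFile streams a local file (assumed to sit in the working directory)
// to <bucket>/<folder><file> and returns its final location.
func (s S3) UploadFile(folder, file string) (string, error) {
    f, err := os.Open(file)
    if err != nil {
        return "", err
    }
    defer f.Close()

    out, err := s.uploader.Upload(&s3manager.UploadInput{
        Bucket: aws.String(s.bucket),
        Key:    aws.String(folder + file),
        Body:   f,
    })
    if err != nil {
        return "", err
    }
    return out.Location, nil
}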

Observability

Note that a private logger package is imported, which lets us observe the data in Elasticsearch through Kibana.
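That logger is private too, but the idea is simple: a thin wrapper over zap whose structured JSON output can be shipped to Elasticsearch (for example by Filebeat) and explored in Kibana. A rough sketch matching the calls used in this post, with all names assumed:

package logger

import "go.uber.org/zap"

// zlog is a production zap logger: JSON output at info level and above.
var zlog, _ = zap.NewProduction()

// Info logs a message plus optional structured fields.
func Info(msg string, fields ...zap.Field) {
    zlog.Info(msg, fields...)
}

// Error logs a message and attaches the error as a structured field.
func Error(msg string, err error, fields ...zap.Field) {
    zlog.Error(msg, append(fields, zap.Error(err))...)
}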

Presentation Layer

[Presentation layer diagram]

Now, with these layers ready, we can design the presentation layer, where we will receive 2 flags to define the offset and the limit of the query to be made.

Then when we run the binary we can pass the parameters to it as follows:

$make run OFFSET=0 LIMIT=5

package main

import (
    "context"
    "database/sql"
    "flag"
    "os"
    "time"

    _ "github.com/go-sql-driver/mysql"

    "bitbucket.org/private/go-utils/pkg/aws/s3"
    "bitbucket.org/private/go-utils/pkg/logger"

    "bitbucket.org/private/micro-scaffolding/pkg/job"
    "bitbucket.org/private/micro-scaffolding/pkg/persistence/aws"
    "bitbucket.org/private/micro-scaffolding/pkg/persistence/mongodb"
    "bitbucket.org/private/micro-scaffolding/pkg/persistence/mysql"
    "github.com/joho/godotenv"
    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
    "go.mongodb.org/mongo-driver/mongo/readpref"
)

var (
    mdbName string
    mdbURI  string
    mdbCli  mongo.Database

    sqlName string
    sqlURI  string
    sqlCli  sql.DB

    s3Config s3.AwsS3Config

    offset, limit int
)

const filename string = "data.csv"

func init() {
    godotenv.Load()
    mdbURI = os.Getenv("MONGO_URI")
    mdbName = os.Getenv("MONGO_DBNAME")
    sqlURI = os.Getenv("MYSQL_URI")
    sqlName = os.Getenv("MYSQL_DBNAME")

    awsAccessKey := os.Getenv("AWS_ACCESS_KEY")
    awsSecretKey := os.Getenv("AWS_SECRET_KEY")
    awsRegion := os.Getenv("AWS_REGION")
    awsBucket := os.Getenv("AWS_BUCKET")

    s3Config = s3.AwsS3Config{
        AwsAccessKey: awsAccessKey,
        AwsSecretKey: awsSecretKey,
        Region:       awsRegion,
        Bucket:       awsBucket,
    }

    flag.IntVar(&offset, "o", 0, "(offset): Allows us to specify which row to start from retrieving data.")
    flag.IntVar(&limit, "l", 0, "(limit): Determine the maximum number of rows to return.")
    flag.Parse()
}

func main() {
    mDB := initMongo(mdbURI, mdbName)
    sqlDB := initSQL(sqlURI, sqlName)

    s3Aws := s3.AwsS3Service(s3Config)

    sqlRepo := mysql.NewRepository(sqlDB)
    mongoRepo := mongodb.NewRepository(mDB)
    awsRepo := aws.NewRepository(s3Aws)

    logger.Info("Job start")

    service := job.NewService(mongoRepo, sqlRepo, awsRepo)
    file := job.File{
        Name:   filename,
        Path:   "./" + filename,
        Folder: "bi-csv/",
    }
    err := service.Run(&offset, &limit, &file)
    if err != nil {
        logger.Error("Error running service", err)
        panic(err)
    } else {
        logger.Info("Job done")
    }
}

func initMongo(uri string, dbName string) *mongo.Database {
    client, err := mongo.NewClient(options.Client().ApplyURI(uri))
    if err != nil {
        logger.Error("Failed to init Mongo", err)
        panic(err)
    }
    ctx, cancel := context.WithTimeout(context.Background(), 50*time.Second)
    defer cancel()

    if err = client.Connect(ctx); err != nil {
        logger.Error("Failed connecting to Mongo", err)
        panic(err)
    }
    err = client.Ping(ctx, readpref.Primary())
    if err != nil {
        logger.Error("Failed pinging to Mongo", err)
        panic(err)
    }
    return client.Database(dbName)
}

func initSQL(uri string, dbName string) *sql.DB {
    db, err := sql.Open(`mysql`, uri+dbName)
    if err != nil {
        logger.Error("Failed to init Mysql", err)
        panic(err)
    }
    db.SetMaxIdleConns(10)
    db.SetMaxOpenConns(10)
    _, err = db.Exec("DO 1")
    if err != nil {
        logger.Error("Failed pinging to MySql", err)
        panic(err)
    }
    return db
}

CI/CD

To automate these tasks, we will use Drone and Docker Hub for continuous integration and deployment.

FROM alpine:3.6 as alpine

RUN apk add -U --no-cache ca-certificates

ENV USER=appuser
ENV UID=10001

RUN adduser \
    -D \
    -g "" \
    -s "/sbin/nologin" \
    -H \
    -u "${UID}" \
    "${USER}"

FROM scratch
COPY --from=alpine /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=alpine /etc/passwd /etc/passwd
COPY --from=alpine /etc/group /etc/group

COPY ./job ./job

USER appuser:appuser

ENTRYPOINT ["./job"]
CMD ["-o", "0", "-l", "0"]

This Dockerfile has one important consideration: the multi-stage build, used to add a non-root user and run the binary as that user.

To build the local Docker image we must use another Dockerfile with the following:

FROM golang:1.13.6 AS builder

ENV GO111MODULE=on
ENV CGO_ENABLED=0
ENV GOOS=linux
ENV GOARCH=amd64
ARG SSH_PRIVATE_KEY

WORKDIR /app

COPY . .

RUN echo "[url \"git@bitbucket.org:\"]\n\tinsteadOf = https://bitbucket.org/" >> /root/.gitconfig
RUN mkdir -p ~/.ssh && umask 0077 && echo "${SSH_PRIVATE_KEY}" > ~/.ssh/id_rsa && \
  echo "StrictHostKeyChecking no " > ~/.ssh/config && \
  eval "$(ssh-agent -s)" && \
  eval "$(ssh-add /root/.ssh/id_rsa)" && \
  touch ~/.ssh/known_hosts && \
  ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts

RUN make install

RUN go build ./cmd/worker/job.go

FROM scratch

COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ca-certificates.crt
COPY --from=builder /app /

ENTRYPOINT ["./job"]
CMD ["-o", "0", "-l", "0"]

Our drone configuration file should look like this:

kind: pipeline
name: micro-scaffolding

steps:
  - name: build
    image: golang:1.13.6
    when:
      branch:
        - master
        - development
    environment:
      CGO_ENABLED: "0"
      GO111MODULE: "on"
      GOOS: "linux"
      GOARCH: "amd64"
      GOPRIVATE: "bitbucket.org/private"
    commands:
      - make install
      - go build -ldflags="-w -s" -o ./job ./cmd/worker/job.go

  - name: publish
    image: plugins/docker
    when:
      branch:
        - development
    settings:
      username:
        from_secret: docker_username
      password:
        from_secret: docker_password
      repo: cristtopher/${DRONE_REPO_NAME}
      dockerfile: build/docker/drone/Dockerfile
      tags:
        - ${DRONE_BRANCH}

  - name: prod-publish
    image: plugins/docker
    when:
      branch:
        - master
    settings:
      username:
        from_secret: docker_username
      password:
        from_secret: docker_password
      repo: cristtopher/${DRONE_REPO_NAME}
      dockerfile: build/docker/drone/Dockerfile
      tags:
        - latest
        - 1.0.0

  - name: notify
    image: plugins/slack
    when:
      branch:
        - master
      status: [success, failure]
    settings:
      webhook:
        from_secret: slack_webhook
      channel:
        from_secret: slack_channel
      template: >
        *{{#success build.status}}✔{{ else }}✘{{/success}} {{ uppercasefirst build.status }}: Build #{{ build.number }}* (type: `{{ build.event }}`)

        :ex: Artifact: *{{uppercase repo.name}}*:<https://bitbucket.org/{{ repo.owner }}/{{ repo.name }}/commits/branch/{{ build.branch }}|{{ build.branch }}>

        :hammer: Commit: <https://bitbucket.org/{{ repo.owner }}/{{ repo.name }}/commits/{{ build.commit }}|{{ truncate build.commit 8 }}>

        <{{ build.link }}|See details here :drone:>
trigger:
  branch:
    - master
    - development

Note that in the build step, the w and s flags are added as -ldflags="-w -s" to shrink the binary size by omitting the symbol table, debug information and the DWARF table.

And finally, a Makefile to handle every possible option.

IMAGE_NAME := $(shell basename "$(PWD)")

install:
    go env -w GOPRIVATE=bitbucket.org/private
    go mod download

updatedeps:
    go get -d -v -u -f ./...

build:
    go build -o bin/job cmd/worker/job.go

compile:
    GOOS=linux \
    GOARCH=amd64 \
    go build \
    -ldflags="-w -s" \
    -o bin/job ./cmd/worker/job.go

run:
    go run cmd/worker/job.go -o=$(OFFSET) -l=$(LIMIT)

docker-build:
    docker build \
    -f build/docker/dev/Dockerfile \
    --build-arg SSH_PRIVATE_KEY="$$(cat ~/.ssh/id_rsa)" \
    --build-arg O=$(OFFSET) \
    --build-arg L=$(LIMIT) \
    -t cristtopher/${IMAGE_NAME}:local .

docker-run:
    docker run --rm -it \
    --env-file ./.env \
    cristtopher/${IMAGE_NAME}:local -o=$(OFFSET) -l=$(LIMIT)

Summary

We have walked through the basic implementation of a simple data export process as an example of my strategy for breaking the monolith progressively.

Follow me to keep up with the next posts, where we will see more implementations, DDD and other fun stuff.

Happy coding!

Top comments (2)

Matthieu Cneude

Interesting article.

I would like to add that it depends on the context of your software. I think it's not necessarily needed to bring hexagonal architecture to a CRUD application which will stay CRUD, for example. Let's not forget that all of this brings complexity.

Before using the design patterns from DDD (services, aggregates, value objects...), using the strategic methods (ubiquitous language / recognizing bounded contexts) is less costly and can show you if you really need to break your monolith.

The problem is not really the monolith, it's the coupled monolith. Reducing coupling and increasing cohesion can be done with another monolith (with modules, for example). Microservices can bring a lot of complexity on the network layer; let's not forget that too.

Looking forward to your article on DDD.

Cristtopher Quintana Toledo • Edited

Thanks Matthieu, nice feedback.

I added the complexity of hexagonal architecture to this scaffolding because each fork will have its own logic that does not necessarily correspond to a simple CRUD, which will allow me to evolve each one strategically without creating new micro-monoliths.

With respect to DDD I totally agree with what you say. I went through this strategic process before designing this solution, starting with the ubiquitous language, which was a tremendous challenge at the company level to agree on each concept. I would like to document this process to share it and see how to improve.