Sidath Munasinghe for AWS Community Builders

Originally published on Medium

Streamlining SonarQube on AWS ECS: Simplified Deployment Using Cluster.dev

SonarQube, crafted by SonarSource, is an open-source platform designed to scrutinize code quality continuously. It proficiently identifies bugs and code smells across a spectrum of programming languages through automated reviews leveraging static code analysis.

However, if you are going to self-host SonarQube, it takes significant effort to provision both a resilient database and a scalable compute layer capable of accommodating fluctuating traffic. Let's use AWS RDS for the resilient database, AWS ECS for the scalable compute layer, and Cluster.dev to simplify the deployment. If you are new to Cluster.dev, I recommend reading my previous post for a comprehensive introduction and an overview of its benefits.

Below is the infrastructure setup we will build with Cluster.dev in this blog post.

High-level architecture

Before jumping into the implementation, let's learn some basics about Cluster.dev first.

Cluster.dev Basics

Below are the fundamental building blocks of a Cluster.dev project.

  • Unit: A unit represents a single resource in our infrastructure setup (e.g., a load balancer). A unit can be implemented with a variety of technologies, such as Terraform modules, Helm charts, Kubernetes manifests, plain Terraform code, or Bash scripts. Each unit takes inputs that configure it and exposes outputs that we can consume or reference from other units.

  • Stack Template: A stack template groups a set of units to implement an infrastructure pattern we need to provision; in this scenario, that is our SonarQube deployment on ECS. By combining units of different types and connecting them, we can lay out a complex infrastructure pattern while benefiting from several technologies at once (see the minimal sketch after this list).

  • Stack: A stack supplies variables to a stack template and configures it as needed. This tailors the infrastructure pattern defined in the stack template to the specific use case.

  • Project: A project orchestrates one or more stacks, depending on the complexity of the infrastructure. Any global variables shared across stacks are defined at the project level.

  • Backend: A backend configures where Cluster.dev stores the state of its deployments.
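To make these ideas concrete, here is a minimal, illustrative stack template with a single unit. It uses only the printer unit type that also appears at the end of our real template, so nothing here is specific to SonarQube; the name and output are hypothetical.

name: minimal-example
kind: StackTemplate
units:
  - name: hello
    type: printer
    outputs:
      message: "Hello from Cluster.dev"

A real template simply adds more units (Terraform modules, Helm charts, and so on) and wires their inputs and outputs together.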

The diagram below shows how these building blocks are arranged for the SonarQube ECS deployment.

The layout of the Cluster.dev components for ECS deployment

Implementing the Infrastructure

Before implementing any infrastructure pattern, we need to identify the resources to create as units and the technology to use for each. For this setup, we will use Terraform modules to create the AWS resources below.

  • ECS Cluster
  • ECS Task Definition
  • ECS Service
  • Load Balancer
  • Postgres RDS Database
  • Security groups for the database, load balancer, and ECS service
  • Necessary IAM roles

Let's start with the template.yaml file that defines the resources we need. The YAML file below contains all the AWS resources we will create for this setup. Note how we have connected different Terraform modules to provision the infrastructure. We have also used several variables to make the infrastructure pattern repeatable for diverse use cases. The syntax for using a variable is {{ .variables.<variable_name> }}, and we can refer to the outputs of one unit in another using the {{ remoteState "this.<unit_name>.<attribute>" }} syntax.

Finally, we have a printer unit to output the DNS name of the load balancer to access the deployed SonarQube application.

_p: &provider_aws
  - aws:
      region: {{ .variables.region }}

name: cdev-sonarqube
kind: StackTemplate
units:
  - name: WebSecurityGroup
    type: tfmodule
    providers: *provider_aws
    source: terraform-aws-modules/security-group/aws//modules/http-80
    inputs:
      name: 'WebSecurityGroup'
      vpc_id: {{ .variables.vpc_id }}
      ingress_cidr_blocks: ["0.0.0.0/0"]

  - name: DBSecurityGroup
    type: tfmodule
    providers: *provider_aws
    source: terraform-aws-modules/security-group/aws
    inputs:
      name: 'DBSecurityGroup'
      vpc_id: {{ .variables.vpc_id }}
      ingress_with_source_security_group_id:
        - rule: "postgresql-tcp"
          source_security_group_id: {{ remoteState "this.ECSSVCSecurityGroup.security_group_id" }}

  - name: ECSSVCSecurityGroup
    type: tfmodule
    providers: *provider_aws
    source: terraform-aws-modules/security-group/aws
    inputs:
      name: 'ECSSVCSecurityGroup'
      vpc_id: {{ .variables.vpc_id }}
      ingress_with_cidr_blocks:
        - from_port: 9000
          to_port: 9000
          protocol: "tcp"
          cidr_blocks: "0.0.0.0/0"
      egress_with_cidr_blocks:
        - from_port: 0
          to_port: 0
          protocol: "-1"
          cidr_blocks: "0.0.0.0/0"

  - name: Database
    type: tfmodule
    providers: *provider_aws
    source: terraform-aws-modules/rds/aws
    inputs:
      engine: 'postgres'
      engine_version: '14'
      family: 'postgres14' # DB parameter group
      major_engine_version: '14' # DB option group
      instance_class: 'db.t4g.large'
      identifier: 'sonar-database'
      db_name: 'sonarqube'
      username: 'sonar_user'
      password: 'password'
      publicly_accessible: true
      allocated_storage: 5
      manage_master_user_password: false
      vpc_security_group_ids: [{{ remoteState "this.DBSecurityGroup.security_group_id" }}]
      subnet_ids: [{{ .variables.subnet_1 }}, {{ .variables.subnet_2 }}]

  - name: ECSCluster
    type: tfmodule
    providers: *provider_aws
    source: terraform-aws-modules/ecs/aws
    inputs:
      cluster_name: 'sonar-cluster'

  - name: ECSTaskDefinition
    type: tfmodule
    providers: *provider_aws
    source: github.com/mongodb/terraform-aws-ecs-task-definition
    inputs:
      image: 'sonarqube:lts-community'
      family: 'sonar'
      name: 'sonar'
      portMappings:
        - containerPort: 9000
          hostPort: 9000
          protocol: 'tcp'
          appProtocol: 'http'
      command:
        - '-Dsonar.search.javaAdditionalOpts=-Dnode.store.allow_mmap=false'
      environment:
        - name: SONAR_JDBC_URL
          value: jdbc:postgresql://{{ remoteState "this.Database.db_instance_endpoint" }}/postgres
        - name: SONAR_JDBC_USERNAME
          value: sonar_user
        - name: SONAR_JDBC_PASSWORD
          value: password
      requires_compatibilities:
        - 'FARGATE'
      cpu: 1024
      memory: 3072
      network_mode: awsvpc

  - name: LoadBalancer
    type: tfmodule
    providers: *provider_aws
    source: terraform-aws-modules/alb/aws
    inputs:
      name: 'sonarqube'
      vpc_id: {{ .variables.vpc_id }}
      subnets: [{{ .variables.subnet_1 }}, {{ .variables.subnet_2 }}]
      enable_deletion_protection: false
      create_security_group: false
      security_groups: [{{ remoteState "this.WebSecurityGroup.security_group_id" }}]
      target_groups:
        ecsTarget:
          name_prefix: 'SQ-'
          protocol: 'HTTP'
          port: 80
          target_type: 'ip'
          create_attachment: false
      listeners:
        ecs-forward:
          port: 80
          protocol: 'HTTP'
          forward:
            target_group_key: 'ecsTarget'

  - name: ECSService
    type: tfmodule
    providers: *provider_aws
    source: terraform-aws-modules/ecs/aws//modules/service
    inputs:
      name: 'sonarqube'
      cluster_arn: {{ remoteState "this.ECSCluster.cluster_arn" }}
      cpu: 1024
      memory: 4096
      create_task_definition: false
      task_definition_arn: {{ remoteState "this.ECSTaskDefinition.arn" }}
      create_security_group: false
      create_task_exec_iam_role: true
      assign_public_ip: true
      subnet_ids: [{{ .variables.subnet_1 }}, {{ .variables.subnet_2 }}]
      security_group_ids: [{{ remoteState "this.ECSSVCSecurityGroup.security_group_id" }}]
      load_balancer:
        service:
          target_group_arn: {{ remoteState "this.LoadBalancer.target_groups.ecsTarget.arn" }}
          container_name: sonar
          container_port: 9000

  - name: outputs
    type: printer
    depends_on: this.LoadBalancer
    outputs:
      sonar_url: http://{{ remoteState "this.LoadBalancer.dns_name" }}

With that, the complex part is done 🙂

Now, let's define the stack.yaml file, which supplies the variables that configure the stack template. Here, we have exposed the configurations below as variables so that we can change them and reuse existing AWS networking infrastructure.

  • region: AWS region
  • vpc_id: ID of the VPC to deploy into
  • subnet_1: ID of subnet 1
  • subnet_2: ID of subnet 2

We can define more variables as needed to give the stack template more flexibility.

name: cdev-sonarqube
template: ./template/
kind: Stack
backend: aws-backend
variables:
  region: {{ .project.variables.region }}
  vpc_id: {{ .project.variables.vpc_id }}
  subnet_1: {{ .project.variables.subnet_1 }}
  subnet_2: {{ .project.variables.subnet_2 }}
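For instance, if we wanted to make the database instance size configurable as well, we could thread a new variable through both files. The db_instance_class variable below is hypothetical and not part of the original template; it simply shows the pattern.

# template/template.yaml (excerpt from the Database unit)
      instance_class: {{ .variables.db_instance_class }}

# stack.yaml (excerpt)
variables:
  db_instance_class: 'db.t4g.large'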

Let's use an S3 bucket to store the backend state of Cluster.dev. We can define a backend.yaml file to configure this.

name: aws-backend
kind: Backend
provider: s3
spec:
  bucket: {{ .project.variables.state_bucket_name }}
  region: {{ .project.variables.region }}

Now, we are ready to define the project.yaml file that uses this stack. For this infrastructure pattern, we need only a single stack. Here, we can also define the global variables for the project.

name: cdev-sonarqube
kind: Project
backend: aws-backend
variables:
  organization: <org-name>
  region: <aws-region>
  state_bucket_name: <state-bucket-name>
  vpc_id: <vpc-id>
  subnet_1: <subnet1-id>
  subnet_2: <subnet2-id>

The full implementation can be found in this GitHub repository.
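Putting it all together, the project layout looks roughly like this (an assumption based on the file names used above and the ./template/ path referenced in stack.yaml):

.
├── project.yaml
├── backend.yaml
├── stack.yaml
└── template/
    └── template.yaml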

Deploying the Infrastructure

Now, we can use the Cluster.dev CLI to deploy the infrastructure with the following command.

cdev apply

Once we run this command, it gives us a summary of the resources it is going to deploy, like the one below.

Deploying AWS resources
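If you prefer to review the changes before applying them, or to tear everything down afterwards, the Cluster.dev CLI also offers plan and destroy commands (check cdev --help for the exact options available in your CLI version):

cdev plan      # preview the changes without applying them
cdev destroy   # remove all resources created by the stack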

Once the deployment is complete, the printer unit outputs the URL for accessing the deployed SonarQube application.

Deployed SonarQube application

You may also notice that the deployment has auto-scaling enabled, so it scales out and in according to the incoming traffic.

ECS Service auto-scaling policy

As the diagram above shows, the service scales out when CPU and memory utilization cross certain thresholds, up to a maximum of 10 tasks. We can fine-tune these settings based on our requirements, for example as sketched below.
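The scaling behavior comes from the terraform-aws-modules ECS service module, which enables target-tracking auto-scaling by default. Below is a sketch of how the ECSService unit's inputs could be adjusted; the exact input names (enable_autoscaling, autoscaling_min_capacity, autoscaling_max_capacity) should be verified against the module version you use.

  - name: ECSService
    type: tfmodule
    providers: *provider_aws
    source: terraform-aws-modules/ecs/aws//modules/service
    inputs:
      # ...same inputs as in the template above...
      enable_autoscaling: true
      autoscaling_min_capacity: 1
      autoscaling_max_capacity: 4   # lower the ceiling from the default of 10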

And there you have it. With the templates prepared, you can configure them to suit your specific use case, enabling seamless and repeatable deployments. This streamlined approach keeps the setup adaptable and quick to reproduce whenever needed.

Conclusion

We've walked through the essential steps to deploy SonarQube on AWS ECS using Cluster.dev, covering its key aspects. This approach lets you set up SonarQube on AWS ECS with minimal effort. By combining the capabilities of SonarQube with the simplicity of Cluster.dev, we've created a reliable and easily managed infrastructure for elevated code analysis and quality assurance practices.
