
Deploying a sharded cluster with Docker and MergeCPF

In this article, we will run an InterSystems IRIS cluster using Docker and CPF merge files, a new feature that lets you configure servers with ease.

On UNIX® and Linux, you can modify the default iris.cpf using a declarative CPF merge file. A merge file is a partial CPF that sets the desired values for any number of parameters upon instance startup. The CPF merge operation works only once for each instance.
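For example, a minimal merge file might touch only a couple of [config] parameters; here is a sketch (the merge.cpf file name is illustrative, and the values are the ones used later in this article), which the container then picks up through the ISC_CPF_MERGE_FILE environment variable at startup:

# write a hypothetical minimal merge file; only the parameters listed here are
# overridden at startup, everything else keeps its value from the default iris.cpf
cat > merge.cpf <<'EOF'
[config]
globals=0,0,400,0,0,0
MaxServers=64
EOF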

Our cluster architecture is very simple: it consists of one Node1 (the master node) and two Data Nodes (check all available roles). Unfortunately, docker-compose cannot deploy to several servers (although it can deploy to remote hosts), so this setup is useful for local development of sharding-aware data models, tests, and the like. For a production InterSystems IRIS cluster deployment, you should use either ICM or IKO.

Docker-compose.yml

Let's start with the docker-compose configuration:

docker-compose.yml

version: '3.7'
services:
  iris1:
    image: containers.intersystems.com/intersystems/iris:2020.3.0.221.0
    init: true
    command: --key /ISC/iris.key
    hostname: iris1
    environment:
     - ISC_DATA_DIRECTORY=/ISC/iris.sys.d/sys1
     - ISC_CPF_MERGE_FILE=/ISC/CPF2merge-master-instance.conf
    volumes:
     - ./:/ISC:delegated
    ports:
      - 9011:1972
      - 9012:52773

  iris2:
    image: containers.intersystems.com/intersystems/iris:2020.3.0.221.0
    command: --key /ISC/iris.key --before 'sleep 60'
    init: true
    hostname: iris2
    environment:
     - ISC_DATA_DIRECTORY=/ISC/iris.sys.d/sys2
     - ISC_CPF_MERGE_FILE=/ISC/CPF2merge-data-instance.conf
    volumes:
     - ./:/ISC:delegated
    depends_on:
      - iris1
    ports:
      - 9021:1972
      - 9022:52773

  iris3:
    image: containers.intersystems.com/intersystems/iris:2020.3.0.221.0
    command: --key /ISC/iris.key --before 'sleep 60'
    init: true
    hostname: iris3
    environment:
     - ISC_DATA_DIRECTORY=/ISC/iris.sys.d/sys3
     - ISC_CPF_MERGE_FILE=/ISC/CPF2merge-data-instance.conf
    volumes:
     - ./:/ISC:delegated
    depends_on:
      - iris1
    ports:
      - 9031:1972
      - 9032:52773 

As you can see, we're running the default intersystems/iris:2020.3.0.221.0 image, providing a license key (it must support sharding), persisting data with the Durable %SYS feature, and setting ISC_CPF_MERGE_FILE to point at our merge files (which differ between Node1 and the Data Nodes). Additionally, the Data Nodes start a minute later to give Node1 time to come up; that's an extremely conservative estimate, since on decent hardware startup takes seconds at most.
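Once the containers are up, you can verify that all three instances started cleanly; a minimal check, using the service names from the docker-compose.yml above:

# list the cluster containers and their state
docker-compose ps

# follow Node1's startup log until the instance reports it is running
docker-compose logs -f iris1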

Cluster configuration happens in the CPF merge files, so let's check them out.

CPF2merge-master-instance.conf

[Startup]
PasswordHash=FBFE8593AEFA510C27FD184738D6E865A441DE98,u4ocm4qh
ShardRole=node1


[config]
MaxServerConn=64
MaxServers=64
globals=0,0,400,0,0,0
errlog=1000
routines=32
gmheap=256000
locksiz=1179648

What happens here?

In the [Startup] section we enable sharding by assigning the Node1 role to this instance, and in [config] we expand the server a bit, allowing larger caches and more connections. That's all!
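To confirm that the merge was actually applied, you can inspect the resulting iris.cpf in the Durable %SYS directory; a sketch, assuming the host-side path follows from ISC_DATA_DIRECTORY=/ISC/iris.sys.d/sys1 and the ./:/ISC volume mount above:

# the Durable %SYS directory for iris1 is written back to the host at ./iris.sys.d/sys1
grep -E '^(ShardRole|MaxServers|gmheap)=' ./iris.sys.d/sys1/iris.cpf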

CPF2merge-data-instance.conf

[Startup]

ShardClusterURL=IRIS://iris1:1972/IRISCLUSTER
ShardRole=DATA

For the Data Nodes, we need to provide the URL of Node1 and the node role.
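After the Data Nodes have joined, you can check cluster membership from a terminal session on Node1; a sketch, assuming the $SYSTEM.Cluster.ListNodes() API is available in your IRIS version:

# open an IRIS terminal inside the Node1 container (the in-container instance is named IRIS)
docker-compose exec iris1 iris session IRIS -U %SYS

# at the IRIS prompt, list the nodes that have joined the cluster:
#   do $SYSTEM.Cluster.ListNodes()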

Try it

Check the repository or run this code:

git clone https://github.com/intersystems-ru/iris-container-recipes.git
cd iris-container-recipes
cd cluster
# copy iris.key into the cluster folder
docker-compose up -d

After starting the InterSystems IRIS cluster, you can access it from the browser. The username/password is _SYSTEM/SYS.
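For example, each node's Management Portal is reachable through the host ports mapped in docker-compose.yml; the /csp/sys/UtilHome.csp path below is the standard portal entry point, so adjust it if your setup differs:

# Management Portal entry points, via the 52773 -> 90x2 host port mappings
#   http://localhost:9012/csp/sys/UtilHome.csp   (iris1, Node1)
#   http://localhost:9022/csp/sys/UtilHome.csp   (iris2, Data Node)
#   http://localhost:9032/csp/sys/UtilHome.csp   (iris3, Data Node)

# quick check that Node1's web server responds (any HTTP status code means it is up)
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:9012/csp/sys/UtilHome.csp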

Conclusions

CPF merge files are a great and simple tool for configuring InterSystems IRIS instances.

Thank you to @luca.Ravazzolo for providing the code and answering all of my questions.
