If you are running CrateDB in a production environment, you have probably wondered about the best way to monitor your servers, both to identify issues before they become problematic and to collect statistics you can use for capacity planning.
We recommend pairing two well-known open-source solutions: Prometheus, which collects and stores performance metrics, and Grafana, which is used to build dashboards on top of them.
For a CrateDB environment, we are interested in:
- CrateDB-specific metrics, such as the number of shards or number of failed queries
- and OS metrics, such as available disk space, memory usage, or CPU usage
For CrateDB-specific metrics, we recommend exposing them to Prometheus with the Crate JMX HTTP Exporter and the Prometheus SQL Exporter. For OS metrics on Linux environments, we recommend the Prometheus Node Exporter.
Things are of course a bit different if you are using containers or the fully managed, cloud-hosted CrateDB Cloud, but let’s see how all of this works by setting it up on an on-premises installation.
First we need a CrateDB cluster
First things first, we need a CrateDB cluster. You may have one already, and that is great; if you do not, we can get one up quickly.
You can review the install documentation at https://crate.io/docs/crate/tutorials/en/latest/self-hosted/index.html and https://crate.io/docs/crate/howtos/en/latest/clustering/multi-node-setup.html.
In my case, I am using Ubuntu and set things up as follows. First, I SSH into the first machine and run:
nano /etc/default/crate
This is a configuration file used by CrateDB. We only need one line here to configure the memory settings (this is a required step; otherwise we would fail the bootstrap checks):
CRATE_HEAP_SIZE=4G
We also need to create another configuration file:
mkdir /etc/crate
nano /etc/crate/crate.yml
In my case I used the following values:
network.host: _local_,_site_
This tells CrateDB to respond to requests both from localhost and the local network.
discovery.seed_hosts:
  - ubuntuvm1:4300
  - ubuntuvm2:4300
This lists the machines that make up our cluster. Here I only have two, but for production use we recommend having at least three nodes so that a quorum can still be established in case of a network partition, avoiding split-brain scenarios.
cluster.initial_master_nodes:
  - ubuntuvm1
  - ubuntuvm2
This lists the nodes that are eligible to act as master nodes during bootstrap.
auth.host_based.enabled: true
auth:
  host_based:
    config:
      0:
        user: crate
        address: _local_
        method: trust
      99:
        method: password
This indicates that the crate superuser will work for local connections, but connections from other machines will require a username and password.
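For example, once the cluster is up and running (installation follows below), a password-protected user for remote connections could be created like this; the user name admin and the password here are just placeholders, and in practice you would grant only the privileges that user actually needs:
curl -H 'Content-Type: application/json' -X POST 'http://localhost:4200/_sql' -d '{"stmt":"CREATE USER admin WITH (password = '\''insert_password'\'');"}'
curl -H 'Content-Type: application/json' -X POST 'http://localhost:4200/_sql' -d '{"stmt":"GRANT ALL PRIVILEGES TO admin;"}'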
gateway.recover_after_data_nodes: 2
gateway.expected_data_nodes: 2
In this case, this requires both nodes to be available for the cluster to operate; with more nodes, we could have set recover_after_data_nodes to a value smaller than the total number of nodes.
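As an illustration, for a hypothetical three-node cluster these settings could look like this:
gateway.recover_after_data_nodes: 2
gateway.expected_data_nodes: 3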
Now let’s install CrateDB:
wget https://cdn.crate.io/downloads/deb/DEB-GPG-KEY-crate
apt-key add DEB-GPG-KEY-crate
add-apt-repository "deb https://cdn.crate.io/downloads/deb/stable/ $(lsb_release -cs) main"
apt update
apt install crate -o Dpkg::Options::="--force-confold"
(force-confold is used to keep the configuration files we created earlier)
Repeat the above steps on the other node.
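At this point it is worth checking that both nodes have joined the cluster. Assuming the HTTP port 4200 is reachable locally, the following should return one row per node:
curl -H 'Content-Type: application/json' -X POST 'http://localhost:4200/_sql' -d '{"stmt":"SELECT name, hostname FROM sys.nodes;"}'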
Setup of the Crate JMX HTTP Exporter
This is very simple, on each node run the following:
cd /usr/share/crate/lib
wget https://repo1.maven.org/maven2/io/crate/crate-jmx-exporter/1.0.0/crate-jmx-exporter-1.0.0.jar
nano /etc/default/crate
then uncomment the CRATE_JAVA_OPTS line and change its value to:
CRATE_JAVA_OPTS="-javaagent:/usr/share/crate/lib/crate-jmx-exporter-1.0.0.jar=8080"
and restart the crate daemon:
systemctl restart crate
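As a quick check (assuming nothing else is already listening on port 8080), the metrics endpoint should now expose CrateDB counters whose names start with crate_:
curl -s http://localhost:8080/metrics | grep -m 5 '^crate_'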
Prometheus Node Exporter
This can be set up with a one-liner:
apt install prometheus-node-exporter
Prometheus SQL Exporter
The SQL Exporter allows running arbitrary SQL statements against a CrateDB cluster to retrieve additional information. As the cluster contains information from each node, we do not need to install the SQL Exporter on every node. Instead, we install it centrally on the same machine that also hosts Prometheus.
Please note that setting up a Grafana data source that points directly to CrateDB only lets you display query results in real time; it is Prometheus that collects and stores these values over time.
Installing the package is straightforward:
apt install prometheus-sql-exporter
For the SQL Exporter to connect to the cluster, we need to create a new user sql_exporter and grant it read access to the sys schema. Run the commands below on any CrateDB node:
curl -H 'Content-Type: application/json' -X POST 'http://localhost:4200/_sql' -d '{"stmt":"CREATE USER sql_exporter WITH (password = '\''insert_password'\'');"}'
curl -H 'Content-Type: application/json' -X POST 'http://localhost:4200/_sql' -d '{"stmt":"GRANT DQL ON SCHEMA sys TO sql_exporter;"}'
We then create a configuration file in /etc/prometheus-sql-exporter.yml with a sample query that retrieves the number of shards per node:
jobs:
- name: "global"
  interval: '5m'
  connections: ['postgres://sql_exporter:insert_password@ubuntuvm1:5433?sslmode=disable']
  queries:
  - name: "shard_distribution"
    help: "Number of shards per node"
    labels: ["node_name"]
    values: ["shards"]
    query: |
      SELECT node['name'] AS node_name, COUNT(*) AS shards
      FROM sys.shards
      GROUP BY 1;
    allow_zero_rows: true
  - name: "heap_usage"
    help: "Used heap space per node"
    labels: ["node_name"]
    values: ["heap_used"]
    query: |
      SELECT name AS node_name, heap['used'] / heap['max']::DOUBLE AS heap_used
      FROM sys.nodes;
  - name: "global_translog"
    help: "Global translog statistics"
    values: ["translog_uncommitted_size"]
    query: |
      SELECT COALESCE(SUM(translog_stats['uncommitted_size']), 0) AS translog_uncommitted_size
      FROM sys.shards;
  - name: "checkpoints"
    help: "Maximum global/local checkpoint delta"
    values: ["max_checkpoint_delta"]
    query: |
      SELECT COALESCE(MAX(seq_no_stats['local_checkpoint'] - seq_no_stats['global_checkpoint']), 0) AS max_checkpoint_delta
      FROM sys.shards;
  - name: "shard_allocation_issues"
    help: "Shard allocation issues"
    labels: ["shard_type"]
    values: ["shards"]
    query: |
      SELECT IF(s.primary = TRUE, 'primary', 'replica') AS shard_type, COALESCE(shards, 0) AS shards
      FROM UNNEST([true, false]) s(primary)
      LEFT JOIN (
        SELECT primary, COUNT(*) AS shards
        FROM sys.allocations
        WHERE current_state <> 'STARTED'
        GROUP BY 1
      ) a ON s.primary = a.primary;
Please note: There exist two implementations of the SQL Exporter: burningalchemist/sql_exporter and justwatchcom/sql_exporter. They don't share the same configuration options.
Our example is based on the implementation that is shipped with the Ubuntu package, which is justwatchcom/sql_exporter.
To apply the new configuration, we restart the service:
systemctl restart prometheus-sql-exporter
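As a sanity check, the exporter's metrics endpoint on port 9237 should now include series derived from the queries above; exact metric names can vary between exporter versions, so adjust the grep pattern if needed:
curl -s http://localhost:9237/metrics | grep -i shard_distribution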
The SQL Exporter can also be used to monitor business metrics, but be careful with regularly running expensive queries. Below is a more advanced monitoring query for CrateDB that may be useful:
/* Time since the last successful snapshot (backup) */
SELECT (NOW() - MAX(started)) / 60000 AS MinutesSinceLastSuccessfulSnapshot
FROM sys.snapshots
WHERE "state" = 'SUCCESS';
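As an illustration of a business-level metric, one could count recently ingested rows; the sensor_readings table and its ts column are purely hypothetical here:
/* Rows ingested over the last hour (hypothetical table and column) */
SELECT COUNT(*) AS readings_last_hour
FROM doc.sensor_readings
WHERE ts > NOW() - INTERVAL '1 hour';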
Prometheus setup
Prometheus should run on a machine that is not part of the CrateDB cluster; it can be installed with:
apt install prometheus --no-install-recommends
Please note that by default this will immediately become available on port 9090 without any authentication requirements. You can use policy-rcd-declarative to prevent the service from starting right after installation, and you can define a YAML web configuration file with basic_auth_users and then refer to that file in /etc/default/prometheus.
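As a rough sketch of what that could look like (the file name web-config.yml is just an example, and the bcrypt hash would need to be generated, for instance with htpasswd from apache2-utils):
# /etc/prometheus/web-config.yml (example path)
basic_auth_users:
  admin: "$2y$10$replace.with.a.real.bcrypt.hash"
and then in /etc/default/prometheus (the exact variable name may differ between package versions):
ARGS="--web.config.file=/etc/prometheus/web-config.yml"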
For a large deployment where you also use Prometheus to monitor other systems, you may also want to use a CrateDB cluster as the storage for all Prometheus metrics; you can read more about this at CrateDB Prometheus Adapter.
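If you went down that route, the Prometheus side would, as a minimal sketch, point remote_write and remote_read at the adapter, here assumed to be running locally on its default port 9268:
remote_write:
  - url: http://localhost:9268/write
remote_read:
  - url: http://localhost:9268/read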
Now we will configure Prometheus to scrape the metrics from the Node Exporter and the Crate JMX HTTP Exporter on the CrateDB machines, as well as the metrics from the SQL Exporter running locally:
nano /etc/prometheus/prometheus.yml
Where it says:
  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']
We replace this with the configuration below, which covers port 9100 (Prometheus Node Exporter) on both CrateDB machines, port 8080 (Crate JMX HTTP Exporter), and port 9237 (Prometheus SQL Exporter).
  - job_name: 'node'
    static_configs:
      - targets: ['ubuntuvm1:9100', 'ubuntuvm2:9100']
  - job_name: 'cratedb_jmx'
    static_configs:
      - targets: ['ubuntuvm1:8080', 'ubuntuvm2:8080']
  - job_name: 'sql_exporter'
    static_configs:
      - targets: ['localhost:9237']
Restart the prometheus daemon if it was already started (systemctl restart prometheus).
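If you want to validate the edited configuration first, promtool, which ships alongside Prometheus in the Ubuntu package, can check it:
promtool check config /etc/prometheus/prometheus.yml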
Grafana setup
Grafana can run on the same machine where you have Prometheus and can be installed with:
echo "deb https://packages.grafana.com/oss/deb stable main" | tee -a /etc/apt/sources.list.d/grafana.list
wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -
apt update
apt install grafana
systemctl start grafana-server
If you now point your browser to http://<Grafana host>:3000, you will be welcomed by the Grafana login screen. The first time, you can log in with admin as both the username and password; make sure to change this password right away.
Click on "Add your first data source", then click on "Prometheus", and enter the URL http://<Prometheus host>:9090.
If you had configured basic authentication for Prometheus this is where you would need to enter the credentials.
Click "Save & test".
An example dashboard based on the discussed setup is available for easy importing on grafana.com. In your Grafana installation, on the left-hand side, hover over the “Dashboards” icon and select “Import”. Specify the ID 17174 and load the dashboard. On the next screen, finalize the setup by selecting your previously created Prometheus data sources.
Alternative implementations
If you decide to build your own dashboard or use an entirely different monitoring approach, we recommend still covering similar metrics as discussed in this article. The list below is a good starting point for troubleshooting most operational issues:
- CrateDB metrics (with example Prometheus queries based on the Crate JMX HTTP Exporter)
- Thread pools rejected:
sum(rate(crate_threadpools{property="rejected"}[5m])) by (name)
- Thread pool queue size:
sum(crate_threadpools{property="queueSize"}) by (name)
- Thread pools active:
sum(crate_threadpools{property="active"}) by (name)
- Queries per second:
sum(rate(crate_query_total_count[5m])) by (query)
- Query error rate:
sum(rate(crate_query_failed_count[5m])) by (query)
- Average Query Duration over the last 5 minutes:
sum(rate(crate_query_sum_of_durations_millis[5m])) by (query) / sum(rate(crate_query_total_count[5m])) by (query)
- Circuit breaker memory in use:
sum(crate_circuitbreakers{property="used"}) by (name)
- Number of shards:
crate_node{name="shard_stats",property="total"}
- Garbage Collector rates:
sum(rate(jvm_gc_collection_seconds_count[5m])) by (gc)
- Operating system metrics (a few example PromQL expressions follow after this list)
- CPU utilization
- Memory usage
- Open file descriptors
- Disk usage
- Disk read/write operations and throughput
- Received and transmitted network traffic
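For reference, here are possible PromQL expressions for some of the operating system metrics above, assuming the standard Prometheus Node Exporter metric names:
# CPU utilization in percent, averaged per instance
100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)
# Memory usage as a fraction of total memory
1 - (node_memory_MemAvailable_bytes / node_memory_MemTotal_bytes)
# Free space on the root filesystem
node_filesystem_avail_bytes{mountpoint="/"}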
Wrapping up
We now have a Grafana dashboard that lets us check live and historical data around performance and capacity metrics in our CrateDB cluster. This illustrates one possible setup; you could use different tools depending on your environment and preferences. Still, we recommend using the interface of the Crate JMX HTTP Exporter to collect CrateDB-specific metrics, and always also monitoring the health of the environment at the OS level, as we have done here with the Prometheus Node Exporter.