SpheronStaff for Spheron

Posted on • Originally published at blog.spheron.network

Try DragonflyDB on Spheron for a ~30x Cache Speed Boost!

In modern web development, API performance is critical for delivering fast and scalable services. Caching with Redis has long been standard practice, but DragonflyDB, which claims roughly 30 times the throughput of Redis, offers a new option worth exploring.

This guide will explore how to supercharge your Nest.js API with DragonflyDB, a high-performance, Redis-compatible in-memory store that boasts up to 30x Redis's throughput in QPS. You'll learn to set up DragonflyDB locally, deploy it on Spheron, and integrate it into a Nest.js API as a caching layer.

Getting Started with DragonflyDB

Before setting up DragonflyDB on Spheron, let's familiarize ourselves with the basics. DragonflyDB is a lightweight, in-memory data store built for modern application workloads. Because it is fully compatible with the Redis and Memcached APIs, adopting Dragonfly requires no code changes. Unlike legacy in-memory datastores, Dragonfly claims up to 25x more throughput, higher cache hit rates with lower tail latency, and the ability to run on up to 80% fewer resources for the same-sized workload. It offers a simple, efficient, cost-effective solution for caching and session management, making it a good fit for applications that require low latency, high throughput, and easy scaling.

To get started, we'll begin by running a DragonflyDB container locally using Docker. This will allow us to play around with the database and test its capabilities before deploying it to Spheron.

Working with DragonflyDB in our Local Environment

First off, let's get DragonflyDB running locally in a Docker container and verify that we can use the redis-cli with it as a drop-in replacement.

Install Docker Desktop for Mac

If you haven't already, please install Docker Desktop for Mac from here. This will provide a convenient environment for running and managing Docker containers.

Docker for Mac includes:

  • The Docker daemon (dockerd), which manages containers and images and exposes the Docker API

  • A GUI for tracking Docker images in the local repository, starting and stopping containers, managing storage volumes, and adjusting Docker settings

  • Docker CLI

Once installed, start up Docker and sign in with (or sign up for) a Docker account.

Run Docker Container Instance Locally

This command will pull down the Docker image to your local repository and then spin up the container based on the following parameters:

  • -d : Runs the container in detached mode as a background process

  • --ulimit memlock=-1 : Sets an unbounded memlock limit (not recommended for production)

  • -p : Publishes port 6379 on the host and maps it to port 6379 in the container. Pattern: HOST:CONTAINER

docker run -d -p 6379:6379 --ulimit memlock=-1 docker.dragonflydb.io/dragonflydb/dragonfly

Check out the docs here: Docker run

Once the container is up and running, you can verify its status using the docker ps command:

docker ps

This will display all currently running containers:

Or in the Docker Desktop GUI:

Install the Redis CLI

Dragonfly supports the Redis CLI command interface almost 1-to-1.

First off, use Homebrew to install the Redis CLI if you don't already have it.

brew install redis

For info on available commands: Redis CLI docs

Connect to Dragonfly

Invoke the redis-cli to connect to the instance on the default port, 6379, running on localhost.

redis-cli

Persisting and Reading Primitive Data

DragonflyDB is an in-memory, key-value datastore just like Redis.

You can find the docs on the Redis set command here.

Getting and Setting a Simple String Value to a Key

Assign a new value to the defined key. Pattern: SET key value

SET hello world

Now, retrieve the value associated with the "hello" key:

GET hello

This will display the value linked to that key.

Calling SET again for the same key will overwrite the existing value.
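For example, setting the key again replaces the old value (the value "universe" here is just an example):

SET hello universe
GET hello

The GET now returns "universe" instead of "world".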

You can add more keys and values:

set hello:2 world2
set hello:3 world3


Retrieve all available keys:

keys *

This will list all the currently available keys.
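If you'd rather drive the same commands from code, here is a minimal TypeScript sketch using the ioredis client (an assumption; any Redis-compatible client should work) against the local container:

import Redis from "ioredis";

async function main() {
  // Dragonfly speaks the Redis protocol, so a standard Redis client connects unchanged.
  const client = new Redis({ host: "127.0.0.1", port: 6379 });

  await client.set("hello", "world");      // SET hello world
  console.log(await client.get("hello"));  // -> "world"

  await client.set("hello:2", "world2");
  console.log(await client.keys("*"));     // -> e.g. [ "hello", "hello:2" ]

  await client.quit();
}

main().catch(console.error);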

Storing Data Structure Values

DragonflyDB not only stores strings but also supports more complex data structures like JSON and lists. Here's an example of setting a JSON value to a key:

Check out the docs on Dragonfly JSON here

Here is an example of setting this JSON to the key etherprice:1

JSON.SET etherprice:1 $ '{ "name" : "Ether", "symbol" : "ETH", "timestamp" : "2023-12-20T16:02:23.568Z", "price" : 2245.42 }'

Then, you can retrieve that JSON using the JSON.GET command:

JSON.GET etherprice:1

This will return the value for that key as a string with escaped quotes.

You can also parse the JSON using JSONPath expressions to get a specific value from the JSON:
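For example, assuming the etherprice:1 key set above, a $.price path pulls out just the price field:

JSON.GET etherprice:1 $.price

With a $-style path, the reply is a JSON array of matches, so you should see something like [2245.42].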

You can unmarshal the raw JSON string into a typed object using libraries in multiple languages. Check out this spec for more detail on how JSON is serialized in the RESP wire protocol.
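As an illustration, here is a TypeScript sketch that sends JSON.GET as a raw command via ioredis and parses the reply into a typed object (the interface and client setup are assumptions for the example):

import Redis from "ioredis";

interface TokenPrice {
  name: string;
  symbol: string;
  timestamp: string;
  price: number;
}

async function readEtherPrice(): Promise<TokenPrice> {
  const client = new Redis({ host: "127.0.0.1", port: 6379 });

  // JSON.GET is not a built-in ioredis method, so send it as a raw command.
  const raw = (await client.call("JSON.GET", "etherprice:1")) as string;
  await client.quit();

  // The reply is a JSON-encoded string, so JSON.parse yields a typed object.
  return JSON.parse(raw) as TokenPrice;
}

readEtherPrice().then((price) => console.log(price.symbol, price.price));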

Clean up Docker Artifacts

Now that we've finished playing with DragonflyDB, let's clean up the Docker artifacts we created.

You can remove the Docker containers and images, if desired, using the following commands:

docker container kill <container-id>
docker container prune
docker image rm <image-id>


Run the Dragonfly Container on Spheron

Now, let's get DragonflyDB running on Spheron.

Step 1: Create a Free Spheron Network Account

  1. Visit Spheron Network: https://spheron.network/

  2. On the Spheron homepage, locate and click the "Free Trial" button.

  3. You'll be directed to a login/signup page. Choose your preferred authentication method: Web2 (GitHub account, GitLab account, or Bitbucket account) or Web3 (Ethereum).

  4. Follow the provided prompts to authenticate your chosen account securely. This step ensures safe access to the Spheron Network platform.

  5. After successful authentication, you'll be guided to a confirmation page confirming the completion of your account setup.

Step 2: Creating an Organization

  1. Upon logging in, navigate to the "Compute" section using the dropdown on the top right corner.

  2. You'll be prompted to enter your organization's name, username, and avatar. Fill these in and click Save.

  3. Next, you'll be taken to a new page. Click the "New Cluster" button.

  4. Select Compute Type As: On-demand

Step 3: Deploy the Dragonfly container on the Spheron Platform UI

  1. Select Import from Docker Registry and enter the image and tag of the DragonflyDB Docker image from the GitHub Repository. You can see more on the GitHub Packages page for Dragonfly.

  2. Image: ghcr.io/dragonflydb/dragonfly

  3. Tag: v1.13.0-ubuntu

  4. Since it is a database, we need persistent storage, so toggle Add Persistent Storage on and select a storage size:

Persistent storage will not be erased unless the instance is closed.

  5. Then, set the following values and select a size for Persistent Storage:

  • Mount point: /data
  • Type of Storage: SSD
  • Port Mapping: 6379:Random Port

After your cluster deploys successfully:

  6. Grab the Connection URL:

provider.us-west.spheron.wiki:32333

Then, navigate to that URL to test the deployment and view the Dragonfly status page.

Connect the Local Redis CLI to the Dragonfly Container Running on Spheron

Connect to the container on Spheron:

Get your hostname and port from Spheron, and then let's connect to the instance using the Redis CLI:

redis-cli -h provider.us-west.spheron.wiki -p 32333

You are now connected to the Dragonfly instance and can interact with the database!
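As a quick smoke test (the key name below is just an example), run a few commands against the remote instance:

PING
SET spheron:test ok
GET spheron:test

PING should answer PONG, and the GET should return "ok".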

Running a Nest.js TypeScript API Using the DragonflyDB Instance on Spheron as a Cache

  1. Now, let's pull down the Nest.js reference app, which is set up to query the OpenSea API and cache results in Dragonfly. Clone this repo locally:
git clone git@github.com:anataliocs/nestjs-dragonflydb-cache.git
  2. Load up the project in your favorite IDE, and then let's install dependencies:
npm install
  3. Then, let's set up your local config. Create a .env file:
touch .env

Then, we need to add the following parameters. The REDIS_HOST and REDIS_PORT settings come from your deployed Spheron DragonflyDB instance.

  4. You will also need an OpenSea API key. If you need to provision a key, check out the OpenSea docs.
REDIS_HOST=provider.us-west.spheron.wiki
REDIS_PORT=32333
OPENSEA_API_KEY=
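The exact wiring lives in the reference repo, but here is a rough sketch of how a Nest.js provider could consume these variables with an ioredis client (the token name and factory below are illustrative, not the repo's actual code):

// redis.provider.ts — illustrative sketch; the reference repo's own wiring may differ
import Redis from "ioredis";
import { Provider } from "@nestjs/common";
import { ConfigService } from "@nestjs/config";

// Hypothetical injection token for the Dragonfly/Redis client.
export const REDIS_CLIENT = "REDIS_CLIENT";

export const redisProvider: Provider = {
  provide: REDIS_CLIENT,
  inject: [ConfigService],
  useFactory: (config: ConfigService) =>
    // Dragonfly is Redis-protocol compatible, so a plain ioredis client works.
    new Redis({
      host: config.get<string>("REDIS_HOST"),
      port: Number(config.get<string>("REDIS_PORT")),
    }),
};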
  5. Now that our app is configured, let's try running it!
npm run start:dev
  6. After the app starts up, let's try hitting the GET endpoint localhost:3000/opensea/proof-moonbirds.

We can hit our API running on port 3000 by executing the following cURL command:

curl --location 'localhost:3000/opensea/proof-moonbirds'

Alternatively, you can import that cURL command into a tool called Postman. Postman is a GUI for managing and executing API calls. You can download Postman here!

  7. After starting up Postman, you can generate an API request by using the import function. Learn more about it in the Postman Docs. After importing the cURL command, click Send to execute the API call. You should see the following JSON response:

If you execute the API call again, it should be noticeably faster the second time! This is because we are using a cached response from the DragonflyDB instance deployed on Spheron!

If we take a look at the Nest.js API logs, we can see the first API request had to reach out to the OpenSea API, but subsequent requests will be served from the cache.

This allows you to improve the performance and scalability of your API while reducing the volume of requests you make to OpenSea, which helps if you are being rate-limited.
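Conceptually, the caching layer follows a cache-aside pattern: check Dragonfly for the collection slug first, and only call OpenSea on a miss. Below is a simplified TypeScript sketch of that flow (the class name, endpoint URL, and TTL are illustrative, not the repo's exact code):

// opensea.service.ts — simplified cache-aside sketch; not the reference repo's exact implementation
import Redis from "ioredis";

const CACHE_TTL_SECONDS = 60; // illustrative TTL

export class OpenSeaService {
  constructor(private readonly redis: Redis, private readonly apiKey: string) {}

  async getCollection(slug: string): Promise<unknown> {
    // 1. Try the cache first.
    const cached = await this.redis.get(slug);
    if (cached) {
      return JSON.parse(cached);
    }

    // 2. On a miss, call the OpenSea API (v2 collections endpoint shown as an example).
    const response = await fetch(
      `https://api.opensea.io/api/v2/collections/${slug}`,
      { headers: { "x-api-key": this.apiKey } },
    );
    const data = await response.json();

    // 3. Store the result with a TTL so subsequent requests are served from Dragonfly.
    await this.redis.set(slug, JSON.stringify(data), "EX", CACHE_TTL_SECONDS);
    return data;
  }
}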

  8. Now, let's check in on our DragonflyDB instance. Connect to it again using the redis-cli with the following command:
redis-cli -h provider.us-west.spheron.wiki -p 32333
  9. After you connect to the Spheron instance, look up all the available keys:
keys *

You will now see an entry for the slug proof-moonbirds representing the cache entry for that API call.

1) "proof-moonbirds"


Benefits of Deploying DragonflyDB with Spheron Compute

Deploying your app on Spheron Network has many benefits, including:

  1. Decentralized deployment: Spheron Network allows developers to deploy their applications to a decentralized network. This means your app is not dependent on a single centralized server, making it more resilient to failures and attacks.

  2. Continuous deployment: Spheron offers continuous deployment for apps, which means your application will be automatically deployed whenever you push changes to your repository. This can significantly speed up the development process and allow for quick rollouts of new features and bug fixes to your app.

  3. Simplicity and Speed: Spheron Compute's user-friendly interface and streamlined deployment process make it easy for both beginners and experienced users to set up DragonflyDB quickly. You can save valuable time that would otherwise be spent on server provisioning and configuration.

  4. Spheron Marketplace: Spheron supports 25+ different instance templates to deploy, such as Redis, PostgreSQL, MySQL, Grafana, Substrate, Stride, Kyve, etc.

  5. Cost-Efficiency: Spheron Compute provides cost-effective solutions, ensuring you maximize your resources. You can choose the plan that aligns with your budget and scale as needed without breaking the bank.

  6. Reliability and Performance: Spheron's infrastructure is designed for stability and high performance. Your DragonflyDB will run smoothly, ensuring you have access to your data when you need it, without interruptions.

  7. Flexibility: With a variety of instance plans and the ability to create custom plans, you can tailor your deployment to meet your specific requirements. This flexibility allows you to optimize your resources for the best performance.

  8. Persistent Storage: Spheron Compute offers the option to add persistent storage, ensuring your data is safe and accessible even if the instance is updated or replaced.

  9. Scalability: Whether you're starting small or planning for growth, Spheron Compute allows you to scale your DragonflyDB deployment easily. You can handle increasing data volumes and growing user demands effortlessly.

  10. Support and documentation: Spheron provides comprehensive documentation and support, helping you navigate any challenges you may face during deployment and development.

Conclusion

In summary, using DragonflyDB with Spheron to cache data in a Nest.js API shows much promise for improving web performance. Pairing DragonflyDB's fast caching with Spheron's reliable, user-friendly infrastructure presents a compelling solution for modern development. Setting it up locally or on Spheron is straightforward, and it allows developers to deliver faster, more scalable, and more resilient services while optimizing resources and managing costs effectively.

Using DragonflyDB with a Nest.js API demonstrates clear benefits, notably in faster API response times and reduced reliance on external services like OpenSea. This setup helps improve scalability and lessens the pressure on external services. It enables developers to optimize API performance, take advantage of caching, and efficiently manage data storage, resulting in better user experiences and application scalability.
