
Rafik Naccache


Building turbo-ledger: A Scalable Ledger with Go and Redis

turbo-ledger: Why and How

What's a ledger and why should we bother?

In a nutshell, a ledger is a register we use to keep track of transactions in which people trade goods and services for assets that represent equivalent value.

In the physical world, an asset's value stems from its scarcity as a physical object (think banknotes, wheat, gold, collector guitars, ...).

But online, assets are digital - and digital isn't scarce. As such, any amount's worth only makes sense if it can be correctly traced back, through a set of authentic transactions, to a legitimate origin. That origin creates the scarcity that makes physical-world assets valuable (think bitcoins, NFTs, digital tokens, ...). So in the digital world, ledgers ARE the assets.

Hence being able to keep secure and scalable ledgers is of crucial importance in today's eCommerce-fueled world, where nearly every service is becoming digital. Tomorrow's money (and a significant part of today's) will only flow through digital bookkeeping!

Keeping that in mind, let's explore a minimal ledger implementation that provides bookkeeping services. We'll call it turbo-ledger; it is implemented in Go and fueled by Redis. You can find it here.

turbo-ledger will revolve around two main entities, wallets and transactions. First, let's study wallets.

Wallets: where the assets lie

A Wallet, as the name suggests, is a bucket containing some digital funds. As we explained, what we care about most in a digital wallet are its transactions, since they constitute the online money itself, but for the sake of convenience, we can keep some sort of balance field up to date at the wallet's level.
Moreover, some information about the wallet's identity needs to be present. For this post, we'll just use plain ID and owner fields.
Finally, we can qualify wallets with some tags: VIP, Personal, ...

If we go on and model this, the following would be a JSON object representing a Wallet:

{
  "wallet_id":"my_wallet",
  "transactions":[],
  "balance":10,
  "owner":"bob",
  "tags":["vip","personal"]
}

In a full-fledged implementation, we'll need to somehow attach a wallet to a cryptographic signer entity and maybe enforce real-world user identity-checking like verifying phone numbers or emails, but we'll settle for our toy-ish implementation in this post.
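
For reference, here's a minimal sketch of how such a wallet could be modeled as a Go struct with JSON tags matching the document above. The field names and types are assumptions for illustration; the actual turbo-ledger code may define them differently:

// Wallet is a minimal sketch matching the wallet JSON document above.
// Field names and types are illustrative assumptions, not necessarily
// the exact definitions used in the turbo-ledger repository.
type Wallet struct {
	WalletId     string        `json:"wallet_id"`
	Owner        string        `json:"owner"`
	Transactions []Transaction `json:"transactions"` // Transaction is introduced in the next section
	Balance      float32       `json:"balance"`
	Tags         []string      `json:"tags"`
}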

And that's it for our wallet object. Let's take a closer look at transactions.

Transactions: Assets movement tracking

A transaction describes a certain amount of assets being transferred between a source and a destination wallet. For every transaction, we'll need to keep a record of the date when it happened. We'll also attach a textual description to each transaction so it carries some explanation. Finally, transactions will have an ID field.

Here is a JSON object representation of our toy transaction:

{
  "transaction_id": "some_id",
  "source_wallet": "src_wallet",
  "destination_wallet": "dst_wallet",
  "amount": 5,
  "transaction_description":"description 1",
  "date":"2022-06-07T19:02:01.0Z"
}

In a real-world setup, transactions need to be signed by an authorized source wallet owner and must be logged as they flow through the system along with their status, like the reason why they might have been discarded, etc. We might also want to make sure one cannot tamper with the transaction history (a la blockchain). But we'll stick with the minimal implementation described above for this post.
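
As with the wallet, here's a minimal sketch of a matching Go struct; again, the field names and the choice of time.Time for the date are assumptions for illustration:

// Transaction is a minimal sketch matching the transaction JSON document above.
// time.Time marshals to RFC 3339, which matches the date format shown.
type Transaction struct {
	TransactionId          string    `json:"transaction_id"`
	SourceWallet           string    `json:"source_wallet"`
	DestinationWallet      string    `json:"destination_wallet"`
	Amount                 float32   `json:"amount"`
	TransactionDescription string    `json:"transaction_description"`
	Date                   time.Time `json:"date"`
}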

Moving Assets Around

The workflow for transferring an amount of amount_assets assets from source wallet src_wallet to destination wallet dst_wallet would be as follows:

  • transaction submitted
  • turbo-ledger verifies src_wallet exists
  • turbo-ledger verifies src_wallet has enough balance to transfer to dst_wallet (i.e. balance is greater than or equal to amount_assets)
  • turbo-ledger verifies dst_wallet exists
  • turbo-ledger writes the transaction (with amount_assets negated, to simplify balance recomputing if needed in the future) in src_wallet
  • turbo-ledger deducts amount_assets from src_wallet's balance
  • turbo-ledger does the exact inverse for dst_wallet:
  • writes the transaction (with positive amount_assets) to dst_wallet
  • increases dst_wallet's balance by amount_assets

For example, after an asset transfer of 5, here's what src_wallet will look like (assuming a starting balance of 10):

{
  "wallet_id":"src_wallet",
  "owner":"alice",
  "transactions":[
    {"transaction_id":"a7749032-7be9-4cc8-a1b0-a14701bc1208",
      "source_wallet":"src_wallet",
      "destination_wallet":"dst_wallet",
      "amount":-5,
      "transaction_description":"my cool transaction",
      "date":"2022-06-07T19:02:01Z"}
  ],
  "balance":5,
  "tags":["vip","personal"]
}


For dst_wallet, assuming an initial balance of 0, we'd get:

{
  "wallet_id":"dst_wallet",
  "owner":"bob",
  "transactions":[
    {"transaction_id":"a7749032-7be9-4cc8-a1b0-a14701bc1208",
      "source_wallet":"src_wallet",
      "destination_wallet":"dst_wallet",
      "amount":5,
      "transaction_description":"my cool transaction",
      "date":"2022-06-07T19:02:01Z"}
  ],
  "balance":5,
  "tags":["normal","professional"]
}

Now that we've seen how turbo-ledger's foundational entities - wallets and transactions - are modeled, and how asset transfers work, you'd agree we need a solid JSON-aware datastore with fast I/O and search capabilities.
With that in mind, let's dive into the implementation. (Spoiler alert: we'll use Redis!)

A Naive turbo-ledger Implementation with Redis

Why Redis?

(DISCLAIMER: I got to discover RedisJson and RediSearch when I applied to the "Write for Redis" promotional program, which sponsors this post - and I am glad I did, because these are two great modules!)

To be honest, I would not have considered Redis to implement turbo-ledger: though I highly value Redis as a high-throughput key-value data store, I was not aware of its JSON and search capabilities through the RedisJson and RediSearch modules. But when I learned about them, I fell in love!
With RedisJson and RediSearch, you get a fully JSON-fueled and indexed datastore, all accessed through the blazing-fast key-value paradigm in which Redis shines. Nothing more needed to be said; I jumped right in.

Prototyping quickly with Redis Cloud and RedisInsight Desktop

Redis Cloud

I was planning to work on turbo-ledger as a side project, so I was not willing to allocate much time and effort to it.
As such, I was not going to bother installing Redis or setting it up as a Docker container on my already overloaded Mac.

This is how I chose Redis Cloud, a managed Redis service made by the Redis team. I could easily set up a database for free with a trial account (no credit card needed) and verify that the needed modules (RedisJson and RediSearch) were present in that database. I was all set in a couple of minutes.

To learn more about Redis Cloud, check out the links at the end of this post.

RedisInsight Desktop

As explained, turbo-ledger was a side project for me, one on a tight time and effort budget. It nevertheless presented an opportunity to learn about Redis modules that were new to me, RedisJson and RediSearch. So I needed a tool to help me explore commands and query JSON keys while I was working, and the Redis CLI was not going to assist me much with that.

This is how I discovered RedisInsight Desktop GUI.

With a few mouse clicks, I connected to my Redis Cloud database and started to play.

I particularly loved the Workbench, which allowed me to try commands before heading to the Go IDE:
Redis Workbench Sample

I also could explore JSON keys using the browser interface:

Redis RedisInsights Browser JSON

I could even change JSON keys in place, which was very helpful when prototyping turbo-ledger:

Redis RedisInsights Browser JSON

With all these tools in hand, let's move on to modeling and coding turbo-ledger with Go and Redis!

Representing Our Wallet Objects

Wallets as JSON Documents

A Wallet will be represented as a JSON key at the Redis level. We'll observe the following key scheme for wallets:

wallet:wallet_id

In Redis, for instance, to create a wallet, you'd set a key whose value is the JSON shown above, using the JSON.SET command:

JSON.SET wallet:wallet_id $ "{....}"

Note the $ sign, which denotes the JSON path of the element you want to change inside the JSON document. In the command above, it means we want the input JSON document to sit at the root of the key, which amounts to setting the key's whole value.

But we can change a nested part of the document just as easily. For example, assume we want to set some wallet's balance to 10. We'd have to do:

JSON.SET wallet:wallet_id $.balance 10

Wallets in Go

In turbo-ledger's Go implementation, to create a Wallet, we have the following function:

func createWallet(ctx context.Context, rdb *redis.Client, walletId string, startingBalance float32, newWallet Wallet) error {
    if walletId == "" {
        return errors.New("could not create Wallet with empty Wallet Id")
    }
    strNewWallet, errStrNewWallet := json.Marshal(newWallet)
    if errStrNewWallet != nil {
        log.Printf("could not marshal Wallet into json")
        return errStrNewWallet
    }
    errCreateVaultWallet := rdb.Do(ctx, "JSON.SET",
        fmt.Sprintf("wallet:%s", walletId),
        "$",
        strNewWallet,
    ).Err()

    if errCreateVaultWallet != nil {
        log.Printf("could not create Wallet %s with command", errCreateVaultWallet)
        return errCreateVaultWallet
    }

    log.Printf("successfully created Wallet %s with starting balance %f",
        walletId,
        startingBalance)
    return nil
}


You may have noticed that we're using plain Redis commands through go-redis, choosing not to use any third-party RediSearch or RedisJson specialized clients. This way, we stay as close as possible to the Redis commands themselves, without any abstractions or ORMs.
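
For a flavor of how createWallet might be called, here's a rough sketch; the connection details and the wallet values below are placeholders, not the repo's actual setup:

// Hypothetical caller: connect with go-redis and create a wallet.
// The address, password, and wallet values are placeholders.
rdb := redis.NewClient(&redis.Options{
	Addr:     "your-redis-cloud-endpoint:12345",
	Password: "your-password",
})

newWallet := Wallet{
	WalletId:     "my_wallet",
	Owner:        "bob",
	Transactions: []Transaction{},
	Balance:      10,
	Tags:         []string{"vip", "personal"},
}

if err := createWallet(context.Background(), rdb, newWallet.WalletId, newWallet.Balance, newWallet); err != nil {
	log.Fatalf("could not create wallet: %s", err)
}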

Indexing and searching through wallets

Indexing data through RediSearch was a novelty for me.
Before, I only knew how to access values by the keys they're associated with - there was no way to query data based on conditions on the values. This forced the Redis developer to design key schemes that encoded parts of the data, which was limiting and cumbersome to say the least. To get a sense of how this key-based access works, see the checkWalletExists function in the turbo-ledger Go implementation:

func checkWalletExists(ctx context.Context, rdb *redis.Client, walletId string) error {
    if walletId == "" {
        return errors.New("could not check Wallet with empty Wallet Id")
    }


    searchWalletCmd := rdb.Do(ctx, "JSON.GET",
        "wallet:"+walletId, // get the key with ID information
    )
...


Now, with RediSearch, it is possible to create search indexes on particular elements of the value. In turbo-ledger, we created an index on wallet owners:

FT.CREATE idx:wallet:owner ON JSON SCHEMA $.owner AS owner TEXT

This way, we can search wallets that are owned by bob like so:

FT.SEARCH idx:wallet:owner "@owner:(bob)"

We also created an index to search through the wallets' tags array. Note the use of the JSON path $.tags.*:

FT.CREATE idx:wallet:tags ON JSON SCHEMA $.tags.* AS tags TAG

Then you can search wallets by tag like so:

FT.SEARCH idx:wallet:tags "@tags:{rich}"

In turbo-ledger's Go implementation, you'll see plain Redis commands creating those indices in the Genesis function and performing searches in the SearchWallets function.
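
If you're curious what those plain commands look like from Go, here's a rough, illustrative sketch using rdb.Do; the repo's actual Genesis and SearchWallets functions may differ:

// Create the owner index, mirroring the FT.CREATE command shown above.
// If the index already exists, Redis returns an error that we simply log here.
if err := rdb.Do(ctx, "FT.CREATE", "idx:wallet:owner",
	"ON", "JSON",
	"SCHEMA", "$.owner", "AS", "owner", "TEXT",
).Err(); err != nil {
	log.Printf("could not create owner index (it may already exist): %s", err)
}

// Search wallets owned by bob, mirroring the FT.SEARCH command shown above.
res, errSearch := rdb.Do(ctx, "FT.SEARCH", "idx:wallet:owner", "@owner:(bob)").Result()
if errSearch != nil {
	log.Printf("could not search wallets: %s", errSearch)
} else {
	log.Printf("raw search result: %+v", res)
}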

Representing Our Transaction Objects

Transactions inside Arrays in Wallets

In Redis, you use the JSON.ARRAPPEND command to add an element to an array situated under a particular JSON path. In our case, here is the command to add a transaction to a wallet:

JSON.ARRAPPEND wallet:wallet_id $.transactions {...}

You want transactional transactions

Writing a transaction to the ledger involves verifying the source wallet's balance. If there are enough assets, we decrease the source wallet's balance by the transaction's amount, then add a transaction object to this wallet's transactions array. We then mirror this process on the destination wallet (without the balance verification).

As you can see, the overall transaction-writing process entails multiple steps that need to either all happen - or not happen at all - for the turbo-ledger state to stay consistent. Does this ring a bell?

Yes. These operations need to happen in a transactional way. When a transaction is posted, we need:
0- verify the source balance
1- the source balance is updated
1'- the transaction is appended to the source's array
2- the destination balance is updated
2'- the transaction is appended to the destination's array
to be done as a whole, or:
3- ... do nothing

For 1-1', 2-2' and 3, we'll use transactions in Redis.
In Redis, you start a transaction using the MULTI command, then you queue commands, which you'll execute using EXEC or discard using DISCARD.

Now, while you're queuing your transaction commands, someone or something else might change the source or destination wallet's balance - your increment or decrement operations might no longer be working on a clean state.

This is where a Redis feature, optimistic locking using check-and-set, comes in handy.
If you issue a WATCH on certain keys before beginning a Redis transaction with MULTI, you tell Redis to monitor those keys for changes; if any of them changes, your transaction execution with EXEC fails - and you may choose to retry it.

This is the approach we follow in turbo-ledger's Go implementation.
We WATCH the source and destination wallets involved in any transaction.
We then start a MULTI session, after which we queue a series of JSON.ARRAPPEND and JSON.NUMINCRBY commands as we go through the different verification steps:

  • We WATCH the source and destination wallets
  • We start a transactional session with MULTI
  • We verify that the source wallet exists. If it doesn't, we DISCARD
  • We verify the source balance. If it isn't enough, we DISCARD
  • We queue a source balance decrease using JSON.NUMINCRBY (with a negative amount)
  • We queue adding the transaction object to the source wallet's transactions array using JSON.ARRAPPEND
  • We verify that the destination wallet exists. If it doesn't, we DISCARD
  • We queue a destination balance increase using JSON.NUMINCRBY
  • We queue adding the transaction object to the destination wallet's transactions array using JSON.ARRAPPEND
  • And we EXEC, catching any error emitted because of the WATCH above, and retry up to max_tries times

Let's have a look at how this is implemented in turbo-ledger's Go implementation.

Transactions In Go

The above process is bootstrapped in the PostTransaction function, exposed as a REST API:

func PostTransaction(rdb *redis.Client, mutex *redsync.Mutex) func(*gin.Context) {
    return func(c *gin.Context) {

        ctx := context.Background()
        var receivedTransaction Transaction
        if errBind := c.BindJSON(&receivedTransaction); errBind != nil {
            log.Printf("could not process received Transaction, %s", errBind)
            return
        }
        log.Printf("received Transaction:%+v", receivedTransaction)

        if errProcessTransaction := processTransaction(ctx, rdb, receivedTransaction, 3, mutex); errProcessTransaction != nil {
            c.IndentedJSON(http.StatusInternalServerError, receivedTransaction)
            return
        }

        c.IndentedJSON(http.StatusCreated, receivedTransaction)
    }
}

To keep this post short and not overload you with code, I suggest you follow along by reading through the processTransaction and attemptTransaction functions in turbo-ledger's Go implementation.
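
To still give a feel for the pattern without pasting the whole repo, here's a simplified, illustrative sketch of what a single attempt could look like using go-redis's Watch helper. The attemptTransfer name and its internals are mine, not the repo's actual attemptTransaction, and the imports are the same ones used in the snippets above:

// attemptTransfer is a hypothetical, simplified sketch of one transfer attempt.
// Note that the balance check happens before the MULTI/EXEC block - which is
// exactly the race window discussed in the next section.
func attemptTransfer(ctx context.Context, rdb *redis.Client, t Transaction) error {
	srcKey := fmt.Sprintf("wallet:%s", t.SourceWallet)
	dstKey := fmt.Sprintf("wallet:%s", t.DestinationWallet)

	// The source wallet stores the transaction with a negated amount.
	srcCopy := t
	srcCopy.Amount = -t.Amount
	srcBody, errSrc := json.Marshal(srcCopy)
	if errSrc != nil {
		return errSrc
	}
	dstBody, errDst := json.Marshal(t)
	if errDst != nil {
		return errDst
	}

	// WATCH both wallets: if either changes before EXEC, Watch returns
	// redis.TxFailedErr and the caller may retry (up to maxAttempts).
	return rdb.Watch(ctx, func(tx *redis.Tx) error {
		// Naive balance check: read $.balance from the source wallet.
		rawBalance, errBalance := rdb.Do(ctx, "JSON.GET", srcKey, "$.balance").Text()
		if errBalance != nil {
			return errBalance
		}
		var balances []float64
		if err := json.Unmarshal([]byte(rawBalance), &balances); err != nil || len(balances) == 0 {
			return errors.New("could not read source wallet balance")
		}
		if balances[0] < float64(t.Amount) {
			return errors.New("insufficient balance in source wallet")
		}

		// Queue all writes in a MULTI/EXEC block.
		_, errPipe := tx.TxPipelined(ctx, func(pipe redis.Pipeliner) error {
			pipe.Do(ctx, "JSON.NUMINCRBY", srcKey, "$.balance", -t.Amount)
			pipe.Do(ctx, "JSON.ARRAPPEND", srcKey, "$.transactions", srcBody)
			pipe.Do(ctx, "JSON.NUMINCRBY", dstKey, "$.balance", t.Amount)
			pipe.Do(ctx, "JSON.ARRAPPEND", dstKey, "$.transactions", dstBody)
			return nil
		})
		return errPipe
	}, srcKey, dstKey)
}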

That was quite a ride! But as you may have suspected, something's missing - after all, we called this a "naive" implementation. It's about race conditions, every distributed system designer's nightmare. Read on.

Making turbo-ledger turbo

So now turbo-ledger naively but happily serves its clients. But do you remember that "transactional transactions" scheme?
0- verify the source balance
1- the source balance is updated
1'- the transaction is appended to the source's array
2- the destination balance is updated
2'- the transaction is appended to the destination's array
to be done as a whole, or:
3- ... do nothing

We are using WATCH to halt a transaction if any of the involved wallets' state changes during operations 1-1' and 2-2'.

But... what if, after we happily (and naively) verified the source wallet's balance in step 0, found enough assets there, and were just about to proceed with the steps starting from 1-1', someone else spends some assets from that same source wallet?

WATCH can't help us here, as it only begins monitoring from 1-1'!
Indeed, concurrent processes are racing over the source wallet's balance in this case, because Redis' optimistic locking only protects the system's state AFTER the source balance check.

The situation that would then occur is called a double-spend: if allowed, we would erroneously have spent assets that no longer exist, as they've already been spent by someone else (or by the same fraudulent person) - it's like spending the same money twice!
In this case, locking can be the solution.

Preventing the Double Spend Problem with local locks

One approach would be to lock the code section posting transactions. On entering the function, you'd acquire a lock, and you'd release it when you're done. If someone else already holds the lock, you'd have to wait for its release, possibly up to a certain timeout. This looks like the following:

var m sync.Mutex
...
func ProcessTransaction(ctx context.Context, rdb *redis.Client, transaction Transaction, maxAttempts int) error {
    if transaction.Amount <= 0 {
        return errTransactionWrongAmount
    }

    //Acquire single thread lock
    m.Lock()

    //Make sure to unlock on exiting transaction
    defer m.Unlock()
        //....


While this solution would definitely do the trick, it implies a scalability problem: a lock is only valid in the execution context of a single instance of the ledger. If multiple turbo-ledger instances access the Redis database, having Go manage local locks in each turbo-ledger process won't prevent users going through one instance from having their data compromised by users going through another. As such, only a single turbo-ledger instance can run at any time for this locking mechanism to stay valid. Hence, no horizontal scalability is possible here.

To address this, we can use a distributed-lock based version. How does it work?

Addressing double-spend with a Redis-based distributed lock: RedLock

Redlock follows the spirit of the previous solution, but as it relies on a central Redis database to store the locks, locking becomes instance-independent, which allows us to run as many turbo-ledger instances as we please. This behavior is exposed in the current commit of turbo-ledger's Go implementation, and is coded like so:

// Preparing redsync instance
pool := goredis.NewPool(rdb)
rs := redsync.New(pool)
mutexname := "global-wallets-mutex"
mutex := rs.NewMutex(mutexname)
...

func processTransaction(ctx context.Context, rdb *redis.Client, transaction Transaction, maxAttempts int, mutex *redsync.Mutex) error {
    if transaction.Amount <= 0 {
        return errTransactionWrongAmount
    }

    log.Printf("acquiring global lock...")
    if err := mutex.Lock(); err != nil {
        log.Printf("could not acquire global mutex, aborting")
        return err
    }

    defer func() {
        log.Printf("releasing global lock...")
        if ok, err := mutex.Unlock(); !ok || err != nil {
            log.Printf("could not release global mutex, aborting")
        }
    }()

// ...

Now, we could argue that we don't need WATCH anymore. But I left it in the current implementation, as I am still investigating a way to have Redis manage everything without any locking at the Go level, distributed or not. Maybe through Lua scripting? Let's see what the future holds!
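
Just to sketch the idea (this is not something turbo-ledger does today, and the script below is an untested illustration): a Lua script run with EVAL could perform the balance check and all four writes atomically on the server side, assuming both wallet keys already exist:

// An illustrative, untested idea only: an EVAL-based transfer where the balance
// check and all writes happen atomically inside Redis. Assumes both wallets exist.
const transferScript = `
local balance = cjson.decode(redis.call('JSON.GET', KEYS[1], '$.balance'))[1]
local amount = tonumber(ARGV[1])
if balance < amount then
  return redis.error_reply('insufficient balance in source wallet')
end
redis.call('JSON.NUMINCRBY', KEYS[1], '$.balance', -amount)
redis.call('JSON.ARRAPPEND', KEYS[1], '$.transactions', ARGV[2])
redis.call('JSON.NUMINCRBY', KEYS[2], '$.balance', amount)
redis.call('JSON.ARRAPPEND', KEYS[2], '$.transactions', ARGV[3])
return 'OK'
`

// Hypothetical call: srcTxJSON carries the negated amount for the source wallet,
// dstTxJSON the positive one.
if err := rdb.Eval(ctx, transferScript,
	[]string{srcKey, dstKey},
	amount, srcTxJSON, dstTxJSON,
).Err(); err != nil {
	log.Printf("scripted transfer failed: %s", err)
}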

Conclusion

Congratulations on making it this far!
In this post, we've seen how to implement a scalable ledger based on Redis thanks to the powerful RedisJson and RediSearch modules, and how development and exploration on Redis can be made smooth using Redis Cloud and the RedisInsight Desktop GUI. We also discussed Redis transactions, and saw how we additionally needed to protect against race conditions over source wallet balances at the start of a transaction, using Redlock.
One last thing needs to be taken into consideration: as you're storing non-volatile, sensitive data, be sure to understand the different Redis persistence models and their applications, so you can choose the most suitable data persistence model for your needs!

To learn more about Redis, RedisInsight Desktop, and Redis Cloud, head over to the following links:
Redis Cloud
Redis Developer Hub - tools, guides, and tutorials about Redis
RedisInsight Desktop GUI

This post is in collaboration with Redis
