John Driscoll

Compressing Authority

Elevator pitch: Explore a scalable proof-of-concept permission system that is resistant to downgrade attacks while reducing the security boundary to a minimally physical device.

This will begin as a more high- than low-level explanation, but if you're not familiar with concepts like hash functions and public-key cryptography, you might not get much out of it.

If you're already familiar with cryptographic accumulators, don't stop reading. I'll be focusing on some seldom-used features that make these underrepresented beasts absolutely shine in real-world cases outside the HR department of Cabal, Inc.

Before I get to that, though, I'll cover some fundamentals.

Universal means fixed size

An accumulation is an irreversible sum or product of elements. The most notable type of accumulator is universal, meaning that its size stays the same regardless of how many elements have been accumulated. In this way, a universal accumulator is a one-way compression algorithm much like a hash function.

Zero-knowledge

Unlike hash functions, however, you can prove an accumulation contains a single element without revealing any other elements. To illustrate the difference, consider this: you hash the input abc, getting xyz as a result, then give your client just xyz and a single letter from the input. If your client has just c, there's no way for them to prove it's part of the input that produced xyz without you revealing the input in its entirety or them brute forcing the domain of inputs.

You might say "I'll just give them the rest of the input and they can verify that c is a member," and you'd be right. But, for the purposes of this exercise, the rest of the input is sensitive information that your client need not know.

Forced Witness

Instead of using a hashing function, you could use an accumulator. Now, you accumulate the individual elements a, b, and c, getting a witness W for each addition, along with an updated accumulation value A. The witness and the accumulation value are very similar; they both act like digests produced by the hash functions that you're already familiar with. The difference between the witness and the accumulation value is that the witness contains all the elements from the accumulator except the element for which it proves membership.

If your client has the single element c, its witness W, and the latest accumulation value A, they can simply accumulate c with W to produce a new digest Z. If Z and A are equal, then your client knows c is a member of A all while never having known of the values a or b.
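
To make this concrete, here is a toy sketch of an RSA-style accumulator in Rust. The parameters are hopelessly insecure and the elements are hard-coded small primes; this is only meant to show the witness check in action, not to mirror how a production library like rust-clacc implements it.

/// Square-and-multiply modular exponentiation.
fn modpow(mut base: u128, mut exp: u128, modulus: u128) -> u128 {
    let mut result = 1;
    base %= modulus;
    while exp > 0 {
        if exp & 1 == 1 {
            result = result * base % modulus;
        }
        base = base * base % modulus;
        exp >>= 1;
    }
    result
}

fn main() {
    let n = 3233u128; // toy modulus (61 * 53); far too small for real use
    let g = 2u128;    // the empty accumulation value
    let (a, b, c) = (5u128, 7u128, 11u128); // elements represented as primes

    // Accumulate all three elements: A = g^(a*b*c) mod n.
    let acc = modpow(g, a * b * c, n);

    // The witness for c is the accumulation of everything BUT c.
    let witness = modpow(g, a * b, n);

    // Verification: raising the witness to c must reproduce A.
    assert_eq!(modpow(witness, c, n), acc);
    println!("c is a member, yet a and b were never revealed");
}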

So?

How is this useful to me? While anonymity is an oft-touted property of accumulators, it's not what initially drew my interest. Rather, it's the compression aspect.

For the time being, I work in embedded security where resources are constrained and peripherals are rare and dubious. If I need a tiny processor with limited RAM and no external storage to control a scalable, versioned permission system that's resistant to downgrade attacks, the only scheme that's going to work in this scenario must involve compressing the set of all permissions from gigabytes to kilobytes.

A universal accumulator is an attractive option because an embedded security device can simply store the fixed size accumulation value instead of the entire set of permissions, meanwhile the permissions and their witness values can live in the cloud.

The cost of business

Size isn't everything, as it turns out. My device is constrained by both space and time. Its processor isn't "slow", but a modern smartphone can outperform it. Its memory, while more than adequate for most applications, won't make it into "problems you want to have" territory. Think Raspberry Pi-like capability.

The bad news is that accumulators have a prominent Achilles' heel: witnesses are fragile. Whenever an element is added to or deleted from an accumulator, the witnesses for the previously accumulated elements are invalidated with respect to the new accumulation value. Those newly invalidated witnesses can be updated, but doing so requires visiting every witness, which becomes an expensive operation with large sets.
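
Continuing the toy sketch from earlier (same insecure parameters), here is the fragility in action, along with the repair. Note that the repair uses only public values, a property that becomes important shortly.

// Square-and-multiply helper from the earlier sketch.
fn modpow(mut base: u128, mut exp: u128, modulus: u128) -> u128 {
    let mut result = 1;
    base %= modulus;
    while exp > 0 {
        if exp & 1 == 1 {
            result = result * base % modulus;
        }
        base = base * base % modulus;
        exp >>= 1;
    }
    result
}

fn main() {
    let n = 3233u128;
    let (a, b, c, d) = (5u128, 7u128, 11u128, 13u128);
    let acc = modpow(2, a * b * c, n); // current accumulation of {a, b, c}
    let witness = modpow(2, a * b, n); // witness for c

    // A new element d is added: A' = A^d mod n.
    let acc_new = modpow(acc, d, n);

    // c's old witness no longer verifies against the new value...
    assert_ne!(modpow(witness, c, n), acc_new);

    // ...but it can be repaired by absorbing the same addition:
    // W' = W^d mod n. No private key required.
    let witness_new = modpow(witness, d, n);
    assert_eq!(modpow(witness_new, c, n), acc_new);
}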

The little emperor of a board is definitely not capable of updating every witness at scale, let alone simultaneously serving requests with valid witnesses.

(One could argue this would be reason to expand the security boundary to include additional hardware; however, it's safe to assume that requires copying sensitive key material to hardware that was not physically designed to store it. Physical boundaries, EMC shielding, power filtering, tamper resistance, breach sensors; that kind of thing. This is not an acceptable solution for a security-critical system.)

Dividing responsibilities

The good news is the authority doesn't have to perform the witness updates. It's possible for an untrusted service to update the witnesses without compromising the security of the accumulator. Trustless witness updates are a feature of CL accumulators, a construction built on the RSA cryptosystem.

A brief side note: although trustless updates are an inherent feature of accumulators implemented using elliptic curves, they are nowhere near as performant there as in the CL accumulator construction. That is indeed ironic, considering ECC operations are much more efficient than RSA operations for keys of equivalent strength. The issue, though, is that an untrusted witness update in a CL scheme is a constant-time operation, while in the ECC scheme it is quadratic.

As you would expect from anything built on RSA, CL accumulators rely on the principles of asymmetric cryptography. As a consequence of the strong RSA assumption, only the controller of the private key is able to delete elements, while any party with the public key may verify and update witnesses.
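
That asymmetry is easy to see in toy form. Deleting an element x amounts to taking the x-th root of the accumulation, and the exponent that accomplishes it, the inverse of x mod λ(n), can only be computed from the factorization of n. A sketch under the same insecure toy parameters as before:

// Square-and-multiply helper from the earlier sketches.
fn modpow(mut base: u128, mut exp: u128, modulus: u128) -> u128 {
    let mut result = 1;
    base %= modulus;
    while exp > 0 {
        if exp & 1 == 1 {
            result = result * base % modulus;
        }
        base = base * base % modulus;
        exp >>= 1;
    }
    result
}

/// Modular inverse via the extended Euclidean algorithm
/// (assumes a and m are coprime).
fn modinv(a: i128, m: i128) -> i128 {
    let (mut old_r, mut r) = (a, m);
    let (mut old_s, mut s) = (1i128, 0i128);
    while r != 0 {
        let q = old_r / r;
        (old_r, r) = (r, old_r - q * r);
        (old_s, s) = (s, old_s - q * s);
    }
    old_s.rem_euclid(m)
}

fn main() {
    let n = 3233u128;                   // public: the modulus
    let lambda = 780i128;               // private: λ(61 * 53) = lcm(60, 52)
    let acc = modpow(2, 5 * 7 * 11, n); // accumulation of {5, 7, 11}

    // Delete 11 by raising A to the inverse of 11 mod λ(n).
    let inv = modinv(11, lambda) as u128;
    let acc_del = modpow(acc, inv, n);

    // The result is exactly the accumulation of {5, 7}.
    assert_eq!(acc_del, modpow(2, 5 * 7, n));
}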

Bearing that in mind, I can now establish some service roles for my setup:

  • Authority: The little emperor that holds the private key. The authority is responsible for adding and deleting elements, authorizing permissioned requests, and being the effective root of trust.

  • Worker: Compute oriented cloud instance that holds the public key. The worker has access to the set of witnesses and is responsible for updating them after the authority has added or deleted elements.

  • Synchronizer: Manages a witness update window by synchronizing the Worker and Authority. More on this later.

It's a good time to start typing some code and coding some types.

SPOILER ALERT: The following sections are essentially a code review intended to gradually teach the reader about the core mechanics of the system, namely multistage accumulations, synchronized witness values, and update windows. If you'd prefer to skip to the demonstration, no one will judge you.

Types

First, I will define Permission, the basic structure that my accumulator will store:

#[derive(Serialize, Deserialize, Clone)]
pub struct Permission {

    /// The Permission's unique nonce.
    ///
    /// This acts like an ID and will persist for the 
    /// Permission across its lifetime of updates.
    pub nonce: Nonce,

    /// The actions this Permission allows its owner to take.
    pub actions: Vec<Action>,

    /// The version of the Permission.
    ///
    /// This must be incremented every time the Permission is
    /// updated with different actions.
    pub version: usize,
}

--------COMPULSORY SECURITY ADVISORY--------
In a real world scenario, Permission would
include a mechanism for authenticating its
owner, like a public key. New permissions and
updates to existing permissions would then
need to be signed by the owner and verified
by the Authority.
------END COMPULSORY SECURITY ADVISORY------

The Serialize and Deserialize traits are derived so that the struct can be easily converted to or from bytes for its accumulation and for getting passed around between services.

The associated types Nonce and Action are defined here:

/// A unique number assigned to new Permissions by the
/// Authority.
pub type Nonce = u64;

/// Actions are identified by a string such as "sign-in" or
/// "send-message".
pub type Action = String;

The remaining data types are transient structs that the services will use to encapsulate method arguments. Let's first look at ActionRequest, which contains a permission and an action to be performed:

#[derive(Deserialize, Serialize)]
pub struct ActionRequest {

    /// The Permission associated with the action.
    pub perm: Permission,

    /// The Witness attesting that the Permission is a member
    /// of the accumulation.
    pub witness: Witness<BigInt>,

    /// The action being taken.
    pub action: Action,
}

You might notice that ActionRequest contains the first references to the Witness struct which comes from the rust-clacc package, a library that implements a CL accumulator. This struct already implements the Serialize and Deserialize traits so there's no need to reinvent the wheel.

Next, UpdateRequest is used to request changes to an existing permission's actions:

#[derive(Deserialize, Serialize)]
pub struct UpdateRequest {

    /// The previous version of the Permission.
    pub perm: Permission,

    /// The Witness attesting that the previous version of
    /// the Permission is a member of the accumulation.
    pub witness: Witness<BigInt>,

    /// The new version of the Permission.
    pub update: Permission,
}

Finally, UpdateResponse is returned by the Authority after a permission has been updated:

#[derive(Deserialize, Serialize)]
pub struct UpdateResponse {

    /// The original UpdateRequest.
    pub req: UpdateRequest,

    /// The accumulation value after the Permission has been
    /// updated.
    pub value: BigInt,
}

What's unconventional about UpdateResponse is this value field storing the post-update accumulation. The reason for this is not immediately obvious, but recall that in our scheme, untrusted services are not able to delete elements from the accumulator because they lack knowledge of the private key.

In order for the Worker (an untrusted service) to stay synchronized with the latest state of the accumulator, the Authority (a source of truth) must provide its accumulation value after it has deleted the previous version of a permission and subsequently added the new version.

Authority

With the basic data types out of the way, have a look at the Authority struct:

pub struct Authority {

    /// The Accumulator's public key.
    key: BigInt,

    /// The Accumulator used to verify Permissions.
    verifying: Accumulator<BigInt>,

    /// The Accumulator containing Permissions whose
    /// Witnesses are currently being updated by the Worker.
    updating: Accumulator<BigInt>,

    /// The Accumulator containing the most recent versions
    /// of all Permissions.
    staging: Accumulator<BigInt>,

    /// Mutex locked while the Authority is operating on its
    /// Accumulators.
    guard: Mutex<()>,
}

The unremarkable fields are the key, which is just the RSA modulus, and the guard mutex for thread safety. The interesting thing going on here is the three independent accumulators. While they all share the same private key, their accumulation values will seldom match. This must be the case in order to facilitate uninterrupted service for action requests while accepting new and updated permissions. Consider each case:

  • verifying: If the Authority service were a night club, think of the verifying accumulation as the guest list currently in the bouncers' hands. There might be a revised list being worked on, but, until that revision is available, the verifying list is ultimately deciding who's who.

  • updating: If the Authority were a fancy venue hosting a private event whose guest list was printed on subtle off-white, tastefully thick, watermarked card stock, then the updating accumulation is the version of the list that is at the printers. While it's essentially finalized for the printing process, it can't inform the security detail until it's printed and delivered.

  • staging: This is the version of the guest list written on a legal pad in someone's office. In this draft state, guests may easily be added or stricken; however, getting it into the hands of staff requires the time and resources necessary to have it printed.

Adding or updating permissions is thus a multistage process. It begins with changes being applied to the staging accumulation during a predefined update window. When the window is closed, the staging accumulation is copied to become the new updating accumulation. At this point, all the witnesses are updated. Finally, after the update process has finished, the updating accumulation is copied to become the new verifying accumulation. Meanwhile, the staging accumulation has been accepting changes in anticipation of the next update.

Choosing an appropriate duration for the update window is a trade-off. A shorter window means a faster turnaround between the time a permission update request is received and the time the updated permission can be verified by the Authority. However, if the rate of update requests is slow for a large set of existing permissions, updating the equally large set of witnesses can be costly at a faster cadence. And, of course, the window should not be shorter than the amount of time it takes to update all the witnesses, so that must also be considered.
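
For a rough feel of the floor on that duration, here's a back-of-envelope helper. The figures below are invented for illustration; real numbers would come from benchmarking the Worker's witness updates on representative hardware.

use std::time::Duration;

/// Lower bound on the update window: at least one full pass
/// over every witness, divided across the available cores.
fn min_window(witnesses: u64, per_witness: Duration, cores: u64) -> Duration {
    Duration::from_nanos(per_witness.as_nanos() as u64 * witnesses / cores)
}

fn main() {
    // E.g. one million witnesses at a made-up 50µs of modular
    // arithmetic each, spread over four cores: a 12.5 second floor.
    let floor = min_window(1_000_000, Duration::from_micros(50), 4);
    println!("window must be at least {:?}", floor);
}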

Jumping back into the code, examine the methods implemented by the Authority:

impl Authority {

    /// Create a new Authority.
    pub fn new() -> Self {
        // Create an accumulator with a random private key.
        let (acc, _, _) =
            Accumulator::<BigInt>::with_random_key(None);
        // Allocate the Authority using the public key and
        // three copies of the Accumulator for each phase of
        // the update process.
        Authority {
            key: acc.get_public_key().clone(),
            verifying: acc.clone(),
            updating: acc.clone(),
            staging: acc.clone(),
            guard: Mutex::new(()),
        }
    }

The constructor starts by creating an accumulator with a random private key and then clones it for each initial accumulation stage.

--------COMPULSORY SECURITY ADVISORY--------
In a real world scenario, the accumulator's private key would be generated, sharded, and encrypted to several different Security Officers in a trusted execution boundary as part of a key ceremony. The Security Officers entrusted with the shards would then submit their part of the key to the Authority server in order to reconstitute it in a secure environment. The full key must never be assembled outside a trusted boundary.
------END COMPULSORY SECURITY ADVISORY------

    /// Add a Permission.
    pub async fn add_permission(
        &mut self,
        mut perm: Permission,
    ) -> Permission {
        // Lock the Mutex.
        let _guard = self.guard.lock().await;
        // Assign a random Nonce that prevents other
        // Permissions from overwriting this Permission in
        // the future.
        perm.nonce = rand::random::<u64>().into();
        // Add the Permission to the staging Accumulator.
        self.staging.add(&perm);
        // Return the Permission with the new Nonce.
        perm
    }

The add_permission method starts off by assigning a random nonce to the new permission. The permission is then added to the staging accumulator.

    /// Update an existing Permission.
    pub async fn update_permission(
        &mut self,
        req: UpdateRequest,
    ) -> Result<UpdateResponse, &'static str> {
        // Ensure the new Permission's Nonce matches the old
        // Permission's Nonce.
        if req.update.nonce != req.perm.nonce {
            return Err("nonce mismatch");
        }
        // Ensure the new Permission's version is greater
        // than the old Permission's version.
        if req.update.version <= req.perm.version {
            return Err("new version must be greater than \
                        old version");
        }
        // Lock the Mutex.
        let _guard = self.guard.lock().await;
        // Delete the old Permission from the staging
        // Accumulator.
        self.staging.del(&req.perm, &req.witness)?;
        // Add the new Permission to the staging Accumulator.
        self.staging.add(&req.update);
        // Return the latest accumulation value.
        Ok(UpdateResponse {
            req: req,
            value: self.staging.get_value().clone(),
        })
    }

The update method starts with some sanity checks: the old permission's nonce must match the new permission's nonce, and the new version number must be greater than the old version number. The old permission is then deleted from staging and the new version is added. Finally, the method returns an UpdateResponse that includes the original request and the latest accumulation value.

    /// Perform an action if a given Permission is part of
    /// the Accumulation.
    pub async fn action(
        &self,
        req: ActionRequest,
    ) -> Result<(), &'static str> {
        // Lock the Mutex.
        let _guard = self.guard.lock().await;
        // Verify the Permission is part of the verifying
        // Accumulator.
        self.verifying.verify(&req.perm, &req.witness)?;
        // Ensure the requested action is in the permission's
        // actions list.
        match req.perm.actions.iter().find(
            |&action| action == &req.action) {
                Some(_) => Ok(()),
                None => Err("permission not granted to \
                             perform action"),
        }
    }

The action method first verifies that the permission is a member of the verifying accumulator using its witness, then checks that the requested action is included in the permission's action list.

--------COMPULSORY SECURITY ADVISORY--------
In a real world scenario, the action might
require authorization so that an external
service may perform the requested action. In
this case, the Authority wouldn't perform the
action, but instead simply sign the request.
The request should have its own nonce in
order to prevent replay attacks. The same
rules for the accumulator's private key apply
to the signing key; Shard it, encrypt it to
Security Officers, and never reconstitute it
outside the trusted boundary.
------END COMPULSORY SECURITY ADVISORY------

    /// Copy the current staging Accumulator to the updating
    /// Accumulator.
    ///
    /// This should be called when the Worker begins updating
    /// Witnesses so that the updating Accumulator captures
    /// all Permission additions and deletions made during
    /// the update window.
    pub async fn update(&mut self) {
        let _guard = self.guard.lock().await;
        self.updating = self.staging.clone();
    }

The update method simply copies the staging accumulation to the updating accumulation. This method should be called when the update window has closed. In the fancy venue analogy, this is where the legal pad guest list is sent off to the printers.

    /// Copy the current updating Accumulator to the
    /// verifying Accumulator.
    ///
    /// This should be called when the Worker has finished
    /// updating all Witnesses so that the verifying
    /// Accumulator reflects all additions and deletions that
    /// have been captured during the previous update window.
    pub async fn sync(&mut self) {
        let _guard = self.guard.lock().await;
        self.verifying = self.updating.clone();
    } 
}

Finally, the sync method copies the updating accumulation to the verifying accumulation. This must be called after the witnesses have been updated in order to have them be verifiable by the Authority. This is analogous to giving the printed guest list to the guards.

Worker

Most of the system's complexity resides in the Worker. Here's its struct:

pub struct Worker {

    /// The value of the Accumulator before any updates
    /// absorbed during the current window have been applied.
    value: BigInt,

    /// The current Accumulator. Although it will only have
    /// the public key, it will stay synchronized with the
    /// trusted value.
    ///
    /// The field will start in a None state until the public
    /// key from the Authority can be set using `set_key`.
    acc: Option<Accumulator<BigInt>>,

    /// The absorbed updates.
    update: Update<BigInt>,

    /// The Permission-Witness pairs that will be added
    /// during the current update window.
    additions: PermissionMap,

    /// The current map of Permission-Witness pairs.
    perms: PermissionMap,

    /// The additions that are having their initial witnesses
    /// calculated during the update process.
    updating_additions: PermissionMap,

    /// The permissions that are having their witnesses
    /// updated during the update process.
    updating_perms: PermissionMap,

    /// Mutex locked during updates to the Accumulator.
    guard_acc: Mutex<()>,

    /// Mutex locked while the Worker is in the process of
    /// updating Witnesses.
    guard_update: Mutex<()>,
}

The first field is value and its purpose is a bit esoteric. For now, think of it as being a bit like the Authority's verifying accumulator. In fact, the value and the verifying accumulation should be equal, although their practical application is very different. I'll come back to this as I get into the implementation.

The Worker maintains its own accumulator that is synchronized with the Authority's staging accumulator. The acc field is an Option so that a Worker may be instantiated in an uninitialized state until it's keyed with the modulus from the Authority's accumulator. This is more a consequence of the fact that this proof-of-concept uses a randomly generated private key; however (compulsory security advisory), the best practice is to use a known, trusted key.

The update field is an unfamiliar Update type, which is again provided by rust-clacc. It's essentially a sponge that soaks up all the updates occurring within the window. When the window closes, the slurp juice gets wrung out in order to update the witnesses.

The additions and perms fields are maps that allow the Worker to easily find a permission and its witness when given only the permission's nonce. The PermissionMap is defined as such:

/// Type for a collection where Nonces map to Permission-
/// Witness pairs.
type PermissionMap = HashMap<Nonce, (Permission,
                                     Witness<BigInt>)>;

As you might expect, the additions map stores the permission-witness pairs for new permissions that are being collected during the update window, while perms stores the pairs that may be verified against the Authority's verifying accumulation. The updating_perms and updating_additions maps are used to store the witnesses that are being updated while the update task is in process.

--------PRETTY OBVIOUS SYSTEMS ARCHITECTURE NOTE--------
In a real world scenario, the permissions collection
would be stored in a database.
------END PRETTY OBVIOUS SYSTEMS ARCHITECTURE NOTE------

Finally there are two mutexes, one that guards access to the Worker's accumulator, and one that guards the update process.

Now to get into the implementation:

impl Worker {

    /// Submit the Authority's public key.
    ///
    /// This allocates the Worker's Accumulator and allows
    /// the other methods to be called successfully. If there
    /// is already an Accumulator allocated, this method
    /// returns an error.
    pub async fn set_key(
        &mut self,
        key: BigInt,
    ) -> Result<(), &'static str> {
        // Lock the Accumulator Mutex.
        let _guard_acc = self.guard_acc.lock().await;
        // Error out if there is already an Accumulator
        // allocated.
        match self.acc {
            Some(_) => Err("already have public key"),
            None => {
                // Allocate new Accumulator initialized from
                // the Authority's public key.
                let acc = Accumulator::<BigInt>
                          ::with_public_key(key);
                self.value = acc.get_value().clone();
                self.acc = Some(acc);
                Ok(())
            }
        }
    }

The set_key method accepts the Authority's public key so that it can allocate its own accumulator and set an initial value. As a precaution, all the methods (with the exception of set_key) will immediately return an error if called before the Worker has been keyed with the set_key method.

    /// Absorb a new Permission into the update window.
    pub async fn add_permission(
        &mut self,
        perm: Permission,
    ) -> Result<(), &'static str> {
        // Lock the Accumulator Mutex.
        let _guard_acc = self.guard_acc.lock().await;
        // Error out if there is no Accumulator allocated.
        let acc = match &mut self.acc {
            Some(acc) => acc,
            None => {
                return Err("need public key");
            },
        };
        // Use the helper to add the Permission.
        Self::add_permission_internal(
            perm,
            &self.value,
            acc,
            &mut self.update,
            &mut self.additions,
        );
        Ok(())
    }

The add_permission method is a front-end to an internal helper function:

    /// Internal helper to add a new permission.
    ///
    /// This code is reused by `add_permission` and
    /// `update_permission`. It is assumed that the caller
    /// has locked a Mutex so that operations on the
    /// Accumulator are thread safe.
    fn add_permission_internal(
        perm: Permission,
        value: &BigInt,
        acc: &mut Accumulator<BigInt>,
        update: &mut Update<BigInt>,
        additions: &mut PermissionMap,
    ) {
        // Add Permission to Accumulator.
        let mut witness = acc.add(&perm);
        // Absorb the addition into the batched Update.
        update.add(&perm, &witness);
        // Set the witness value.
        witness.set_value(value);
        // Insert the pair into the additions collection.
        additions.insert(perm.nonce, (perm, witness));
    }

The new permission is added to the Worker's accumulator in order to stay synchronized with the Authority's staging accumulator. The permission is then absorbed by the update sponge. Finally, the witness value is manually set by witness.set_value(value) before the permission-witness pair is added to the additions map.

I previously mentioned the vague importance of the value field. As you may recall, value is essentially the same thing as the Authority's verifying accumulation, i.e. it represents the state of the accumulator before any of the changes occurring during the update window have been applied. The witness for new elements must be set to this value so that the update process correctly computes their witnesses.

Picture a new permission contained in the Update sponge along with all the other permission changes that occurred during the window in which it was added. To compute that permission's witness for the post-update state, squeeze the permission itself out of the update and apply everything that remains to the old accumulation value.
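
In toy terms, under the same insecure parameters as the earlier sketches: if the window absorbed additions d and e on top of the old value A, the post-window value is A^(d*e), and d's fresh witness is the old value raised to every absorbed element except d itself.

// Square-and-multiply helper from the earlier sketches.
fn modpow(mut base: u128, mut exp: u128, modulus: u128) -> u128 {
    let mut result = 1;
    base %= modulus;
    while exp > 0 {
        if exp & 1 == 1 {
            result = result * base % modulus;
        }
        base = base * base % modulus;
        exp >>= 1;
    }
    result
}

fn main() {
    let n = 3233u128;
    let acc = modpow(2, 5 * 7 * 11, n); // pre-window value, like `value`
    let (d, e) = (13u128, 17u128);      // elements added during the window

    // Applying the full update yields the post-window accumulation.
    let acc_new = modpow(acc, d * e, n);

    // Squeezing d out of its own update yields d's new witness.
    let witness_d = modpow(acc, e, n);
    assert_eq!(modpow(witness_d, d, n), acc_new);
}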

    /// Absorb an updated Permission into the update window.
    ///
    /// This is simply a deletion of the old version and an
    /// addition of the new version.
    pub async fn update_permission(
        &mut self,
        res: UpdateResponse,
    ) -> Result<(), &'static str> {
        // Lock the Accumulator Mutex.
        let _guard_acc = self.guard_acc.lock().await;
        // Error out if there is no Accumulator allocated.
        let acc = match &mut self.acc {
            Some(acc) => acc,
            None => {
                return Err("need public key");
            },
        };
        // Absorb the deletion into the batched Update.
        self.update.del(&res.req.perm, &res.req.witness);
        // Use the helper to add the Permission.
        Self::add_permission_internal(
            res.req.update,
            &self.value,
            acc,
            &mut self.update,
            &mut self.additions
        );
        // Synchronize the Worker's accumulation with the
        // Authority's. Note that the Worker can't call
        // Accumulator.del because it does not have the
        // private key.
        acc.set_value(&res.value);
        Ok(())
    }

The update_permission method is simple. The deletion is absorbed by the update, then the updated permission is added using add_permission_internal. Note that the accumulation value for acc must be manually set to the value field from the UpdateResponse because there is no way for the Worker to delete the old Permission without the private key.

    /// Retrieve the current Witness for a given Nonce.
    pub async fn witness(
        &self,
        nonce: Nonce,
    ) -> Result<Option<Witness<BigInt>>, &'static str> {
        // Lock the Accumulator Mutex to ensure latest
        // Permissions collection is available if called
        // during the update process.
        let _guard_acc = self.guard_acc.lock().await;
        // Error out if there is no Accumulator allocated.
        match &self.acc {
            Some(_) => {},
            None => {
                return Err("need public key");
            },
        }
        // Return the Witness stored for the Nonce.
        match self.perms.get(&nonce) {
            Some(pair) => Ok(Some(pair.1.clone())),
            None => Ok(None),
        }
    }

The witness method retrieves a witness for a permission identified by its nonce. The Worker maintains the collection of valid witnesses for the Authority. Consequently, any operation involving a permission contained in the Authority's verifying accumulation must first use this method in order to get proof of its membership.

    /// Perform Witness updates.
    ///
    /// This will block the current thread during the
    /// process, however, other threads may call
    /// `add_permission` and `update_permission` to absorb
    /// updates for the next window without adversely
    /// affecting the current update process.
    pub async fn update(
        &mut self,
    ) -> Result<(), &'static str> {
        // Lock the update Mutex.
        let _guard_update = self.guard_update.lock().await;
        // Error out if there is no Accumulator allocated.
        match self.acc {
            Some(_) => {},
            None => {
                return Err("need public key");
            },
        }
        // Cache volatile values that are needed for the
        // update process.
        let acc;
        let update;
        {
            // Lock the Accumulator Mutex so that other
            // threads cannot call
            // `add_permission` or `update_permission` while
            // the instance values are copied to the local
            // cache.
            let _guard_acc = self.guard_acc.lock().await;
            // Store a copy of the current Accumulator.
            acc = self.acc.as_ref().unwrap().clone();
            // Store a copy of the updates absorbed during
            // this update window.
            update = self.update.clone();
            // Copy the elements added during this update
            // window.
            self.updating_additions = self.additions.clone();
            // Reset the batched Update and clear the
            // additions collection for subsequent calls to
            // `add_permission` and `update_permission`.
            self.update = Update::new();
            self.additions.clear();
            // Set the accumulation value for the additions
            // in the next update.
            self.value = acc.get_value().clone();
            // The Accumulator Mutex gets unlocked here,
            // allowing other threads to call
            // `add_permission` or `update_permission`.
        }
        // Perform the Witness update using all available
        // cores.
        let additions = Arc::new(StdMutex::new(
            self.updating_additions.values_mut()
        ));
        let staticels = Arc::new(StdMutex::new(
            self.updating_perms.values_mut()
        ));
        thread::scope(|scope| {
            for _ in 0..num_cpus::get() {
                let acc = acc.clone();
                let u = update.clone();
                let additions = Arc::clone(&additions);
                let staticels = Arc::clone(&staticels);
                scope.spawn(move |_| u.update_witnesses(
                    &acc,
                    additions,
                    staticels,
                ));
            }
            Ok(())
        }).unwrap()
    }

The update method is run when the update window has closed in order to recalculate witnesses for all permissions, old and new. To summarize the above code in plain English:

  1. The current state of the Worker is cloned as a local copy, including the accumulator and permission maps. The local copy now reflects the state of the Authority's updating accumulator.

  2. The update and additions fields on the instance are cleared so that the next update window starts inserting changes into empty collections.

  3. The current accumulation replaces the value instance field so that witnesses for additions made during the next update window represent the state of the accumulator after changes from this update window have been applied.

  4. Update.update_witnesses is called to update the witnesses.

The final step is to copy the permissions in the updating_additions field to the updating_perms field, and then copy the completed updating_perms into the perms field so that it may be the new source for witness data:

    /// Finalize the update process.
    pub async fn sync(&mut self) {
        // Lock the update Mutex.
        let _guard_update = self.guard_update.lock().await;
        // Insert the Permissions that were added during this
        // update window into the updated Permissions map.
        for pair in self.updating_additions.values() {
            self.updating_perms.insert(
              pair.0.nonce,
              pair.clone(),
           );
        }
        // Lock the Accumulator Mutex so that other threads
        // may not call `add_permission` or
        // `update_permission` while the updated Permissions 
        // map is copied back into the `perms` field.
        let _guard_acc = self.guard_acc.lock().await;
        // Copy the updated Permissions map into the `perms`
        // field.
        self.perms = self.updating_perms.clone();
    }

Synchronizer

Last but not least is Synchronizer, the service that connects the Authority, the Worker, and the user. As you might have noticed, the Authority and the Worker have states that need to be synchronized, and here is where that synchronization occurs.

pub struct Synchronizer {
    auth_client: AddrBaseClient,
    worker_client: AddrBaseClient,
    guard_acc: Mutex<()>,
    guard_update: Mutex<()>,
}

The auth_client and worker_client are just HTTP clients dedicated to the other services in the system. Communication with the Authority is performed through the auth_client and communication with the Worker is performed through the worker_client. And there are also the familiar mutexes.

impl Synchronizer {

    /// Set the Worker's public key by requesting it from the
    /// Authority.
    async fn key_worker(
        mut self,
    ) -> Result<Self, &'static str> {
        // Request the public key from the Authority.
        let resp = self.auth_client.get("/key").await?;
        // Deserialize the response to a BigInt.
        let bytes = to_bytes(resp.into_body()).await;
        let key: BigInt = match from_bytes(
            bytes.as_ref()
        ) {
            Some(res) => res,
            None => {
                return Err("response error");
            },
        };
        // Submit the public key to the Worker.
        self.worker_client.post("/key", key).await?;
        // Return self on success.
        Ok(self)
    }

While this isn't strictly necessary for an accumulator-based solution, it allows my Worker to be keyed using the Authority's randomly generated modulus. The Synchronizer simply sends a GET /key request to the Authority, deserializes the result as a sanity check, and forwards the value with a POST /key request to the Worker.

    /// Create a new Synchronizer.
    pub async fn new() -> Result<Self, &'static str> {
        Synchronizer {
            auth_client: AddrBaseClient::new(
                "http://", AUTHORITY_ADDR
            ),
            worker_client: AddrBaseClient::new(
                "http://", WORKER_ADDR
            ),
            guard_acc: Mutex::new(()),
            guard_update: Mutex::new(()),
        }.key_worker().await
    }

The constructor creates an instance of the struct and automatically calls key_worker, returning the future.

    /// Add a permission to the system.
    pub async fn add_permission(
        &mut self,
        actions: Vec<Action>,
    ) -> Result<Permission, &'static str> {
        // Lock the Accumulator Mutex.
        let _guard = self.guard_acc.lock().await;
        // Create a Permission that includes the requested
        // actions.
        let mut perm = Permission {
            nonce: 0.into(),
            actions: actions,
            version: 0,
        };
        // Submit the permission to the Authority and read
        // back the response that includes a populated Nonce.
        let resp = self.auth_client.post(
            "/permission",
            perm
        ).await?;
        let bytes = to_bytes(resp.into_body()).await;
        perm = match from_bytes(bytes.as_ref()) {
            Some(res) => res,
            None => {
                return Err("response error");
            },
        };
        // Submit the finalized Permission to the Worker.
        self.worker_client.post(
            "/permission",
            perm.clone()
        ).await?;
        // Return the Permission on success.
        Ok(perm)
    }

The add_permission method begins by creating a permission from the supplied action list and submitting it first to the Authority so that it can be assigned a nonce, then to the Worker so that it is included in the update window.

    /// Update a permission.
    pub async fn update_permission(
        &mut self,
        perm: Permission,
        actions: Vec<Action>,
    ) -> Result<Permission, &'static str> {
        // Lock the Accumulator Mutex.
        let _guard = self.guard_acc.lock().await;
        // Get the Permission's current Witness.
        let witness = Self::get_witness(
            &mut self.worker_client,
            perm.nonce
        ).await?;
        // Create Permission with new actions and an
        // incremented version.
        let update = Permission {
            nonce: perm.nonce,
            actions: actions,
            version: perm.version + 1,
        };
        // Create the UpdateRequest struct containing the
        // Witness as well as the old and new Permissions.
        let req = UpdateRequest {
            perm: perm,
            witness: witness,
            update: update.clone(),
        };
        // Submit the request to the Authority and
        // deserialize the response.
        let resp = self.auth_client.put(
            "/permission", 
            req
        ).await?;
        let bytes = to_bytes(resp.into_body()).await;
        let response: UpdateResponse = match from_bytes(
            bytes.as_ref()
        ) {
            Some(res) => res,
            None => {
                return Err("response error");
            },
        };
        // Submit the response to the Worker so that it has
        // the most current accumulation value.
        self.worker_client.put(
            "/permission",
            response
        ).await?;
        // Return the updated Permission on success.
        Ok(update)
    }

In the Synchronizer's implementation of update_permission, the witness for the existing permission is first retrieved, and then a new permission is created with the new actions list and an incremented version number. An UpdateRequest is then created containing the existing permission, its witness, and the updated permission. A request to the Authority's PUT /permission endpoint invokes its update_permission method, which returns an UpdateResponse; that response is in turn forwarded to the Worker's update_permission endpoint.

    /// Perform an action.
    pub async fn action(
        &mut self,
        perm: Permission,
        action: Action,
    ) -> Result<(), &'static str> {
        // Lock the Accumulator Mutex.
        let _guard = self.guard_acc.lock().await;
        // Get the Permission's current Witness.
        let witness = Self::get_witness(
            &mut self.worker_client,
            perm.nonce
        ).await?;
        // Create the ActionRequest struct.
        let req = ActionRequest {
            perm: perm,
            witness: witness,
            action: action,
        };
        // Submit the request.
        self.auth_client.post("/action", req).await?;
        // Return success.
        Ok(())
    }

To perform an action using this system, the Synchronizer's action method accepts a permission and the requested action. It then retrieves the permission's witness from the Worker, assembles an ActionRequest instance, and submits it to the Authority.

    /// Start the synchronization task.
    ///
    /// The synchronization task executes in a continuous
    /// loop until a communication error occurs with the
    /// Authority or the Worker. The owner of a Synchronizer
    /// instance must await the returned future before the
    /// instance may be freed safely.
    pub fn sync(
        &mut self,
    ) -> JoinHandle<Result<(), &'static str>> {
        // Create an AtomicPtr so that a reference to the
        // instance may be moved into the task.
        let mut ptr = AtomicPtr::new(self);
        tokio::spawn(async move {
            // Dereference the pointer to get the reference.
            let sync = unsafe {
                ptr.get_mut().as_mut().unwrap()
            };
            // Lock the update Mutex to prevent additional
            // update tasks from executing.
            let _guard_update = sync.guard_update.lock()
                                .await;
            // Define the update window.
            let dur =
                Duration::from_millis(UPDATE_WINDOW_MILLIS);
            let mut window = interval(dur);
            // The first tick completes immediately.
            // Get it out of the way.
            window.tick().await;
            // Start looping.
            loop {
                // Wait for the next interval tick.
                window.tick().await;
                // Create a future for the update task, but
                // only lock the Accumulator Mutex while the
                // Authority and Worker states are mutated.
                {
                    // Lock the Accumulator Mutex.
                    let _guard_acc = sync.guard_acc.lock()
                                     .await;
                    // Tell the Authority to switch over its
                    // staging accumulation.
                    sync.auth_client.get("/update").await?;
                    // Tell the Worker to start updating
                    // Witnesses.
                    sync.worker_client.get("/update")
                    // Accumulator Mutex gets released here,
                    // even though the Worker update result
                    // will be awaited for.
                }.await?;
                // Tell the Authority to switch over its
                // updating accumulation.
                sync.auth_client.get("/sync").await?;
                // Tell the Worker to switch over its
                // updated permissions map.
                sync.worker_client.get("/sync").await?;
            }
        })
    }

Finally, the sync method executes a continuous loop waiting for an interval to tick down. After a tick, it instructs the Authority to copy its staging accumulation into its updating accumulation, then it triggers the Worker to begin updating the witnesses. It awaits the completion of the update process from the Worker, then finalizes the update by instructing the Authority and the Worker to copy their updating state to their verifying state.

Up and running

To try all this out, I'll first start up my services, making sure that the Synchronizer starts after the Authority and the Worker in order to transfer the public key:

$ cargo run --bin authority &
$ cargo run --bin worker &
$ cargo run --bin synchronizer &

The user API is served by the Synchronizer, which listens locally on port 3000. First, I'll add a new permission:

$ curl -X POST localhost:3000/permission -w "\n" -d @- << EOF
> ["tick"]
> EOF
{"nonce":8302967033790438,"actions":["tick"],"version":0}

After waiting for the update window to close (60 seconds by default), I'm able to perform the tick action using the permission:

$ curl -X POST localhost:3000/action -w "%{http_code}\n" \
  -d @- << EOF
> {
>   "perm": {
>     "nonce": 8302967033790438,
>     "actions": ["tick"],
>     "version": 0
>   },
>   "action": "tick"
> }
> EOF
200

To update the permission with a new tock action while removing the tick action, I submit the permission and the new action list:

$ curl -X PUT localhost:3000/permission -w "\n" -d @- << EOF
> {
>   "perm": {
>     "nonce": 8302967033790438,
>     "actions": ["tick"],
>     "version": 0
>   },
>   "actions": ["tock"]
> }
> EOF
{"nonce":8302967033790438,"actions":["tock"],"version":1}

You can see the returned permission has the new action list as well as an incremented version number.

After waiting another minute for the update window to close, I can try out the new tock action:

$ curl -X POST localhost:3000/action -w "%{http_code}\n" \
  -d @- << EOF
> {
>   "perm": {
>     "nonce": 8302967033790438,
>     "actions": ["tock"],
>     "version": 1
>   },
>   "action": "tock"
> }
> EOF
200

I can also verify that the previous version of the permission gets rejected with a 401 Unauthorized:

$ curl -X POST localhost:3000/action -w "%{http_code}\n" \
  -d @- << EOF
> {
>   "perm": {
>     "nonce": 8302967033790438,
>     "actions": ["tick"],
>     "version": 0
>   },
>   "action": "tick"
> }
> EOF
401

In order to test that the witness updates are correct for permissions that have not changed during the update window, I just have to add a new permission:

$ curl -X POST localhost:3000/permission -w "\n" -d @- << EOF
> ["tack"]
> EOF
{"nonce":3276091879824438,"actions":["tack"],"version":0}

Thanks to the update window, service for the tock permission will not be impacted by the addition of the tack permission, even without waiting for the next witness update process to complete:

$ curl -X POST localhost:3000/action -w "%{http_code}\n" \
  -d @- << EOF
> {
>   "perm": {
>     "nonce": 8302967033790438,
>     "actions": ["tock"],
>     "version": 1
>   },
>   "action": "tock"
> }
> EOF
200

After waiting for the next update window to close, I can verify that both permissions are served successfully:

$ curl -X POST localhost:3000/action -w "%{http_code}\n" \
  -d @- << EOF
> {
>   "perm": {
>     "nonce": 3276091879824438,
>     "actions": ["tack"],
>     "version": 0
>   },
>   "action": "tack"
> }
> EOF
200
$ curl -X POST localhost:3000/action -w "%{http_code}\n" \
  -d @- << EOF
> {
>   "perm": {
>     "nonce": 8302967033790438,
>     "actions": ["tock"],
>     "version": 1
>   },
>   "action": "tock"
> }
> EOF
200

That's almost all there is to it. For the sake of brevity, I have omitted some code here and there, most notably the binaries that shim a hyper HTTP server with the Authority, Worker, and Synchronizer services.

If you're interested in seeing the project in its entirety or experimenting with it yourself, check out the accompanying compauth source repository.

GitHub: johnoliverdriscoll / compauth, a proof-of-concept permission system using accumulators.