Navalny Team
Security of Navalny's underground resistance on the Dark Web

In 2017, Navalny's team started an underground resistance in Russia. Initially serving as pre-election hubs, the regional offices later evolved into regional political centers. This network of centers posed a significant threat to Putin, leading to a relentless escalation of repression tactics, including raids, account freezes, employee arrests, physical assaults, and menacing threats. In 2021, the authorities fabricated an extremism case, ultimately forcing the shutdown of those centers in order to protect those involved.

In December 2022, our team relaunched Navalny's resistance movement, but this time in the form of an underground organization. Our primary objective was to create a secure platform for online communication and collaboration among coordinators and activists.

In developing our security system, we avoided relying on security through obscurity. By sharing the information in this article, we are not increasing the risks but, instead, providing experts with an opportunity to evaluate our security measures and assess their relevance and depth.

About the system

The website of Navalny's underground resistance provides each user with a personal account. Activists sign up based on their professional and geographical affiliations. The platform offers participants assignments tailored to their specialties and interests, along with a channel for direct communication with coordinators. Users earn points for completing assignments, advance through specified levels, and can purchase avatars as rewards.

Personal account interface screen

In August 2023, the platform underwent a significant update, introducing a messenger feature that facilitates anonymous communication among participants while they are working on assignments.

Threat and attacker modeling

Our system development commenced with threat modeling, where we outlined the objectives that potential attackers might pursue. We were acutely aware of the risks faced by those in Russia who support our cause, and we prioritized their safety above everything else. We identified the primary threat as the potential exposure of the activists' identities. Additionally, we acknowledged secondary threats, which encompass denial of user access to the service, jeopardizing confidential projects and plans, compromising the integrity of our service's software, and moving laterally to other services after a potential compromise.

During the second phase, we profiled potential intruders, identifying who might act as our adversaries — primarily, corrupt law enforcement agencies or special services, and Internet censorship enforcement entities. We then assessed their capabilities for attaining their objectives. We considered four primary attack vectors: targeting our systems through hacking, conducting searches on activists, compromising our employees through bribery, infiltration, or threats, and the most straightforward approach: encountering a "Comrade Major" in an online chat room.

Threat and attacker modeling serve as the bedrock of information security. Our priorities often meant we had to make product decisions that could negatively impact user experience and retention. For instance, we couldn't create a user-friendly mobile app with a clever workaround for evading censorship, equipped with notifications to improve user retention, like the ones in the “Smart Voting” app. That limitation arose from the concern that such an app would become easy to discover if someone were to search the phone of an activist.

Once we pinpointed the threats that demanded our attention, we initiated planning of a layered security system comprising multiple security perimeters. With each additional barrier we erected — such as authentication or a firewall — we asked ourselves, "Alright, but what if these defenses are bypassed? What potential actions could an attacker take afterwards?" Our goal was to ensure that any breach would fall short of facilitating a successful attack.

Tor

One pivotal decision we made was hosting the website on the dark web. Standard web browsers not only expose a wealth of user information, including device metadata and IP addresses, but also leave behind extensive traces on the user's device, like browsing history and caches. We made the deliberate choice to exclusively employ the Tor browser to access our system, as it minimizes these risks and takes numerous other measures to safeguard users.

This approach complicates the process of implicating a user in the organization's activities in the event of device seizure. While it might be evident that the Tor application is installed on a device, it would remain unclear what purpose it served. It's worth noting an interesting tidbit: Edward Snowden used Tor to communicate with journalists when disclosing classified information about the NSA. As a side note, Tor is being developed by a nonprofit organization, so if you can, consider supporting our fellow colleagues financially — donate.torproject.org.

The Tor browser can also serve as a means to evade surveillance by Roskomnadzor (Russia's media supervision agency and the country's main censorship body). With the widespread deployment of DPI (Deep Packet Inspection) equipment by most ISPs in Russia, concealing access to HTTPS websites has become considerably more challenging. Mainstream browsers still lack ECH (Encrypted Client Hello) protocol support, and while Roskomnadzor may not discern the data exchanged between the user and the site, it can track plenty of metadata, e.g. which user has accessed which domains. Consequently, we opted to host the site on a .onion domain, rendering it inaccessible through regular browsers. This ensures that users cannot inadvertently stumble upon the website and register that fact in DPI logs.

Personal data

To sign up within the system, an activist is not required to disclose any personally identifying information. We only identify activists through their chosen pseudonyms, without any attempts to uncover their true identities. Every activist is assigned a reputation, quantified in points. These points accrue as supporters successfully complete tasks, raising their trustworthiness. As a result, more confidential and significant assignments can be delegated to them. So if a "Comrade Major" seeks access to classified projects, they must first undertake simpler tasks, such as distributing flyers or creating protest graffiti. Let’s give them a chance to finally do something useful!

To sign up for the system, a participant is required to create a pseudonym (username), a suitably complex password, and provide a brief description of their professional skills in an open format.

Sign up interface screen

Actual names, email addresses, and phone numbers are not collected (except for an email address for the first contact, which is then erased from the database). All registrations undergo an extra level of review, following which users gain access to the system's complete functionality. The moderation process serves as a safeguard against excessive registration attempts. Because Tor users remain anonymous, we have no way to identify malicious actors who might inundate the system with automated bot submissions and disruptive messages. To ensure minimal disruption to the site's operation, our moderators scrutinize and filter out irrelevant or spam registrations. They also categorize supporters based on their professional interests (such as software engineers, designers, attorneys, etc.) before granting them access to the site.

Moderation awaiting interface screen

Insider risks

Insiders refer to our staff members, including coordinators, moderators, software engineers, system administrators, and so on. They possess additional access to the system, which could potentially be exploited by attackers. While we have confidence in our employees, we still implement precautions in case they are compromised. This is because an employee might not only willingly become an attacker (due to ideological reasons or bribery) but could also fall victim to hacking (remember that phishing remains a threat) or be coerced into cooperation, possibly through pressure on their family members in Russia.

To safeguard against insider threats, we take a few measures. Firstly, we adhere to the principle of least privilege, ensuring that employees have only the access essential for their immediate responsibilities. For instance, software engineers can access the code repository but not the servers, while testers can access the testing environment but not the production environment. Secondly, we employ multiple security layers to shield against insider threats. For example, even if system administrators were to dump the database or exfiltrate files from the file storage, they would be unable to decrypt them.

Encryption

We apply encryption to protect all sensitive database fields and uploaded files. Our solution involves using symmetric encryption with authentication (AEAD), which serves as a protection against tampering with the encrypted content. Importantly, the encryption keys are not retained within the application itself but are securely stored in Google's Cloud KMS (Key Management System). Additionally, we employ envelope encryption, which entails encrypting each file with its own ephemeral key that is in turn wrapped by KMS.
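The envelope pattern can be illustrated with a minimal sketch. This is not our production code: the `cryptography` library's AES-256-GCM primitive is used for both layers, and a local key object stands in for the key-encryption key that, in the real system, never leaves Cloud KMS.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Stand-in for the KMS-held key-encryption key (KEK). In production this key
# is not exportable and wrap/unwrap happens inside Cloud KMS.
kek = AESGCM(AESGCM.generate_key(bit_length=256))

def encrypt_file(plaintext: bytes, aad: bytes) -> dict:
    dek = AESGCM.generate_key(bit_length=256)   # ephemeral data-encryption key
    nonce = os.urandom(12)
    ciphertext = AESGCM(dek).encrypt(nonce, plaintext, aad)
    # Wrap the DEK with the KEK and store the wrapped key alongside the data.
    wrap_nonce = os.urandom(12)
    wrapped_dek = kek.encrypt(wrap_nonce, dek, aad)
    return {"nonce": nonce, "ciphertext": ciphertext,
            "wrap_nonce": wrap_nonce, "wrapped_dek": wrapped_dek}

def decrypt_file(blob: dict, aad: bytes) -> bytes:
    # Unwrap the DEK first (this is the step that requires KMS access),
    # then decrypt the payload; GCM authentication detects any tampering.
    dek = kek.decrypt(blob["wrap_nonce"], blob["wrapped_dek"], aad)
    return AESGCM(dek).decrypt(blob["nonce"], blob["ciphertext"], aad)
```

Because only the small wrapped key ever touches KMS, bulk data never leaves the application for encryption, while an attacker holding the database alone has nothing to decrypt with.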

Cloud KMS keys are not exportable and are stored within a separate Google Cloud project. Access to this project is restricted to a select group of highly trusted administrators. Permission for encryption and decryption using these keys is exclusively granted to service accounts linked to workload identities of services that need to interact with the database.

This encryption approach minimizes the security perimeter of the data processed by the service. For instance, even if an administrator accesses the database, they won't be able to see the plaintext. The same holds true for backups. Furthermore, even systems responsible for creating backups do not need to decrypt the data. Everything is securely backed up directly in the encrypted state, and in order to compromise the data, one would also need to acquire access to the KMS keys, significantly raising the difficulty level of any potential data compromise.

Attacks on the application

If the application itself is compromised due to a technical vulnerability, an attacker can potentially access everything within the application's purview. It is the system's most valuable asset, and a successful attack could yield significant damage. To mitigate this risk, we implement two key strategies. Firstly, we refrain from storing any personal data of our users, ensuring that even in the event of data exfiltration, it would be exceptionally challenging to identify supporters. Secondly, we make concerted efforts to minimize the attacker's ability to establish a foothold within the system.

As our servers run on Google Kubernetes Engine (GKE), they consist of virtual machines that undergo regular updates and are frequently replaced by fresh ones. We employ Google's Container-optimized OS (COS) as the base image, known for its security posture. When it comes to service authentication, we avoid using exported JSON keys of service accounts, which can be susceptible to theft and long-term misuse. Instead, we rely on GKE Workload Identity, which furnishes applications with temporary tokens that have a brief lifespan, typically lasting only a few minutes.

Deployment

The entire infrastructure is segregated into two distinct environments: trusted and untrusted. The untrusted environment, devoid of sensitive data, is primarily dedicated to development and testing purposes. Conversely, the trusted environment encompasses all components with access to protected data.

The application code and containers are built by specialized secure virtual machines known as "runners." The runners operate in an isolated environment separate from other projects and engineers. Access to the project where the runners operate is restricted to a limited number of trusted engineers, and even then, direct access is only permitted in the event of runner failures. Service accounts associated with these machines hold administrative access to the production environment. This enables a GitOps approach, in which a comprehensive history of all production changes is meticulously documented in both git logs and Google Cloud audit logs. This dual audit setup allows for retrospective tracing of potential attacks.

We use the untrusted environment for building code from unprotected GitLab branches that are not considered secure. These branches are open for any developer to commit changes for review or deployment to test servers. Even if a developer were to introduce something malicious that compromises the build server, it would not grant them access to the production environment. Following a review by another team member and the subsequent merging of the branch into the primary production branch, the code undergoes build and deployment within the trusted environment, dedicated to production.

To improve cost-efficiency and achieve better server utilization, we use Kubernetes. For our build system, we've developed an in-house solution, allowing virtual machines with GitLab runners to run only when they are needed. The runner management system itself runs in the trusted environment, bolstering our overall security measures.

DoS attacks

When the system was launched, the team anticipated that, unlike in the open Internet, attackers would find it costly to carry out DoS attacks within the Tor network. However, attackers wasted no time, and our monitoring systems quickly detected a significant surge in server load. In the beginning, we were prepared to filter traffic only at the HTTP level, and initially it was sufficient. However, attackers switched tactics to more sophisticated ones, exploiting numerous ways to overwhelm our Tor client.

To provide some context, it's essential to understand how onion sites work. Unlike on the regular Internet, where clients connect directly to servers, in Tor both clients and servers establish connections with specialized introduction nodes. Those nodes act as intermediaries to introduce clients and servers to each other. For data transfer, after being introduced, both parties connect to rendezvous nodes through secure tunnels. The rendezvous servers then consolidate both tunnels into a single circuit, facilitating transmission of regular HTTP traffic between the clients and the servers.

In our case, the attackers flooded us with a massive influx of connection requests, effectively forcing our server to initiate numerous connections with rendezvous servers, which involved computationally intensive asymmetric cryptography. Notably, they never even attempted to transmit any data through these established tunnels, and the attack never reached the HTTP servers.

Standard defense mechanisms against DoS attacks, such as identifying the sources and temporarily blocking them, are unfeasible within the Tor network because we lack any means to identify the clients. Consequently, we developed a specialized solution. The key challenge was that a typical onion server could not scale efficiently. When an onion server is launched, it connects to the Tor network and publishes a service descriptor that points exclusively to itself. If one were to initiate a second instance of the same server, it would overwrite the descriptor of the first instance, rendering them unable to function at the same time.

To tackle this issue, we employed onionbalance, a third-party tool that enabled us to deploy multiple onion servers at once, each operating on its unique, randomly generated onion address. Subsequently, onionbalance retrieved their descriptors from the network, consolidated them into a unified descriptor, and then published that combined descriptor as the primary onion site's descriptor. This approach effectively distributed the traffic load across the servers, enabling them to handle traffic surges.

Here's an overview of our solution's architecture:

Architecture screen

Tor-balancer serves as a wrapper around onionbalance, interacting with our cluster to obtain a list of healthy and active Tor server instances. It then creates a configuration for onionbalance and runs the balancer. If the collection of active instances changes, e.g. when some instances go offline or new ones come online, the configuration is automatically regenerated, and onionbalance is restarted.
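A hypothetical sketch of the core of such a wrapper: turn the set of currently healthy frontend onion addresses into an onionbalance-style configuration, and decide whether a restart is needed. The function names are our illustration, not the actual tor-balancer code; the `services` / `key` / `instances` layout follows onionbalance's documented config format.

```python
# Sketch of a tor-balancer-style config generator (illustrative names).
def build_onionbalance_config(main_key_path, healthy_onion_addrs):
    # onionbalance config: one public service whose descriptor merges
    # the descriptors of every healthy backend instance.
    return {
        "services": [{
            "key": main_key_path,  # key pair of the public .onion address
            "instances": [{"address": a} for a in sorted(healthy_onion_addrs)],
        }]
    }

def config_changed(old_config, new_config):
    # Only restart onionbalance when the instance set actually changed,
    # to avoid needless descriptor republishing.
    return old_config != new_config
```

In the real system this loop would be driven by Kubernetes readiness information, regenerating the file and restarting onionbalance whenever `config_changed` fires.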

Meanwhile, Tor-frontend wraps a standard onion server. It generates a random onion address, starts an onion server at that address, waits until it successfully registers itself and becomes accessible on the Tor network, and subsequently starts responding to HTTP requests with its designated onion address. This HTTP interface is what allows tor-balancer to discover which individual onion addresses should be merged to construct the final descriptor. This way, we evenly distribute the load across multiple onion addresses using "discovery" descriptors, preventing any single address from being overloaded by an attacker and causing denial of service.

The client initiates a connection with the server once, which is typically a slow and resource-intensive operation. Afterward, within that same connection or circuit, it can establish numerous logical connections, which are faster and more resource-efficient. To counteract potential attacks, we enforce a rule: if a client establishes an excessive number of connections within its circuit, we either terminate the circuit entirely or impose rate limits using ingress-nginx. This strategy allows our system to effectively counteract such attacks.

The second layer of defense involves relinquishing server privacy. Since we are not a dark market and take pride in openly declaring ownership of our service, there's no point for the server to establish a triple-wrapped connection to a rendezvous node. Instead, we utilize a Tor option that enables our server to form a direct connection. This significant adjustment has led to a substantial reduction in the amount of asymmetric cryptography performed on the server, significantly reducing its workload. Client privacy remains unaffected, and the server's operations become more efficient. As a result, we have successfully thwarted any further DoS attacks on the server.
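Tor exposes this mode directly in its configuration. A minimal torrc fragment (paths and ports here are illustrative) looks like this; note that Tor requires explicitly acknowledging the loss of server-side anonymity and disabling client functionality on the same instance:

```
# torrc: "single onion service" mode trades the server's anonymity for speed.
# The client side of the circuit remains fully anonymous.
SOCKSPort 0                      # this Tor instance is server-only
HiddenServiceNonAnonymousMode 1  # acknowledge that the service is not anonymous
HiddenServiceSingleHopMode 1     # connect to rendezvous points directly
HiddenServiceDir /var/lib/tor/service
HiddenServicePort 80 127.0.0.1:8080
```

With single-hop mode the server skips building its own three-hop circuit to each rendezvous point, which is exactly where the attack's asymmetric-cryptography cost was being incurred.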

Coordinators

Within this system, any participant has the capability to send a message to a coordinator and receive a response. For instance, they can report the completion of an assignment or ask a question.

To facilitate the coordinators' responsibilities, we employ a dedicated administrative panel built on the Django framework. Its functionality proves more than sufficient for efficiently managing sign-up requests, assigning tasks, assessing their completion, and engaging in private chats with activists. Moderators are not privy to users' real usernames; instead, they are assigned impersonal sequential numbers to enable another level of pseudonymization. All administrators are mandated to employ two-factor authentication, and their access is restricted by job role, with access exclusively allowed through a VPN. We also maintain audit logs to enable tracking of which staff member utilized which access.

Team coordinators were tasked with distributing and grading assigned tasks, supervising execution, and maintaining communication with the activists through private chats. This indirect process was labor-intensive and led to delayed feedback. To address it, we decided to launch public group chats on the platform, allowing supporters to interact directly with each other within a task.

Chats

In terms of both functionality and design, the chat interface closely resembles the interface of a messenger app like Telegram. Users can seamlessly switch between chats, read, compose, and delete messages, respond to messages from others, and clear message histories.

Chat interface screen

Our chats run on top of websockets over the Tor network. Every chat message is encrypted at rest with Cloud KMS. To prevent an excessive number of requests to KMS, the system incorporates a local cache in the application's memory for plaintext messages.

Users have the option to manually delete chat messages, and messages are also automatically deleted after a certain period of time.

To safeguard chat rooms against message spam, the system enforces a limit on the number of messages a user can send within a given time frame. Once this limit is reached, further messages will not be accepted. Additionally, we conducted comprehensive load testing of the chats with various scenarios to ensure that the server can withstand the anticipated load.
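A per-user limit of this kind is commonly implemented as a sliding window. The sketch below is an assumption about the shape of such a limiter, not our exact production logic; the limit and window values are placeholders.

```python
from collections import defaultdict, deque

class MessageRateLimiter:
    """Allow at most `limit` messages per user within any `window` seconds."""

    def __init__(self, limit, window):
        self.limit = limit
        self.window = window
        self.sent = defaultdict(deque)  # user -> timestamps of recent messages

    def allow(self, user, now):
        q = self.sent[user]
        # Evict timestamps that have slid out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.limit:
            return False  # over the limit: reject the message
        q.append(now)
        return True
```

Rejected messages are simply not accepted, so a flood from a single account degrades only that account's ability to post.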

We've implemented measures to safeguard chat users from potential metadata-based identification. The concern here is that attackers with access to SORM (the police system for traffic supervision) can detect when users access Tor. While they may not discern the specific activities users engage in, concealing their presence becomes quite challenging. Attackers could enter a popular chat room within the system, send bulky messages of a certain size, and exploit SORM data to identify users across Russia who received messages of the same size at the same time. Although this identification process may yield noisy results and somewhat fuzzy identification, repeated attempts could eventually allow them to reliably ascertain which Russian users were present in the chat room during that time. To substantially complicate such an attack, we introduced randomness in both the message sizes delivered to users and the time intervals between these messages. This deliberate variation obscures the signal, reducing it to a level beneath the noise, thereby making identification through metadata exceedingly difficult.
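The idea of drowning the signal in noise can be sketched as two independent randomizations: pad every delivered message up to a randomly chosen size bucket, and jitter the delivery time. The bucket sizes, framing, and jitter bound below are illustrative assumptions, not our production parameters.

```python
import os
import random

# Illustrative bucket sizes: each delivered message is padded up to one of
# these, so observed transfer sizes no longer match the true message size.
BUCKETS = [1 << 10, 4 << 10, 16 << 10, 64 << 10]  # 1 KiB .. 64 KiB

def pad_message(payload):
    # Pick a random eligible bucket (not necessarily the smallest one),
    # prefix a 2-byte length, and fill the rest with random padding.
    eligible = [b for b in BUCKETS if b >= len(payload) + 2]
    bucket = random.choice(eligible)
    padding = os.urandom(bucket - len(payload) - 2)
    return len(payload).to_bytes(2, "big") + payload + padding

def unpad_message(blob):
    n = int.from_bytes(blob[:2], "big")
    return blob[2:2 + n]

def delivery_delay(max_jitter=2.0):
    # Random per-recipient delay before pushing the message over websocket.
    return random.uniform(0.0, max_jitter)
```

Because each recipient sees a different random size and a different random delay, an attacker correlating SORM traffic logs can no longer match "message of size X at time T" across users.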

Anonymity

The deepest level of our defense is anonymity. Even if all the abovementioned security measures were to fail—servers are breached, coordinators are compromised, and administrators face threats—all an attacker would obtain is a record of anonymous-to-anonymous communication that doesn't disclose any personally identifying information.

However, even without any security breaches, publicly accessible chat rooms are susceptible to spammers, provocateurs, and other malicious individuals. To mitigate those risks, we implement some protective measures.

To begin with, each chat room is allocated a moderator who can see slightly more than a regular user. The moderator has access to both the merged feed, where all messages subject to moderation are routed to, and a violations feed, which contains reports from other users regarding rule violations. Within these administrative chats, the moderator holds the ability to delete messages and impose temporary or permanent bans on users. Importantly, the moderator need not be an administrator with access to the admin panel over VPN; they could be an ordinary user signed in within their personal account, albeit endowed with extra privileges.

Furthermore, to thwart any attempts by an individual like "Comrade Major" to amass enough data to indirectly identify a user, we've instituted an additional mandatory layer of pseudonymization. Within a chat room, each user is allocated a randomly generated alias distinct from the one they use for their primary sign-in—this is the alias visible to other participants. The usernames of the activists that they use when logging into their accounts remain secret and are not displayed anywhere. We've prearranged a selection of adjective-noun combinations to ensure that all participants sport amusing and unique aliases. Consequently, users encounter each other as "Spotted Hippogryph" or "Cute Cat." Users also have the option to change their alias to another randomly generated one if desired.
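A generator for such aliases is a few lines of code. The word lists below are our own illustrative examples (the real lists are longer); the collision check against already-taken aliases is an assumption about how uniqueness within a room might be enforced.

```python
import secrets

# Illustrative word lists; the production lists are prearranged and larger.
ADJECTIVES = ["Spotted", "Cute", "Brave", "Silent", "Curious"]
NOUNS = ["Hippogryph", "Cat", "Otter", "Falcon", "Hedgehog"]

def random_alias(taken):
    """Return an adjective-noun alias not already used in this chat room."""
    while True:
        alias = f"{secrets.choice(ADJECTIVES)} {secrets.choice(NOUNS)}"
        if alias not in taken:
            return alias
```

Using `secrets` rather than `random` matters here: the alias must not be predictable from, or correlated with, the user's primary sign-in name.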

Username random generator interface screen

The Fedor bot, constantly present in the chat room, serves as a reminder to participants about safety protocols. For instance, it advises them not to disclose any personal information about their real lives and encourages them to regularly rotate their nicknames to sever ties with their past online identities. Functioning much like a regular chat participant, the bot selects a random message from its knowledge base and shares it within the chat.

File uploads

The messenger doesn't just facilitate the exchange of short messages; it also allows image uploading. We've taken measures to ensure that the metadata associated with the images uploaded to a shared chat room is automatically stripped off, preventing anyone from ascertaining details such as the creator, creation date, and location.

To accomplish this, we opted for exiftool as our solution for metadata removal. It supports the broadest range of formats and is easy to use. However, it's worth noting that exiftool relies on numerous codecs written in C, a language in which memory-safety vulnerabilities enabling remote code execution are easy to introduce.
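Invoking exiftool for this is a one-liner: its `-all=` option deletes every writable tag. The wrapper below is a hedged sketch of how our microservice might call it, not the actual service code; the timeout and flag choices are our assumptions.

```python
import subprocess

def strip_metadata_cmd(path):
    # -all= deletes all writable metadata tags;
    # -overwrite_original avoids leaving a "_original" backup with the metadata.
    return ["exiftool", "-all=", "-overwrite_original", path]

def strip_metadata(path):
    # Run exiftool in a subprocess with a hard timeout, so a hostile file
    # cannot hang the worker indefinitely.
    subprocess.run(strip_metadata_cmd(path), check=True,
                   capture_output=True, timeout=30)
```

In our deployment this runs inside the sandboxed Cloud Functions environment described below, so even a successful exploit of a codec is contained.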

To mitigate the risk of a compromised exiftool affecting the entire application, we've isolated it as a separate microservice within a sandboxed environment. This setup ensures that if a malicious file is encountered, hackers won’t be able to breach the primary service or establish a foothold on the server, thereby preventing interception of other users' files. This microservice runs within Google's serverless Cloud Functions execution environment, which limits the lifespan of an instance and restricts its access to the database and other microservices at the network level.

During the initial day of the chat rooms’ launch, our security measures were put to the test when malicious testers attempted to attack our system with a genuine zip bomb disguised as an image containing billions of empty pixels. A zip bomb, in this context, refers to an archive filled with billions of zeroes that have been compressed to an extraordinarily compact size by a compression algorithm. If a recipient attempts to decompress it, they would rapidly deplete their available memory or storage, or even both.

```
_decompression_bomb_check
    raise DecompressionBombError(
PIL.Image.DecompressionBombError: Image size (50625000000 pixels) exceeds
limit of N pixels, could be decompression bomb DOS attack.
```

Fortunately, the bomb was defused as our library's security measures kicked in. However, even if it had exploded, it would not have caused any harm to the primary service thanks to the isolation.
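The guard that fired here is Pillow's pixel-count limit (`Image.MAX_IMAGE_PIXELS`), checked against the declared image dimensions before any pixel data is decoded. A minimal standalone version of the same idea, with an illustrative limit rather than Pillow's exact default:

```python
MAX_PIXELS = 89_000_000  # illustrative cap, roughly Pillow's default order

class DecompressionBombError(ValueError):
    pass

def check_image_size(width, height, limit=MAX_PIXELS):
    # Reject the file based on its declared dimensions *before* decoding,
    # so the bomb never gets a chance to expand in memory.
    pixels = width * height
    if pixels > limit:
        raise DecompressionBombError(
            f"Image size ({pixels} pixels) exceeds limit of {limit} pixels")
```

The key property is that the check costs nothing: the dimensions come from the image header, so the decision to reject is made before a single compressed byte is expanded.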

Conclusion

We've developed a unique tool that empowers supporters and activists in Russia to challenge the regime more efficiently while maintaining secure communication and minimizing the risk of being exposed. This endeavor was made possible through generous donations from our supporters, and we express our heartfelt gratitude to them! You can also become a part of the underground resistance and lend your support right away:

Patreon of the underground resistance: https://www.patreon.com/shtab_navalny

You can donate company shares to us via the DonateStock website. It's quick and convenient, and the site sometimes doubles the donation amount:
https://donatestock.com/rikolto-ltd

ACF crypto wallets:

Bitcoin
3QzYvaRFY6bakFBW4YBRrzmwzTnfZcaA6E

Ethereum
0x314aC71aEB2feC4D60Cc50Eb46e64980a27F2680

Monero
42e1hkdsTHUUWSM24jVgLTH4MDJPMbNS1XDt7rK36dTBSKweohz9FSWKAAoqHLN9nSVgocnSnkR2AaLMWtsrmAoQGPdWSE1

Zcash (Z-address)
zs1l8ztrqpk0qyn2hyte3x9m568taz64jyc90ppfskylqgawxw7r3gq453yn6hk9swq2l0dq9yal0a

USDT (BEP-20)
0x9A4B20a7909Af03dd8B4f28A419840F9715E1F73

Benevity is a corporate donation software used by over 900 companies: Apple, Google, Twitter, Facebook and others. If you work for a large company, your corporate website probably has an Employee Giving section. Type Anti-Corruption Foundation in the search box there and enter the donation amount. The advantage of Benevity is that sometimes corporations will multiply the amount of donations made through it by several times.
