
The Mesa DEX Exploit

This is kind of a Part 2 to my post How Mesa DEX Works, which I wrote on November 27th, three days before the API3 public token distribution. During the distribution — in fact, right at the beginning — a previously untapped exploit was used by an attacker to purchase approximately 1.6 million API3 tokens relatively cheaply, at $0.56 USD per token. This resulted in the API3 DAO raising ~$680,000 less than it would have under normal circumstances. (In case you’re wondering: the attacker resold all of those tokens on Mesa, at a higher price, within a day.)

In this post, I will explain what exactly that exploit was (it was relatively sophisticated and complex) and I will update some of my initial thoughts on Mesa / Gnosis Protocol.

This post is rather technical, although not as technical as the existing documentation; I try to keep things fairly high-level. If you’d prefer to skip the technical stuff altogether, I suggest you scroll down to the section “Rand0m and not so random final thoughts”, which contains my afterthoughts on the exploit and various addendums to my first post.

A note on the objective function

In my previous article, which should be prerequisite reading for this one, I stated that the optimization function for matching orders in a given batch is total trading surplus, which is maximized.

This is not entirely correct. I had written this having only read the original Gnosis Protocol whitepaper written in 2019 when the protocol was called dfusion. Upon reading their most recently updated documentation, it appears that the optimization function currently used by Mesa — the only actual implementation of Gnosis Protocol, as far as I know — uses trader surplus as a single term in their more complex, updated objective function:

Objective function = Traders’ Surplus - Disregarded Utility + (Collected Fees / 2)

That is, the objective function seems to have been updated since the original whitepaper to include Disregarded Utility (explained below) and, less importantly, collected fees by solvers.
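As a minimal sketch of how a candidate batch solution would be scored under this objective (the function and variable names here are mine, not from the Gnosis codebase, and all three terms are assumed to be denominated in the same reference unit):

```python
def objective(traders_surplus: float, disregarded_utility: float, collected_fees: float) -> float:
    """Score one candidate batch solution under the updated objective.

    All terms are assumed to be expressed in a common reference unit;
    solvers compete to submit the solution with the highest score.
    """
    return traders_surplus - disregarded_utility + collected_fees / 2
```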

Disregarded Utility

From the Gnosis Protocol Developer Guide:

Disregarded Utility : Sum of disregarded utilities of all (executed) trades in the solution (i.e. untouched orders are ignored). The disregarded utility of a trade is defined as the difference between maximal and actual utility … The maximal utility is the utility if the trade had been fully executed at the computed prices (trade.buyAmount == order.buyAmount).

That is, the objective function has an extra term that “penalizes” solutions that fulfill only partial orders, thereby creating a preference for fully filled orders — reminder: users create limit orders on Mesa that specify the maximum amount n of Token A that they are willing to sell (for some minimum amount m of Token B).

All else equal, the objective function would prefer a fully executed order; that is, an order that executes a trade for an amount as close to n as possible.
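To make these definitions concrete, here is a minimal sketch (my own simplification, with everything denominated in USD rather than the protocol’s internal accounting) of a single buy order’s actual utility, maximal utility, and the resulting disregarded utility when the order is only partially filled:

```python
def buy_order_utilities(limit_price: float, order_amount: float,
                        clearing_price: float, executed_amount: float):
    """Simplified, USD-denominated utility terms for one buy order.

    limit_price     -- the most the buyer is willing to pay per token
    order_amount    -- the full amount n the buyer asked for
    clearing_price  -- the uniform price computed by the solver
    executed_amount -- how much of the order was actually filled
    """
    per_unit_surplus = limit_price - clearing_price
    actual_utility = per_unit_surplus * executed_amount     # what the trader actually got
    maximal_utility = per_unit_surplus * order_amount       # if fully executed at the same price
    disregarded_utility = maximal_utility - actual_utility  # the penalty term
    return actual_utility, maximal_utility, disregarded_utility

# A half-filled order leaves half of its potential utility "disregarded":
print(buy_order_utilities(1.10, 1000, 1.02, 500))  # roughly (40.0, 80.0, 40.0)
```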

Now, why include Disregarded Utility in the objective function? Good question, I asked myself the same thing … The answer isn’t exactly intuitive.

Rationale for including Disregarded Utility

In the Developer Guide, the authors admit that:

… it could intuitively suffice to optimize for this metric [trader surplus].

Then they add:

However this might adversely affect “market orders”.

As previously mentioned, Gnosis Protocol, by default, only allows for limit orders. Then what about users who want to make a market order — an order to be executed immediately at the current market price? There is a hacky solution to this: market orders can be simulated by specifying a limit price far away from the current market rate (very low for a sell, very high for a buy).

Here’s a simple example. Consider some stablecoin STABLE that typically trades at around 0.95–1.05 USD. Let’s say you want to make a market buy order for STABLE. Assuming that sellers are selling via reasonably priced limit sell orders (e.g. in some range 1.00–1.05 USD), you can create a limit buy order for n STABLE at 1.10 USD (i.e. a high limit buy price). Then you’d expect a settled order somewhere in the range 1.00–1.10 USD, probably at some midpoint that maximizes both the buyer’s and the seller’s satisfaction (i.e. “utility”) with the trade. That is, the market buyer overbids, assuming that the actual trade will settle lower, closer to the market rate.

An illustration of overshoot. This image is sourced from the Wikipedia article on overshoot, a topic from signal processing, and only somewhat related to the topic at hand :)

Now, I guess this is fine assuming there’s one market trader, and that the trader on the other side of the trade is creating a reasonable limit order that takes outside market information into account, thus “dampening” the final settled price towards some reasonable market price. The issue arises, then, when there are two opposite “market orders”. Again, from the docs:

In the extreme case there might be two opposite “market orders”. Any price within the large range of their limits would be valid and the sum of utilities would be equal across the price range.

That is, if clever user A creates a “market buy” at 1.20 USD for STABLE and clever user B creates a “market sell” at 0.80 USD, any price between 0.80 and 1.20 would result in the same total utility score for user A and user B when considering trader surplus alone. I omit the mathematical formulation and urge the reader to refer to the trader surplus formula to convince themselves of this (left as an exercise for the reader, as they say).
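If you’d rather not do the exercise by hand, here is a quick numerical check in the same simplified USD-denominated terms (the order sizes are made up): for two opposite, fully matched “market orders”, the total surplus comes out the same at every clearing price inside their limit range.

```python
def total_surplus(buy_limit: float, sell_limit: float, amount: float, clearing_price: float) -> float:
    """Buyer surplus plus seller surplus for a fully matched trade of `amount` tokens."""
    buyer_surplus = (buy_limit - clearing_price) * amount
    seller_surplus = (clearing_price - sell_limit) * amount
    return buyer_surplus + seller_surplus

# User A's "market buy" at 1.20 USD vs user B's "market sell" at 0.80 USD, 1000 STABLE each:
for price in (0.80, 0.95, 1.00, 1.05, 1.20):
    print(price, round(total_surplus(1.20, 0.80, 1000, price), 2))  # 400.0 at every price
```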

I guess you could say this is a problem because these “market orders” don’t appropriately simulate what market orders should do — namely, clear at around the market price (~1.00 USD).

This is where Disregarded Utility comes in. The idea is that, while there may be these hacky “market orders”, most orders will be reasonable limit orders (by reasonable, I mean taking into account outside market information, and thus placed at around the market price). Thus, a user making a “market sell” of STABLE at 0.80 USD will be outnumbered by limit sell orders at around the market price of ~1.00 USD. So, although an individual buyer would love to trade at this appealing price of 0.80 USD, matching against that lone order would only partially fill the buyer’s own order, resulting in disregarded utility — the difference between the utility the buyer would have collected had their order been filled completely and the utility of what was actually filled.

To think about it another way: adding disregarded utility to the objective function pushes the price of the settled order towards the price represented by the largest limit orders. In effect, it pulls the clearing price to the limit of the “larger” order.
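Sticking with the toy model (and made-up order sizes), here is a rough sketch of how the disregarded-utility term breaks that tie when the two opposite “market orders” have different sizes: trader surplus alone is flat across the whole 0.80–1.20 range, but once the larger, only partially filled order’s disregarded utility is subtracted, the objective peaks exactly at that larger order’s limit price.

```python
def objective_no_fees(buy_limit: float, buy_amount: float,
                      sell_limit: float, sell_amount: float,
                      clearing_price: float) -> float:
    """Traders' surplus minus disregarded utility for one buy and one sell order.

    The smaller side is fully filled; the larger side is only partially filled,
    and its unfilled remainder shows up as disregarded utility.
    """
    executed = min(buy_amount, sell_amount)
    surplus = ((buy_limit - clearing_price) + (clearing_price - sell_limit)) * executed
    # Disregarded utility of the partially filled buyer (the larger order in this scenario):
    disregarded = (buy_limit - clearing_price) * (buy_amount - executed)
    return surplus - disregarded

# Big "market buy": 10,000 STABLE at 1.20 USD; small "market sell": 2,000 STABLE at 0.80 USD.
for price in (0.80, 1.00, 1.20):
    print(price, round(objective_no_fees(1.20, 10_000, 0.80, 2_000, price), 2))
# 0.80 -> -2400.0, 1.00 -> -800.0, 1.20 -> 800.0: the objective peaks at the big order's own limit
```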

So, disregarded utility is meant to allow for reasonably executed market orders. Yes, it’s not exactly intuitive.

The exploit

With the above clarification out of the way, I can now get into exactly what the exploit was during the API3 token distribution event. The incident was described in detail here by Felix Leupold from the Gnosis team.

There is a naive assumption that isn’t exactly made explicit in the Gnosis Protocol documents (as far as I can tell): off-chain solvers and traders are disjoint sets. More specifically: off-chain solvers aren’t simultaneously making orders on the platform. One might acknowledge this assumption as faulty, but make the arguably less naive assumption that even if a solver is also an order maker, they cannot simply include their own order, since they are competing against other solvers to produce the “best” solution (which is not guaranteed to include their own order).

This exploit disproves both of these assumptions. The attacker was a solver that placed an order and then created a workaround to include their own order (for a relatively low price) in their solution, which ended up being the “best” solution, albeit for the wrong reasons.

Before the auction started, EOA 0x9cde created and listed 24 fake ERC-20 tokens (FT1 … FT24) on Gnosis Protocol.

The giant ring trade the attacker created in order to create a high utility score. (Source)

The essence of the exploit: the attacker created a giant ring trade, consisting of 26 tokens — API3, USDC, and 24 newly-minted shitcoins (FT1, …, FT24). The orders in the ring trade consisted of:

  1. The “legitimate” order for ~1.6 million API3 at ~0.56 USD per token (by way of selling FT1, see diagram above).
  2. The order to sell USDC at $1 for FT24.
  3. All other orders — for FT[n+1] → FT[n] — were made with limit prices of $0 (and filled at $1).

Note that the total “real value” coursing through this ring trade is only $924k, and that the ring trade could have been increased in size without actually costing the attacker anything more. (Reminder: a batch in Mesa can be solved with at most 30 orders, so the size of a settled ring trade is bounded.) That is, you can arbitrarily inflate the utility of a ring trade by increasing the size of the ring without actually increasing the total value exchanged. I believe this is one of the cruxes of the issue here.

The user was offering to sell FT_x for FT_y at a price of basically $0, yet they received a price of $1. This generates a lot of surplus for them (referred to as utility and one of the major factors in the optimization criterion): almost $1M in utility per fake token.

Note that, because the attacker set the limit price of each FT[n+1] → FT[n] trade to $0, they were able to inflate the utility of their solution quite significantly — read: according to the formulation of the objective function, the seller sold their tokens for a price much higher ($1) than they had requested ($0), thus resulting in a high trader welfare/surplus. The actual trades settled at $1 (and not less); this is related to the disregarded utility term in the objective function discussed at length above, which allows a user to “take” other orders exactly at their limit price.
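As a back-of-the-envelope sketch of that inflation (using the rough figures above: roughly $0.9M of notional value flowing through each hop and limit prices of ~$0 on the fake-token legs, so this is an approximation rather than a reconstruction of the actual settlement): because each fake-token order settles at ~$1 against a ~$0 limit, essentially the entire notional value of every hop counts as surplus, and adding hops multiplies the utility without moving any additional real value.

```python
def ring_trade_utility(value_per_hop_usd: float, n_fake_hops: int) -> float:
    """Approximate surplus generated by the fake-token legs of a ring trade.

    Each FT[n+1] -> FT[n] order has a limit price of ~$0 but settles at ~$1,
    so nearly the full notional of each hop is counted as trader surplus.
    More hops means more "utility", with no extra real value at stake.
    """
    return value_per_hop_usd * n_fake_hops

print(ring_trade_utility(924_000, 24))  # ~22.2 million in "utility" from only ~$924k of real value
```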

Thus, the exploit combined two “known issues” described in one of the dev docs: the market order exploit and fake token utility. The attacker’s winning solution generated a total utility of >22M, whereas the best solution by a benign solver would have been under 5M.

I should also note that this sort of attack is much more likely in an IDO scenario — where sell and buy orders are made much ahead of time — because the best genuine utility score can be approximated and thus the amount of utility needed to win with a fake ring trade can be computed ahead of time.

Rand0m and not so random final thoughts

Here are some of my thoughts and take-aways from this event. Note that most of the things I list here are ultimately in the form of a question, meant to spark conversations rather than being my set-in-stone thoughts on such issues.

  • Documentation  — Is it sufficient to simply add a short addendum to some infrequently read documentation to alert users of existing exploits? How should the creators of various blockchain protocols sufficiently alert users of known exploits?
  • Independence of solvers  — In my previous article, I had noted the ingenuity of solving an NP-hard problem via “crowdsourcing” and having solvers compete to come up with a solution. However, amid my amazement, I now realize I made the assumption (as I’m sure many have) that solvers use independent techniques, resulting in a thorough search of the nonconvex search space. However, it seems that most solvers use the same code to come up with a solution. Although there is some randomness embedded in the code, the otherwise similar search techniques would undoubtedly bias the winning matched orders in some way. In other words, the solution for a given batch probably isn’t as nondeterministic and obscure as we might initially imagine, and is likely biased towards certain local optima. This point would undoubtedly require a more thorough analysis; I’m just jotting down the general idea for now.
  • Complexity and obscurity  — How advantageous is a protocol when very few people know how it works? On one hand, the obscurity makes it so the average user can’t really “game the system” (as noted in my last post) — on the other hand: a very clever user could find major exploits previously undiscovered due to the complexity of the protocol. Complexity increases attack surface area. Attacks lurk in the dark corners of complex mathematics, especially when corners need to be cut (e.g. max 30 orders per batch) in order to implement high-dimensional mathematical complexity on the blockchain, with its numerous constraints and restrictions.
  • Off-chain computation and black boxes  — To what extent are black box solutions appropriate for the otherwise transparent blockchain? That is, how much “work” can be safely obscured by off-chain computation without creating hard-to-see attack surfaces?
  • Solution improvement vs solution complexity — Does the gain provided by Gnosis Protocol — front-running resistance and arguably “fairer” solutions (via increasing trader utility) — outweigh the costs imposed by the ever-increasing complexity of the solution? (Indeed, Gnosis Protocol v2 will likely involve changing the optimization criterion yet again.) How do we even measure this tradeoff?
  • User-friendliness?  — Users have an innate (and understandable) distrust of systems they don’t understand (an oh-so common complaint of the cryptosphere/blockchain space at large). Should creators of such systems be aware of such complexity/user-friendliness trade-offs when designing such systems? Perhaps I should have taken the outpouring of questions from users/distribution participants as a smoke signal — a canary in a coal mine kind of thing?
  • Historical technical documentation  — I thought about editing “How Mesa DEX Works” in light of this exploit, which occurred after I wrote that post. However, I chose to leave it as-is, with a link to this post, treating it as a historical document. Technology does not emerge in a vacuum, and I think historical context should be more often integrated into technical discussion/documentation.

Final note

I wrote this article by reading scant documentation and without any external editors (it’s the holiday break!) so, if there are any inaccuracies in this post or any major missing pieces, please feel free to comment and I’ll update accordingly.
