Food Trucks and Async Programming

Eric Damtoft (DealerOn Dev) ・ 4 min read

A few days ago, we had "Food Truck Day" at DealerOn. Say Cheese, a grilled cheese food truck, set up shop in front of our Rockville office, and word spread quickly. As a line formed, one person in the back of the truck took orders while two cooks worked the griddle. When an order came in, they used a queue of "tickets" to track it, and both cooks worked simultaneously to prepare a variety of grilled cheeses. For such a confined space, the system was efficient and well-orchestrated. It struck me that this was a perfect example of an asynchronous and parallel system architecture. We'll look at some concepts of async, parallel, and distributed programming using a hypothetical food truck as an example.


Synchronous Processing

Imagine a food truck with only a single cook who serves burgers. The line forms, and the cook takes your order, puts the patty on the grill, sits by the grill waiting for it to cook, adds a bun and toppings, and serves it. Then they take the order from the next customer and repeat the same process. While this is the simplest setup, it has obvious inefficiencies. Both the cook and the customer spend a lot of time standing around idle while the food is being cooked, and the cook can only cook one patty at a time. Taking an order takes time, so the grill sits empty for a few minutes between each order. However, this is how most code is written: it executes as a single sequence of steps, and everything must sit and wait for any dependent task to complete.
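To make this concrete, here's a minimal sketch of the single-cook flow in JavaScript. The function names (`takeOrder`, `grillPatty`, `serve`) are illustrative, not from any real API. Each customer is handled strictly one at a time, and nothing else can happen while a patty "cooks":

```javascript
// One cook, fully synchronous: each step blocks until it finishes.
function takeOrder(customer) {
  return { customer, patty: "raw" };
}

function grillPatty(order) {
  // The cook stands here, "blocked", until the patty is done.
  return { ...order, patty: "cooked" };
}

function serve(order) {
  return `burger for ${order.customer}`;
}

// Customers are processed strictly one after another.
function serveLine(customers) {
  const served = [];
  for (const c of customers) {
    served.push(serve(grillPatty(takeOrder(c))));
  }
  return served;
}

console.log(serveLine(["Ann", "Ben"]));
// → [ 'burger for Ann', 'burger for Ben' ]
```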


Asynchronous Processing

The cook spends a lot of time waiting for the food to cook. A patty must be on the grill for a few minutes, so after putting one on, the cook steps away and takes the next order. A dozen patties can fit on the grill at the same time, so the cook can get started on the next order while the first one is cooking. Although the single cook can only actually flip or serve one patty at a time, multiple patties can be on the grill concurrently. This is now an asynchronous system.

Waiting for HTTP requests, database queries, or any other kind of external system is an opportunity to gain time back instead of blocking while you wait for the task to complete. In many languages, this is done with async/await syntax to declare that the worker may move on to other tasks until the next step is ready. JavaScript operates with this model: it has a single-threaded event loop, but API calls, user input, and other "wait" operations take callbacks that continue asynchronously when the work completes, rather than locking up the browser while it stands idle.
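As a rough sketch of that idea, the example below stands in for real I/O (an HTTP request, a database query) with a timer: the grill "cooks" in the background while the single-threaded cook keeps taking orders. The names are hypothetical, not part of any real API:

```javascript
// A timer stands in for real I/O: the promise resolves later, and the
// single-threaded cook is free to do other work in the meantime.
function grillPatty(order) {
  return new Promise(resolve =>
    setTimeout(() => resolve(`burger for ${order}`), 100));
}

async function serveLine(customers) {
  // Put every patty on the grill without waiting for the previous one...
  const cooking = customers.map(grillPatty);
  // ...then serve them all once they finish.
  return Promise.all(cooking);
}

serveLine(["Ann", "Ben", "Cal"]).then(console.log);
```

Because all three patties cook concurrently, the whole line is served in roughly one grill-time rather than three.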


Parallel Processing

As an alternate approach, instead of improving the process of a single cook, you could simply hire a second cook. The second cook goes through the same "single-threaded" process as in our first example, but both cooks work at the same time. This is the equivalent of introducing multi-threading. Work is still done in a single step-by-step process, but there are more workers operating in parallel. In code, this is handled by running work on multiple threads, which can execute simultaneously on a multi-core system.

Thread Pooling

When asynchronous programming joins forces with parallel programming, you can have the best of both worlds. Imagine both cooks can look around and take the next order, work the grill, or serve a customer. Whenever they are idle, they look for the next task that needs to be done. Their efficiency is only limited by the size of the grill. In computing, parallel threads can be "pooled" into a collection of workers. When one task is completed, the next available worker picks up where it left off. Many languages with asynchronous programming leverage some kind of thread pooling. The semantics differ between languages, but in C#, a Task represents a unit of work that can execute on the runtime-managed thread pool and be asynchronously awaited.
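A toy version of this pooling can be sketched with async workers pulling tickets off a shared queue; all the names here are illustrative, and a timer again stands in for time on the grill:

```javascript
// A single "cook": repeatedly grab the next ticket and work it.
async function cook(name, queue, served) {
  while (queue.length > 0) {
    const ticket = queue.shift();               // take the next ticket
    await new Promise(r => setTimeout(r, 10));  // patty is on the grill
    served.push(`${ticket} (by ${name})`);
  }
}

// A tiny "pool": N cooks share one ticket queue until it is empty.
async function runPool(orders, poolSize) {
  const queue = [...orders];
  const served = [];
  const workers = [];
  for (let i = 0; i < poolSize; i++) {
    workers.push(cook(`cook-${i}`, queue, served));
  }
  await Promise.all(workers);
  return served;
}

runPool(["cheddar", "swiss", "gouda", "brie"], 2).then(console.log);
```

Whichever cook finishes first simply grabs the next ticket, so neither stands idle while work remains.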

Race Conditions

Parallelism is easy when two processes don't need to share resources, but that's rarely the case. In our food truck, a single grill is shared by both cooks. Normally this works fine, but once in a while when the truck is busy, one cook might put down or flip a patty without the other one seeing it. They lose track of which patties have been on the grill and overcook a few. When this happens in a parallel system, shared memory can easily become corrupted if it's not designed to handle parallel access. This can lead to particularly difficult-to-diagnose bugs, as they may happen only under a high volume of concurrency, such as in a production environment, and even then only very occasionally. It can be almost impossible to reproduce the exact sequence of events in a small, controlled development environment.
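This kind of lost update can even be sketched in single-threaded JavaScript, where yielding at an `await` stands in for thread interleaving (the names are illustrative):

```javascript
// Shared state: how many patties have been flipped on the grill.
const grill = { pattiesFlipped: 0 };

async function flipPatty() {
  const seen = grill.pattiesFlipped;          // read the shared count
  await new Promise(r => setTimeout(r, 0));   // cook gets distracted
  grill.pattiesFlipped = seen + 1;            // write back a stale value
}

async function demo() {
  grill.pattiesFlipped = 0;
  // Both cooks read 0 before either writes, so one flip is lost.
  await Promise.all([flipPatty(), flipPatty()]);
  return grill.pattiesFlipped;
}

demo().then(console.log); // prints 1, not 2
```

Because both reads happen before either write, one cook's flip vanishes from the count, which is exactly the read-modify-write race that corrupts shared memory in truly parallel code.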

Distributed Computing

Once the food truck has minimized idle time, the only remaining bottleneck is the grill itself. As the food truck becomes more popular, the owner buys another food truck. Each new truck adds a new grill and new cooks. In addition to serving more burgers, if one truck breaks down, service can still be provided. Likewise, adding additional server instances to a service can improve both performance and resiliency. This works best when each instance can operate independently of the others, as is the case with the food trucks. The complexities of race conditions and concurrency increase drastically once network latency and reliability issues enter the system.
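One simple way to picture the scaling side is a dispatcher that round-robins orders across independent instances. This is only a sketch (real load balancers do far more), and all names are hypothetical:

```javascript
// An independent "instance": each truck has its own grill and no shared state.
function makeTruck(name) {
  return { name, serve: order => `${order} from ${name}` };
}

// Round-robin dispatcher: rotate through the trucks for each new order.
function makeDispatcher(trucks) {
  let next = 0;
  return order => {
    const truck = trucks[next];
    next = (next + 1) % trucks.length;
    return truck.serve(order);
  };
}

const dispatch = makeDispatcher([makeTruck("truck-1"), makeTruck("truck-2")]);
console.log(["melt", "classic", "deluxe"].map(o => dispatch(o)));
// → [ 'melt from truck-1', 'classic from truck-2', 'deluxe from truck-1' ]
```

Because the trucks share nothing, adding a third truck requires no coordination beyond registering it with the dispatcher.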

Although each step adds complexity, making a system asynchronous, parallel, and distributed can vastly improve its performance and efficiency. Any time spent idle is an opportunity to optimize. From lines at food trucks to complex distributed data processing systems, the core principles tend to stay the same.
