Toche Camille
How a Single Line of Code Saved My SaaS from Major Financial Loss


Let's start with a short introduction: my journey began in April 2021, when I built a small side project in four days. It's a SaaS platform that provides an easy way to receive emails on your API.

With Mailhook, you can generate a unique mail address exclusively for your use, and any new email received is seamlessly redirected to your webhook.

Our platform offers a range of features, including the ability to receive emails as HTML, handle attachments, evaluate spam scores, and retry failed webhooks.


During the past 12 months, Mailhook has been quite successful. We currently have approximately 300 users, including around 35 business customers.

We currently have over 600 webhooks connected, handling a staggering 45,000 emails per week along with approximately 400 GB of attachments.

No marketing, only the SEO power of a single landing page.

Growth and Challenges:

What started as a small side project has now evolved into a full-fledged endeavor, presenting us with various traffic constraints.
Despite this, we have managed to handle the influx of traffic, ensuring our server costs remain relatively low.

We have also been able to keep our database size under control by promptly deleting emails once they are successfully received by a webhook.

A Close Call:

However, a recent incident brought us perilously close to a major financial loss.

In June, I received a message from AWS informing me that I needed to update my payment information to pay my invoices, because my card had expired.

The AWS Interface:

Curious about the situation, I logged into the AWS interface and was confronted with a shocking sight.

This is the Amazon S3 bucket list, with some buckets showing over 999 objects.


The Discovery:

Over the past four weeks, one of our biggest customers had received a huge number of attachments. Upon further investigation, I made a startling realization: the files in our S3 storage were not being deleted after the designated 72-hour period; instead, they were persisting for a whole week.

This anomaly had potentially catastrophic consequences for our finances.

The Culprit:

After digging into the code, I discovered the root cause of the issue.

This is, approximately, the code used to delete attachments saved on AWS S3.

```typescript
export class AttachemementsDeletionCron implements Cron {
  public readonly expression = process.env.IS_STAGING ? CRON_EVERY_SUNDAY : CRON_EVERY_72_HOURS;
  public readonly name = 'AttachemementsDeletionCron';

  constructor(
    private readonly _logger,
    private readonly _awsS3Service,
    private readonly _attachementsRepository
  ) {}

  async run(): Promise<boolean> {
    // Fetch every attachment that is still stored on S3.
    const attachements = await this._attachementsRepository.findObjects({
      status: AttachemementsStatus.UPLOADED
    });

    const deletedIds: string[] = [];

    for (const attachement of attachements) {
      this._logger.info('Delete attachement', attachement);
      const isDeleted = await this._awsS3Service.deleteObject(attachement.object);

      if (!isDeleted) {
        this._logger.warning(`Attachemement wasn't deleted`, attachement);
        continue;
      }

      this._logger.info(`Attachemement is deleted`, attachement);
      deletedIds.push(attachement.id);
    }

    // Mark the successfully deleted attachments in the database.
    await this._attachementsRepository.updateMany(deletedIds, {
      status: AttachemementsStatus.DELETED,
      deletedAt: new Date(),
    });

    return true;
  }
}
```

I'm sure you have found the issue very quickly ;)

```typescript
public readonly expression = process.env.IS_STAGING ? CRON_EVERY_SUNDAY : CRON_EVERY_72_HOURS;
```

Why is this an issue?

Values on process.env are always strings (or undefined), and any non-empty string, even "false" or "0", is truthy in JavaScript. So this condition always evaluated to true, and the cron ran on the staging schedule, once every Sunday, even in production. That's why attachments persisted for a whole week instead of 72 hours.

The Life-Saving Line of Code:

When I started this journey, I wrote plain Node.js code, without Flow or TypeScript.

Last summer, I rewrote the entire backend in TypeScript to have something better to work with and easier to maintain.

This single line of code escaped my attention.

I've set up a check based on the environment name, and I also use the Zod library to validate environment variable types and values.

```typescript
public readonly expression = env.NODE_ENV === 'production' ? CRON_EVERY_72_HOURS : CRON_EVERY_SUNDAY;
```

With this single line of code, I rectified the storage-duration problem, ensuring that files are deleted after the intended 72-hour window in production.


In this blog post, I've shared the story of how a single line of code saved me from a significant financial loss as a small side project turned into a real SaaS platform.

We experienced the challenges of managing substantial traffic and ensuring efficient data management.

With this valuable lesson learned, we remain committed to providing a seamless and secure email-receiving solution through Mailhook.

If you're curious, don't hesitate to try the product, and feel free to ask me for a discount!

Top comments (1)

Marco Quintella

I ran into something similar before, but on a smaller scale. My code was ignoring a long-lived cache: instead of making 4 requests to a third-party API, it was calling the API every time the info was needed for some endpoint.
Well, I noticed it easily and quickly, because I was logging in the DB how often the third-party API was used, to help understand what could be cached for more or less time to save costs.