DEV Community

gleamso

From 10s to 100ms: A Developer's Journey in Image Processing

It started with a seemingly simple request: "Can we add social preview images to our blog posts?" Little did I know this innocent question would lead me down a path of deep technical exploration and performance optimization that would fundamentally change how I approach image processing.

The Initial Challenge

Ten seconds. That's how long it took to generate a single social preview image when I first implemented the feature. The console mockingly displayed the execution time, and I could almost hear our users' collective sighs as they waited for their previews to generate.

The initial implementation was straightforward but naive:

import { createCanvas, loadImage } from 'canvas'; // assuming the node-canvas package

async function generatePreview(text: string): Promise<Buffer> {
  // Load the background image and font on every call: the hidden cost
  const background = await loadImage('background.jpg');
  const font = await loadFont('Inter-Bold.ttf'); // project helper (node-canvas itself registers fonts via registerFont)

  // Create a canvas at the standard 1200x630 Open Graph size
  const canvas = createCanvas(1200, 630);
  const ctx = canvas.getContext('2d');

  // Draw background
  ctx.drawImage(background, 0, 0);

  // Add text
  ctx.font = 'bold 64px Inter';
  ctx.fillText(text, 100, 315);

  return canvas.toBuffer();
}

The problem wasn't immediately apparent during development, but it became glaringly obvious in production. Each preview generation was loading resources from scratch, processing images synchronously, and handling text rendering inefficiently.

The First Breakthrough

The first significant improvement came from understanding the importance of resource caching. Instead of loading fonts and background images for each request, I implemented a preloading system:

// Preload shared resources once, at module startup (requires top-level await)
const resources = {
  background: await loadImage('background.jpg'),
  font: await loadFont('Inter-Bold.ttf')
};

async function generatePreview(text: string): Promise<Buffer> {
  const canvas = createCanvas(1200, 630);
  const ctx = canvas.getContext('2d');

  // Reuse the preloaded background and font instead of reloading per request
  ctx.drawImage(resources.background, 0, 0);
  ctx.font = 'bold 64px Inter';
  ctx.fillText(text, 100, 315);

  return canvas.toBuffer();
}

This simple change reduced generation time from 10 seconds to 3 seconds. Progress, but not enough.

Understanding the Image Pipeline

The real breakthrough came from deeply understanding the image processing pipeline. Every operation - loading, processing, encoding - had optimization potential. I discovered that many operations could be parallelized or eliminated entirely.
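
To see why parallelizing helps, consider independent resource loads: awaiting them one after another pays for every delay in full, while running them concurrently pays only for the slowest. A self-contained sketch of the idea (the timer-backed loaders below are stand-ins for the real image and font loaders, not actual I/O):

```typescript
// Sketch: run independent async loads concurrently instead of awaiting each in turn.
const sleep = (ms: number) => new Promise<void>((resolve) => setTimeout(resolve, ms));

async function loadBackground(): Promise<string> {
  await sleep(100); // simulates image decode latency
  return 'background';
}

async function loadFontFile(): Promise<string> {
  await sleep(100); // simulates font load latency
  return 'font';
}

async function loadResources() {
  // Sequential awaits: ~200ms total. Promise.all: ~100ms, bounded by the slowest load.
  const [background, font] = await Promise.all([loadBackground(), loadFontFile()]);
  return { background, font };
}

loadResources().then((r) => console.log(r));
```

The same pattern applies to any two pipeline steps with no data dependency between them.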

Through careful profiling, I identified these key bottlenecks:

  1. Image decoding
  2. Canvas operations
  3. Buffer encoding
  4. Memory management
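
The profiling itself can be as simple as wrapping each stage in a timer and logging where the milliseconds go. A minimal sketch (the stage bodies here are placeholders, not the actual pipeline code):

```typescript
import { performance } from 'node:perf_hooks';

// Wrap an async pipeline stage, log its duration, and pass its result through.
async function timed<T>(label: string, fn: () => Promise<T>): Promise<T> {
  const start = performance.now();
  const result = await fn();
  console.log(`${label}: ${(performance.now() - start).toFixed(1)}ms`);
  return result;
}

// Stage names mirror the bottleneck list above; bodies are empty placeholders.
async function profilePipeline(): Promise<void> {
  await timed('decode', async () => { /* image decoding */ });
  await timed('canvas', async () => { /* canvas operations */ });
  await timed('encode', async () => { /* buffer encoding */ });
}

profilePipeline();
```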

The Technical Evolution

The solution emerged as a multi-layered approach to optimization:

import { LRUCache } from 'lru-cache';

class ImageProcessor {
  // Keep up to 100 rendered previews; entries expire after one hour
  private readonly cache = new LRUCache<string, Buffer>({
    max: 100,
    ttl: 1000 * 60 * 60 // 1 hour (this option was named maxAge in lru-cache v6 and earlier)
  });

  async generatePreview(text: string): Promise<Buffer> {
    const cacheKey = this.getCacheKey(text);

    const cached = this.cache.get(cacheKey);
    if (cached) {
      return cached;
    }

    const image = await this.createOptimizedPreview(text);
    this.cache.set(cacheKey, image);

    return image;
  }

  private async createOptimizedPreview(text: string): Promise<Buffer> {
    // Reuse a pre-initialized canvas rather than allocating a fresh one
    const canvas = await this.getPrewarmCanvas();

    // Batch canvas operations to reduce context switches
    await this.batchProcess(canvas, [
      () => this.drawBackground(canvas),
      () => this.optimizeText(canvas, text),
      () => this.applyEffects(canvas)
    ]);

    return this.optimizedEncode(canvas);
  }

  // getCacheKey, getPrewarmCanvas, batchProcess, drawBackground,
  // optimizeText, applyEffects, and optimizedEncode elided for brevity
}

Each component of the system was optimized:

  1. Canvas operations were batched to reduce context switches
  2. Text rendering used pre-calculated layouts
  3. Image encoding utilized hardware acceleration when available
  4. Memory was carefully managed to prevent leaks
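
Pre-calculated text layout (point 2) can be as simple as memoizing line breaks per input. A self-contained sketch: real code would measure glyphs with `ctx.measureText()`, but a fixed average glyph width stands in here so the example runs on its own:

```typescript
// Sketch: compute line breaks once per (text, maxWidth) pair and cache them,
// instead of re-measuring and re-wrapping on every render.
const AVG_GLYPH_WIDTH = 36; // assumed average width in px for bold 64px text

const layoutCache = new Map<string, string[]>();

function layoutText(text: string, maxWidth: number): string[] {
  const key = `${maxWidth}:${text}`;
  const cached = layoutCache.get(key);
  if (cached) return cached;

  // Greedy word wrap using the estimated glyph width
  const lines: string[] = [];
  let current = '';
  for (const word of text.split(' ')) {
    const candidate = current ? `${current} ${word}` : word;
    if (candidate.length * AVG_GLYPH_WIDTH > maxWidth && current) {
      lines.push(current);
      current = word;
    } else {
      current = candidate;
    }
  }
  if (current) lines.push(current);

  layoutCache.set(key, lines);
  return lines;
}
```

With the layout memoized, the hot path only pays for `fillText` calls, not for measurement.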

The Results

The improvements were dramatic:

Generation Time:

  • Initial implementation: 10,000ms
  • After basic caching: 3,000ms
  • After pipeline optimization: 800ms
  • Final optimized version: 100ms

Memory Usage:

  • Initial: 250MB per generation
  • Final: 50MB with efficient reuse

Key Learnings

This journey taught me several invaluable lessons about image processing and performance optimization:

  1. Understanding the entire processing pipeline is crucial
  2. Resource preloading and reuse dramatically impact performance
  3. Memory management is as important as processing speed
  4. Caching strategies need to match usage patterns
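
On that last point: an LRU-with-TTL cache matches this workload because recently published posts are re-requested far more often than old ones. A minimal hand-rolled sketch of the idea (in production the `lru-cache` package does this, and more):

```typescript
// Minimal LRU-with-TTL sketch: a Map preserves insertion order, so deleting and
// re-inserting a key on each access moves it to the "most recently used" end.
class SimpleLRU<V> {
  private store = new Map<string, { value: V; expires: number }>();

  constructor(private max: number, private ttlMs: number) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expires) {
      this.store.delete(key); // expired entry
      return undefined;
    }
    // Refresh recency by re-inserting at the end of the Map's order
    this.store.delete(key);
    this.store.set(key, entry);
    return entry.value;
  }

  set(key: string, value: V): void {
    if (this.store.has(key)) this.store.delete(key);
    this.store.set(key, { value, expires: Date.now() + this.ttlMs });
    if (this.store.size > this.max) {
      // Evict the least-recently-used entry: the first key in iteration order
      const oldest = this.store.keys().next().value as string;
      this.store.delete(oldest);
    }
  }
}
```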

Moving Forward

These learnings haven't just improved one system; they've influenced how I approach all performance optimization challenges. The principles of resource management, operation batching, and pipeline optimization apply broadly across many development scenarios.

I've since applied these concepts to build more efficient image processing systems, including my work on Gleam.so, where we've achieved consistent sub-100ms generation times for dynamic social images.

Conclusion

The journey from 10 seconds to 100ms wasn't just about making images load faster. It was about understanding systems deeply, questioning assumptions, and continuously iterating toward better solutions.

For developers facing similar challenges, remember that dramatic performance improvements often come from understanding and optimizing the entire system, not just individual components.

What performance optimization journeys have you experienced? Share your stories and learnings in the comments.


Working to make the web faster, one optimization at a time.
