As developers, we often face challenges when dealing with large-scale data processing and delivery. At Kamero, we recently tackled a significant bottleneck in our file delivery pipeline. Our application allows users to download thousands of files associated with a particular event as a single zip file. This feature, powered by a Node.js-based Lambda function responsible for fetching and zipping files from S3 buckets, was struggling with memory constraints and long execution times as our user base grew.
This post details our journey from a resource-hungry Node.js implementation to a lean and lightning-fast Go solution that efficiently handles massive S3 downloads. We'll explore how we optimized our system to provide users with a seamless experience when requesting large numbers of files from specific events, all packaged into a convenient single zip download.
The Challenge
Our original Lambda function faced several critical issues when processing large event-based file sets:
- Memory Consumption: Even with 10GB of allocated memory, the function would fail when processing 20,000+ files for larger events.
- Execution Time: Zip operations for events with numerous files were taking too long, sometimes timing out before completion.
- Scalability: The function couldn't handle the increasing load efficiently, limiting our ability to serve users with large file sets from popular events.
- User Experience: Slow download preparation times were impacting user satisfaction, especially for events with substantial file counts.
The Node.js Implementation: A Quick Look
Our original implementation used the s3-zip library to create zip files from S3 objects. Here's a simplified snippet of how we were processing files:
const s3Zip = require("s3-zip");

// ... other code ...

const body = s3Zip.archive(
  { bucket: bucketName },
  eventId,
  files,
  entryData
);

await uploadZipFile(Upload_Bucket, zipfileKey, body);
While this approach worked, it loaded all files into memory before creating the zip, leading to high memory usage and potential out-of-memory errors for large file sets.
Enter Go: A Game-Changing Rewrite
We decided to rewrite our Lambda function in Go, leveraging its efficiency and built-in concurrency features. The results were astounding:
- Memory Usage: Dropped from 10GB to a mere 100MB for the same workload.
- Speed: The function became approximately 10 times faster.
- Reliability: Successfully processes 20,000+ files without issues.
Key Optimizations in the Go Implementation
1. Efficient S3 Operations
We used the AWS SDK for Go v2, which offers better performance and lower memory usage compared to v1:
cfg, err := config.LoadDefaultConfig(context.TODO())
if err != nil {
    log.Fatalf("unable to load AWS config: %v", err)
}
s3Client = s3.NewFromConfig(cfg)
2. Concurrent Processing
Go's goroutines allowed us to process multiple files concurrently:
var wg sync.WaitGroup
sem := make(chan struct{}, 10) // limit concurrent operations

for _, photo := range photos {
    wg.Add(1)
    go func(photo Photo) {
        defer wg.Done()
        sem <- struct{}{}        // acquire semaphore
        defer func() { <-sem }() // release semaphore
        // Process photo
    }(photo)
}
wg.Wait()
This approach allows us to process multiple files simultaneously while controlling the level of concurrency to prevent overwhelming the system.
3. Streaming Zip Creation
Instead of loading all files into memory, we stream the zip content directly to S3:
pipeReader, pipeWriter := io.Pipe()

go func() {
    zipWriter := zip.NewWriter(pipeWriter)
    // Add files to zip
    zipWriter.Close()
    pipeWriter.Close()
}()

// Upload streaming content to S3
uploader.Upload(ctx, &s3.PutObjectInput{
    Bucket: &destBucket,
    Key:    &zipFileKey,
    Body:   pipeReader,
})
This streaming approach significantly reduces memory usage and allows us to handle much larger file sets.
The Results
The rewrite to Go delivered impressive improvements:
- Memory Usage: Reduced by 99% (from 10GB to around 100MB)
- Processing Speed: Roughly 10x faster end-to-end
- Reliability: Successfully handles 20,000+ files without issues
- Cost Efficiency: Lower memory usage and faster execution time result in reduced AWS Lambda costs
Lessons Learned
- Language Choice Matters: Go's efficiency and concurrency model made a massive difference in our use case.
- Understand Your Bottlenecks: Profiling our Node.js function helped us identify key areas for improvement.
- Leverage Cloud-Native Solutions: Using AWS SDK for Go v2 and understanding S3's capabilities allowed for better integration and performance.
- Think in Streams: Processing data as streams rather than loading everything into memory is crucial for large-scale operations.
Conclusion
Rewriting our Lambda function in Go not only solved our immediate scaling issues but also provided a more robust and efficient solution for our file processing needs. While Node.js served us well initially, this experience highlighted the importance of choosing the right tool for the job, especially when dealing with resource-intensive tasks at scale.
Remember, the best language or framework depends on your specific use case. In our scenario, Go's performance characteristics aligned perfectly with our needs, resulting in a significantly improved user experience and reduced operational costs.
Have you faced similar challenges with serverless functions? How did you overcome them? We'd love to hear about your experiences in the comments below!