Suppose I have a project to create 1,000 WordPress websites on the top 1,000 web hosting companies in the world for research purposes (TTFB, latency, etc.). I will create a sample blog and write a post on each one, then run some Lighthouse and load tests on each site to compare them against each other.
The technology:
- Docker (Ubuntu with Xvfb)
- NodeJS
- Puppeteer
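For context, the per-site measurement step would look roughly like this (a minimal sketch, assuming the ESM builds of puppeteer and lighthouse; the performance-only filter and the Docker launch flags are my own choices):

```js
// audit-one-site.mjs — rough sketch of the per-site measurement
import puppeteer from 'puppeteer';
import lighthouse from 'lighthouse';

export async function auditOneSite(url) {
  // Headless here; with Xvfb in the container a non-headless Chrome works too
  const browser = await puppeteer.launch({ headless: true, args: ['--no-sandbox'] });
  try {
    // Point Lighthouse at the DevTools port of the Chrome that Puppeteer launched
    const { port } = new URL(browser.wsEndpoint());
    const result = await lighthouse(url, {
      port: Number(port),
      output: 'json',
      onlyCategories: ['performance'],
    });
    return result.lhr.categories.performance.score; // 0..1
  } finally {
    await browser.close();
  }
}
```

The whole question below is really about how to fan this out 1,000 times in parallel for an hour.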
The best choices so far seemed to be AWS Lambda or Google Cloud Functions, but they have a hard limit of 9 to 15 minutes per execution, so I would have to re-run the instances after that time.
Even though I need each run to last an hour, which Lambda can't do, I ran several cost calculators for it, and a 15-minute run comes out to roughly:
$15 for 1,000 executions, 1 vCPU, 1,024 MB RAM, 15 minutes (900,000 ms) each
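That figure roughly checks out against Lambda's published GB-second rate (about $0.0000166667 per GB-second for x86 in most regions, ignoring the per-request fee and free tier):

```js
// Rough sanity check of the calculator output; exact rates vary by region/architecture
const gbSecondPrice = 0.0000166667;
const executions = 1000;
const memoryGb = 1;          // 1,024 MB
const durationSec = 15 * 60; // 900 s per execution

const cost = executions * memoryGb * durationSec * gbSecondPrice;
console.log(cost.toFixed(2)); // ≈ 15.00 for 15 minutes; ~60.00 if a full hour were possible
```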
I also cannot run them one after another in the same Chrome window, nor can I stop them after 15 minutes.
If I create 1,000 $5/month DigitalOcean/Linode instances to do the same amount of work, that one hour would cost me about $7 (a $5/month instance prorates to roughly $0.007/hour), though managing that many instances seems like another level of headache.
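Part of that headache is just creating and tearing them down. A provisioning loop against the DigitalOcean droplets API would look roughly like this (the names, region, and image slug are placeholders I picked, purely to illustrate the overhead, not a recommendation):

```js
// Hypothetical provisioning loop against POST /v2/droplets
const token = process.env.DO_TOKEN;

for (let i = 0; i < 1000; i++) {
  await fetch('https://api.digitalocean.com/v2/droplets', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${token}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      name: `lighthouse-worker-${i}`,
      region: 'nyc3',
      size: 's-1vcpu-1gb',          // the ~$5 slug
      image: 'ubuntu-22-04-x64',    // Docker would still need to be installed on boot
    }),
  });
}
```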
That leaves me with Kubernetes and Docker Swarm.
If I buy CPU-optimized droplets, even with the largest 32 vCPU size it looks like I would need to buy 32 of them at once to get one CPU per Chrome instance. Linode has a 48-core plan, so about 21 nodes would cover the 1k mark, though spawning and managing them might come at a cost.
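Whichever orchestrator I pick, I imagine the per-node worker just caps concurrency at the vCPU count and works through its slice of the 1,000 URLs, roughly like this (a sketch; `auditOneSite` is the function from the first snippet, and the `URL_LIST` / `NODE_INDEX` / `NODE_COUNT` environment variables are placeholders I would have to wire up myself):

```js
// worker.mjs — hypothetical per-node worker: one Chrome per vCPU at a time
import os from 'node:os';
import { auditOneSite } from './audit-one-site.mjs';

const allUrls = JSON.parse(process.env.URL_LIST);  // full list of 1,000 blog URLs (assumed injected)
const nodeIndex = Number(process.env.NODE_INDEX);  // 0..N-1, set per node/pod
const nodeCount = Number(process.env.NODE_COUNT);  // e.g. 32 droplets or 21 Linodes

// This node's slice of the work
const myUrls = allUrls.filter((_, i) => i % nodeCount === nodeIndex);

// Simple concurrency cap: one Chrome instance per vCPU
const concurrency = os.cpus().length;
const queue = [...myUrls];

await Promise.all(
  Array.from({ length: concurrency }, async () => {
    while (queue.length > 0) {
      const url = queue.shift();
      try {
        console.log(url, await auditOneSite(url));
      } catch (err) {
        console.error('audit failed for', url, err.message);
      }
    }
  })
);
```

The orchestrator's only real job then is to start one such container per node with the right index, which both Docker Swarm and Kubernetes (e.g. an indexed Job) can handle.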
So, my question is: how would you scale 1k instances that must keep running for 1 hour?