Seems the bottleneck is hardware, so the solution should be hardware.
Would it be possible (and practical) to fetch and write files from, for example, S3 buckets, and then scale the processing using something like Lambdas or EC2 instances running a tiny Node app, like below:
Your main app could parse the file list and distribute jobs, one file at a time, to a scalable "parser endpoint" that gets an S3 object path, converts the file, and puts the result in another bucket. This parser service would then scale with the load on the endpoint.
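A minimal sketch of what that parser endpoint's handler might look like. Everything here is an assumption for illustration: `getObject`, `putObject`, and `convert` are hypothetical injected functions standing in for your real storage client (e.g. `@aws-sdk/client-s3` in a Lambda) and whatever conversion you run.

```javascript
// Sketch of the "parser endpoint" body: receive an S3 object path,
// convert the file, write the result to an output bucket.
// The storage and conversion calls are injected so this stays
// independent of any particular SDK — wire them up in your deployment.
async function handleParseJob({ bucket, key }, { getObject, putObject, convert, outBucket }) {
  const input = await getObject(bucket, key);   // fetch the source file
  const output = await convert(input);          // run the parse/convert step
  await putObject(outBucket, key, output);      // write to the output bucket
  return { bucket: outBucket, key };            // tell the caller where it landed
}
```

Injecting the S3 calls also makes the handler trivial to unit-test locally before you pay for any cloud time.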
You could probably build an MVP or POC with, like, 5 JAR files.
Your main app would then, as was suggested to you, await the async AJAX calls: fire away ~20 concurrent requests, then wait until one is done before firing the next. Should be a fairly simple loop.
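That "fire ~20, refill as each one finishes" loop can be sketched as a small worker pool in Node. This is just a sketch of the concurrency pattern; `worker` stands in for whatever async call actually hits your parser endpoint:

```javascript
// Run jobs with a fixed concurrency cap (~20 in the suggestion above).
// Each "lane" grabs the next unclaimed job as soon as its current one
// resolves, so a new request fires the moment one completes.
async function runWithConcurrency(items, worker, limit = 20) {
  const results = new Array(items.length);
  let next = 0; // index of the next unclaimed job (safe: JS is single-threaded)

  async function lane() {
    while (next < items.length) {
      const i = next++;
      results[i] = await worker(items[i]);
    }
  }

  // Start up to `limit` lanes and wait for all jobs to drain.
  await Promise.all(Array.from({ length: Math.min(limit, items.length) }, lane));
  return results;
}
```

Usage would be something like `await runWithConcurrency(filePaths, (path) => callParserEndpoint(path))`, where `callParserEndpoint` is whatever fetch/POST you make to the parser service.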
How's it going? I'm Adam, a Full-Stack Engineer, actively searching for work. I'm all about JavaScript and frontend, but don't let that fool you: I've also got some serious backend skills.
Location
City of Bath, UK 🇬🇧
Education
10 plus years* active enterprise development experience and a Fine art degree 🎨
Looks like I'm going to need to learn some GCP; we don't use AWS where I work 😑 Not that GCP is bad.
It sounds reasonable to build an app like this. Maybe I can get the whole thing running on my crappy 16 logical cores 😅 (I'm from the dual-core era, so anything above 4 sounds outstanding). Then, once I understand how my so-far cobbled-together Node app works, I can take that and break it apart into GCP services. It's a great suggestion, actually. I did wonder how far optimization of my code would go, but as I said in other threads, it's now slower and more stable, because, as you correctly point out, hardware is the bottleneck here. Still, what a learning experience this is.