The solution to cold starts is to author small, single-responsibility cloud functions. This is the best-practice advice from AWS for Lambda. Cold start time is directly correlated with function payload size, so to avoid it, write smaller functions. We've found bundles under 5 MB load in under a second, usually around 150 ms cold. (Aside: pinging/Lambda warmers do NOT fix cold starts; they hide them. If you get two concurrent requests, one of them will still cold start, because pinging only keeps one Lambda instance warm.)
Thanks for the feedback. Is this specific to Lambda, or have you found it to be the case across all the major function-as-a-service platforms? On Vercel I found cold starts were always multiple seconds, even with single-line functions. How difficult is it to stay under the 5 MB limit given that a single npm package can be bigger than that (thinking of some of the Node database clients)?