DEV Community

Why local development for serverless is an anti-pattern

Gareth McCumskey on June 02, 2021

In the serverless community, individuals and teams spend a lot of time and effort attempting to build an environment that is a replica of the cloud...
kj🊝💕

Personally, flying blind is not my favorite use of time.

You can still do dirty (untested, blind) deploys with arc.codes if you like, but it is a much better idea to test your own logic locally before piling on the complexity of many distributed systems.

The only "anti-pattern" you outline here is using anything other than the Serverless Framework, which feels like a bad-faith argument.

Gareth McCumskey

I am not suggesting you "pile on the complexity". You can still test just your code while it is in Lambda by sending it test data with a tool such as sls invoke. It's also less blind, because you ARE seeing your code interact with all the other issues that can crop up in a remote, ephemeral environment such as AWS Lambda, which is nothing like a local machine.

kj🊝💕

You need local development to speed up the feedback loop.

Once you've gotten all of the simple bugs you wrote out of the way then you can start fixing the bugs you didn't write.

No one but you has ever said it's a bad idea to test your code before deploying because that is just a best practice that is beyond rebuke.

Similarly, no one has ever said it's a bad idea to do integration tests against the actual services.

It isn't an anti-pattern to run emulations to save time, but it is an anti-pattern to try to confuse developers with marketing in an attempt to paper over a product's shortcomings.

Gareth McCumskey

Even if the feedback loop is still fast without local? Using tools such as the Serverless Framework, which let you deploy code changes to AWS in 3 seconds or less with the serverless deploy function command, means your feedback loop is still blindingly quick AND you are still testing on the actual, 100% production-equivalent environment.

This is also not marketing in any way. I am not sure what you believe I am marketing. This is my personal blog post, written by me after 20 years in web development and 5 years building serverless applications; 2 of those years happen to have been at Serverless, Inc.

I am sorry you feel so offended by my personal opinion, but I stand by everything I said; we started doing development locally because having every developer develop against a 100% equivalent of production would, in the past, have been cost prohibitive. We have now gotten used to local even though, in my opinion, it's no longer needed; replicating a production serverless app is free.

I would appreciate less of the ad hominem attacks please.

Nicolas Frugoni

It seems that unit tests are not important to you.
What kind of tests are you talking about?
Tests that target databases (production or not) are slower than mocking.
If you unit test, you can test your business logic regardless of what infrastructure you are using.
Testing on a deployed staging environment is still needed for integration testing, but IMHO this is a step to take after unit testing.
Build and test locally (unit testing with NO connection to infrastructure (HTTP, DB, filesystem, etc.)) is much, much faster than deploying to any service; it also gives you almost instant feedback, and you can navigate the code instantly using the IDE.

Sorry I don't agree with this post.

Gareth McCumskey

Personally I am not a fan of unit testing in a serverless environment, and I wrote a blog post 2 years ago about how to do it too ... how times change. In a serverless application, the amount of code you write is minimal compared to a traditional web application, as the cloud services you use end up replacing a lot of the code for you. This is a good thing. And in that case, integration testing is far more important than unit testing 10 lines of code that insert an object into a database.

Build and test locally is much, much faster than deploying to any service

As Yan Cui recently said in a reply to one of my tweets "Speed of feedback is great, but only when it gives you the right feedback. e.g. if you're mocking AWS SDK and supposedly testing integration with DynamoDB then the test just tells you if your mock is working.

Learning the wrong thing faster is counter-productive."

The same is true of testing on your Intel i7 with 8 GB of RAM and a 4K display. That is not the equivalent of a highly distributed application run across multiple machines in potentially multiple data centers, etc.

Nicolas Frugoni

I can see your point.
But the amount of code and complexity always grows. Unit tests help you test your logic in isolation; it doesn't really matter if you're using a potato or a Xeon processor, it should be fast, since the code being run should, in theory, not have many dependencies, and you must mock each class's dependencies. You don't test your mocks here; you test your class in isolation from things outside of it, making different mocks to represent different cases.

This not only allows for writing tests in itself, but allows you to have a good, scalable, easy-to-maintain codebase, since you must use dependency inversion principles. Also TDD is possible, CI/CD is more reliable, etc.
Aaanyhow, this is deviating a bit, I think, from the post's argument.

Gareth McCumskey

In a serverless application, the logic is often a part of the infrastructure. As an example, you may have two services; let's call them a customer service and an order service. If the customer service receives a request to update customer details, all unfulfilled orders for that customer need to have the customer details updated as well. In that case you use API Gateway to receive the initial PUT request to update customer data, and the change is made in DynamoDB by a Lambda. That insertion creates a DynamoDB stream entry, which triggers a Lambda function. That Lambda pushes the DynamoDB action into EventBridge. The order service has a Lambda listening for customer change events, which queries the orders table for all orders for that customer and performs the update. No unit test can test that entire flow, and each Lambda is perhaps 8-10 lines long.
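
To make that concrete, the Lambda that forwards the DynamoDB stream records into EventBridge could be little more than the following sketch (the bus, source and event names are illustrative, not taken from a real project):

// Sketch: forward DynamoDB stream records from the customer table to EventBridge.
const AWS = require('aws-sdk')
const eventBridge = new AWS.EventBridge()

exports.handler = async (event) => {
  // A single stream invocation can carry several records; forward each one as an event.
  const entries = event.Records.map((record) => ({
    EventBusName: 'default',
    Source: 'customer-service',
    DetailType: 'CustomerUpdated',
    Detail: JSON.stringify(record.dynamodb.NewImage),
  }))
  await eventBridge.putEvents({ Entries: entries }).promise()
}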

Nicolas Frugoni

You're right about non-unit tests being capable of testing that entire flow, but again, that's not even a unit test.

Unit tests in this case would be something like this:

On the order service you'll test the method 'onCustomerUpdated(customer)'

There's a bunch of things you can test depending on the use case.
For example, if you support an in-memory cache as well as DynamoDB, you can unit test that when something calls the onCustomerUpdated method (in production the caller would be the infrastructure itself), the cache is updated as well as the DB, using abstractions in this method.

Example using pseudocode-js

const cache = inMemoryRepository()
const db = DynamoDBRepository()

function onCustomerUpdated(customer) {
  return db.update(customer)
    .then(() => cache.update(customer))
    .then(() => 'success')
}

In a unit test environment, when you create that class to test it, you would replace the db and cache variables with mocks, and you can verify whether they were called, in what order, with what data, and so on.
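
A test for that method could then look roughly like this (Jest-style, and assuming the db and cache dependencies are injected through a small factory so they can be swapped for mocks; all names are only illustrative):

// Hypothetical factory so the db and cache dependencies can be swapped for mocks.
function makeOnCustomerUpdated({ db, cache }) {
  return (customer) =>
    db.update(customer)
      .then(() => cache.update(customer))
      .then(() => 'success')
}

test('updates the DB and then the cache with the customer', async () => {
  const db = { update: jest.fn().mockResolvedValue(undefined) }
  const cache = { update: jest.fn().mockResolvedValue(undefined) }
  const onCustomerUpdated = makeOnCustomerUpdated({ db, cache })

  const result = await onCustomerUpdated({ id: '42', name: 'Ada' })

  // No real DynamoDB or cache is touched; we only verify how the mocks were called.
  expect(db.update).toHaveBeenCalledWith({ id: '42', name: 'Ada' })
  expect(cache.update).toHaveBeenCalledWith({ id: '42', name: 'Ada' })
  expect(result).toBe('success')
})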

Sorry if there's anything badly written, I'm on my phone rn

Nicolas Frugoni

You can imagine that if those services belong to different teams, each team would like to be sure that their code works as they intend regardless of how it's called. In some cases that method would be called by using a REST API from a client; in other cases it might be the infrastructure itself calling it.

Abstraction and independence are key here I think

Nicolas Frugoni

Side comment: I do use serverless and I absolutely love it

Lxxyx

In fact, we develop and test locally precisely because the tools in the cloud are not yet perfect. Especially for special scenarios like debugging.

So I think local development is going to be the dominant approach for a long time.

Jay

Yeah debugging Lambda functions can be really painful. It's one of the reasons why we created SST (github.com/serverless-stack/server...). It hot reloads your functions while testing against the resources that've been deployed to AWS. This allows you to set breakpoints in VS Code. Here's a short clip of it in action — youtube.com/watch?v=2w4A06IsBlU

Gareth McCumskey

There is a lot more to my blog post than just that. You may be able to execute code locally, but that doesn't make it accurate compared to the environment it will eventually run in. We have had this problem for years; even 10 years ago I never had a local development environment I could rely upon to give me accurate results. I always had to test remotely to be sure. With the ease of serverless deployments, and them being 100% accurate to production, that issue no longer exists ... if I test in the cloud instead.

Ludovic Fleury • Edited

The ecosystem is not there yet. All the dev tools: runtime profiling, debug mode, discoverable dependencies & "browsability".

At large scale, because Lambda encourages a large code base to be spread out, it would be a real cost to deploy a fully operating dev environment (with profiling, verbose logging, etc.) for every branch of every dev.

What would be interesting is to actually connect the "local env" to the remote one, allowing Lambda hot swap & hot reload. Should it be "coding in the cloud"? Probably; as connectivity becomes better and better, we wouldn't face the issue we had before of excluding devs who don't have access to stable high-speed connectivity.

As we are all targeting the shortest feedback cycle ever, targeting 15 min max (e.g. GitHub, Netflix, Honeycomb), this is a real challenge for serverless.

Jay

If you are looking for the Lambda reload approach, check out SST (github.com/serverless-stack/server...). It deploys your entire stack to AWS but hot reloads your Lambda functions, so you get a tight feedback loop and the logs show in your terminal. This approach also allows you to set breakpoints in VS Code. Here's a short clip of it in action — youtube.com/watch?v=2w4A06IsBlU

Ludovic Fleury

Amazing :) I will check this! thank you

Gareth McCumskey

The ecosystem is not there yet. All the dev tools: runtime profiling,

Interestingly, because Lambda functions are so ephemeral and self-contained, the impact of a single bad thread on the environment as a whole is entirely removed, meaning complex runtime profiling isn't strictly necessary. However, there are tools that let you profile the execution of Lambda functions as they run in the cloud.

debug mode,

Debug mode, IMO, is overrated when you can have logs streaming directly from a Lambda function in the same environment that your production Lambda functions execute in, and invoke them from your local machine with a single command.

discoverable dependencies

This is no less possible with serverless deploying directly into the cloud. My IDE still shows me my dependencies etc. locally.

& "browsability".

I am not sure what this means, but I assume you mean the ability for a developer to understand the application the first time. I would say that because serverless architectures tend to reduce the amount of code written, understanding by other developers becomes easier, whether it runs locally or remotely. And you can "browse" a remote deployment just as easily as a locally emulated one; the remote just doesn't have the possibility of failures due to unforeseen differences between local and remote.

At large scale, because Lambda encourages a large code base to be spread out, it would be a real cost to deploy a fully operating dev environment (with profiling, verbose logging, etc.) for every branch of every dev.

serverless deploy --stage gareth. I have just deployed my own personal version of the stack to my cloud provider to play and experiment with. It likely costs nothing at deployment time and probably nothing the entire time I am testing, thanks to the AWS free tier.

Do that 1000 times, as I have done; you should see the number of experimental projects I have deployed to AWS over the years, and my last bill was $5 because I happened to have an EC2 instance running for a little while that I was playing with.

Ludovic Fleury • Edited

You have legit points, and the opinion you expressed in the article is totally valid.

Just to clarify, I do run Lambda in prod for 2 projects, and I did build serverless with API Gateway as soon as AWS released them; one project at medium scale (team & load). I totally value the benefits of serverless.

Yet there is a wide reality:

  • for instance, remote developers relying on bad connectivity (countryside, far away from quality 4G or a landline).

  • we are nowhere close to a hot-reload, instant feedback cycle. I do understand that you have to develop practices & workflow, and that serverless isn't to blame for the lack of mastery of TDD, for example. But a lot of "bootcamp 3-weeks to pro dev" schools or "3-year bachelor to fullstack web & native dev" programs do rely on "console.log"-driven dev.

In the end, intuitive programming with "WYSIWYG" is still widespread among junior to mid-level developers.

The change management to move from this legacy practice to the new-age one is long and costly. So when I say "the ecosystem is not there yet", it has this paradigm of operational cost & time-to-value as the main factor.

I really do enjoy serverless, the possibilities, the new paradigm. I was a prime user of managed services 15 years ago and still advocate them to people trying to run their own xxSQL at home, messing up 24h "backups". Yet serverless imposes a (too) big step for devs, and we just need to make it affordable. Meet us halfway :)

Filip Pýrek • Edited

Great thoughts @garethmcc 👏 totally agree.
But we as a community should start figuring out how to do "hot reloading" in the cloud. 😀

Andrei Dascalu

"...added huge amounts of complexity for developers to have to handle and worry about" - no. Containers encapsulate the things an application needs to run. If as a developer knowing and working with the stuff that makes your work run means added complexity and stuff you can't handle, good luck in the future.

"...Running Nginx, ElasticSearch, Redis and MySQL on a single machine apparently uses a lot of memory" - somewhat true. Nginx needs only 64Mb of RAM to run, same for Redis. MySQL container needs 384Mb as a bare minimum and 512Mb makes a decent environment. ES needs 1Gb and 1 full CPU though. That's 2Gb right there, which means you may want to allocate about 3Gb to that particular Docker Desktop.

"...they siloed a single developer off for two months to replicate production as a collection of docker containers" - totally the wrong approach. The whole point of using containers is that they carry the environment. If you replicate production instead of replacing production with containers, then you're not aiming for an actual benefit. You should be running those containers (at least on application level) in production. External dependencies might vary as long as the underlying platform is similar but if you're not running the application containers in production then all you're getting is a way to share environment between devs. It's nice but maybe not worth this particular effort.

On the pure serverless side, I agree. Local solutions just aren't there. But that doesn't mean they're not worth investing effort in. Traveling contractors are still a thing. I myself need to develop every other day on the train, with plenty of spots without connectivity along the way. At home it's reliable, but the occasional outage always comes along to disrupt my state of flow, or arrives when there's an important task to do.

Gareth McCumskey

If as a developer knowing and working with the stuff that makes your work run means added complexity and stuff you can't handle, good luck in the future.

Analogy time. A business is like a home buyer. If I am the average home buyer, I couldn't care less what techniques were used in the construction of the house I am buying; whether they were the latest and greatest in modern marvels or hand-crafted by a Neanderthal, I want a home that is well appointed, stays up and keeps me sheltered. In the same way, a business couldn't care less HOW the developers build the solution, just that it's done as fast as possible, as cheaply as possible and as reliably as possible. Also, turnaround time for adding new features should be good if possible. Developers then shouldn't have to learn how to do the plumbing and electrics if they don't need to. It's about solving the problem, not trying to play with the latest tech.

... which means you may want to allocate about 3Gb to that particular Docker Desktop.

Right now my RAM usage is 0, unless you count my IDE, which I didn't in the original example so I won't here. I can build (and have built) serverless applications on my Raspberry Pi Model B+ from 2014.

The whole point of using containers is that they carry the environment. If you replicate production instead of replacing production with containers, then you're not aiming for an actual benefit.

The attempted benefit was to replicate production. Production was spread across 17 different virtual machines using multiple layers of caching and load balancing. My point was ... you cannot replicate production this way. You can with serverless: serverless deploy --stage mynewstackname and production is replicated.

Local solutions just aren't there. But it doesn't mean they're not worth investing effort in

My point was that local solutions have NEVER been "there". We have, over the years, required local development environments as a best-effort emulation of the production environment, because ACTUALLY replicating production in the past was far too costly and time-consuming. Serverless changes that entirely. You can have an EXACT replica of production up in a few seconds.

Local testing began as a necessary evil because all other alternatives were untenable. Local testing has now become this sacrosanct feature that all developers are taught is an absolute requirement for you to ever want to work in the field. For traditional development, we are, unfortunately, stuck with an inaccurate representation of production we need to test against locally. In the serverless world, an exact replica of production is a single command away.

Andrei Dascalu

" Its about solving the problem not trying to play with the latest tech". True, but that doesn't mean you shouldn't know what it takes to run your application.

I've encountered plenty of React developers that had no idea about the differences between running a development environment via "yarn start" and running a static build via Nginx. Or PHP developers that have no idea about the impact of various PHP configurations.

If you don't go ahead and choose how your application runs, then the choice will be made for you and you might not like the outcome. Real-life example: developers working with a Node.js microservice, having no idea what tracing is, how to instrument their own application or how to customize logs.

This has nothing to do with playing with the latest tech. This has everything to do with knowing your tools and running your application. And if the optimised environment travels with the application, then it's all the better.

Gareth McCumskey

That only strengthens my point. If a React developer or PHP developer was testing against an exact 100% replica of what the production environment looked like, they wouldn't be worried that "the choice will be made for you and you might not like the outcome". They are testing against the decisions from day one!

Serverless allows you, as a developer, to know EXACTLY how what you are building is going to operate in the cloud from the moment you start if you deploy to and test in the cloud.

Tracing and instrumentation? I shouldn't need to worry about that stuff! Let it be auto instrumented for me which it is in a serverless application.

This has everything to do with knowing your tools and running your application.

My tools are the services I consume in the cloud and the code I write. My tools are not the OS, application software and the myriad of potential container management options out there. As a developer building solutions for a business, I need to concern myself with output and features, not the minutiae of implementation details. That's where serverless excels, and the point of the article is to point out to other serverless developers that they are potentially missing an opportunity by not just developing against a deployed replica of production.

Winston Nolan • Edited

Early on in serverless days, deploys took ~10 minutes for a small change. Remember those days? That's one reason why developers wanted to work locally...

Gernot Glawe

If you tweak AWS a little bit you can deploy in under a second without the sls framework: aws-blog.de/2021/04/cdk-lambda-dep...

Winston Nolan • Edited

True, but I did say "early on in serverless"

Jay

If you are using CDK, try out SST. It hot reloads your Lambda functions, so you won't even need to deploy them.

github.com/serverless-stack/server...

Sheldon

I love that you also use a nice task runner like Go Task. :-) That goes above and beyond most blog articles. It's basically a fully featured solution out of the box to try! Nice work.

Gareth McCumskey

Hey Winston. Times change. You can now deploy a code change in 3 seconds or less. You actually could back then too, when we were hacking on things together, but sometimes the little tricks elude you. serverless.com/framework/docs/prov...

Alex Winfield

For some things it can make sense. But how do you do basic debugging, like setting a breakpoint?

Jay

If you're looking to set breakpoints and debug Lambda functions locally checkout SST — github.com/serverless-stack/server...

It connects directly to what's been deployed on AWS without mocking or emulating them.

Alex Winfield

Oh now that looks exciting. Thanks :)

Shai Almog

Check out lightrun.com in that case... Get literally the experience of a local debugger in a production environment (I work there).

Gareth McCumskey

My code is written to capture errors and log out useful error messages if an issue occurs. If I am testing against an exact replica of production, I may as well make my error management as expressive as it would be for production. You can't set breakpoints in production, and you still need to debug if an issue occurs:

try {
  //Do something here that may error out
} catch (error) {
  console.log('An error occurred')
  console.log(error)
  return error
} 
Alex Winfield

I mean, we have incredible IDEs and debugging tools available. It'd be a huge loss to just not use them for anything at all anymore.

Why not just spin up duplicate cloud resources for development, and connect to those for local development? It solves the issue of mocking the cloud resource, without sacrificing any of the development tooling.

Maybe I misunderstood your article, but people aren't really running MySQL or Redis locally, are they? It's just as easy to have pared-down dev versions running in AWS. I use scheduled batch scripts to turn them on/off each day, so they don't cost anything when nobody would be using them, and are fully up and running when they would be needed (so there's no waiting for them to spin up).

The only really challenging thing is event-driven work. There are close to no examples of what an SQS payload looks like when triggered from S3 (for example), so creating a dummy Lambda just to log the message, so you can mock it locally, seems to be the only way to start a project that consumes those events. But running the whole project in the cloud wouldn't solve that issue, either.
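
For what it's worth, the dummy logger doesn't need to be more than something like this (a throwaway sketch: subscribe it to the queue, trigger the S3 event once, and copy the logged payload into a local fixture):

// Throwaway Lambda that just captures whatever event shape the trigger actually delivers.
exports.handler = async (event) => {
  console.log(JSON.stringify(event, null, 2))
}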

Marcelloh

I don't see it as an anti-pattern, but with a bridge pattern in between, you will be able to test your stuff without any cloud solution behind it, which makes local development a snap and debugging as easy as normal.

Gareth McCumskey • Edited

Thanks for the feedback. I see it as about more than just making developers' lives easier. I see trying to execute code, even without the cloud services attached (I even wrote a blog post about setting this up two years ago), as inherently dangerous, since it means developers are building for what "works on their machine" instead of directly against the 100% equivalent of production infrastructure. And with the existing tooling that's been around for 5 years now and is built into the framework by default, it's not even necessary.

Marcelloh

I know we've come a long way, but when I have to debug a Lambda by watching log files, it feels like 1980 all over again.

Gareth McCumskey

I understand that sentiment. Personally I prefer to use whichever method is the most accurate. The ability to tail logs and see debug output is about as rewarding as using a debugger; it's just different. Having an inline debugger would be great and that may come, but until then I'd rather sacrifice what I am used to.

Ross Coundon

It's possible now using SST; see the recent comments from Jay in this thread.

Gernot Glawe

I would not say local deployment is an anti-pattern. But when you use proper unit tests locally, you need it only in special cases.
Each developer should find her own fit.
I myself really like unit tests, try to avoid local emulation of Lambda, and then run integration tests in the cloud.
Especially with AWS Lambda, you will not get the IAM access-denied errors caught testing locally...

Jay Jeckel

Sounds like web development has built itself up a mountain of technical debt that needs to be addressed. If you can't test a piece of software locally, then that's a problem with the system, not with the concept of local testing.

Horace Nelson • Edited

I don't think it's an anti-pattern. I think it's impossible. There is no local dev environment for serverless architectures. You cannot imitate [most] AWS resources, with the exception of compute environments. Thus, you can test AWS Lambdas locally (specifically via sam local invoke), fully simulating the execution environment (SAM does this itself via a container), but all other resources require a real deploy, as would end-to-end testing.

Gareth McCumskey

But imitation is likely to be invalid. At least once a week I have a user with an issue that ends up being tracked down to either some library that is available in their local environment but isn't on Lambda, one of the vagaries of how Lambda handles the call stack, an issue with database connections, and many more, all because they were attempting to imitate compute locally.

Just test execution of the Lambda in the cloud and save yourself the hassle.

Horace Nelson

No, sam local invoke will test your function in a Docker container running the correct versions of the OS and your language runtime, and while the hardware constraints may not match exactly, it's close enough to be good enough 99% of the time. It's insane to suggest that because the testing environment is not 100% that of production, it is not useful, particularly since I'm talking about one provided by AWS itself. As someone who is constantly building on AWS Serverless technologies, sam local invoke saves a not insignificant amount of time compared to waiting for sam deploy to finish. I say unit test correctly, and use sam local invoke to test the side effects of your functions on other AWS resources in your stack.

K

Nowadays, I don't even run my IDE locally, so remote dev environments are the next logical step.

Sure, updating stuff with CloudFormation is a pain, but solutions like Pulumi and the Serverless Framework are much quicker.

boskovisch

All said and done, I understand the pains behind trying to get serverless working locally: it's a pain to set up properly, it doesn't work as well as it should, and it still lets a lot of bugs slip through the cracks. That said, it's still worth the extra work.
Why? Shift left!
The sooner you can get rid of idiotic bugs and get your code into shape, the better the product you'll have at the end. Because it's also a pain to do that in the cloud no matter the tools you use (ELK stack, New Relic, Datadog, Splunk, etc.). It turns out that in the end it's also cheaper and faster. It's not a matter of being afraid to test in production, but of how you can test during the whole cycle without impacting business and clients. Bugs in production are usually way more expensive than the latest MacBook Pro (just a high-price hardware reference). I have friends that to this day still work on mainframes, and shit is not pretty in production, mainly because of the arguments that you are defending. Also, a big problem with the cloud is the conflict of using the same environment for different changes (and spinning up a clone environment in the cloud is not a good alternative, depending on the architecture).

So to sum it up, even though a local environment is not even close to the production environment, if it is well configured it can give devs a head start: they can start developing sooner, fixing bugs and constantly refactoring the code without worrying too much about the other complex relationships you'll have in the cloud. Better to start small and then add a new layer of challenging new problems; cross that bridge only when you get there.

Gareth McCumskey

Why wait? By delaying pushing stuff to the cloud you may be fixing bugs only to create more bugs; I've seen it happen! Write your code locally, run serverless deploy function -f functionName, and 3 seconds later you can serverless invoke, curl or use Postman against that updated logic and see the logs stream live to you using serverless logs -t. There is no complexity, only feedback and the ability to get what you need working right away in the ACTUAL environment it will eventually generate revenue in, from day one!

Jay

Having used Serverless Framework for years, there's a problem with the deploy function approach. If you make a change to some common code that multiple functions rely on, it becomes really hard to figure out which ones need to be updated.

This is why we created SST (github.com/serverless-stack/server...), it hot reloads your Lambda functions, so you won't have to individually update them. And you can use Postman or curl while checking out your logs directly in the terminal.

Cássio Lacerda

The first time I saw an AWS Lambda function, it was almost impossible to create/test/emulate a Node.js function locally, and I gave up. Some time later, I discovered the Serverless Framework with the serverless-offline-* utility packages for running locally with serverless offline start, and deploying functions with serverless deploy function --function helloWorld when we wished to push to QA/production. Maybe the ease and speed of updating a function with serverless deploy, and the ability to point function endpoints and calls directly at the AWS environment as a dev stage, are the next paradigm to be broken.

George Mauer • Edited

So why are we still fighting so hard to maintain the local environment

Because not everyone has fiber internet direct to their house. You know how on every call there's at least one person "dealing with connection issues all morning"? That's with VoIP traffic prioritized by ISPs. It's bad enough having people not be able to sync up as needed; let's not make it so that you can't do work when latency spikes after 4pm.

No local development would lock out a decent chunk of remote workers even within the US.

But why else? Because debugging is still an incredibly important process and remote debugging still takes heroics to set up for many environments.

And because setting up IAM and network/VM configuration properly to support junior developers cranking out a cloud-first prototype requires expertise that far from every team lead has. (LocalStack works great for this.)

And because exploring an idea in an unfamiliar codebase/framework shouldn't have you praying you throttled billing well enough to not accidentally end up homeless

There's more, but let's stop there. I don't want every single day-to-day thing I do in development to be coupled to concerns over billing.

Gareth McCumskey • Edited

Because not everyone had fiber internet direct to their house

Testing in the cloud can be done on dial-up. A full deployment of an entire service is usually only a few MB. Using a tool such as the serverless deploy function command to push ONLY code changes means the upload is usually just a few KB of text. This is not equivalent to VoIP, which is streaming and far larger in volume, so comparing the two is a false analogy.

Because debugging is still an incredibly important process and remote debugging still takes heroics to set up for many environments.

sls deploy function -f functionName and after 3 seconds or less the code is in AWS. Then sls logs -f functionName -t and you have a tail of remote logs to view in your terminal. Use sls invoke -f functionName to execute it and get feedback in the logs you are currently tailing. This is FAR easier than pretty much ANY local development environment that I have had the pleasure of using in the past.

And because exploring an idea in an unfamiliar codebase/framework shouldn't have you praying you throttled billing well enough to not accidentally end up with a mega bill.

I understand this fear, but in practice it is overblown and just not true. I have had "accidental" mega bills myself twice in the past, and both times AWS had 0 issue reversing the charges. If developers want security, the org they work for can provide them an AWS account to play in instead.

And because setting up IAM and network/vm configuration properly to support junior developers cranking out a cloud-first prototype requires expertise that far from every team lead has.

The framework can do a lot of that heavy lifting for you, and it can be configured as a part of the deployment process so that it doesn't need to be manually configured by "someone in the know" each time. And we see more and more serverless applications not using VPCs and complex network architectures.

The entire point of the post is that because serverless allows us to deploy easily, quickly and with 100% accuracy in multiple environments, we can leverage that to finally be working in an environment that doesn't result in "it worked on my machine" when stuff inevitably breaks in production because of some nuance of the production environment no one expected or experienced.

Pablo Bermejo • Edited

Good article, Gareth! It was a good read. I do agree with many things you said and the overall vision that with Serverless, the cloud is the asset and code is a liability. The cloud is a system you are programming, where you put your code to glue-up services. Not a collection of servers where you drop your apps.

However, I still think that some degree of local development is necessary. Not for testing, but for debugging. And I don't necessarily agree with the following:

You cannot test all the other events that can trigger your Lambda functions.

I think you can. I think you can run a Lambda function locally as ... what it really is ... a Node.js app or a Python app. Does your function respond to an S3 event? Mimic the event locally by firing some tests that fill in your event object and then call your function. Does your function respond to an API GW event? You can do the same, or even run a small Express app mimicking the GW. This helps a lot with shortening feedback loops for developers and makes them more confident before they push code.
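
For example, a hand-rolled S3 notification can be passed straight to the handler in a local test script (a sketch; the handler path, bucket and key are made up):

// Call the handler directly with a minimal stand-in for an S3 "ObjectCreated" event.
const { handler } = require('./src/process-upload') // hypothetical handler module

const fakeS3Event = {
  Records: [
    {
      eventName: 'ObjectCreated:Put',
      s3: {
        bucket: { name: 'my-uploads-bucket' },
        object: { key: 'incoming/report.csv' },
      },
    },
  ],
}

handler(fakeS3Event).then(console.log).catch(console.error)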

I agree that you need to deploy soon, really soon. But you need to debug locally. As I said, your code is now a liability, and you need to make sure it is low risk.

Moreover, there is no such thing as "Cloud Technology". There is "Standard Technology on the Cloud". By this I mean that integration between serverless services is fully based on ages-old protocols such as HTTP, MQTT, DNS .... Does your Lambda function integrate with DynamoDB? Mock the HTTP integration with Mock Service Worker (or the equivalent) and you can DEBUG locally faster, and gain more confidence.
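
As a rough sketch of that idea (using the msw v1 node API, a made-up item, and assuming the function talks to DynamoDB through the standard AWS SDK over HTTPS):

// Intercept the SDK's HTTP calls to DynamoDB and answer with a canned item.
const { rest } = require('msw')
const { setupServer } = require('msw/node')

const server = setupServer(
  rest.post('https://dynamodb.us-east-1.amazonaws.com/', (req, res, ctx) =>
    res(ctx.json({ Item: { pk: { S: 'customer#42' }, name: { S: 'Ada' } } }))
  )
)

server.listen()
// ...invoke the handler under test here; it will "read" the canned item...
server.close()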

So yes, I agree with most of it, but there are nuances that I believe are important.

anonymouse

I came across this randomly and wanted to say I totally agree with your stance. I don't want to identify myself, but I worked at aws on containers and we came to the same realization: emulating all of these services locally is a fool's errand.

In the long term, it works out better for each developer to have their own account where they can be as destructive as they want without worrying about inconveniencing their teammates or breaking production. To set up a working dev stack, you just build your code locally, push out your build artifacts (images/lambdas), and run a cloudformation deployment to deploy your containers/functions. Then you've got an isolated, almost full-fidelity replica of production. You can then apply the same automated verification/monitoring on your dev stack as you would on any other stack.

I can see this becoming a problem if you have a ton of services/functions. Cost may also be an issue.

Giacomo Rebonato

Thanks for the article.
It's still unclear to me how multiple members of a team are supposed to test different features of the same project they are working on.
I am thinking that it's not viable to have a shared environment, because each deploy would override the deploy of somebody else who's still testing their work.

Gareth McCumskey

How would they do it otherwise? You would need to coordinate as a team to combine things as needed and deploy to the shared environment. This happens with pretty much any architecture type; collaboration is hard and needs to be done manually in a lot of situations.

Simon

Why not give serverless-stack.com/ a try?

Local hot code reloading, but without emulating the cloud.

Jay • Edited

Yeah we'd like to think SST solves this local vs prod development issue.

Emil

I really like the idea. We are always in discussions about offline testing, but it turned out that proper unit testing plus deployments to a development stage (API Gateway, Lambda aliases and resource prefixes) is much faster than always fixing offline testing issues. Each developer can now easily deploy their own stack and run integration tests on the full application. But this only works out if you have a well-designed code structure where you can test everything with unit tests first. Still, we sometimes rely on local DynamoDB Docker images to be faster when trying something new.

Pieter Humphrey

Would you have a different opinion if the dev environment was an exact copy of prod, with a recent copy of prod data? Feels like that would be worth the pain of not doing it locally.

Gareth McCumskey

If that dev environment is running in a data center with multiple services connected via network connections and the exact code used to run services like Lambda, S3, DynamoDB, IAM and others, then sure. But I don't have a few million $ to replicate that in my home office. I'd rather just serverless deploy to my nearest AWS data center with no charges while testing.

Marcin Copik

While I agree with your assessment on the limited ability of local cloud stacks, I can't agree with the statement on quick and easy online testing. It seems that your statement holds true for clouds such as AWS, where the process of deploying a function and reading logs is relatively easy and quick.

I've had the opportunity to work with Azure Functions as well, and my experience with their Python Linux apps was very different. It could take a good 10-20 seconds to deploy a function, and the logs were not immediately available - it could take up to 5 minutes for them to show up in the monitoring stream. Even with local code linting, verification, and static typing, simple bugs can slip into Python code; it's even more frequent when we have no static definitions of data interfaces between functions. Debugging becomes a challenge when you can perform only 10-15 invocations per hour.

Even though most of the community is heavily focused and invested in the Amazon cloud, serverless is not only AWS. And just like offline and local stacks are very limited, so are the online testing possibilities in other clouds.

Rich Wandell

No.

Rafael Sales

As a developer, I'd like to be able to have some feedback even during a power or internet outage.