DEV Community

Ben Halpern

How does deployment work at your organization?

What is the process to get code into prod?

Top comments (72)

𝐍𝐚𝐭𝐚𝐥𝐢𝐞 𝐝𝐞 𝐖𝐞𝐞𝐫𝐝 • Edited

Honestly - it's just FTP & manual database pushes 🤷‍♀️
It's not sophisticated or fancy, but it works.

Nicolas Bailly • Edited

Thank you for your answer. It's important to keep in mind that even though we read all day long about fancy new techniques and tools, most of us are working on legacy codebases and deploying manually.

That said, Continuous Deployment is not just a fad. I recently changed jobs and moved from GitLab CI/CD (which is really nice) to a mix of "git pull" on the server, SFTP, rsync, and running migrations manually... and it's a huge pain and a huge waste of time (not to mention that if something goes wrong, we don't have an easy way to roll back to the previous version).

I haven't yet set up CI/CD pipelines because we use on-premise Bitbucket and it doesn't seem to offer CI/CD (which means we'll need to install Jenkins or something, and I'll have to learn it), but it's pretty high on my to-do list.

JoelBonetR 🥇

I used to be on Bitbucket too, but I switched to GitLab and I find no reason to use anything else; I recommend you give it a try. I don't use the self-hosted version, but I'd guess you'll have the same options.

Kostas Bariotis

It does; it's called Pipelines, I think. It's pretty decent.

Nicolas Bailly

As far as I can tell, Pipelines is only available on Bitbucket Cloud, and not the self-hosted version (Bitbucket Server)? I'd love to be wrong, though.

Kostas Bariotis

Ah ok, I don't know more about that.

Ben Halpern

No shame in not using “fancy” CI tools. Whatever does the job.

JoelBonetR 🥇

Obviously you don't have to be ashamed of not using "fancy" CI tools, but once you do use them, you'll see why people do.

What I've learned over the last 10 years is that technologies that meet a need stay, and technologies that don't disappear or remain only in legacy projects.

Git isn't new (as you should know). CI scripts aren't new either; they just collapsed a two-step task (using git, svn, mercurial, or whatever, plus a Rundeck-style automation that had to be fired manually) into a single step where devs only need to push to master (permissions allowing), everything rolls smoothly into production, and you can roll back easily if needed.

If you are not using a version control service, then yes, you should be ashamed.

Felippe Regazio

I agree with Ben: "Whatever does the job". I worked at a company that took this approach too, with huge legacy products. I wrote a script to automate deployments like that over SSH; maybe it could be useful for you: github.com/felippe-regazio/sh-simp...

Andrew Brown 🇨🇦

AWS CodePipeline + AWS CodeDeploy + AWS CodeBuild

Rinzler

Same here, only our stack is HTML/JS/CSS + Python/Django + MongoDB/MariaDB. Any code merged into the develop branch of our GitHub repo is immediately deployed to our dev/staging environment (also on AWS), and the same process applies to the master -> production counterparts.

Franco Valdes

What stack? I have run into issues using NextJS with this deployment approach. TIA

Andrew Brown 🇨🇦

Ruby on Rails, though the process is identical because NextJS is just a Node.js app.
I made a course on Udemy last year about creating a pipeline with Rails, but you could just skip the Rails part. I've been meaning to release that video course for free.

Paul

I would love to get to this point with my job.

Jim • Edited

The coolest and most frustrating thing about DevOps is that there are a hundred different ways to do anything. I say this in the hope that I won't be judged too harshly for how we do deployments.

I should first mention that we're not a company in the web app space. The company I love working for primarily creates cross-platform C++ applications that run on Linux/Windows appliances. Also, as a DevOps Engineer, my customers aren't always actual customers; more often than not, they're developers. When we deploy, we remotely update the Linux or Windows platform, uninstall any existing software, reboot, install the most up-to-date software, license it, and verify the installation was successful.

We accomplish this primarily through Ansible playbooks that deal with the actual deployment, and use Jenkins jobs as the self-service mechanism for our developer customers. When devs want to upgrade their systems to test or do whatever, they can go to Jenkins, enter their IP, select the version to install, and click 'Build'. The rest of the process is seamless to the customer, with the exception of the 'DevOps is deploying' screen we show during the deployment to let the remote user know the system is doing something.
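The self-service step described above could be sketched as the command a parameterized Jenkins job hands to Ansible. To be clear, the playbook name (`deploy.yml`), inventory style, and variable names below are invented for illustration, not the actual setup:

```shell
#!/usr/bin/env sh
# Hypothetical sketch: a Jenkins job takes TARGET_IP and APP_VERSION as
# build parameters and runs a playbook against that single host.
# build_deploy_cmd only assembles the command line so the plan can be
# inspected; a real job would execute it.

build_deploy_cmd() {
  # $1 = target IP entered by the developer, $2 = version selected.
  # A trailing comma after the IP is Ansible's inline-inventory syntax.
  printf 'ansible-playbook deploy.yml -i %s, -e app_version=%s' "$1" "$2"
}

build_deploy_cmd "10.0.0.42" "3.1.0"
```

Driving it through a parameterized Jenkins job keeps Jenkins as the familiar front door while Ansible does the actual work.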

I know we could look into Ansible Tower or FOSS alternatives, but people got used to Jenkins, so I try to let it be the common interface for the self-service tasks our developer customers need automated.

Shenril • Edited

AWX should meet your needs; it's basically Tower for free and integrates with your existing Ansible roles:
github.com/ansible/awx

Matteo Joliveau

We run a lot of workloads on Kubernetes nowadays. When you put the internet hype aside, it's a very solid platform to automate and manage lots of applications at once. It allows us to cut down infrastructure costs for many clients we provide hosting for.

Our standard deployment procedure is git push on a particular branch (usually master) which triggers a pretty standard CI/CD pipeline: run tests, run linters, build & push Docker image, apply Kubernetes manifests. If anything goes wrong, Kubernetes allows us to roll back the deployment.
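The pipeline above could be sketched roughly as follows; the image name, deployment name, and manifest path are placeholders rather than their real ones, and the `run()` helper just prints each command so the plan can be reviewed without a cluster or registry at hand:

```shell
#!/usr/bin/env sh
# Sketch of the CI/CD steps described above, with invented names.
run() { echo "+ $*"; }

pipeline() {
  run npm test                                        # run tests
  run npm run lint                                    # run linters
  run docker build -t registry.example.com/app:1.2.3 .
  run docker push registry.example.com/app:1.2.3      # build & push image
  run kubectl apply -f k8s/                           # apply K8s manifests
}

rollback() {
  # Kubernetes keeps a rollout history, so undoing a bad deploy is one command.
  run kubectl rollout undo deployment/app
}

pipeline
rollback
```

The one-command rollback is what makes this setup forgiving: a failed release reverts to the previous ReplicaSet rather than requiring a manual rebuild.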

We handle different environments (dev, QA, prod) either with different branches or with manual env promotion, depending on the pipeline provider.

Ben Mechen

Do you use a separate cluster for each environment, or just one cluster with multiple namespaces? We're moving to kubernetes and currently just have 1 cluster (for staging while in development) but we're not sure whether to add another cluster for prod. It's more expensive, but gives us better separation.

Hieu Nguyen • Edited

It depends on which environment you are trying to deploy to. At my company, we have multiple environments of the same application. One for Dev, QA, and Production.

For the sake of brevity, let's take a deployment from QA to Production. Note:
Local Machine -> Dev (do it as many times as your heart wishes 😄)
Dev -> QA (OK, with some restrictions),
QA -> Production (OK, with a lot more restrictions),
Dev -> Production (A BIG NO-NO, could get me fired!).

  1. Once the code has been peer reviewed and QA tested, we create a deployment folder that contains all the project files and dependencies needed to perform the deployment.
  2. We create a deployment ticket in TFS with instructions for the DevOps team on how to deploy it: install this, delete that.
  3. I sit and cross my fingers. If all goes well, they reply back with some feedback.
  4. If the deployment fails, I usually have to work with DevOps to figure out why, and we attempt to redeploy.

This process is very cumbersome at times, and deployments can often span days. However, I have heard talk of moving to fully automated deployments 😄, but they are still setting up the nuts and bolts for the whole operation.

Jesse Phillips • Edited

"instructions for the DevOps team on how to deploy it. Install this and delete that."

So, you have an operations team that is named DevOps?

I bet everyone at the company is annoyed at how "devops" has made things more complicated for little benefit.

It seems one of the biggest challenges with these new development processes is that they require true collaboration, something not heavily prioritized and often actively avoided. It is so much easier to create definitions for an interface handoff. We do it in good software architecture all the time.

Jonathan Boudreau • Edited

We serve more than one application at my company.

The first application uses a dated deployment, which goes like this:

  1. Bring up the maintenance page.
  2. Bring down all running web servers.
  3. Migrate the database schema.
  4. Bring up the web servers with the new release.
  5. Remove the maintenance page.

There are a couple of issues with this kind of deployment. For some customers we incur business losses, because they've got people around the globe working at different hours.

The second application uses a rolling deployment, which goes like this:

  1. Migrate the database schema.
  2. Bring up the new web servers.
  3. Add the new web servers to the load balancer.
  4. Remove the old web servers from the load balancer.

There are some special considerations with regards to how migrations need to be written since the old application will still be running. For example removing a column needs to be split into two releases instead of one.
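The rolling deployment above can be sketched as a plan of commands; the host names, load balancer CLI (`lb-ctl`), and migration tool are invented placeholders, and the `run()` helper prints each step instead of executing it:

```shell
#!/usr/bin/env sh
# Sketch of a rolling deployment with invented host and tool names.
run() { echo "+ $*"; }

rolling_deploy() {
  # 1. Migrate the schema first; the old release keeps running against it,
  #    which is why e.g. dropping a column must be split across two releases.
  run ssh db01 "bin/migrate"
  # 2. Bring up web servers running the new release.
  run ssh web-new-01 "systemctl start app"
  # 3. Put the new servers into the load balancer rotation...
  run lb-ctl add web-new-01
  # 4. ...then remove the old ones. No maintenance window is needed.
  run lb-ctl remove web-old-01
}

rolling_deploy
```

The ordering is the whole trick: because step 1 runs while old servers still serve traffic, every migration must be backward compatible with the previous release.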

To answer your second question, our SDLC (software development life-cycle) looks for the most part like this:

  1. Open a PR.
  2. CI runs tests.
  3. Code review.
  4. Deploy to QA environment.
  5. Changes are tested internally.
  6. Deploy to UAT (user acceptance testing) environment.
  7. Customer validates that changes are OK for production.
  8. Deploy to production.
Médéric Burlet

A simple process:

I use release-it
github.com/release-it/release-it

Since I use gitmoji and the Karma commit syntax, it generates a GitHub release changelog that is very easy to read for us and for clients.

(screenshot of the generated changelog)

Afterwards, in the after:git:release hook of release-it, I have a set of commands that does the following:

  • SSH to the dev server, zip the latest release, and push it to S3
  • SSH to each live server, download the latest release from S3, unzip it, and run the database migrations

This is quite practical, as I just have to run release-it in the project folder and it generates and does everything. It also means the dev and live servers are perfect file copies of each other, right down to the installed packages.
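The hook's two steps could look something like the sketch below; the server names, paths, and S3 bucket are invented, and the `run()` helper only prints the plan instead of executing it:

```shell
#!/usr/bin/env sh
# Hypothetical sketch of an after:git:release hook body, with invented names.
run() { echo "+ $*"; }

after_git_release() {
  VERSION="$1"
  # Zip the fresh release on the dev server and push the archive to S3...
  run ssh dev "zip -r /tmp/app-$VERSION.zip /var/www/app && aws s3 cp /tmp/app-$VERSION.zip s3://releases/"
  # ...then pull it down on the live server, unpack it, and migrate.
  run ssh live1 "aws s3 cp s3://releases/app-$VERSION.zip /tmp/ && unzip -o /tmp/app-$VERSION.zip -d /var/www/ && bin/migrate"
}

after_git_release "1.4.0"
```

Shipping a single zipped artifact through S3 is what keeps dev and live as exact file copies: every server unpacks the same archive rather than resolving dependencies independently.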

We still have a staging server as well for all ongoing testing.

Yogi

Wow! I like your GitHub dark mode. Can you share the extension, please?

Divine Olokor

You can use Chrome's Dark Reader extension.

Médéric Burlet

This is just the GitHub Desktop app:
desktop.github.com/

Rich Field

At the day job we have several projects that are deployed independently using BuildKite.

For a freelance client, I use CodeShip to handle the deployment of a Firebase-hosted site, Firebase Functions, and Firebase database migrations, triggered by a push to the repo. Each branch in the repo deploys a separate site/functions/db.

For most small personal projects I use react-static and Netlify; so it's simply a push to the repo.

Ankit Kumar • Edited

AWS + BuildKite pipeline (for uploading, building, and deploying)

Molly Struve (she/her)

How has your experience been with BuildKite? Do you like it?

Ankit Kumar

I like it a lot; it's easy to use and set up.

Doaa Mahely

For our web app, I would merge changes into master, pull them to my local machine, and use rsync to sync my local files with those on our staging server. After testing, I would sync my local files with our production server.
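That flow could be sketched as below; the host names, paths, and excludes are assumptions, and the `run()` helper prints each command rather than executing it (with real rsync, `--dry-run` gives a similar preview):

```shell
#!/usr/bin/env sh
# Sketch of a pull-then-rsync deploy, with invented hosts and paths.
run() { echo "+ $*"; }

deploy_to() {
  # $1 = target host (staging or production)
  run git checkout master
  run git pull origin master
  # -a preserves permissions and timestamps, -v lists the files,
  # --delete removes remote files that no longer exist locally,
  # and --exclude keeps server-specific config out of the sync.
  run rsync -av --delete --exclude=.env ./ "deploy@$1:/var/www/app/"
}

deploy_to staging.example.com
deploy_to prod.example.com
```

Running the same function against staging first and production later mirrors the two-phase process described above.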

It works well enough, but it's annoying when I have to deploy a quick fix while there are changes in staging that aren't yet tested or ready for production. When that happens, I revert the MR and pull again, but only if the MR has a lot of changes. Otherwise, I make the fix manually on production, being sure to create an MR for it that gets merged and pushed to staging, so the fix doesn't get lost the next time I deploy to production.

I really want to change this deployment process because I don't have a lot of trust in it; hopefully I'll get to it when I have some time.

Gary Bell
  • SSH to the first server. Set the node to offline.
  • SSH to the other server. Set the site to offline.
  • In the first server's SSH session, do a git pull.
  • On the second server, do a git pull.
  • If needed, manually apply database changes.
  • On the first server, put the node back online.
  • Hope everything works.
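The steps above could be sketched as a single script; node names and the offline/online mechanism (a flag file here) are invented placeholders, and the `run()` helper prints the plan instead of executing it:

```shell
#!/usr/bin/env sh
# Sketch of the two-server manual deploy, with invented hosts and commands.
run() { echo "+ $*"; }

deploy() {
  run ssh node1 "touch /var/www/offline.flag"   # take node 1 offline
  run ssh node2 "touch /var/www/offline.flag"   # take the site offline
  run ssh node1 "cd /var/www/app && git pull"
  run ssh node2 "cd /var/www/app && git pull"
  # database changes are applied manually at this point, when needed
  run ssh node1 "rm /var/www/offline.flag"      # node 1 back online
  run ssh node2 "rm /var/www/offline.flag"
}

deploy
```

Even a printed plan like this is a step toward the GitLab CI/CD automation mentioned below: once the steps are written down as commands, a pipeline can run them per node.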

Amazingly, that's better than when I started and took over. It was a case of FTPing to the first server and hoping it didn't break anything, and also that the files would get rsync'd to the second server. If they didn't, it needed firewall changes to allow SSH access to the server so the rsync process could be restarted.

Our new platform is going to do deployments automatically using GitLab's CI/CD features, mainly because I don't want to have to keep doing it, but also because there are going to be more server nodes.

David J Eddy • Edited
  • developer commits change to feature branch locally
  • developer pushes code to GitLab
    • triggers a pipeline of tasks
  • development team reviews
  • branch merged to target / environment branch
  • branch is deployed to environment
  • personnel responsible for the environment confirms changes
  • branch is merged to master
  • On deployment day, master is deployed to production

We are trying to move to a development -> new ephemeral environment per branch -> integration -> production deployment process. That is our current goal, to give the development team more flexibility in their workflow.

Daniel Schulz

At work we use Bitbucket and Jenkins to push to Google's cloud services.
For private projects I try out all sorts of things: one site is pushed manually over FTP, one has GitLab CI, one is on GitHub and Travis... I think I like GitLab most, because it's one integrated and very versatile solution.

Patryk

At work? ssh, cp, vim, and hope for the best. We have automated backups, but no source versioning or CD of any kind.

The portfolio I'm working on uses GitLab CI to build Docker (Compose) containers, test them, and deploy them.

Ryan

Jenkins with GitFlow for larger, high-risk products that require more gates to be crossed, and plain ol' Jenkins plus GitHub hooks to automatically build and deploy smaller, lower-risk products.

Whatever works for you; the toolchain should match the need!