In this article, we’ll look at how you can take advantage of Laravel Forge’s APIs to provision and configure servers using automation. We’ll also look at why you might want to do this, or more specifically, why I decided to do this.
I’ve released Pulse — a friendly, affordable server monitoring tool built specifically for developers. It includes hardware and custom monitors, various notification channels, and API support. There’s also a referral program where you can earn a 30% commission. Take a look… https://pulse.alphametric.co
Around late 2017, I began developing a bespoke real estate system for a client of mine. Over the next year, it grew from a relatively straightforward CRM / CMS into a system that now supports the client’s entire business.
Last year, we reached a decision that the system was of sufficient quality that it could be made generic and spun off into a separate product we could sell. That process remains ongoing, but when I first started to think of approaches for its deployment, I began to encounter some obstacles...
The common thinking is that you should use a multi-tenancy arrangement for your application, either with a single database whose records are scoped to each tenant, or a separate database for each client. Of the two, I prefer the first option.
Learning how to implement multi-tenancy in Laravel is quite an involved process, requiring a combination of middleware, Eloquent scopes and subdomain routing. The real issue I was facing, though, actually had nothing to do with Laravel — it was related to SSL certificates and vanity URLs.
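To give a flavour of the scoping side, here’s a minimal sketch of an Eloquent global scope for single-database multi-tenancy. It assumes a `tenant_id` column on each tenant-aware table and a hypothetical `Tenant::current()` resolver (typically populated by subdomain middleware) — both names are illustrative, not from any specific package:

```php
<?php

namespace App\Scopes;

use App\Models\Tenant; // hypothetical model with a static current() resolver
use Illuminate\Database\Eloquent\Builder;
use Illuminate\Database\Eloquent\Model;
use Illuminate\Database\Eloquent\Scope;

class TenantScope implements Scope
{
    // Constrain every query on the model to the active tenant's records
    public function apply(Builder $builder, Model $model)
    {
        $builder->where('tenant_id', Tenant::current()->id);
    }
}

// Then, in each tenant-aware model:
//
// protected static function booted()
// {
//     static::addGlobalScope(new TenantScope);
// }
```

Every query against a scoped model is then automatically filtered, which is exactly the kind of pervasive wrapping the server-per-client approach lets you avoid.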
For those unaware, a vanity URL is a means of allowing your clients to link their own domain to the subdomain you’ve created for them on your app, e.g. https://www.acme.com => https://acme.app.com
Enabling vanity URLs and coding the necessary routing within Laravel was all perfectly doable. However, doing it with SSL security proved to be a nightmare. Essentially, you’re masking a subdomain using another domain, which means the certificate ends up not pointing to the right place and is therefore (rightly) rejected by the browser.
There are ways around this, but they are either complicated, or require you to funnel your infrastructure through a third party. Combined with the inherent pitfalls of multi-tenancy, I decided to try a different approach…
You might be thinking to yourself, really? A server for EVERY client? How do you go about managing something like that? Surely, it’s more complicated?
Yes, it would be, were it not for Laravel Forge. Forge includes a full suite of APIs that allow you to do almost everything that you can do in the browser. There’s even a Forge SDK created for PHP to make this more easier.
So, how do we go about doing this? The process takes a reasonable amount of time, so we can’t accomplish it in a single request. Instead, we’ll need to take advantage of a queued job chain.
Okay, let’s dig into what’s going on here. Essentially, each task that you would perform on Laravel Forge using the browser is isolated within a job. These jobs are then placed in a chain and executed sequentially. If one fails, no further jobs in the chain are executed. The jobs also include delays to ensure the previous job has had enough time to complete properly, e.g. installing Linux.
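The chain described above can be sketched like this, using Laravel’s `Bus::chain` syntax (older Laravel versions use `withChain()` instead). The job classes mirror the ones discussed in this article, and the exact delays are illustrative:

```php
<?php

use Illuminate\Support\Facades\Bus;

// Each job wraps one Forge (or provider) action. The chain runs them
// in order and stops automatically if any job throws.
Bus::chain([
    new CreateServer($user),
    // Give Forge time to finish provisioning before reading its details
    (new SetInternetAddress($user))->delay(now()->addMinutes(10)),
    new SetLinodeIdentifier($user),
    new SetLinodeVolume($user),
    new CreateBugsnagProject($user),
])->catch(function (Throwable $e) {
    // If any job fails, the rest of the chain is never executed
    report($e);
})->dispatch();
```

Because each step is its own job, a failed step can be retried or inspected in isolation rather than re-running the whole provisioning process.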
There are a couple of other jobs in the queue that are more specific to my needs, but for completeness, here’s what they do:
- CreateBugsnagProject creates a new project on BugSnag to record errors that are specific to each installation. Otherwise, I’d have no idea where any errors actually came from.
- SetInternetAddress updates the user’s internal server profile with the IPv4 address that Forge receives from the VPS provider.
- SetLinodeIdentifier and SetLinodeVolume use the Linode API to update the user’s internal server profile with the ID fields that I can use to upgrade their server automatically at a later date (if required).
The next step is to get the user to create an A record to link their domain to the IPv4 address we received via the SetInternetAddress job.
We can verify that the DNS record is present using a simple snippet of PHP code thanks to the native dns_get_record function:
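A minimal version of that check might look like this — `$domain` and `$expectedIp` would come from the user’s server profile:

```php
<?php

// Returns true if any A record for the domain matches the server's
// IPv4 address. dns_get_record() returns false on failure, so we
// fall back to an empty array.
function domainPointsToServer(string $domain, string $expectedIp): bool
{
    $records = dns_get_record($domain, DNS_A) ?: [];

    foreach ($records as $record) {
        if (($record['ip'] ?? null) === $expectedIp) {
            return true;
        }
    }

    return false;
}

// Example (hypothetical values):
// domainPointsToServer('www.acme.com', '203.0.113.10');
```

In practice you’d poll this from a scheduled task or an “I’ve added the record” button, since DNS propagation can take a while.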
Now that the domain links to the server, we can go ahead and run the next set of jobs to create the site, pull in the repository, install dependencies etc. This is where you usually spend most of your time in Forge, so I always find it very satisfying to see all of these steps happening automatically.
You might be wondering why we did the DNS record verification before the site deployment. After all, wouldn’t it be better to run the process as one long set of jobs? The answer is… yes, it would. However, since I want to take advantage of Let’s Encrypt for SSL protection, the domain must first be linked to the IP address.
The same principle is followed. Each job is fired off in succession, using an adequate delay where required and terminating in the event of a problem.
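This second chain can be sketched in the same style as the first. The job names below are illustrative, based on the steps described in this article, and the delay values are placeholders:

```php
<?php

use Illuminate\Support\Facades\Bus;

// Deployment chain: create the site, pull the code, configure it,
// secure it, then deploy. Runs sequentially; stops on failure.
Bus::chain([
    new CreateSite($user),
    (new InstallRepository($user))->delay(now()->addMinutes(2)),
    new SetEnvironmentFile($user),
    // Requires the DNS record verified earlier to already be in place
    (new InstallLetsEncryptCertificate($user))->delay(now()->addMinutes(2)),
    new DeploySite($user),
    new SetEnvironmentFile($user), // deliberately run a second time
    new CreatePulseProfile($user),
])->dispatch();
```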
There are a few things you might notice:
- SetEnvironmentFile is run twice. That’s not a mistake, but is largely due to a quirk of Forge’s. I always make a point of setting the environment file prior to performing a deployment to avoid any unexpected issues. However, on the first ever deployment, it sometimes seems to wipe the environment file after completing, so I set it again for good measure.
- CreatePulseProfile exists in the queue. What is it? What does it do? Don’t worry, we’ll get to that in just a second.
If you were to now visit the domain name, you’d have a fully functioning site that is ready to go. Awesome! You can learn all about the APIs involved in the above steps by checking out the documentation.
You might be thinking, what happens when you want to make a change to the code? No problem! Thanks to Forge’s use of Auto-Deploy, when you push up your code changes, they will automatically be rolled out to all your clients!
As you can imagine, if you’re successful and onboard dozens, hundreds or even thousands of customers, then your infrastructure would end up being equally impressive. You’ll need a tool to keep on top of that…
Pulse was designed specifically for the purpose of being used by developers to automate server monitoring. It includes the APIs required to create a new server profile, configure the relevant monitors for hardware and key services, set the alert notification channels and retrieve server logs.
You can learn about Pulse’s APIs through the documentation.
You might be wondering how the job to create the profile works. Actually, it uses another great feature of Forge… recipes.
Essentially the job generates the code for a shell script and then stores / runs it on the server using a recipe. The recipe sends the API requests, downloads the monitoring script, makes it executable, then creates a CRON entry for it.
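To make that concrete, here’s an illustrative sketch of the kind of shell script the job might generate and run as a recipe. The URLs, paths and API key are all placeholders — the real script is generated per server with that server’s own credentials:

```shell
#!/bin/sh

# Register the server with the monitoring service (placeholder endpoint/key)
curl -s -X POST "https://example.com/api/servers" \
     -H "Authorization: Bearer YOUR_API_KEY" \
     -d "name=client-acme"

# Download the monitoring script and make it executable
curl -s -o /home/forge/monitor.sh "https://example.com/path/to/monitor.sh"
chmod +x /home/forge/monitor.sh

# Create a CRON entry so the script reports metrics every minute
(crontab -l 2>/dev/null; echo "* * * * * /home/forge/monitor.sh") | crontab -
```

Because recipes run as scripts on the server itself, this one job can both talk to external APIs and configure the machine in a single step.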
Hopefully, this short article has shown you that it is indeed possible to run a server per client strategy that doesn’t involve a manual process. Nevertheless, using this kind of strategy has both pros and cons. Let’s take a look at them:
- You never need to mess with / worry about the problems that go with creating an application that uses multi-tenancy. Data is forever isolated and you don’t need to wrap your application logic within scopes.
- Your clients are protected from server failures… to an extent. If you run a multi-tenant application / server infrastructure and something brings it down, all of your clients’ sites go down. With a server per client, this can’t happen (unless the entire data centre goes down).
- Creating backups of the database and application files is considerably easier. Likewise, purging a customer is also easy… just delete the server.
- Performance is generally better. With a distributed architecture, no single set of servers is being hammered by thousands of users.
- Developing custom features is generally easier thanks to a separate file system for each client, and may provide a vital source of revenue.
- You can charge your clients when they wish / need to upgrade their infrastructure, which may serve to supplement revenue.
- The on-boarding process takes about 30 minutes. Depending on the type of application you’re building / what your customers are prepared to accept, that may not be viable.
- Your cost per customer is higher as they each require a server, which you will be charged for. You’ll need to factor that into your app’s pricing, but if you’re charging them hundreds or thousands, it’s not much to swallow.
- If you need to fundamentally change the nature of your application, its dependencies, the services it needs etc. you might be in for a long road to prepare all your customer servers for the change.
Generally, I would say that if you’re not offering clients subdomains for their accounts or vanity URL integration, then you don’t NEED to use the server-per-client approach — but that doesn’t mean you can’t.
As I highlighted above, there are other hurdles involved in multi-tenancy applications. Your code is also a lot more complex than it would otherwise be, so that may also be something you want to consider.
Thanks for reading, and if you have any questions, please do feel free to share them. I’ll do my best to answer them!
Be sure to follow me here, and on Twitter, for further articles and updates on what I’m currently up to. Oh, and if you feel so inclined, please check out / spread the word about Pulse. It would mean a lot to me 😀
Remember that even if it’s not for you, you can still earn a 30% commission by referring friends, colleagues or followers to the platform.
Happy coding… 👨🏻‍💻