Introduction
Performance is such an important part of a web application. A slow-running application can frustrate users and lead to a poor user experience. This can be especially bad if your website's main goal is to convert traffic to paying customers (such as a SaaS application or an e-commerce store) as it might lead to a loss in revenue.
Poor application performance can also lead to an increase in spending on your infrastructure as you might need to add more resources (such as more servers or database instances) to handle the increased load. You might notice this more if you're using a serverless system, such as AWS Lambda, which costs more money the longer it runs.
So having an overview of your application's performance is important. If your application starts to slow down, you need to know which parts are running slow and why. But without the right tools and data, it can be difficult to know where to start. This is where application performance monitoring (APM) comes in.
In this article, we're going to look at what application performance monitoring is and the benefits of using it. We'll also look at how you can use Inspector, an APM service, to identify performance bottlenecks in your Laravel application.
What is Application Performance Monitoring (APM)?
Application performance monitoring (APM) is the process of monitoring the performance of your application - just like it says on the tin. This can include monitoring the speed of your application, the number of requests it's handling, and the resources it's using. Depending on how you're monitoring your app's performance, you might also be able to drill down into the lifecycle of a request and see how long each part is taking. For instance, you might be able to get an overview of things like:
- How long it takes to boot up your application
- How long it takes to make the database queries
- How long it takes to render the view
- How long it takes to run some logic in your application
With the correct tools, all these things can be monitored so you can see where your application is slowing down. You can then use the results from the APM to identify performance bottlenecks and fix them.
What is Inspector?
One of the tools that you can use to monitor the performance of your application is an online tool called "Inspector".
Inspector is an APM service that allows you to monitor the performance of your Laravel application's:
- HTTP requests
- Artisan commands
- Jobs/Queues
- Background tasks
It's really easy to get started with Inspector, and they even offer a free tier too. So if you're not sure if it's for you, you can give it a try without any financial commitment, which is pretty cool!
To use it, you just need to install the Inspector package into your Laravel application and add your Inspector API key to your .env file. It will then start collecting data to monitor your application's performance. We'll come back to this later in the article.
First, we need to understand the benefits of using an APM service (especially if you're going to want to make a compelling case to your boss or clients to let you start using it!).
Benefits of Application Performance Monitoring
Using an APM service, like Inspector, can provide you with several benefits, including:
Use Real-life Data to Get a More Realistic View
How many times have you heard someone say "The application's running really slow today"?
You usually have to take their word for it and investigate the issue blindly hoping to run into the same issue. But performance issues aren't always consistent and they can be difficult to recreate. They aren't like bugs that you can typically reproduce on your local machine or have automatically reported to your logs or error tracking system.
One approach you could take would be to find the user/tenant/team in the system that's experiencing the issue and then create a lot of data (using something like factories and seeders) on your local machine to try and match the same conditions. But this is usually quite tedious to do and, unless you have some really in-depth seeders and factories, it can be hard to match the same conditions. Actual real-life data is always going to be better than randomly generated data because it represents the actual conditions of your application.
Another approach that you can take could be to take a copy of the production database and import it on your local machine. This would definitely give you a more realistic set of data to work with. But you have to be careful with this approach! It involves you having to connect to the production database outside of your application, which opens you up to making mistakes (such as accidentally deleting or updating data). There's also a privacy risk in the sense that you may have sensitive data or personally identifiable information (PII) on your local machine. So unless you have a really good reason to do this, or at least a way to anonymise the data, I'd recommend against this approach.
Both of the approaches above would only be useful if the issue is with the data. So if the issue is unrelated to the data and is being caused by something else, such as a slow-running database server, then the approaches above wouldn't be useful.
This is where an APM service, such as Inspector, can come in handy. It can seamlessly run in the background of your production application and collect real-life performance data. This means when someone reports a performance issue, you can have a look at the data to investigate the cause of the problem.
This can be a huge time saver and can help you to identify the cause of the issue a lot quicker.
Improve SEO and User Experience
Page load speed is an important factor in terms of search engine optimisation (SEO) and user experience. If your web pages are slow to load, you're likely going to frustrate your users, and it might lead to them leaving your site and going to a competitor's site. This can lead to a loss in revenue.
For this reason, Google uses page load speed as a ranking factor, according to Semrush. This means that if your pages are slow to load, you might not rank as highly in the search results as you'd like.
Semrush also mentions in their article that "the probability of bounce almost triples if your page takes longer than three seconds to load, according to Google".
So by having an overview of your application's performance, you can identify slow requests and make them faster. This can reduce the likelihood of being penalised by Google and improve your user experience.
Identify Code That Needs Refactoring
When you first built your application, you might have only imagined that you'd have a small number of users. So you might have written the code in a way that can easily handle a small number of users, but maybe not a large number of users.
In my opinion, there's nothing wrong with this. Writing code that can scale to a large number of users can sometimes be completely different to writing code that can handle a small number of users. Sometimes the approaches require a different way of thinking about the problem. So if you don't ever expect to have more than 100 users for your application, it might be a bit overkill to write your code in a way that can handle 100,000 users.
Of course, this doesn't mean you have an excuse to write poor-performing code. You still need to take pride in what you're building. It just means you could be excused for not writing massively scalable code if it's not needed.
But let's say that your application does start to grow. There may come a time when you realise that your initial solution isn't going to cut it anymore. You might have some slow requests that are slowing down your application.
Without experiencing the slow system yourself, the only way you might know that you need to refactor your code is if you get complaints from your users. But by then, it might be too late. You might have already lost some users and you might have a bad reputation.
However, by using an APM service, you can look at the history of your system and notice the gradual increase in the time it takes to process a request. You can then use this data to identify where the bottlenecks are and refactor the code to make it faster and more suitable for the increased load.
As a side note, if you're looking for ways to improve your Laravel code, you might be interested in my book "Battle Ready Laravel" which shows you how to audit, test, fix, and improve your Laravel applications.
A bonus of using the APM service is that you can also compare your application's performance before and after the refactoring process. This can help you to see if the refactoring process has made a difference.
Justify Refactoring to Stakeholders, Bosses, or Clients
Following on from the point above, you can use data from an APM service to justify refactoring to your stakeholders, bosses, or clients.
For a lot of developers, removing technical debt or refactoring code is something that they want to do, but it's not always easy to justify to the people who hold the purse strings. This is especially true if the code is working and there are no complaints from the users. It can be difficult to justify spending time and money on something that doesn't seem to be broken.
However, by using an APM service, you can collect figures and statistics to make a compelling case for why you need to refactor the code. Something along the lines of "Everything's running now, but in a few months, we're likely going to start seeing a lot of problems."
Facts and figures will always be more compelling than just saying "I think we should refactor this code."
So being able to spot performance bottlenecks and identify potential downtime before it happens can be a great benefit.
Service Level Agreement (SLA) Compliance
When your web application was sold to your clients, you might have agreed to a service level agreement (SLA). This is a contract that defines the level of service that you're expected to provide. This can sometimes include things like uptime, response times, and the number of requests that you can handle.
By using an APM service, you can keep an eye on your application's performance so you can be sure you're meeting the terms of your SLA. This can be especially important if you're working with clients who are paying you a lot of money for your service. You don't want to be in a position where you're not meeting the terms of your SLA and you're not aware of it.
Using Inspector with Laravel
Now that we have an overview of what Inspector is and the benefits of using an APM service, let's take a look at how you can use Inspector with your Laravel application.
Installation
Before we touch any code, you'll first need to sign up for an account with Inspector. You can do this by visiting: https://app.inspector.dev/register.
Once you've signed up, you'll want to create a new "application" in Inspector. Make sure to select "Laravel" as the application type.
After you've done this, we can then install the Inspector Laravel package into your application. You can do this by running the following command in your project root:
```bash
composer require inspector-apm/inspector-laravel
```
Once you've installed the package, you'll then want to add your Inspector API key (this will be displayed in the Inspector dashboard) to your .env file like so:
```
INSPECTOR_INGESTION_KEY=your-api-key-here
```
Following this, you'll then need to add Inspector's middleware to your app/Http/Kernel.php file. This is the code that's responsible for collecting data about your application's performance. You can add it to the $middlewareGroups array like so:
```php
/**
 * The application's route middleware groups.
 *
 * @var array
 */
protected $middlewareGroups = [
    'web' => [
        // ...
        \Inspector\Laravel\Middleware\WebRequestMonitoring::class,
    ],

    'api' => [
        // ...
        \Inspector\Laravel\Middleware\WebRequestMonitoring::class,
    ],
];
```
That's it! Inspector should now be installed and ready to start reporting data about your application's performance.
To check that it's installed and running correctly, you can run the test command that ships with the package. You can do this by running the following command in your terminal:
```bash
php artisan inspector:test
```
You can now visit the Inspector dashboard to see the data that's being collected about your application's performance.
Monitoring Requests
Now that Inspector is installed and running, you'll need to wait for some data to come in for you to start comparing requests. Depending on your application, you might want to do this yourself by interacting with your app just like any other user would. Alternatively, you might want to wait for a few hours, days, or weeks and let the data come in naturally.
It's best to remember that the more data you have, the better insights into your app's performance you'll get.
Comparing Requests
I left Inspector to run on my own personal blog for a few days to see what kind of data I'd get and what kind of insights I could gain from it.
Note: In the charts, you'll notice a sudden drop-off in collected data. This is because I ran out of transactions for the month while using the free tier.
In the image below, we can see the data that Inspector collected for my blog. It shows the average execution times for the requests to the different routes. As well as this, it also shows us other useful information such as the number of transactions (in this instance, these are the requests made to the routes), the average execution time for each route, and the average amount of memory used.
By using the data in the above chart, we can see that the /blog/{post} route is the most visited route on the blog. This is the route that's responsible for displaying the blog posts. We can also see that the average execution times are relatively consistent, but that there was a spike in the average execution time on the 20th February 2024. This might be something we'd want to investigate.
Inspector also provides another chart that gives information about the number of transactions over a given period of time. This information can be handy for pairing up with the average execution time. For instance, you may find that the average execution time spikes around the same time that the number of transactions spikes.
Inspector then allows you to drill down into a specific route to see the performance of the requests to that route. In our case, we're going to look at the /blog/{post} route.
The image below shows the average execution time for the /blog/{post} route:
In the image, you can't see the specific numbers (they're only displayed when hovering over the bars in the chart), but I can tell you that the /blog/{post} route received 5492 requests in the space of 3 days. This is a breakdown of the average execution times for the requests to the route:
| Execution time | Number of requests |
|---|---|
| 0-85ms | 5241 |
| 85-170ms | 170 |
| 170-255ms | 59 |
| 255-340ms | 17 |
| 340-425ms | 3 |
| 425ms+ | 2 |
As we can see, the majority of the requests are being handled in a reasonable amount of time. But there are a few requests that are taking longer. This is something we might want to investigate to see if there are any improvements we can make.
For the purposes of the article, we're going to compare the requests that are being handled in the 0-85ms range to the requests that are being handled in the 425ms+ range. We're doing this purely so we can see how Inspector can be used to compare requests and highlight the differences between them.
In a real-life scenario, I'd be tempted to compare the faster requests to the requests in the 170-255ms range. This is because, beyond the 255ms mark, there aren't many requests, so the ones taking longer than 255ms are likely outliers and might not be worth investigating. This all depends on your application, though. In this instance, we're just monitoring the performance of a blog, so if an article takes slightly longer to load than the others, it's not the end of the world. But if you're working with a SaaS application or an e-commerce store, you might want to investigate the requests that are taking longer than 255ms.
In the image below, we can see a comparison of the requests that are being handled in the 0-85ms range (on the left) to the requests that are being handled in the 425ms+ range (on the right):
As we can see on the left-hand side, the longest part of the request is the rendering of the Blade view which takes 12.28ms. This is the part of the request that's responsible for rendering the HTML that's sent to the user's browser.
However, on the right-hand side, we can see that the longest part of the request is the database statement that's inserting a new row into the session table. This is taking 371.93ms.
This is useful because it has highlighted that this particular interaction with the database is what slowed down the request the most. In this particular instance, I'm not too concerned because it only happened twice. But if it was happening more frequently, I might want to investigate it further.
Possible Reasons for Slow Requests
As we've briefly mentioned, there are many reasons why a request might be slow in comparison to other requests.
Slow or Expensive Database Queries
You may have a slow database query that's running on your application (like in our example above). This might be due to a missing index, a slow-running database server, or a large amount of data that's being processed.
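For example, if an APM trace shows a query that filters or sorts on a column without an index, adding one via a migration can make a big difference. Here's a minimal sketch, assuming a hypothetical posts table with a published_at column:

```php
<?php

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up(): void
    {
        // Add an index to the column that the slow query filters or sorts on.
        Schema::table('posts', function (Blueprint $table) {
            $table->index('published_at');
        });
    }

    public function down(): void
    {
        Schema::table('posts', function (Blueprint $table) {
            $table->dropIndex(['published_at']);
        });
    }
};
```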
It may also be that you're making a large number of database queries which could be reduced. This could potentially be improved by using eager loading or by using a cache to store some of the data.
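To illustrate the eager loading point, here's a quick sketch of the classic "N+1" problem and how to avoid it, assuming a hypothetical Post model with an author relationship:

```php
use App\Models\Post;

// N+1: one query for the posts, plus one extra query per post to load its author.
$posts = Post::all();

foreach ($posts as $post) {
    echo $post->author->name;
}

// Eager loading: two queries in total, regardless of how many posts there are.
$posts = Post::with('author')->get();

foreach ($posts as $post) {
    echo $post->author->name;
}
```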
Slow or Expensive API Requests
Your application might be slowing down if you're making calls to an external API. Unfortunately, the performance of an external API is out of your control. But, depending on the API and the nature of the request, you might be able to make some performance improvements in your application.
For example, let's say you're sending an email to 500 users using Mailgun. Rather than sending 500 API requests to Mailgun (one call for each recipient), you may be able to use Mailgun's batch-sending feature to send all the emails in a single request. This would reduce the number of requests that your application is making to Mailgun and could potentially speed up the process.
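As a rough sketch of that idea, the snippet below sends a single request to Mailgun's messages endpoint and uses recipient variables to personalise each email. Treat the exact parameters and config keys here as assumptions to double-check against Mailgun's documentation:

```php
use Illuminate\Support\Facades\Http;

// One API call covers every recipient; Mailgun substitutes the
// %recipient.*% placeholders per user using the recipient variables.
$recipients = [
    'jane@example.com' => ['name' => 'Jane'],
    'joe@example.com' => ['name' => 'Joe'],
];

Http::asForm()
    ->withBasicAuth('api', config('services.mailgun.secret'))
    ->post('https://api.mailgun.net/v3/'.config('services.mailgun.domain').'/messages', [
        'from' => 'Acme <noreply@example.com>',
        'to' => implode(',', array_keys($recipients)),
        'subject' => 'Hello, %recipient.name%!',
        'text' => 'Hi %recipient.name%, thanks for signing up.',
        'recipient-variables' => json_encode($recipients),
    ]);
```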
Another approach to reducing the impact that the external API has on your project's speed could be to make use of your Laravel application's cache. For example, let's say you want to fetch the exchange rates for a currency on a given date in the past. You could make a request to the API to fetch the exchange rates and then store them in the cache. This means the next time you need to check the exchange rates for that date, you can fetch them from the cache rather than making another request to the API.
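Here's a minimal sketch of that caching approach, assuming a hypothetical exchange rate API at api.example.com. Since historical rates never change, they can safely be cached forever:

```php
use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\Http;

function exchangeRatesFor(string $date): array
{
    // Only hit the external API the first time a date is requested;
    // every subsequent request for that date is served from the cache.
    return Cache::rememberForever("exchange-rates:{$date}", function () use ($date) {
        return Http::get("https://api.example.com/rates/{$date}")
            ->throw()
            ->json('rates');
    });
}

// First call hits the API; later calls for the same date hit the cache.
$rates = exchangeRatesFor('2024-02-20');
```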
Side note: If you're interested in learning how to build robust, powerful, and testable API integrations in Laravel using Saloon, you might be interested in my 440+ page book called Consuming APIs In Laravel.
Processing Large Amounts of Data
An unfortunate reality of working with data is that the more data you have, the longer it takes to process. This is especially true if you're working with huge amounts of data.
As we've mentioned earlier, there's a difference in the way you write code for a small number of users and a large number of users. And I also think that as code is refactored to be more performant, it can sometimes get more complex and harder to understand at a quick glance.
There isn't really a one-size-fits-all solution to this problem. It's going to depend on the nature of the data and the type of processing that you're doing. But some potential solutions could be to use a queue to process the data, to use a cache to store the data, or to use a batch processing feature if it's available.
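As one example of that approach in Laravel, you could chunk through the records and push the heavy work onto the queue so the HTTP request itself stays fast. The ProcessUserReport job below is just a hypothetical placeholder:

```php
use App\Jobs\ProcessUserReport;
use App\Models\User;

// Walk through the table in manageable chunks instead of loading every
// row into memory at once, and hand the heavy lifting to queued jobs.
User::query()
    ->chunkById(500, function ($users) {
        foreach ($users as $user) {
            ProcessUserReport::dispatch($user);
        }
    });
```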
Other Processes Running on the Server
Although this isn't tied directly to your application, it's worth checking what other processes are running on your server. For example, you might have some queue workers that are running on the same server as your application. If these queue workers are using a lot of resources, it could slow down your application.
Should I Use Inspector?
Yes! My honest answer is that you should use Inspector, or at least some other form of APM. Although I only gave Inspector a trial on my blog, I can see the potential benefits of using it on a larger application. So I'll definitely be bringing it up in conversation with my clients and explaining the benefits of using it.
My favourite part of using it is being able to compare requests side-by-side. For me, it helped to visualise the differences between the requests and made the bottleneck (in this case, the database statement) really stand out.
In the past, I've had to blindly investigate performance issues and had to just keep making changes to the code until the performance improved. I'm kicking myself now though, because if I'd used Inspector, I likely would have been able to pinpoint the issue a lot quicker.
Conclusion
Hopefully, this article has given you an overview of what application performance monitoring is and how you can use Inspector to identify performance bottlenecks in your Laravel application.
If you're interested in learning more about Inspector and the other features they offer, I'd definitely recommend checking out their website.
Keep on building awesome stuff! 🚀