Rubén Rubio

Optimising a web application (I): seeing


This year, a client contacted me because they were having performance issues in one of their applications: slow responses, the server halting under load…

This particular application is seasonal: it is only used during the summer, and its peak usage occurs on weekends within that season. It is an application for managing the occupancy of buildings on different schedules: it allows checking users in and out, viewing the real-time occupancy of buildings, purchasing tickets, etc.

The application consists of a mobile application for end users, another for administrators, and their web equivalents, as well as a back office. The backend is written in PHP using Symfony, with a hexagonal architecture and no event system in place.

Their current infrastructure consists of one server for the database, a MariaDB instance, and another for the web stack: PHP, Apache, and Nginx. Both are dedicated servers.

This post is the first of a 3-part series on how to tackle bottlenecks in this application and boost its performance.


In order to improve performance, we first need to know where the issues reside. We need to see what is happening objectively, with measurable metrics, for two main reasons.

First, we need metrics on which paths of the application are slow, so we can prioritize what to improve. It does not make sense to optimize paths that are not important for the end user. For instance, in this case, it is acceptable to the client that the back office is not as fast as the rest of the application.

Second, we need metrics so we can measure whether the improvements we make are effective. In the end, we will present these metrics to the client so they can see how much we boosted performance.

In summary, we need observability: we have to monitor our application. There are several services for that purpose: Datadog, New Relic, Tideways… These services must be configured with proper limits, as they can become quite expensive:

Datadog meme

We chose New Relic because it has a free tier of 100 GB per month, which is enough for this use case. Besides, there is a bundle for Symfony that allows the metrics to be more integrated, such as having Symfony’s route names in New Relic transactions.

We installed New Relic's APM1 agent on the server and set it up following the official documentation. Once done, New Relic started to ingest data, so we only had to wait until there was enough data to analyze.
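As an illustration, a minimal configuration for ekino/newrelic-bundle might look like the following sketch. The application name is a made-up placeholder, and the `route` transaction-naming strategy is an assumption; check the bundle's README for the full set of options:

```yaml
# config/packages/ekino_new_relic.yaml (hypothetical values)
ekino_new_relic:
    enabled: true
    # Assumed name; this is what shows up in New Relic's UI
    application_name: 'building-occupancy-app'
    # Name transactions after Symfony route names instead of raw URL paths
    transaction_naming: route
```

Naming transactions by route is what lets New Relic group all requests to the same endpoint together, regardless of path parameters.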


We waited until after a weekend to analyze the metrics so that they would be significant, as the client had told us the peak usage happened during the weekends.

We can now analyze some of the charts New Relic offers in its default dashboard for PHP.

General load

This chart shows the load of the system throughout the day, segmented by the usage of each service: PHP, MySQL…

Web transaction time chart

  • There is a high load during the day and a low load at night; this matches what we expect.
  • MySQL has a high load, so it may be a possible optimization point.
  • There is an unusual peak on July 26th that may be due to a temporary load.


Transactions

The transactions chart shows the 5 slowest transactions, grouped by Symfony's route (thanks to ekino/newrelic-bundle).

Transactions chart

  • There are endpoints with high average response times, of almost 2 seconds.
  • Some endpoints have traces of more than 24 seconds, which is far too much.
  • These endpoints should be the first to be optimized.

Top 20 database operations

The top 20 database operations chart groups queries by the main table in the query, showing the percentage of total database time each one accounts for.

Top 20 database operations chart

  • There is one query that accounts for 43% of the database time in the application.
  • There are another two queries with 19% and 11% of the total time.
  • These three queries account for 73% of the database load!


After reviewing these metrics, we can conclude that the database is likely a bottleneck. As database optimizations can sometimes be achieved just by indexing columns or rewriting queries, this will be the first optimization step. It would also be a quick win.
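As a sketch of what that first step looks like (the table and column names here are hypothetical, since the actual schema is not shown): run the heavy query through `EXPLAIN` to see whether it scans the whole table, and if so, add an index on the filtered columns.

```sql
-- Hypothetical example: inspect the execution plan of a heavy query
EXPLAIN SELECT * FROM check_in
WHERE building_id = 42 AND created_at >= '2023-07-01';

-- If EXPLAIN reports a full table scan (type: ALL), a composite index
-- on the filtered columns usually helps:
CREATE INDEX idx_check_in_building_created
    ON check_in (building_id, created_at);
```

Re-running `EXPLAIN` after creating the index should show it being used (`key: idx_check_in_building_created`), and New Relic's database chart lets us verify the effect on the real traffic.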


  • We reviewed the need for a monitoring system when optimizing applications.
  • We listed different monitoring services that would suit our use case and gave the reasons to choose New Relic.
  • We analyzed New Relic’s metrics to find the application bottlenecks and have some points where we can start optimizing.

  1. Application Performance Management 
