Rubén Rubio
Optimising a web application (II): indexing


In the previous post, we analysed New Relic's data to find the possible bottlenecks in a web application written in PHP using Symfony.

We saw that reviewing and optimising database queries could be a good starting point. In this post, we will perform different optimisations in the code and analyse the results.

Queries to optimise

We optimised different queries, both by adding indexes and rewriting queries. For brevity's sake, we will only show three examples: two that consist of adding an index to a column, and one that consists of rewriting a query. The rest of the optimisations we made are similar to these.

Repetitive queries

We had some queries that did not carry a high performance penalty individually, but they were executed on each request. So, even if they did not take a long time within a single execution, it was convenient to optimise them.

For mobile applications, users are loaded using a token, both for end-users and administrators. This token is sent with all API requests to authenticate the user, so the application executes a query like this one [1]:

SELECT *
FROM `user`
WHERE token = ?;

Nonetheless, the token column did not have an index, so the database had to read every row (a full table scan). Thus, as the number of users in the application grows, this query gets slower.

The solution is to add an index to this column:

CREATE INDEX IDX_TOKEN ON `user` (token);

We had more cases like this one, so we applied the same solution to them all.
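The effect of such an index can be illustrated with a small sketch. It uses SQLite through Python's sqlite3 module as a stand-in for the real database, and the table and index names are made up: before the index, the planner falls back to a full table scan; afterwards, it searches the index.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, token TEXT)")
conn.executemany("INSERT INTO users (token) VALUES (?)",
                 [(f"tok-{i}",) for i in range(1000)])

def plan(sql):
    # EXPLAIN QUERY PLAN returns rows whose last column describes the access path
    return " ".join(row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql))

before = plan("SELECT * FROM users WHERE token = 'tok-42'")
conn.execute("CREATE INDEX idx_users_token ON users (token)")
after = plan("SELECT * FROM users WHERE token = 'tok-42'")

print(before)  # a full table scan ("SCAN ...")
print(after)   # an index lookup ("SEARCH ... USING INDEX idx_users_token ...")
```

The exact wording of the plan varies with the SQLite version, but the switch from a scan to an index search is the point: lookup cost stops growing linearly with the number of rows.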

Slow queries

We had another case of poor performance. To generate a QR code for the user, the application queried the table with a log of all the orders, using the order number. This value was stored as a VARCHAR, and it was queried like this one:

SELECT *
FROM `orders_log`
WHERE order_number = ?;

At the beginning, this query performed well. Nonetheless, as the application grew and the table accumulated more rows (millions of them), performance decreased because the database had to scan all the rows in the table.

To solve the issue, like in the previous case, we added an index to the order_number column:

CREATE INDEX IDX_ORDER_NUMBER ON `orders_log` (order_number);

Rewriting a query

To check in users, the application queries the table where all payments are stored, called orders (from which orders_log is generated). This table had a TEXT column that persisted all the data related to a payment as JSON, as it was only meant for logging purposes.

However, the application had several iterations, and it ended up querying that JSON, as it was the easiest way to deliver value in time:

SELECT *
FROM `order`
WHERE data LIKE '%value%';

As in the previous cases, the query worked well at the beginning, when the table held few rows. But as the table grew, performance decreased, since the database had to read every row to answer the query.

In this case, we could not query the JSON fields directly, since the column was TEXT, and we could not add a simple index to it like we did in the other cases. TEXT columns require a full-text index, and for it to work, we also had to rewrite the query to use full-text search with MATCH ... AGAINST.

Before rewriting the query, as we were refactoring working code, we wrote an integration test (a test that writes to the database). It ensured that the changes we made did not impact the domain logic. As the code followed hexagonal architecture with the repository pattern, the query was isolated in a single method of a class, so we only needed to write a test for that method.
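As a sketch of this kind of test (in Python, with an in-memory SQLite database standing in for the real one; the table, column, and function names are illustrative, not the application's actual code), we pin down the behaviour of the isolated query before touching it:

```python
import sqlite3

def make_conn():
    # In-memory database standing in for the integration-test database
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, data TEXT)")
    conn.execute(
        """INSERT INTO orders (data) VALUES ('{"ticket": "ABC-123", "status": "paid"}')"""
    )
    return conn

def find_orders_containing(conn, value):
    # The query under refactoring, isolated repository-style in one function
    return conn.execute(
        "SELECT id FROM orders WHERE data LIKE ?", (f"%{value}%",)
    ).fetchall()

def test_find_orders_containing():
    # Characterization test: these assertions must keep passing
    # after the query is rewritten to use a full-text index
    conn = make_conn()
    assert find_orders_containing(conn, "ABC-123") == [(1,)]
    assert find_orders_containing(conn, "missing") == []

test_find_orders_containing()
```

The value of the test is that it only talks to the public method, not to the SQL inside it, so the query can be rewritten freely as long as the assertions hold.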

With the test in place, we proceeded to add the FULLTEXT index and to rewrite the query:

ALTER TABLE `order` ADD FULLTEXT (data);

SELECT *
FROM `order`
WHERE MATCH (data) AGAINST (:parametrized_values IN BOOLEAN MODE);

Once the test passed, we were confident that the rewritten query still followed the expected domain logic.
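The same idea can be sketched with SQLite's FTS5 extension via Python (an analogue of MySQL's FULLTEXT index; the table name and sample rows are invented for illustration): a full-text MATCH query is answered from the index instead of scanning every row the way LIKE '%value%' does.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# FTS5 virtual table playing the role of a FULLTEXT index on the TEXT column
conn.execute("CREATE VIRTUAL TABLE order_data USING fts5(data)")
conn.execute(
    """INSERT INTO order_data (data) VALUES ('{"event": "checkin", "user": "alice"}')"""
)
conn.execute(
    """INSERT INTO order_data (data) VALUES ('{"event": "refund", "user": "bob"}')"""
)

# Full-text search instead of LIKE '%...%': the match is resolved
# through the inverted index, with no full table scan
rows = conn.execute(
    "SELECT rowid FROM order_data WHERE order_data MATCH ?", ("checkin",)
).fetchall()
print(rows)  # [(1,)]
```

Note the trade-off, which applies to MySQL's MATCH ... AGAINST as well: full-text search matches whole tokens, not arbitrary substrings, so the rewritten query must be verified against the expected domain behaviour, which is what the integration test was for.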


We deployed these optimisations in three batches: on the 28th of July, the 1st of August, and the 10th of August. We analysed the performance after one full day, as the traffic peak happens during the daytime.

Next, we will review some of New Relic’s charts after the deployments, and we will discuss the numbers we get in comparison.


Web transactions time 2023/07/31

Throughput 2023/07/31

  • We see that the peak is lower after the first deploy.
  • In a complementary way, the throughput is higher; the less time it takes our application to give responses, the more requests it can serve.

User select performance

  • This is the chart of the query to get the user by token. We can see that, after the deployment, the query time decreases abruptly. From an average of around 100 ms, it decreases to less than 20 ms, with some peaks.


We now show the analysis of the response time, the throughput, the CPU usage and the database transaction time, three days before and three days after each of the deployments.

The database transaction time is so small that we cannot see differences in the raw numbers, so we will rely on New Relic's internal calculations to measure the difference.

1st deployment

Date Response time (ms) Throughput (%) CPU usage (%) Database transaction time (ms)
3 days before 578.83 41.58 7.85 0.02
3 days after 443.56 59.7 8.73 0.02
Difference (%) -22.7 +43.69 +11.13 -10.27

2nd deployment

Date Response time (ms) Throughput (%) CPU usage (%) Database transaction time (ms)
3 days before 448.34 58.33 8.98 0.02
3 days after 363.69 50.91 13.27 0.01
Difference (%) -18.88 -12.73 +47.75 -26.13

3rd deployment

Date Response time (ms) Throughput (%) CPU usage (%) Database transaction time (ms)
3 days before 378.72 53.1 13.98 0.01
3 days after 328.41 60.6 14.01 0.01
Difference (%) -13.28 +14.12 +0.21 -7.79


Date Response time (ms) Throughput (%) CPU usage (%) Database transaction time (ms)
Before 573.83 41.58 7.85 0.02
After 328.41 60.6 14.01 0.01
Difference (%) -57.23 +45 +22 -44.19


After reviewing the results, we can conclude that:

  • We improved the response time of the application by 57 %.
  • We increased the throughput of the application by 45 %.
  • We reduced the database transaction time by 44 %.

We see that the CPU usage increased by 22 %, up to 14 %. We suppose this is due to the cost of maintaining the new indexes. As its absolute value is not high, we consider it acceptable.

Even though we improved the performance of the application, we still have high response times. In the next post, we will see how we can further improve the application's performance by tackling the slowest remaining endpoints.


  • We reviewed how to optimise poor-performing queries using indexes.
  • We rewrote a query so it uses a FULLTEXT index.
  • We saw the results, concluding that we improved the application's performance.

  1. We are not showing the real query for simplicity's sake. 
