In this story, I’m going to tell you how we developed a structured approach to solving complex technical issues and successfully improved our app’s performance by up to 30 times.
We started the development of a new enterprise app by gathering technical requirements. Initially, we found out that our app should include these primary features:
- Reports containing charts
- A simple task tracking system
- A list of all the company’s employees
All this data also had to be available for offline use. Doesn’t sound too hard, right? We thought so too. We picked our initial technology stack, basing it on JSON for client-server interactions and SQLite for offline data storage on the mobile device. On iOS, we used MagicalRecord, a CoreData wrapper that was very popular a few years ago. Today it would definitely be the wrong choice, since it’s written in Objective-C and hasn’t received any updates over the last 3 years. In 2014, however, it was free, open-source and had 10K+ stars on GitHub, though the main reason we chose it was the lack of any alternatives. On Android, we decided to use ORMLite for the very same reasons.
Everything went fine in the early stages of development. We had some demo reports and a few dozen employees on our development server. However, we ran into serious performance issues right after selling our product to the first customer. Our colleagues from the backend team imported part of the client’s production database into our test environment, and we found out that our app was not ready to handle a list of 200K employees. It took 5 to 10 minutes to download the full list, scrolling performance was poor, and search just didn’t work at all. That’s not what our end users expected. Another issue was reports. Some of them contained charts with millions of points, so we hit the same problems: slow downloads and poor performance on both platforms.
We tried some hotfixes, introducing pagination for client-server requests and batch writing to the local database, but we were unable to raise performance high enough. Luckily for us, our sales department had signed a 6-month contract with the client, and the first phase was updating some of their internal systems, so we had a month or two to fix our issues. We understood that we needed a thorough investigation of what was going wrong instead of writing chaotic bug fixes. Below, I’m going to describe our approach in detail.
We started by optimizing employees list performance. After a few failed attempts at quick bug fixing, we decided to divide the process of downloading the employees list and showing it in the UI into smaller steps, and to measure the duration of each step to find out what needed to be fixed.
We found that waiting for a server response (30s) and writing to the local database (5–10m) were the two longest parts of the entire process, while the other parts took just a few hundred milliseconds each and therefore didn’t require our attention.
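The per-step measurement can be sketched as a small timing harness (a minimal illustration in Python; the step name and the workload below are placeholders, not our production pipeline):

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def step(name):
    """Record the wall-clock duration of one pipeline step."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - start

# Illustrative pipeline: in our case the real steps were the network
# request, JSON parsing, the database write, and the UI update.
with step("parse_json"):
    sum(range(100_000))  # stand-in for actual parsing work

slowest = max(timings, key=timings.get)
print(f"slowest step: {slowest} ({timings[slowest]:.3f}s)")
```

Ranking the recorded durations is what pointed us at the server response and the database write rather than at the UI code.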
Before any optimizations, we had a single request to download the full list of employees. This approach was chosen intentionally because no single part of this list carries any value for the end user. Nobody wants to see just the employees whose last names start with the letter A. Our users need the full list to access all contacts and perform an offline search.
We also had a favorite contacts feature with sync between the user’s devices through our server. First of all, we moved the favorites data into a separate request and a separate table in our local database. That didn’t give us a huge performance increase, but it allowed us to enable in-memory storage of the full employees list on the server side.
The final and most important part of response time optimization was the introduction of delta-updates. As I stated above, we stored the full list of employees in memory on the server side. That reduced response time from ~30s to just 1–2 seconds; network latency prevented us from reducing it any further. After discussing our solution with the backend and sales teams, we also found out that the employees list usually changed only about once a day, so there was no need for more frequent updates. So, to reduce the load on the server side and speed up database writes on the mobile client, we introduced delta-updates. Right after the first install, our app downloads the full list, but the next day it asks the server for just a small delta-update, providing the version (timestamp) of the latest data the app has.
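The delta-update scheme can be sketched like this (a minimal sketch; the in-memory “server” list, field names, and timestamps are illustrative, not our real API):

```python
# The client remembers the version (timestamp) of its latest data and
# asks the server only for rows that changed after that version.
SERVER_EMPLOYEES = [
    {"id": 1, "name": "Ivanov", "updated_at": 100},
    {"id": 2, "name": "Petrov", "updated_at": 250},
    {"id": 3, "name": "Sidorov", "updated_at": 300},
]

def fetch_employees(since=None):
    """Full list on first install (since=None), delta afterwards."""
    if since is None:
        return SERVER_EMPLOYEES
    return [e for e in SERVER_EMPLOYEES if e["updated_at"] > since]

full = fetch_employees()                      # first launch: everything
local_version = max(e["updated_at"] for e in full)
delta = fetch_employees(since=local_version)  # next day: only the changes
print(len(full), len(delta))                  # 3 0
```

The key point is that the expensive full download and the expensive full database write happen only once, on first install; every later sync touches just the handful of changed rows.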
Delta-updates dramatically reduced the employees list update time. However, we still had two issues:
- The initial load was too slow due to SQLite’s write speed
- Applying several delta-updates (for example, after 2 weeks of inactivity) also took a long time for the exact same reason
Since batch writing hadn’t increased performance, we decided to try a different approach. Our hypothesis was that either SQLite or MagicalRecord was the bottleneck in our case. So, we proposed 2 possible solutions:
- Switching the ORM
- Replacing SQLite with another database
We didn’t find any replacement for MagicalRecord, so we decided to use plain CoreData directly. As the SQLite replacement, we chose Realm. We then developed a synthetic test to measure the write performance for 200K entities. The results were almost the same, so we decided to include MagicalRecord in the test as well. We were surprised to find that all 3 tests ran for almost the same amount of time. And here we finally found our issue: creating relations between entities. In our synthetic test, we wrote to a single table, while in the production app we had 2 tables with relations. See the image below:
Each employee had approximately 3 contacts: an email, a mobile phone, and a work phone. So, we needed to write 200K employees to one table, 600K contacts to another, and also create 600K relations between the two tables.
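The shape of the updated synthetic test can be illustrated with plain SQLite (our real tests ran through MagicalRecord, CoreData, and Realm on-device; the schema below and the scaled-down row counts are illustrative):

```python
# Write employees and their contacts, where each contact row carries a
# foreign-key relation back to its employee, and time the whole batch.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE employee (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE contact (
        id INTEGER PRIMARY KEY,
        value TEXT,
        employee_id INTEGER REFERENCES employee(id)
    );
""")

N = 1_000  # scaled down from the 200K employees in the real test
start = time.perf_counter()
with conn:  # one transaction for the whole batch
    conn.executemany(
        "INSERT INTO employee (id, name) VALUES (?, ?)",
        [(i, f"employee {i}") for i in range(N)],
    )
    # ~3 contacts per employee, each insert also creating the relation
    conn.executemany(
        "INSERT INTO contact (value, employee_id) VALUES (?, ?)",
        [(f"contact {i}-{j}", i) for i in range(N) for j in range(3)],
    )
elapsed = time.perf_counter() - start
rows = conn.execute("SELECT COUNT(*) FROM contact").fetchone()[0]
print(rows, f"{elapsed:.3f}s")
```

It was exactly this relation-creating step, absent from our first single-table test, that separated the fast and slow contenders.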
We updated our synthetic test and found that CoreData (or SQLite itself) handles relations extremely slowly. We were able to achieve up to a 30x performance increase by switching from CoreData to Realm, primarily thanks to Realm’s NoSQL nature. You can find more technical details and explanations here. See the chart below:
So, we achieved great download performance, cutting the initial load time from a few minutes to just 15 seconds. Scrolling was now smooth too. However, we were still unhappy with search performance. In some cases, it took minutes to reveal all the search results. Below, I’m going to explain why it was so complicated.
Here is the list of all search fields in 2 different tables:
We had to support full-text search over first name, middle name, last name, any combination of those three, position, company, division, email and phone. Remember, we had 200K employees and almost 600K contacts (both emails and phones) in our database. We had to run 8 search queries for each user input, and processing the final response from the database took too much time. Another complication was the wide use of Cyrillic characters: all names, positions, companies, and divisions were Cyrillic-only, while emails were a mix of Latin characters and digits, and phones were digits-only. I’ll skip the deeper details on phone extensions.
First, we decided to limit the minimum search query length to 3 characters. Then, we performed some basic analysis of those characters. If there were Latin characters, the query was definitely an email address, so instead of running 8 queries we could run just one. We did the same with phone numbers; however, since some emails contained digits, we ran 2 queries (over emails and phones) for digit-only input. With this optimization, we cut the number of search queries in half.
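The pre-analysis step can be sketched as a small classifier that maps the user’s input to the set of fields worth querying (the category names are illustrative; the real implementation ran inside our database layer):

```python
def classify_query(q: str):
    """Decide which search fields a query can possibly match."""
    if len(q) < 3:
        return []                      # minimum query length: 3 characters
    has_latin = any("a" <= c.lower() <= "z" for c in q)
    has_cyrillic = any("\u0400" <= c <= "\u04ff" for c in q)
    if has_latin:
        return ["email"]               # Latin letters appear only in emails
    if q.isdigit():
        return ["email", "phone"]      # digits occur in both fields
    if has_cyrillic:                   # names, positions, etc. are Cyrillic-only
        return ["name", "position", "company", "division"]
    return ["email", "phone"]

print(classify_query("ivan"))    # ['email']
print(classify_query("495"))     # ['email', 'phone']
print(classify_query("Иванов"))  # ['name', 'position', 'company', 'division']
```

One cheap character scan per keystroke replaces several expensive database queries that could never have matched anyway.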
We still needed to improve name, position, division and company title search performance. After more research across recent publications on different websites, we found that our main issue was case-insensitive search. It worked very fast for Latin-only characters but 8–12 times more slowly for everything else, due to checks for similar characters like a, à, á, â, ä, ã, æ, å, ā. There are 9 different A letters! And the same goes for many others. So, we had to turn off case-insensitive search and create lowercase fields in our tables. That gave us an 8 to 12 times performance increase, depending on the specific word.
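The lowercase-field trick looks roughly like this (a minimal sketch over an in-memory list; in the real app the lowercased copy was an extra indexed column written alongside the original field):

```python
# Store a lowercased copy of each searchable field at WRITE time, then
# run a case-SENSITIVE query against it with lowercased input. This
# avoids per-character case folding during every search.
employees = [{"last_name": "Иванов"}, {"last_name": "ПЕТРОВ"}]

# Done once, when the record is written:
for e in employees:
    e["last_name_lower"] = e["last_name"].lower()

def search(query: str):
    q = query.lower()  # normalize the input once per query
    return [e["last_name"] for e in employees
            if e["last_name_lower"].startswith(q)]

print(search("иВаН"))  # ['Иванов']
```

The cost of case folding is paid once per write instead of once per row per search, which is where the 8–12x came from.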
We also removed some combinations of first, middle and last name search using our knowledge of Russian writing conventions. Users usually use one of 7 combinations:
- First Last
- Last First
- First Middle
- First Middle Last
- Last First Middle
Nobody in Russia would search for a Middle First or a Middle Last combination. That knowledge let us remove, for example, the search for a single-word query over the middle name field. With this final update, we achieved another 1.5–2 times performance increase.
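The pruning above can be sketched as a lookup table from word count to the field orders actually worth querying (the exact table is my illustration of the rule, not our production list):

```python
# For a multi-word query, try only the field orders Russian users
# actually type, instead of every permutation of first/middle/last.
ALLOWED = {
    1: [("first",), ("last",)],          # never the middle name alone
    2: [("first", "last"), ("last", "first"), ("first", "middle")],
    3: [("first", "middle", "last"), ("last", "first", "middle")],
}

def field_orders(query: str):
    """Return the name-field orders to run queries against."""
    words = query.split()
    return ALLOWED.get(len(words), [])

print(len(field_orders("Иван Петров")))  # 2-word query: 3 orders, not 6
```

Each dropped order is one fewer query against a 200K-row table, which is where the extra 1.5–2x came from.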
We created a synthetic test to measure search performance. It contained ~100 different search queries, mostly focused on name and position search, as in real-life usage. And here are our results:
After applying all new search optimization algorithms we increased search performance 23 times!
After succeeding with the employees list performance improvements, we started a full transition to Realm. Eventually, we found that some reports were still working very slowly. We had implemented the same storage model as on our server side: storing each report in 4–5 different tables. We had 5 different report types, and each had a different data structure. For example, one of the reports consisted of chapters, chapters contained charts, charts contained series, and series contained points. We created 5 tables to store all this data. Some charts contained millions of points; it was a really huge amount of data.
After measuring timings, we found that the bottleneck in this case was, again, the database. However, we had already switched to Realm, which gave us some performance increase, and we didn’t really have any other database to switch to. The simplest solution we found was sending mapped data from the server directly to the UI and writing to the DB on a background thread for future use. It would work very fast but would create architectural hell. Think about it: you would need to map the DB’s managed objects to plain objects every time you pulled data from the database. Creating protocols (interfaces) wasn’t a viable option because of how managed objects behave: a managed object can change or even be deleted at any moment, and we need to observe that, while plain objects always stay the same.
Finally, we realized that if we could use plain objects in our UI, we didn’t need managed objects at all. We could store raw JSON data directly in the database and map it on demand. We still had some fields in the reports table, like title, update date, author, etc., but the report’s body didn’t contain any relations to other parts of the database, which is why it could be stored as raw JSON. This tweak gave us up to a 300x (yes, three hundred) performance increase and an amazing user experience.
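The raw-JSON idea can be sketched like this (we used Realm, but plain SQLite keeps the sketch self-contained; the schema and the report body are illustrative):

```python
# Keep queryable report metadata in real columns, but store the
# relation-free report body as one JSON blob and decode it into plain
# objects only when the report is actually opened.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE report (
        id INTEGER PRIMARY KEY,
        title TEXT,
        updated_at INTEGER,
        body TEXT  -- raw JSON: chapters -> charts -> series -> points
    )
""")

body = {"chapters": [{"charts": [{"series": [{"points": [1, 2, 3]}]}]}]}
with conn:
    conn.execute(
        "INSERT INTO report (title, updated_at, body) VALUES (?, ?, ?)",
        ("Sales Q1", 1700000000, json.dumps(body)),
    )

# Listing reports touches only the metadata columns, never the blob...
title = conn.execute("SELECT title FROM report").fetchone()[0]
# ...and the body is mapped to plain objects strictly on demand.
raw = conn.execute("SELECT body FROM report").fetchone()[0]
points = json.loads(raw)["chapters"][0]["charts"][0]["series"][0]["points"]
print(title, points)  # Sales Q1 [1, 2, 3]
```

Writing one blob per report replaces millions of row inserts and relation updates, which is what made the difference measured in hundreds of times.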
We spent almost 3 weeks trying to optimize CoreData queries, and we failed. However, we were able to stop those attempts and find a better solution. Switching databases was an extremely hard process: it took about 2 months to complete, and I’m still not sure it could have been done faster. In our iOS app, we used the VIPER architecture and implemented a service-oriented approach, but due to CoreData’s nature (we made wide use of NSFetchedResultsController to improve performance) we still had parts of our service (database) layer in Interactors. In terms of VIPER, however, an Interactor shouldn’t know anything about the specific database adapter in use, which would have required us to drop NSFetchedResultsController, which we couldn’t afford for performance reasons. Yes, it’s a circular dependency :) So, we decided to keep part of our service layer in Interactors, replacing NSFetchedResultsController with Realm’s notification mechanism. That was the hardest part; we didn’t change our database entities at all, just replaced NSManagedObject inheritance with RealmObject.
Both my iOS and Android teams really enjoyed working with Realm. It’s fast, easy to learn and easy to use. It requires much less code, especially on Android, compared to the old ORMLite and even the new Room.
What we learned:
- Before making any optimization, be sure you know the specific place that needs to be improved
- Don’t try to squeeze out every last bit of performance if you’re not sure what to do
- Experiment and try new technologies
- Always look for a solution suited to your specific task
Maybe Realm won’t work as well for your specific task as it did for ours, but before making your choice, try running some performance tests on your own tasks. The cost of switching the local database will always be bigger than the cost of running a few tests.
Realm will usually be faster, much faster, than CoreData due to its NoSQL nature. Its main advantage is extremely fast handling of relations between tables. It’s also much easier to use compared to existing SQL ORMs on both iOS and Android. However, you may run into some bugs and a lack of documentation in the deeper areas. Personally, I would go with Realm for my next app.