A lot of web apps struggle with rendering heavy amounts of data, and as your project grows it becomes a bottleneck. Users' attention is hard to hold, and nobody will wait for a slow response; they'll go to Google and find an alternative to your product instead. That's the reality, so as a web developer you need to be smart about it. In today's article, I'm going to cover 3 ways to drastically improve data rendering in your web apps, which means more users staying on your site and, ultimately, higher revenue.
To simplify the examples, let's imagine we received a task from a manager: users wait too long before all the articles on the company website are rendered. So our initial data is a list of articles. You started investigating and found out that the server returns 1000 articles and the browser tries to render all 1000 at once. How can we fix that?
The first approach seems the most obvious, yet I've still seen a lot of websites without this feature.
Pagination is a fancy word for splitting data into chunks: instead of fetching and showing all 1000 articles at once, we ask the server for the first 20 or even 50 articles and have it tell us how many are left, so we know where to stop.
The basic info we need from the server to set up pagination is a limit and an offset. Some implementations also return totalPages or hasNext.
Be aware that this feature requires additional changes on the server side as well.
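To make the limit/offset idea concrete, here is a minimal sketch of the client-side math. The function and interface names (`pageToQuery`, `hasNext`, `Page`) and the API shape are my own assumptions for illustration, not part of any particular backend:

```typescript
// Hypothetical response shape: the server returns a chunk of items
// plus the total count, so the client can tell where to stop.
interface Page<T> {
  items: T[];
  total: number; // total number of articles on the server
}

// Translate a 1-based page number into the limit/offset query params.
function pageToQuery(page: number, limit: number): { limit: number; offset: number } {
  return { limit, offset: (page - 1) * limit };
}

// Derive hasNext on the client when the server only returns total.
function hasNext(page: number, limit: number, total: number): boolean {
  return page * limit < total;
}
```

With a limit of 20, page 1 maps to offset 0, page 3 to offset 40, and for 1000 articles `hasNext` turns false once you reach page 50.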
The next approach uses the same pagination logic under the hood, splitting data into chunks, but instead of showing controls to the user, you automatically request the next chunk when the user reaches the end of the list. For better UX, start requesting the next chunk a bit before the end of the list, so to the user it feels like everything was already loaded and shown right away.
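The "request a bit earlier" check boils down to comparing scroll position against the content height. Here is a sketch of that condition; the function name and the 300px threshold are illustrative assumptions, and in a real app you'd wire this to a scroll handler or an IntersectionObserver on a sentinel element:

```typescript
// Returns true when the user has scrolled close enough to the end
// of the list that the next chunk should be requested.
function shouldLoadNext(
  scrollTop: number,      // pixels already scrolled
  viewportHeight: number, // visible height of the list container
  contentHeight: number,  // full height of the rendered list
  threshold = 300         // start loading this many px before the end
): boolean {
  return scrollTop + viewportHeight >= contentHeight - threshold;
}
```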
Honestly, of all 3 ways, this one seems the most user-friendly to me and I like it the most. But every task has its right technique: some users like to click a button, and on some projects, instead of automatically loading the next page, you might need to add a "Load More" button, so always clarify the expectations with your client/manager.
For this approach, I couldn't find the right image, but what's better than seeing something? Right: trying it yourself. So instead of an image, I prepared a demo of how a virtualized list (VL) works. In this example, I randomly generate 10 000 users with images and render them using the virtualized list approach. Check how fast it is. Without VL it could even freeze the page, especially on mobile. But how does it work?
A virtualized list does what it says: it creates a virtual list of items. Instead of rendering all items right away, it defines the visible view size based on the parameters we provide, namely the height and width of the visible area and the height of each item. From that info, the VL can calculate how many items should be rendered immediately and what should be rendered as the user scrolls through the list.
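That calculation can be sketched in a few lines. This is a simplified version assuming fixed-height items; the function name and the `overscan` parameter are my own illustrative choices, and real libraries also handle variable heights and scroll anchoring:

```typescript
// Compute which slice of a long list should actually be rendered,
// given the scroll position and fixed item height.
function visibleRange(
  scrollTop: number,      // current scroll offset in px
  viewportHeight: number, // height of the visible area in px
  itemHeight: number,     // fixed height of each item in px
  totalItems: number,
  overscan = 3            // render a few extra items to avoid flicker
): { start: number; end: number } {
  const start = Math.max(0, Math.floor(scrollTop / itemHeight) - overscan);
  const visibleCount = Math.ceil(viewportHeight / itemHeight);
  const end = Math.min(totalItems, start + visibleCount + 2 * overscan);
  return { start, end };
}
```

So for 10 000 items of 50px in a 600px viewport, only around 18 items are rendered at any moment instead of all 10 000, which is exactly why the demo stays fast.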
Sounds like magic, but it's not, and there are plenty of ready-to-use solutions for different web stacks. In my example, I used a ready-to-use component from the library, which you can check here.
For the example with the articles optimization, first of all you need to clarify:
- How many more articles do we expect to have?
- How fast does the solution need to be?
- How much effort can we spend, and can we update the server (since the first two approaches require backend changes as well as frontend ones)?
- Are there any specific UI requirements?
And based on those answers, the approach to choose will be obvious.
Thanks for spending your time with me, and if you have any questions, please let me know; I'll be glad to help.