Happy to see you here and even happier to announce something new we’ve been working on lately!
I'm part of the Flexmonster team, the company behind several pivot-grid data visualization products such as Flexmonster Pivot Table & Charts and WebDataRocks. We keep developing and improving these products, but we also can't help pursuing the new ideas that come to us along the way.
As you can see, all our work is focused on data and its representation in tabular form. With broad experience and a large customer base, we started noticing one recurring request from our users.
Many companies need to visualize so many records at once that the browser sometimes simply cannot cope, or it takes hours to process the data and draw it on the grid.
And even if everything eventually ends up on the page, any subsequent action, be it filtering or searching for a specific value, takes just as long. Such performance does nothing for work efficiency, let alone for good results.
As a result, the mere mention of analyzing a large amount of data instantly brings to mind a long, tedious, and irritating process.
Looking back on all our development experience, we decided to enter the game.
We researched 61 components on the market, and only 28% of them said anything about working with large amounts of data. At the same time, the term "large dataset" means something different to everyone: for some it's 10,000 records, for others a million. The same goes for "fast performance." But none of those interpretations met our requirements.
So we decided to develop such a solution ourselves: a super-fast, powerful data grid that works with millions of records instantly, so you can access all your data without waiting.
A bit more market research, a study of different approaches, defining our product's principles and goals, and some practical experiments later, we have something to show you: meet DataTable.dev!
It's a grid library, but with the kind of performance you dream of. The component reacts so quickly to every action that the user seems to interact with the data directly, without a computer as an intermediary. And it doesn't matter how much data you load: the demo shows 11 million rows from a 1.6 GB file, and the grid can actually handle far more than that!
We focused on creating a solution that can rightly be called **the destroyer of the slow-data-analysis stereotype** and on providing you with fast, convenient software.
We developed our own approach to how our data table operates. We revisited the way our previous products load and render data in the browser window and found several optimizations that significantly speed up these processes.
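One common optimization in this family is row virtualization: only the rows visible in the viewport are actually rendered, so the total row count barely affects drawing cost. Here's a minimal TypeScript sketch of the idea (a general illustration with hypothetical names, not DataTable.dev's actual implementation):

```typescript
// Compute which slice of rows needs to be rendered for the current
// scroll position. Only this small window of rows is ever placed in
// the DOM, regardless of how many rows the dataset contains.
interface VisibleRange {
  start: number;   // first row index to render (inclusive)
  end: number;     // last row index to render (exclusive)
  offsetY: number; // pixel offset used to position the rendered slice
}

function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  totalRows: number,
  overscan = 3 // extra rows above/below to avoid flicker while scrolling
): VisibleRange {
  const first = Math.floor(scrollTop / rowHeight);
  const count = Math.ceil(viewportHeight / rowHeight);
  const start = Math.max(0, first - overscan);
  const end = Math.min(totalRows, first + count + overscan);
  return { start, end, offsetY: start * rowHeight };
}
```

With 11 million rows at 24 px each in a 600 px viewport, only about 31 rows need to exist in the DOM at any moment.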
We specified the structure of rendering frames and how processes should behave in different situations, defined their execution time and order, and learned to make use of the free time left within each frame.
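To illustrate what using the free time in frames can look like in practice, here is a TypeScript sketch of frame-budgeted chunking: heavy work is split into pieces, and each piece runs only while the current frame still has time left in its budget. This is a general sketch under our own assumptions (the function names and the injected clock and scheduler are hypothetical), not the library's actual scheduler:

```typescript
// Process items in chunks, yielding control whenever the frame budget
// (e.g. ~16 ms at 60 fps) is exhausted, so the UI never stalls.
// The clock and scheduler are injected: in a browser they would be
// performance.now() and requestAnimationFrame; in tests, fakes.
function processInFrames<T>(
  items: T[],
  handle: (item: T) => void,
  budgetMs: number,
  now: () => number,
  schedule: (cb: () => void) => void
): void {
  let i = 0;
  const runChunk = () => {
    const deadline = now() + budgetMs;
    // Work through items until the dataset is done or the frame budget runs out.
    while (i < items.length && now() < deadline) {
      handle(items[i++]);
    }
    if (i < items.length) schedule(runChunk); // resume in the next frame
  };
  runChunk();
}
```

In a browser you would pass `() => performance.now()` as the clock and `requestAnimationFrame` as the scheduler; injecting both keeps the scheduling logic testable without a DOM.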
We are continuing to develop this idea and want to bring the same scheme to our other products.
But right now, you can check our current progress, give us feedback, and sign up for our newsletter, where we'll be sharing the component's progress live. On the website, you can read more about the product, play with the demo to see the scale of data the table can handle, and dig into our approach to understand it better from the inside.
We have also launched our idea on Product Hunt and hope to get enough feedback to keep developing and improving our approach and product.
I'm really excited to finally be able to show you our results and hear your comments and feedback because that's what motivates us to move forward!
So share all your thoughts and suggestions in the comments; we'll be very happy to broaden our horizons and hear more points of view!