Amil Amirov

Intro to Redis

Suppose you, as a client, send a request to a server. Performance clearly matters: nobody wants to wait a long time for a response. Say the data you ask for lives on disk. Disk reads and writes involve mechanical and electronic operations, so going to disk directly is comparatively slow. For data that is requested rarely this may not be a problem, but take a request for information about Lionel Messi: that data will very likely be requested all the time, and fetching it from disk on every request drags down application performance. At the end of the day you are left with unsatisfied users and bad engineering. One solution is an in-memory storage system. Compared to disk, memory offers higher throughput and lower latency, since access involves no mechanical parts, and therefore better performance. These systems have drawbacks too. Not all data fits in memory (disks have far more capacity), and maintaining high performance while growing memory capacity is a hard problem. In addition, data held only in memory is not persistent, so there is a risk of data loss when the system goes down. In this article I give a basic introduction to one of these key-value storage systems: Redis.
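To make the caching idea concrete, here is a minimal cache-aside sketch in Python using the redis-py client. The `player:<id>` key name, the one-hour TTL and the `load_player_from_disk` helper are illustrative assumptions, not something prescribed by Redis or by this article.

```python
import json

import redis  # redis-py client: pip install redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)


def load_player_from_disk(player_id: str) -> dict:
    # Hypothetical slow lookup against a disk-backed database.
    return {"id": player_id, "name": "Lionel Messi"}


def get_player(player_id: str) -> dict:
    """Cache-aside: serve from Redis if possible, otherwise hit disk and cache the result."""
    cache_key = f"player:{player_id}"
    cached = r.get(cache_key)
    if cached is not None:
        return json.loads(cached)                # cache hit: answered from memory
    data = load_player_from_disk(player_id)      # cache miss: take the slow path
    r.set(cache_key, json.dumps(data), ex=3600)  # keep it hot for an hour
    return data
```

With this pattern the first request for Messi pays the disk price, and every later request within the TTL is answered from memory.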
Because memory capacity is small compared to disk storage, growing real-time and interactive workloads eventually demand more memory than one machine can provide, and maintaining performance while adding memory is itself a problem. Redis answers this with clustering: data is partitioned across distributed nodes, so storage capacity grows with the cluster. But the decentralized design raises another problem: a client may need two connections before its request is answered, because the first node it contacts can redirect it to the node that actually owns the key, which again limits performance. To make this more scalable there is a client-side key-to-node caching technique: the client remembers which node owns which keys and sends each request straight to the right node, so a single connection is enough (don't worry, I will explain it in the next articles). Experiments show that applying this technique can improve overall performance by nearly 2x.
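For context, Redis Cluster maps every key to one of 16384 hash slots with CRC16(key) mod 16384, and each slot is owned by exactly one master node. A cluster-aware client keeps a slot-to-node table locally so it can route commands in a single hop. The sketch below shows the slot calculation (hash-tag handling is ignored for brevity) and an assumed usage of redis-py's `RedisCluster` client, which maintains that table for you; the host and port values are placeholders.

```python
from redis.cluster import RedisCluster  # redis-py >= 4.1


def crc16_xmodem(data: bytes) -> int:
    """Bit-by-bit CRC16-CCITT (XMODEM), the checksum variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc


def key_slot(key: str) -> int:
    # Redis Cluster assigns: slot = CRC16(key) mod 16384.
    return crc16_xmodem(key.encode()) % 16384


print(key_slot("player:messi"))  # a slot number in 0..16383

# A cluster-aware client caches the slot -> node map, so each command
# goes straight to the node that owns the key's slot.
rc = RedisCluster(host="localhost", port=7000)  # any reachable cluster node
rc.set("player:messi", "Lionel Messi")
print(rc.get("player:messi"))
```

Keeping the slot-to-node map on the client is what removes the extra redirect hop described above.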
Redis also offers data replication to slave (replica) nodes to keep data safe, but that does not solve the problem completely. Some risk of data loss remains, because consistency between the master and its slaves is weak: replication and the response to the client can happen out of order, so the master may answer the client before the write has actually been replicated, and if the master then fails the data is lost without the client ever being notified. To address this, a method called Master-Slave Semi Synchronization has been proposed. It relies on TCP to enforce the ordering of data replication and the request-response cycle, so by the time the client receives its "OK" the data must already have been replicated. This improves data reliability, and experiments show the performance overhead stays within 5% (performance overhead here means the additional resources the method consumes, such as CPU, network bandwidth and so on).
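Semi synchronization as described above is a proposed extension, but stock Redis already exposes a related primitive: the WAIT command blocks the calling client until a given number of replicas have acknowledged all writes previously sent on that connection. The sketch below is an assumed pattern built on WAIT, not the semi-synchronization method itself; the key, value and timeout are placeholders.

```python
import redis  # redis-py client: pip install redis

r = redis.Redis(host="localhost", port=6379)


def set_with_ack(key: str, value: str, replicas: int = 1, timeout_ms: int = 1000) -> bool:
    """Write, then block until at least `replicas` replicas acknowledge the write."""
    r.set(key, value)
    # WAIT returns how many replicas have acknowledged; fewer than requested
    # within the timeout means the write may still live only on the master.
    acked = r.execute_command("WAIT", replicas, timeout_ms)
    return acked >= replicas


if set_with_ack("player:messi", "Lionel Messi"):
    print("replicated to at least one replica")
else:
    print("replication not confirmed; treat the write as at risk")
```

Note that WAIT does not turn replication into a fully synchronous protocol; it only tells the client how far replication has progressed, which is why stronger schemes like the one described above have been proposed.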
This article is just a basic introduction. In the next articles I'm going to explain some of the darker points: In-Memory, Key-Value, Scalability, Reliability, Key-to-Node Caching and Semi Synchronization. Good luck to you, learner!
