Developers are often drawn to the Elixir language and stack because it's known for handling massive numbers of concurrent requests and scaling with ease. This makes Elixir a great choice for building highly performant applications.
However, sometimes operations are computationally expensive and can slow down your application. This is where caching comes in.
In this article, we'll explore how Cachex, a powerful library tailored for Elixir, can help you add caching to your application and improve its performance.
Understanding Caching and Its Importance in Elixir
Caching is the art of storing data temporarily to reduce redundancy and improve access times. The primary benefits of caching include faster data retrieval, reduced load on primary data sources, and enhanced application responsiveness. However, caching is not without its drawbacks. Over-reliance can lead to stale data, and improper cache management can result in increased complexity.
Caching can be added to reduce bottlenecks and improve performance in Elixir: for example, when dealing with external systems, databases, or computationally expensive operations. Caching can also be used to store frequently accessed data in memory, such as user sessions or application state.
Now let's take a quick look at some benefits and disadvantages of caching.
Benefits of Caching
Caching comes with several benefits, including:
- Improved performance: Caching can significantly reduce data retrieval times, making applications more responsive.
- Reduced load on primary data sources: By serving data from a cache, there's less strain on primary data sources like databases, reducing the risk of them becoming a bottleneck.
- Cost savings: Reducing the number of calls to external services or databases can lead to cost savings, especially if those calls are billable.
- Enhanced user experience: Faster response times lead to a smoother user experience.
- Scalability: Caching can help applications handle more users simultaneously by reducing the need for resource-intensive operations.
- Reduced network traffic: Serving data from the cache can reduce the amount of data that needs to be transmitted over a network.
- Offline access: In some scenarios, caching allows users to access certain pieces of data even when offline.
Drawbacks of Caching
Even though there are a lot of benefits to caching, there are some drawbacks you should be mindful of:
- Stale data: Cached data can become outdated, leading to users receiving old or incorrect information.
- Increased complexity: Implementing caching introduces another layer of complexity to system architecture and can complicate debugging.
- Memory usage: Caching, especially when done extensively, can consume a significant amount of memory.
- Cache invalidation: Deciding when and how to invalidate or refresh the cache can be challenging.
- Cache warm-up: After a cache clear or system restart, the cache might be "cold" and it can take time to "warm up" to an optimal state.
- Potential for cache thrashing: Rapidly adding and evicting items can lead to cache thrashing, where the cache doesn't provide its benefits effectively.
- Maintenance overhead: Over time, your caching strategy might need adjustments, leading to additional maintenance work.
It's worth noting that while caching offers numerous advantages, it's essential to implement it judiciously, considering the specific needs and characteristics of each application.
Caching Options in Elixir
In the world of Elixir, many different options are available for caching. It is important to understand that each library has its own set of pros and cons. As developers, we need to consider the specific scenario where we intend to use caching and choose the right tool for the job.
Some of the options available to use are:
- Cachex: A powerful caching library tailored for Elixir. It offers features like Time-to-Live (TTL), fallbacks, and locking mechanisms.
- ConCache: A concurrent caching library for Elixir. It's built on top of Erlang Term Storage (ETS) and provides features like TTL and cache partitioning.
- Nebulex: A flexible and highly configurable caching library. It supports different caching strategies and has built-in support for distributed caching.
- Erlang Term Storage (ETS): While not exclusively a caching solution, ETS is an in-memory store that's often used for caching in Elixir applications. It's a core feature of the Erlang runtime system (see the short sketch after this list).
- Mnesia: A distributed database management system that comes with Erlang/OTP. It can be used for caching, especially in distributed Elixir applications.
- Redis via Redix: While Redis is not an Elixir-specific solution, it's a popular choice for caching in many applications. The Redix library allows Elixir applications to interact with Redis easily.
- Least Recently Used (LRU) caches: There are several Elixir libraries, like `lru_cache`, that implement the LRU caching algorithm.
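To give a sense of the lower-level building block several of these libraries share, here's a minimal sketch of a bare ETS table used as a cache. This is a hedged illustration only (the table name `:simple_cache` is made up), with none of the TTL or eviction features the libraries above layer on top:

```elixir
# Create a named, public ETS table to act as a simple in-memory cache
:ets.new(:simple_cache, [:named_table, :public, read_concurrency: true])

# Write a value under a key
:ets.insert(:simple_cache, {"greeting", "hello"})

# Read it back: lookup returns a list of matching {key, value} tuples
case :ets.lookup(:simple_cache, "greeting") do
  [{_key, value}] -> value  # cache hit
  [] -> nil                 # cache miss
end
```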
For the rest of this article, we'll focus on Cachex, because it's a robust caching solution that's easy to use and offers a wide range of features. It's also well-documented and has a vibrant community around it.
Introduction to Cachex for Elixir
Cachex offers a suite of features that make it an indispensable tool for Elixir developers. From simple key-value storage to advanced features like TTL settings and fallback mechanisms, Cachex provides a comprehensive caching solution.
Among the many features that make Cachex a powerful tool for caching in Elixir, the following stand out:
- Performance: Cachex is designed for high performance, ensuring rapid data retrieval and insertion. This is crucial for applications that require real-time responsiveness.
- Concurrency support: Elixir is known for its concurrency capabilities, and Cachex is built to handle concurrent operations seamlessly. It ensures that multiple processes can interact with the cache without causing data inconsistencies.
- Advanced features, including TTL (allowing developers to specify how long an item should remain in the cache, to ensure that data doesn't become stale), fallbacks (mechanisms to compute values if they're missing from the cache, so that an application can still function even if a cache miss occurs), and locking mechanisms (ensuring data integrity during operations like cache updates).
- Flexibility: Cachex is not just a simple key-value store. It supports complex data structures, making it versatile for a range of applications.
- Distributed caching: While Cachex primarily operates as a local cache, it can be combined with other tools and libraries to support distributed caching scenarios, making it suitable for clustered Elixir applications.
- Comprehensive documentation: Cachex comes with extensive documentation, making it easier for developers to get started and harness its full potential.
- Integration with Telemetry: Cachex integrates with the Telemetry library, allowing developers to gather metrics and monitor cache performance in real-time.
- Cache eviction strategies: Cachex supports various cache eviction strategies, such as LRU, ensuring that the cache remains efficient even as it fills up.
- Fault tolerance: Cachex is built with Elixir's fault-tolerant nature in mind, so failures are isolated and don't bring down the entire application.
Adding Caching to Your Elixir Application with Cachex
To get started with Cachex, we'll need to add it as a dependency to our project. Follow these steps to install it in an Elixir application:
- Add `:cachex` as a dependency: Open your `mix.exs` file and add `:cachex` to the list of dependencies:
```elixir
defp deps do
  [
    # The version number might change over time, so always check for the latest version
    {:cachex, "~> 3.3"}
  ]
end
```
- Fetch and compile dependencies: Run the following command in your terminal:

```shell
mix deps.get
```

This will fetch and compile the Cachex library along with any of its dependencies.
- Start Cachex: Before using Cachex in your application, you need to start it. You can start a new cache instance with the following command:

```elixir
{:ok, _} = Cachex.start_link(:my_cache)
```

Here, `:my_cache` is the name of the cache. You can choose any name that suits your application.

Calling `Cachex.start_link/1` starts a new cache instance with the default configuration. If you want to customize the configuration, you can pass a keyword list of options as the second argument. For example, to cap the cache at 1,000 entries, pass the `:limit` option:

```elixir
{:ok, _} = Cachex.start_link(:my_cache, limit: 1000)
```

Under the hood, each cache is a supervised OTP process, so you can use the usual techniques to start and supervise it.
- Add to application supervision tree: If you want the cache to be supervised and automatically restart in case of failures, you can add it to your application's supervision tree. This ensures that the cache is always available throughout the lifecycle of your application.
In your application's supervisor, you can add:
```elixir
children = [
  {Cachex, name: :my_cache}
]
```
This will start the Cachex cache when your application starts and supervise it.
Note: There are some scenarios where you might not want to start Cachex automatically. For example, in a Phoenix application, you might want to start it only when the application is running in production. In such cases, you can add Cachex to your application's supervision tree conditionally.
That's it! You've successfully installed Cachex in your Elixir application. You can now use its various functions to cache data, retrieve cached data, and manage your cache. Always refer to the official Cachex documentation for more detailed information and best practices.
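Before moving on, here's a quick, hedged tour of the basic cache operations, using the `:my_cache` cache started above (the keys and values are purely illustrative):

```elixir
# Store a value under a key
{:ok, true} = Cachex.put(:my_cache, "language", "Elixir")

# Read it back; a missing key returns {:ok, nil}
{:ok, "Elixir"} = Cachex.get(:my_cache, "language")

# Remove a single entry
{:ok, true} = Cachex.del(:my_cache, "language")

# Count entries, or empty the whole cache
{:ok, count} = Cachex.size(:my_cache)
{:ok, _cleared} = Cachex.clear(:my_cache)
```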
Working with Phoenix
A common use case for Cachex is to cache data in a Phoenix application. Phoenix doesn't have a built-in caching mechanism, so developers often turn to third-party libraries like Cachex.
Before we can use Cachex in a Phoenix application, we need to add it as a dependency and initialize it. We can do this by following the steps outlined in the previous section.
Once Cachex is installed, we can use it in our Phoenix application. Let's look at a few examples of using Cachex in a Phoenix application.
Caching Database Queries
One common use case in Phoenix applications is to cache the results of database queries to reduce database load and speed up data retrieval. After fetching data from the database, you can store it in the Cachex cache:
```elixir
def get_user(id) do
  case Cachex.get(:my_cache, id) do
    {:ok, nil} ->
      user = Repo.get(User, id)
      Cachex.put(:my_cache, id, user)
      user

    {:ok, user} ->
      user
  end
end
```
The function `get_user/1` retrieves a user by their `id`. It first checks if the user is available in the cache, and if not, it fetches them from a repository (likely a database) and then caches the result for future calls. Let's break down what's happening here:

- Function definition: The function `get_user/1` is defined to accept a single argument, `id`, which identifies a user.
- Cache lookup: The function starts by trying to retrieve the user from a cache named `:my_cache`, using the provided `id` as the cache key. `Cachex.get(:my_cache, id)` attempts to get the value associated with the key `id` from `:my_cache`.
- Handling cache results: The result of the cache lookup is pattern-matched using a `case` statement to determine the next steps.
- Cache miss (`{:ok, nil}`): If the cache returns `{:ok, nil}`, it indicates a cache miss, meaning the user is not present in the cache.
  - The function then fetches the user from a repository using `Repo.get(User, id)`. This is likely a call to a database to retrieve the user data.
  - Once the user data is fetched, it's stored in the cache using `Cachex.put(:my_cache, id, user)`. This ensures that subsequent calls for the same user `id` will find the user in the cache and won't need to hit the database.
  - Finally, the fetched user data is returned as the result of the function.
- Cache hit (`{:ok, user}`): If the cache returns `{:ok, user}` where `user` is not `nil`, this is a cache hit: the user data has been found in the cache. The function simply returns the cached user data without making any database calls.
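As an aside, the same read-through pattern can be written more compactly with `Cachex.fetch`, which we'll cover in more detail later. Here's a hedged sketch, reusing the same `:my_cache`, `Repo`, and `User` from above:

```elixir
def get_user(id) do
  # On a hit, fetch returns {:ok, user}; on a miss, it runs the fallback,
  # caches the result, and returns {:commit, user}
  {_, user} =
    Cachex.fetch(:my_cache, id, fn key ->
      {:commit, Repo.get(User, key)}
    end)

  user
end
```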
Caching Views
Sometimes, we might need to cache rendered views to improve performance. For example, if we have a view that's expensive to render, we can cache it to reduce the load on the server and speed up data retrieval.
There are many reasons why views might be expensive to render. For example, they might contain complex logic or require multiple database calls. In such cases, caching can be a useful technique to improve performance.
We can use a plug to cache views in Phoenix. Here's an example:
Create a plug that checks if the view is cached and sends the cached content if it is. Otherwise, it continues with the pipeline and caches the response later.
```elixir
defmodule MyApp.CacheViewPlug do
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    cache_key = "view_cache:#{conn.request_path}"

    case Cachex.get(:my_cache, cache_key) do
      {:ok, nil} ->
        # Cache miss, continue with the pipeline and cache the response later
        conn

      {:ok, cached_content} ->
        # Cache hit, send the cached content and halt the pipeline
        conn
        |> put_resp_content_type("text/html")
        |> send_resp(200, cached_content)
        |> halt()
    end
  end
end
```
Let's break down what's happening here:

- Module definition: The `defmodule MyApp.CacheViewPlug do` line defines a new module named `MyApp.CacheViewPlug`.
- Importing `Plug.Conn`: `import Plug.Conn` imports functions from the `Plug.Conn` module, which provides utilities for working with connection structs in the context of a plug.
- Initialization function: `def init(opts), do: opts` is the initialization function required by the Plug behaviour. It takes an `opts` argument (options) and simply returns it unchanged. In many plugs, this function sets default options or validates the provided options. However, in this case, it's a simple pass-through.
- Call function: `def call(conn, _opts)` is the main plug function that gets executed for every request. It takes two arguments: `conn` (the connection struct) and `_opts` (the options, which are ignored in this case).
- Generating the cache key: `cache_key = "view_cache:#{conn.request_path}"` constructs a cache key based on the request path. The cache key is prefixed with `"view_cache:"` to namespace it from other potential cache keys.
- Checking the cache: The `case Cachex.get(:my_cache, cache_key) do` statement checks the cache (`:my_cache`) for content associated with the generated `cache_key`.
- Handling cache results:
  - `{:ok, nil} ->`: This pattern matches a cache miss. If the cache doesn't have content for the given key, it returns `{:ok, nil}`. In the event of a cache miss, the function simply returns the unchanged `conn`, allowing the request to continue through the plug pipeline. The intention is that the response will be cached later, presumably by another part of the application.
  - `{:ok, cached_content} ->`: This pattern matches a cache hit. If the cache has content for the given key, it returns `{:ok, cached_content}`. In this case, the function sets the response content type to `"text/html"`, sends the cached content as the response, and then halts the plug pipeline to prevent further processing.
Now, add the plug to your pipeline in `router.ex`:
```elixir
pipeline :browser do
  ...
  plug MyApp.CacheViewPlug
  ...
end
```
Finally, cache the view after rendering it. Note that we use `Phoenix.View.render_to_string/3` to get the rendered template as a string, since calling `render/3` in a controller returns a `%Plug.Conn{}` rather than the rendered content (`MyAppWeb.SomeView` is a placeholder for your own view module):

```elixir
def some_action(conn, _params) do
  # render_to_string/3 returns the rendered template as a binary,
  # unlike render/3, which returns a %Plug.Conn{}
  content = Phoenix.View.render_to_string(MyAppWeb.SomeView, "some_template.html", conn: conn)

  cache_key = "view_cache:#{conn.request_path}"
  Cachex.put(:my_cache, cache_key, content)

  send_resp(conn, 200, content)
end
```
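One thing this example leaves out is expiry: once cached, the page would be served forever. A hedged refinement, reusing the same cache and key, is to attach a TTL so the cached view expires and is re-rendered periodically:

```elixir
# Cache the rendered view for five minutes; once the entry expires,
# the next request falls through the plug, re-renders, and re-caches it
Cachex.put(:my_cache, cache_key, content, ttl: :timer.minutes(5))
```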
Caching API Responses
Another common Cachex use case for Phoenix is to cache API responses. This can be useful to reduce the load on external services and speed up data retrieval. Similar to caching views, we can use a plug to cache API responses in Phoenix.
- Create a plug that checks if the response is cached and sends the cached content if it is. Otherwise, it continues with the pipeline and caches the response later.
```elixir
defmodule MyApp.CacheAPIPlug do
  import Plug.Conn

  def init(opts), do: opts

  def call(conn, _opts) do
    # Create a cache key based on the request path and query parameters
    cache_key = "api_cache:#{conn.request_path}?#{conn.query_string}"

    case Cachex.get(:my_cache, cache_key) do
      {:ok, nil} ->
        # Cache miss, continue with the pipeline and cache the response later
        conn

      {:ok, {status, headers, body}} ->
        # Cache hit, send the cached response and halt the pipeline.
        # List.keyfind/4 returns the whole {key, value} tuple, so we
        # extract the value before setting the header.
        {_, content_type} =
          List.keyfind(headers, "content-type", 0, {"content-type", "application/json"})

        conn
        |> put_resp_header("content-type", content_type)
        |> send_resp(status, body)
        |> halt()
    end
  end
end
```
- Add the plug to your pipeline in `router.ex`:
```elixir
pipeline :api do
  ...
  plug MyApp.CacheAPIPlug
  ...
end
```
- Cache the response after sending it:
```elixir
def some_api_action(conn, _params) do
  # Process the request and generate the response
  response = %{data: "Some API response data"}

  # Convert the response to JSON
  json_response = Jason.encode!(response)

  # Cache the response
  cache_key = "api_cache:#{conn.request_path}?#{conn.query_string}"
  cache_value = {200, [{"content-type", "application/json"}], json_response}
  Cachex.put(:my_cache, cache_key, cache_value)

  # Send the response
  json(conn, response)
end
```
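One caveat: as written, the plug consults the cache for every request that enters the pipeline. Since generally only idempotent GET requests are safe to serve from a cache, a hedged refinement is to pattern-match on the request method and let everything else pass straight through:

```elixir
defmodule MyApp.CacheAPIPlug do
  import Plug.Conn

  def init(opts), do: opts

  # Only GET requests are looked up in (and later stored to) the cache
  def call(%Plug.Conn{method: "GET"} = conn, _opts) do
    cache_key = "api_cache:#{conn.request_path}?#{conn.query_string}"

    case Cachex.get(:my_cache, cache_key) do
      {:ok, nil} ->
        conn

      {:ok, {status, headers, body}} ->
        {_, content_type} =
          List.keyfind(headers, "content-type", 0, {"content-type", "application/json"})

        conn
        |> put_resp_header("content-type", content_type)
        |> send_resp(status, body)
        |> halt()
    end
  end

  # POST, PUT, DELETE, and friends bypass the cache entirely
  def call(conn, _opts), do: conn
end
```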
Checking for Stale Cache Data
One of the main challenges with caching is ensuring that the data in the cache is up-to-date. If data becomes stale, it can lead to incorrect results and a poor user experience. Therefore, it's crucial to check for stale data and refresh the cache when necessary.
Cachex provides built-in mechanisms to handle stale data, primarily through its Time-to-Live (TTL) and fallback features.
TTL
TTL allows you to specify how long an item should remain in the cache. Once the TTL expires, the item is automatically removed from the cache, ensuring that you never serve data older than a specified age.
```elixir
# Store data with a TTL of one hour. Note that Cachex TTLs are given in
# milliseconds, so we use :timer.hours/1 to do the conversion.
Cachex.put(:my_cache, "key", "value", ttl: :timer.hours(1))
```
When you attempt to retrieve this key after its TTL has expired, it will return as if the key does not exist in the cache.
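Cachex also lets you inspect and adjust TTLs at runtime. Here's a short, hedged sketch of the relevant calls (the key is illustrative):

```elixir
# Check how long a key has left to live, in milliseconds
# (returns {:ok, nil} for keys without a TTL)
{:ok, remaining} = Cachex.ttl(:my_cache, "key")

# Set or replace the TTL on an existing key
{:ok, true} = Cachex.expire(:my_cache, "key", :timer.minutes(10))

# Remove the TTL so the key persists until explicitly deleted
{:ok, true} = Cachex.persist(:my_cache, "key")
```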
Fallbacks
Cachex's fallback mechanism is a powerful feature that allows you to execute a function when a cache miss occurs. This can be especially useful for handling stale data. If data is not in the cache (either because it was never cached or because it was evicted due to TTL expiration), the fallback function can fetch fresh data.
```elixir
fallback_fn = fn key ->
  # Fetch fresh data for the given key
  {:commit, fetch_fresh_data(key)}
end

# Attempt to get data from the cache; run the fallback on a cache miss
Cachex.fetch(:my_cache, "key", fallback_fn)
```
Returning a `{:commit, value}` tuple ensures that the fetched data is stored back into the cache. If you don't want a particular result to be cached (after an error, for example), the fallback can return `{:ignore, value}` instead, which hands the value back to the caller without writing it to the cache.
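Here's a hedged sketch of that distinction in practice, using a hypothetical `request_fresh_data/1` helper assumed to return `{:ok, data}` or `{:error, reason}`:

```elixir
Cachex.fetch(:my_cache, "key", fn key ->
  # request_fresh_data/1 is a hypothetical helper returning
  # {:ok, data} or {:error, reason}
  case request_fresh_data(key) do
    # Success: cache the fresh value for subsequent reads
    {:ok, data} -> {:commit, data}
    # Failure: hand nil back to the caller without poisoning the cache
    {:error, _reason} -> {:ignore, nil}
  end
end)
```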
By combining TTL and fallbacks, Cachex provides a robust mechanism to ensure that stale data is not served and that fresh data can be fetched and cached automatically when needed.
One thing to note is the difference between `Cachex.get` and `Cachex.fetch`. The `Cachex.get` function returns the value associated with the given key, or `{:ok, nil}` if the key is not found in the cache. The `Cachex.fetch` function returns the value associated with the given key, or executes the fallback function if the key is not found. While both functions can be used to retrieve data from the cache, they behave differently on a miss, and `Cachex.fetch` offers more advanced mechanisms for handling cache misses.
It's important to understand that the TTL and fallback features are not mutually exclusive; they can be used together to provide a more robust caching solution and a better user experience, as sketched below.
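For example, here's a hedged sketch of the two combined, assuming Cachex 3's `Cachex.Spec` helpers: a cache-wide default expiration plus a read-through fallback, so every fetched value is automatically recomputed after it expires (`fetch_fresh_data/1` is the same hypothetical helper as above):

```elixir
import Cachex.Spec

# Every entry in this cache gets a default TTL of five minutes
{:ok, _} =
  Cachex.start_link(:my_cache,
    expiration: expiration(default: :timer.minutes(5))
  )

# On a miss (including after expiry), the fallback recomputes
# and re-caches the value transparently
{_, value} =
  Cachex.fetch(:my_cache, "key", fn key ->
    {:commit, fetch_fresh_data(key)}
  end)
```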
Wrapping Up
In this post, we've seen that caching is a powerful technique to significantly improve your Elixir application's performance. However, we've also explored how caching is not without its drawbacks.
Cachex, as a representative of the vibrant Elixir ecosystem, showcases how community-driven tools can address complex problems with elegance and efficiency. But remember, Cachex is just the tip of the iceberg. The Elixir community is teeming with innovative libraries and frameworks, each solving unique challenges and pushing the boundaries of what's possible.
As you continue your journey with Elixir, I encourage you to explore the many tools and libraries available and discover how they can help you build better applications.
Happy coding!
P.S. If you'd like to read Elixir Alchemy posts as soon as they get off the press, subscribe to our Elixir Alchemy newsletter and never miss a single post!