Caching a NetworkResponse from the dart http client using Hive

Neil

This article is a continuation of my previous article, where we explored Effective handover of api responses to your code in dart. If you haven't read it yet, I would suggest reading through it first so that you have context for what will be discussed today.

NOTE: This was a great solution to our problem but won't necessarily be the best fit for what you need.


The Repository Pattern

Previously I highlighted that we use the Repository pattern to supply our ViewModels with a NetworkResponse containing the information that comes back from the REST API call. Our repositories give us a neat connection to the REST client without exposing the guts of what it's made of, and they also allow us to build on top of it without disturbing the harmony of our REST client world.

When you hear the word repository, your first thought might be "it's something that serves data to you from a stored location", and you would be right. According to the Merriam-Webster dictionary, a repository is

a place, room, or container where something is deposited or stored

If you think about it from a mobile perspective, ignoring any knowledge of the innards of a back-end, your REST client (where the Repository gets its data from) fits this description: it provides you with data stored in a specific location, so it makes sense to use it within a repository. But the Repository could be so much more than that without needing a name change. We could use it to fetch data from the REST service, but also use it to improve our application's performance by introducing a cache, so that data is retrieved from disk when certain criteria are met.

This is a well-known approach, not necessarily tied to the Repository pattern, but common in HTTP clients as a whole. One well-known Dart HTTP client, Dio, has a plugin that also uses Hive as a disk cache, as well as many others that provide caching capabilities.

There was no caching to start with 😬

Here is some context on our problem before I go into the caching details the heading promised you, so that you can understand why we made the architectural decisions that we did.

Initially, when our application was built, the developers didn't see a need for a cache; financial services applications generally don't do much caching, right? So the repositories were built and the REST service endpoints were consumed, but we soon realised that we desperately needed a cache, seeing that almost every screen had a progress bar. We looked at the REST calls, found the data that was not prone to change as often as something like a transactions list, and took the opportunity to improve our application's performance without needing to make drastic changes to the codebase.

The NetworkCache was born

As I mentioned in the previous article, we use the dart http package to make our network calls. We decided to build our own network cache on top of the http client using Hive, because of the security and speed that it promised.

Before we added caching, our Repository method for fetching countries looked something like this.

Future<NetworkResponse<Countries, NetworkException>> getCountries() async {
  return makeRequest(() => mukuruRestClient.getCountries());
}

After attaching our NetworkCache to the call, the same method ended up looking like this.

final NetworkCache networkCache = NetworkCache(registry: [
  CacheKey.countries,
]);

Future<NetworkResponse<Countries, NetworkException>> getCountries() async {
  return makeRequest(
    () => networkCache.getOrUpdate<Countries>(
      cacheKey: CacheKey.countries,
      networkRequest: () => mukuruRestClient.getCountries(),
      cachingStrategy: ExpiryCache(duration: Duration(days: 31)),
    ),
  );
}

This allowed us to cache some calls and omit caching for others without making changes to our existing codebase except for the repositories.

Let's dive on in

The cache requires us to define a specific key linked to the stored request, as well as a CachingStrategy; the default strategy is ExpiryCache. I included the cachingStrategy in the above example for the example's sake, with a value other than 24 hours. Generally it's omitted because we use 24 hours as our default.
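For illustration, this is roughly what the same repository method looks like when we rely on the default; treat it as a sketch of the common case rather than a copy from our codebase.

Future<NetworkResponse<Countries, NetworkException>> getCountries() async {
  return makeRequest(
    () => networkCache.getOrUpdate<Countries>(
      cacheKey: CacheKey.countries,
      // No cachingStrategy supplied, so the default ExpiryCache with its
      // 24-hour duration is used.
      networkRequest: () => mukuruRestClient.getCountries(),
    ),
  );
}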

If you're familiar with Hive, the implementation below should make some sense; if not, you can read up on it here. In short, a Box in Hive can be compared to a Table in SQL.
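If you have never used Hive, the handful of Box calls below are all you need to follow the rest of this post. This is a minimal, stand-alone sketch; the box name, key and directory are made up purely for illustration.

import 'package:hive/hive.dart';

Future<void> hiveBasics() async {
  // Point Hive at a directory on disk. In a Flutter app you would typically
  // call Hive.initFlutter() from the hive_flutter package instead.
  Hive.init('hive_data');

  // A Box is Hive's unit of storage, roughly a Table in SQL terms.
  Box box = await Hive.openBox('example_box');

  // Values are written and read by key.
  await box.put('greeting', 'hello');
  print(box.get('greeting')); // hello

  await box.close();
}

With that covered, here is our implementation of the NetworkCache.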

class NetworkCache {

  NetworkCache({
    required this.registry,
  });

  /// The cache keys (one per Hive Box) that this cache manages.
  final List<String> registry;

  /// Opens every registered Box and then wipes all Hive data from disk.
  Future<void> clear() async {
    var futures = <Future>[];
    for (String cacheKey in registry) {
      futures.add(Hive.openBox(cacheKey));
    }
    await Future.wait(futures);
    await Hive.deleteFromDisk();
  }

  /// Returns the cached value, or fetches it via [networkRequest] and caches
  /// it, depending on the [cachingStrategy].
  Future<T> getOrUpdate<T extends CacheItem>({
    required String cacheKey,
    required Function networkRequest,
    CachingStrategy cachingStrategy = const ExpiryCache(),
  }) async {
    Box cacheBox = await _openCache(cacheKey);
    return await cachingStrategy.get(networkRequest, cacheBox);
  }

  /// Stream equivalent of [getOrUpdate].
  Stream<T> getOrUpdateStream<T extends CacheItem>({
    required String cacheKey,
    required Function networkRequest,
    CachingStrategy cachingStrategy = const ExpiryCache(),
  }) async* {
    Box cacheBox = await _openCache(cacheKey);
    await for (T result in cachingStrategy.getStream(networkRequest, cacheBox)) {
      yield result;
    }
  }

  /// Returns the Box for [cacheKey], opening it first if necessary.
  Future<Box> _openCache(String cacheKey) async {
    if (Hive.isBoxOpen(cacheKey)) {
      return Hive.box(cacheKey);
    }

    return await Hive.openBox(cacheKey);
  }

  /// Closes every registered Box.
  Future<void> dispose() async {
    var futures = <Future>[];
    for (String cacheKey in registry) {
      futures.add(Hive.box(cacheKey).close());
    }
    await Future.wait(futures);
  }
}


The cache consists of a few methods:

  • getOrUpdate fetches from the cache, or updates it with the network result if the data does not exist, and then returns the result.
  • getOrUpdateStream is the equivalent of getOrUpdate but returns a Stream instead of a Future.
  • dispose just closes the cache after use.
  • clear will wipe the cache clean (a small usage sketch of both follows below).
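As a purely hypothetical example of where the lifecycle methods fit in, something along these lines could be wired up for sign-out and shutdown; the function names are our own, only the NetworkCache calls come from the code above.

final NetworkCache networkCache = NetworkCache(registry: [
  CacheKey.countries,
]);

// Hypothetical sign-out flow: wipe everything that was cached to disk.
Future<void> onSignOut() async {
  await networkCache.clear();
}

// Hypothetical shutdown flow: close the open boxes without deleting anything.
Future<void> onAppShutdown() async {
  await networkCache.dispose();
}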

Breaking it down

I want to focus primarily on getOrUpdate as it's the most commonly used and also the most important.

Future<T> getOrUpdate<T extends CacheItem>({
  required String cacheKey,
  required Function networkRequest,
  CachingStrategy cachingStrategy = const ExpiryCache(),
}) async {
  Box cacheBox = await _openCache(cacheKey);
  return await cachingStrategy.get(networkRequest, cacheBox);
}

Type requirements

Looking at the signature of this method, we can see it takes in a type T that extends CacheItem. A CacheItem is an abstract class that we use to identify objects that can be cached; it also provides a field, implemented by the concrete class, that tells the cache when the data was initially stored.

abstract class CacheItem {
  int cachedMilliseconds = 0;
}

Using our Countries example, we can see how the CacheItem, along with Hive, is implemented on the object and fulfils the type requirements of our getOrUpdate method.

part 'countries.g.dart';

@JsonSerializable()
@HiveType(typeId: CacheTypeId.countries)
class Countries implements CacheItem {
  @HiveField(1)
  @JsonKey(name: 'items')
  List<Country>? countriesList;

  @JsonKey(ignore: true)
  @HiveField(2)
  @override
  int cachedMilliseconds;

  Countries({
    this.cachedMilliseconds = 0,
    this.countriesList,
  });

  factory Countries.fromJson(Map<String, dynamic> json) =>
      _$CountriesFromJson(json);

  Map<String, dynamic> toJson() => _$CountriesToJson(this);
}
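One thing this snippet glosses over is that Hive has to know how to serialise Countries to disk. The @HiveType and @HiveField annotations, together with hive_generator, produce a TypeAdapter (typically named CountriesAdapter) in the generated part file, and that adapter must be registered before the Box is opened. Here is a hedged sketch of that start-up wiring, with the function name being our own invention:

Future<void> initHiveCaching() async {
  // In a Flutter app you would typically call Hive.initFlutter() instead.
  Hive.init('hive_data');

  // Register the generated adapter so Hive can read and write Countries
  // objects inside the Box identified by CacheKey.countries.
  Hive.registerAdapter(CountriesAdapter());
}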

Arguments

  • cacheKey is used to identify the Box in which Hive will store the data.
  • cachingStrategy performs the caching instructions based on its strategy type.
  • networkRequest is handed to the cachingStrategy to be executed when fresh data is needed.

Opening the box

Next we'll look at the body of our method.

Box cacheBox = await _openCache(cacheKey);
return await cachingStrategy.get(networkRequest, cacheBox);

Firstly, the _openCache method is called. It uses the cacheKey to identify which Box is related to our data and then checks whether that Hive Box is already open and ready to use; if not, it opens the Box so that we can retrieve data from it or write data into it. Once the Box has been opened, we send it along with the network request to the caching strategy for the caching procedure.

CachingStrategy

A CachingStrategy is an abstract class that dictates what a caching strategy should look like, but it also provides some basic functionality that each strategy will use.

abstract class CachingStrategy {
  const CachingStrategy();

  /// The key under which a strategy stores its single value inside a Box.
  final fieldKey = 'field_type_key';

  Future<T> get<T>(Function networkRequest, Box box);
  Stream<T> getStream<T>(Function networkRequest, Box box);

  /// Stamps the value with the current time and writes it to the Box.
  Future<void> addToCache<T>(T networkValue, Box box) async {
    if (networkValue is CacheItem) {
      networkValue.cachedMilliseconds = DateTime.now().millisecondsSinceEpoch;
      await box.put(fieldKey, networkValue);
    }
  }
}

The ExpiryCache serves as our default caching strategy; it allows the Repository to fetch data from the cache up until a certain point in time. If the data has expired, it will be discarded and replaced with new data from the REST service. Here is the implementation of this cache.

class ExpiryCache extends CachingStrategy {
  static const _defaultDuration = Duration(hours: 24);

  const ExpiryCache({this.duration = _defaultDuration});

  /// How long a cached value is considered fresh.
  final Duration duration;

  @override
  Future<T> get<T extends CacheItem>(Function networkRequest, Box cacheBox) async {
    T? cachedValue = cacheBox.get(fieldKey);
    if (cachedValue == null || _hasCacheExpired(cachedValue)) {
      // Nothing cached, or the cached value is stale: hit the network and cache the result.
      T t = await networkRequest();
      addToCache(t, cacheBox);
      return t;
    } else {
      return cachedValue;
    }
  }

  @override
  Stream<T> getStream<T extends CacheItem>(Function networkRequest, Box box) async* {
    T? cachedValue = box.get(fieldKey);
    if (cachedValue == null || _hasCacheExpired(cachedValue)) {
      T t = await networkRequest();
      addToCache(t, box);
      yield t;
    } else {
      yield cachedValue;
    }
  }

  /// A value has expired once it was cached longer than [duration] ago.
  bool _hasCacheExpired(CacheItem cachedValue) {
    int nowMilliseconds = DateTime.now().millisecondsSinceEpoch;
    int cacheExpiryMilliseconds = nowMilliseconds - duration.inMilliseconds;
    return cachedValue.cachedMilliseconds < cacheExpiryMilliseconds;
  }
}
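As a quick sanity check of the expiry arithmetic, here is a tiny stand-alone snippet (not part of the codebase) using the default 24-hour duration:

void main() {
  // Cached at noon on 1 January, checked 25 hours later.
  final int cachedMilliseconds = DateTime(2024, 1, 1, 12).millisecondsSinceEpoch;
  final int nowMilliseconds = DateTime(2024, 1, 2, 13).millisecondsSinceEpoch;

  final int cacheExpiryMilliseconds =
      nowMilliseconds - const Duration(hours: 24).inMilliseconds;

  // true: the value was cached before the expiry threshold, so it is stale.
  print(cachedMilliseconds < cacheExpiryMilliseconds);
}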

In the ExpiryCache code we can see that there are two possible outcomes for both the get and getStream methods.

  1. The data is available and has not expired.
    In this case the data will be immediately returned to the repository for processing.

  2. The data has either not been cached or has expired.
    Here the networkRequest will be executed; if the request is successful, the data will be cached and thereafter handed back to the repository for processing.
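Because the strategy is just a pluggable class, adding a new behaviour only means implementing another CachingStrategy. As an illustrative sketch that is not part of our codebase, a strategy that always hits the network but still keeps the cache warm could look like this:

class NetworkFirstCache extends CachingStrategy {
  const NetworkFirstCache();

  @override
  Future<T> get<T extends CacheItem>(Function networkRequest, Box box) async {
    // Always fetch fresh data, but write it to the cache for other strategies.
    T networkResult = await networkRequest();
    addToCache(networkResult, box);
    return networkResult;
  }

  @override
  Stream<T> getStream<T extends CacheItem>(Function networkRequest, Box box) async* {
    yield await get<T>(networkRequest, box);
  }
}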


Wrap Up

To look back at what you just plowed through:

  1. The NetworkCache is created within our Repository.
  2. The Repository call to the http client is wrapped with the cache getOrUpdate method.
  3. The getOrUpdate method calls the strategy that has been passed into it with the network request and Hive Box linked to the cache key.
  4. If the strategy is our ExpiryCache, it will check whether there is cached data that has not expired and return it; otherwise it will fetch fresh data from the REST client and return that instead.
  5. The repository returns the data to its ViewModel and you have come full circle.

We have found Hive reliable, stable and secure. It has worked wonderfully with our http client implementation and has delivered on the performance promises that were made. The application has also passed quite a few penetration tests by trained professionals. Generally the disk cache of an application tends to be a weak point in a penetration test, but with Hive that was not the case.


Bonus Round

We have another CachingStrategy called OverridingCache! I'm not going to go into too much detail here, but this strategy essentially returns the data contained within the cache immediately, while at the same time querying the REST client for new data and updating the cache as soon as a response arrives. Using the Stream part of the CachingStrategy is ideal here, because you can immediately return data and moments later update the view with the latest information.

This cache could theoretically work in the transactions list scenario. The last known transactions, retrieved from the cache, would be displayed with a timestamp linked to the stored data. The list would then be updated with the new data retrieved from the REST service as soon as it arrives, creating a better experience for your user!

Here is the code for the OverridingCache if you're keen to see that too.

class OverridingCache extends CachingStrategy {
  @override
  Future<T> get<T extends CacheItem>(Function networkRequest, Box cacheBox) async {
    T? cachedValue = cacheBox.get(fieldKey);
    if (cachedValue != null) {
      // Return the cached value straight away and refresh the cache in the
      // background (the network call is deliberately not awaited here).
      getAndCacheNetworkValue(networkRequest, cacheBox);
      return cachedValue;
    } else {
      return getAndCacheNetworkValue(networkRequest, cacheBox);
    }
  }

  @override
  Stream<T> getStream<T extends CacheItem>(Function networkRequest, Box box) async* {
    T? cachedValue = box.get(fieldKey);
    if (cachedValue != null) {
      // Emit the cached value first so the UI can render immediately.
      yield cachedValue;
    }

    // Then emit the fresh network value once it arrives.
    yield await getAndCacheNetworkValue(networkRequest, box);
  }

  Future<T> getAndCacheNetworkValue<T>(Function networkRequest, Box cacheBox) async {
    T networkResult = await networkRequest();
    addToCache(networkResult, cacheBox);
    return networkResult;
  }
}
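To make the Stream flavour concrete, here is a hedged sketch of how a repository could surface this through getOrUpdateStream. The transaction-related names (CacheKey.transactions, Transactions, getTransactions) are hypothetical and simply mirror the Countries example from earlier; the key would also need to be added to the NetworkCache registry.

Stream<Transactions> watchTransactions() {
  // First emission: the cached list (if any), so the UI can render immediately.
  // Second emission: the fresh list from the REST service, cached on arrival.
  return networkCache.getOrUpdateStream<Transactions>(
    cacheKey: CacheKey.transactions,
    networkRequest: () => mukuruRestClient.getTransactions(),
    cachingStrategy: OverridingCache(),
  );
}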

Thank you!

If you got this far, thank you for sticking with it. I hope this post was of some value to you even if it was just for some relaxation.

Please leave a comment if you have questions or suggestions on the process that we followed.

Come join our conversations on Twitter; I would love to connect with you.


