Corey Alexander

Devicon Lookup - Binary Search Experiment

Originally posted on coreyja.com

Background

Recently I wrote, and blogged about, the devicon-lookup tool, which is intended to be used with Vim and fzf to provide icons for the corresponding file types. It got posted to the Rust subreddit and got a few comments. I chatted a bit with one commenter about some possible speed improvements, and he mentioned that he might use a sorted list and do a binary search for the lookup, instead of taking the time to build a HashMap.

I decided to give this a shot, and this blog post explains what I did and the results I found!

Existing Implementation

The existing implementation used the Rust crate lazy_static to create a HashMap that is initialized once at runtime, which the program then used to perform the lookups from file extension to devicon symbol.
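For reference, here is a minimal sketch of that pattern (trimmed down; the real table maps 97 extensions to devicon glyphs, which I'm leaving out of the string literals here):

use lazy_static::lazy_static;
use std::collections::HashMap;

lazy_static! {
    // Built once, on first access, at runtime.
    static ref SYMBOLS: HashMap<&'static str, &'static str> = {
        let mut map = HashMap::new();
        map.insert("ai", ""); // devicon glyph elided
        map.insert("zsh", "");
        map
    };
}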

Experiments

Binary Search

A binary search is an efficient way of searching a sorted list for a specific value. The first step was to take the initialization of the HashMap and turn it into a sorted array. Since I had already sorted the HashMap insertions for code cleanliness, this was as simple as changing the hash to an array of tuples, where the first element of each tuple is the extension and the second is the symbol.

const SYMBOLS: [(&str, &str); 97] = [
  ("ai", ""),
  ...
  ("zsh", ""),
];


Then I used the Rust slice method binary_search_by_key, which lets you pass in both the value you are searching for and a closure that specifies how to retrieve, from each underlying element, the key to use for the binary search. The closure I supplied simply returned the extension value from the tuple.


  let index = SYMBOLS.binary_search_by_key(&extension, |&(ext, _sym)| ext);

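binary_search_by_key returns a Result: Ok with the index on a hit, and Err with the insertion point on a miss. I'm not showing the tool's actual handling above, but here is a sketch of how the index can be turned back into a symbol, assuming a hypothetical DEFAULT_SYMBOL fallback:

// Hypothetical fallback for extensions not in the table.
const DEFAULT_SYMBOL: &str = "";

fn lookup(extension: &str) -> &'static str {
    match SYMBOLS.binary_search_by_key(&extension, |&(ext, _sym)| ext) {
        Ok(index) => SYMBOLS[index].1, // found: return the paired symbol
        Err(_) => DEFAULT_SYMBOL,      // not found: fall back
    }
}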

Binary Search with HashMap cache

The results from the plain binary search weren't much better, if at all, than the original implementation. So I wanted to see if adding a HashMap cache would help. The idea is that you build up the HashMap slowly, as you find the symbols you need, which in theory avoids the upfront initialization cost of creating the whole HashMap.

This part took me a while to work out correctly, mostly due to fighting with Rust's borrow checker. Two major things helped me get over the hurdles. The first was this Rust Lang forum answer, which helped me figure out how to use a HashMap as a cache as it relates to ownership, and gave me a great place to start, especially around the usage of to_owned(): https://users.rust-lang.org/t/borrow-checker-stopping-update-to-hashmap-cache/5300/3

The second was realizing that I probably wanted the cache HashMap to be <String, String> instead of the <&str, &str> it was before. I had been using the reference type because previously all the strings I was dealing with in the HashMap were static. Once I made this change, I was able to get my code to compile much more easily.
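Putting those pieces together, here is a rough sketch of the caching idea, reusing the SYMBOLS array and the hypothetical DEFAULT_SYMBOL from above (the names here are mine, not necessarily what's on the branch):

use std::collections::HashMap;

fn lookup_cached<'a>(
    cache: &'a mut HashMap<String, String>,
    extension: &str,
) -> &'a str {
    // On a miss, do the binary search once and store an owned copy
    // of the result; later lookups for the same extension hit the
    // cache directly.
    cache.entry(extension.to_owned()).or_insert_with(|| {
        match SYMBOLS.binary_search_by_key(&extension, |&(ext, _sym)| ext) {
            Ok(index) => SYMBOLS[index].1.to_owned(),
            Err(_) => DEFAULT_SYMBOL.to_owned(),
        }
    })
}

Note that this naive version allocates an owned String key for the entry call on every lookup, even on a cache hit, so there is obvious room for improvement.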

This is the current state of the code, which lives on the binary-search branch on GitHub.

Results

Gonna have two sections of results: one from the Chromebook where I did most of the development for this, and one from my MacBook Pro laptop, for a comparison against my day-to-day workstation.

For all of the results, I start by preparing a small and a large file at ~/tmp/$SIZE.txt, each containing a set of example lines. This set is consistent across machines, in an attempt to make the results more comparable. The files were generated by listing the files in the home directory of my Chromebook, and will not be published for privacy reasons.

All timing was done with Hyperfine.

Chromebook

Small

Variant    Mean [ms]    Min…Max [ms]
baseline   10.4 ± 6.5   3.3…31.7
plain      10.2 ± 6.2   3.4…30.4
cached      9.5 ± 5.8   2.7…29.9

Large

Variant    Mean [s]        Min…Max [s]
baseline   3.816 ± 0.082   3.714…3.948
plain      3.847 ± 0.177   3.629…4.164
cached     4.157 ± 0.288   3.815…4.591

MacBook

Small

Variant    Mean [ms]   Min…Max [ms]
baseline   4.0 ± 0.8   3.0…8.7
plain      3.8 ± 0.7   3.1…11.1
cached     3.7 ± 0.5   3.1…9.8

Large

Variant    Mean [ms]      Min…Max [ms]
baseline   835.6 ± 27.2   817.0…909.8
plain      818.0 ± 8.0    805.7…832.6
cached     948.8 ± 7.8    942.6…967.0

Thoughts

The first thing that stuck out to me about these results is that my 'improvement' of adding caching did not actually improve the overall speed of the program. I wonder whether that is due to my naive implementation, or whether the idea is simply flawed. I bet my implementation could be improved, but these first results were not the most promising.

Overall, in most of these results, it appears the plain binary search may have outperformed the baseline implementation, but the differences are pretty small.

The most surprising thing to me here is that on the MacBook, the large sample runs fastest with the plain binary search. I would have expected that to potentially be true for small sets, where we do fewer lookups, so skipping the upfront HashMap construction would outweigh the slower individual searches. However, when I have re-run the tests, I have not gotten this result consistently.

It seems likely to me that, whether we use a plain binary search or a HashMap lookup, the bottleneck is elsewhere in the code. I would imagine it is in the IO, but I haven't investigated this claim.

Next Steps

This was a fun experiment, and I learned a lot about the Rust ownership model and the borrow checker! However, the results do not seem to show a substantial performance improvement. I prefer the HashMap approach conceptually, since it should have the fastest lookups, so I think I will stick with it for the time being. In the future I may look more into changing this from a runtime HashMap to a compile-time one.
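One option for that would be something like the phf crate, which builds the map at compile time. A rough sketch of how that could look (an assumption on my part, I haven't tried it yet):

use phf::phf_map;

// Map constructed at compile time; no runtime initialization cost.
static SYMBOLS: phf::Map<&'static str, &'static str> = phf_map! {
    "ai" => "",  // devicon glyphs elided, as above
    "zsh" => "",
};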
