While viewing FusionReactor logs for a ColdFusion app, I noticed lots of repetitive JDBC entries within a single request, each containing the exact same SQL statement and taking 42ms per execution. I checked the UDF performing the query (using QB), and the query was already configured to be cached for 5 minutes... but, overhead-wise, it was still costing 42ms each time. The "Number of Queries" for the request was 57, most of them similar cached ~40ms lookups, and it added up to 1,452ms overall. I wondered if there was anything I could do to add self-contained caching to UDFs that could benefit from it. I didn't want to save the response to the session (YIKES! I've seen some code that does this) or use cacheGet, since the caching only needs to live for a single request of "repetitive access".
I've been playing with the Java hashCode() function, which converts a string to a signed integer, and wondered if I could stringify the arguments passed in to create a cacheable key and very temporarily store the result in the request scope. This works (for me) and I'm happy with the results, but I wonder if there are any potential issues or side effects that I'm not considering (besides the rare chance of a hash collision).
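To make the idea concrete, here's a minimal sketch of the pattern (the helper name `memoizeInRequest` and its signature are mine, not part of any framework):

```cfml
// Hypothetical helper, not from any framework: serialize the arguments,
// hash them into a compact key, and memoize the result in the request scope.
function memoizeInRequest(
	required string udfName,
	required struct args,
	required function producer
) {
	// Build a deterministic string from the UDF name and its arguments,
	// then use Java's hashCode() to shrink it into a compact cache key.
	var cacheKey = "udfCache_" & udfName & "_" & serializeJson( args ).hashCode();

	// Only run the expensive producer if this request hasn't already done so.
	if ( ! structKeyExists( request, cacheKey ) ) {
		request[ cacheKey ] = producer();
	}

	return request[ cacheKey ];
}
```

Since the request scope is discarded at the end of the request, the cache cleans itself up for free; nothing outlives the page that created it.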
Here's a sample UDF with some slow/fake business logic (emulated using sleep(1000)). The first call is slow (as expected), whereas all repeat calls with the same exact arguments take 0ms. I believe this approach may also work in environments that use multithreaded processing within a single request, but I'm not 100% sure. (i.e., in a multithreaded environment, would it be recommended to wrap the cache access in a cflock with request-scope locking?)
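A hedged reconstruction of what such a UDF could look like (the original listing isn't reproduced here; the function name, key prefix, and return value are illustrative):

```cfml
// Illustrative sketch, not the original sample: a UDF that caches its own
// result in the request scope, keyed off its own name and arguments.
function getSlowLookup( required string id ) {
	var cacheKey = "getSlowLookup_" & arguments.id.hashCode();

	if ( ! structKeyExists( request, cacheKey ) ) {
		// Emulate slow business logic; only the first call per request
		// with these arguments pays this cost.
		sleep( 1000 );
		request[ cacheKey ] = "result-for-" & arguments.id;
	}

	// Repeat calls with the same arguments return the memoized value.
	return request[ cacheKey ];
}
```

In a single-threaded request the worst case of skipping the lock is recomputing the value once; if parallel threads share the same request scope, a named or request-scoped cflock around the check-and-set would close that race.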
Top comments (2)
That's pretty cool. One of the things I like in Lucee CFML (I know you don't use platform-specific stuff) is that you can add cachedWithin="request" on the function declaration and it will do this automatically. Actually, now that I say that, I'm wondering if I can find out where in the code Lucee is doing the caching.
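For reference, a sketch of what that looks like (Lucee-only attribute; the function body is my own illustration):

```cfml
// Lucee CFML only: the engine memoizes the function result for the life of
// the current request, keyed on the arguments - no hand-rolled cache needed.
function getSlowLookup( required string id ) cachedWithin="request" {
	sleep( 1000 ); // emulated slow business logic
	return "result-for-" & arguments.id;
}
```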
Oooof, the Lucee CFML Java code jumps through a lot of hoops to get the caching ID for a function. I think this is where it is doing it:
Your way seems much simpler :D