After posting Introduction and Motivation, I started receiving comments expressing concerns about the possible performance penalty caused by the additional classes such as `Option` and `Result`.
Before discussing benchmark results, I should note that even without measurements it is clear that the performance impact should be quite low. There are no complex computations inside these classes, just a few function calls, which are rather easy to optimize. Moreover, these classes are designed in a way that eliminates branching completely: there are no `if` statements nor conditional expressions (`?:`) inside, just method calls. The lack of branching eliminates branch prediction misses and positively affects performance.
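To illustrate the idea, here is a hypothetical minimal sketch (names and API are mine, not the actual PFJ implementation) of an `Option`-like container where the "present" and "empty" cases are separate implementations, so operations dispatch via ordinary method calls instead of branching:

```java
import java.util.function.Function;

// Minimal sketch: the two cases are separate implementations, so map()/or()
// contain no if statements or conditional expressions - the "which case is it"
// decision is encoded in which implementation was instantiated.
interface Opt<T> {
    <R> Opt<R> map(Function<T, R> fn);
    T or(T replacement);

    static <T> Opt<T> present(T value) {
        return new Opt<T>() {
            public <R> Opt<R> map(Function<T, R> fn) { return present(fn.apply(value)); }
            public T or(T replacement) { return value; }
        };
    }

    static <T> Opt<T> empty() {
        return new Opt<T>() {
            public <R> Opt<R> map(Function<T, R> fn) { return empty(); }
            public T or(T replacement) { return replacement; }
        };
    }
}

public class BranchFreeDemo {
    public static void main(String[] args) {
        System.out.println(Opt.present("text").map(String::toUpperCase).or("none")); // TEXT
        System.out.println(Opt.<String>empty().map(String::toUpperCase).or("none")); // none
    }
}
```

Note that the caller still pays for a virtual call, but the JIT can often devirtualize and inline it, which is exactly the "easy to optimize" property mentioned above.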
And, frankly, given that most of the time we're using heavy, slow, and resource-hungry frameworks like Spring, any performance impact caused by PFJ-style code will be negligible.
Nevertheless, all of the above are just general considerations with a sizable portion of guesstimation. It is much safer to do measurements and check the results.
The idea behind the benchmarks is rather simple: mimic a simple use case similar to what can happen in real life and pass mixed input to it.
The mixed input consists of two possible input values, each of which triggers a different execution path. For example, when `null` handling is benchmarked, the method under test receives a mix of valid strings and `null` values. A valid string is converted to upper case (this serves the purpose of business logic), while a `null` input just returns `null`. A similar approach is used for the `Option` benchmark and the remaining benchmarks (exceptions vs `Result`).
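The two variants under comparison in the null-handling benchmark can be sketched roughly as follows (a hypothetical illustration with JDK `Optional` as a stand-in; the method names are mine, not the actual benchmark code):

```java
import java.util.Optional;

public class NullVsOptionDemo {
    // Nullable variant: the caller must remember that null means "no value".
    static String nullable(String input) {
        return input == null ? null : input.toUpperCase();
    }

    // Wrapped variant: the missing-value case flows through map() transparently.
    static Optional<String> wrapped(Optional<String> input) {
        return input.map(String::toUpperCase);
    }

    public static void main(String[] args) {
        System.out.println(nullable("text"));                             // TEXT
        System.out.println(nullable(null));                               // null
        System.out.println(wrapped(Optional.of("text")).orElse("empty")); // TEXT
        System.out.println(wrapped(Optional.empty()).orElse("empty"));    // empty
    }
}
```

Both variants do the same amount of business logic (`toUpperCase`), so any measured difference comes from the wrapping itself.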
Since in real applications the proportion between the two cases can differ, the benchmarks measure several cases with different proportions of invalid input values:
- 0% - no invalid values (only *happy day scenario* cases)
- 10% - 10% of input consists of invalid values (which trigger error/missing value handling)
- same for 25%, 50%, 75%, 90% of invalid values in input
- 100% - no valid inputs at all; shows pure overhead caused by error/missing value handling
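Preparing such mixed input could look roughly like this (a hypothetical sketch; the actual benchmark setup may build its input differently):

```java
import java.util.Arrays;
import java.util.Random;

public class MixedInputDemo {
    // Builds an array where roughly invalidPercent% of entries are null
    // (the "invalid" case) and the rest are valid strings.
    static String[] mixedInput(int size, int invalidPercent, long seed) {
        Random random = new Random(seed);
        String[] input = new String[size];
        for (int i = 0; i < size; i++) {
            input[i] = random.nextInt(100) < invalidPercent ? null : "value-" + i;
        }
        return input;
    }

    public static void main(String[] args) {
        String[] input = mixedInput(1000, 25, 42L);
        long invalid = Arrays.stream(input).filter(v -> v == null).count();
        System.out.println("invalid values: " + invalid + " of " + input.length);
    }
}
```

Fixing the seed keeps every benchmark run processing the same input sequence, so the variants stay comparable.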
Since only valid cases involve real processing, you might notice that time per iteration gets smaller as the proportion of valid values decreases.
Below are the results obtained on a MacBook Pro M1:
(results provided in pairs for convenient comparison)
```
OptionPerformanceTest.nullable0     avgt    6   36.107 ± 0.456  us/op
OptionPerformanceTest.option0       avgt    6   35.085 ± 0.409  us/op
OptionPerformanceTest.nullable10    avgt    6   32.365 ± 0.064  us/op
OptionPerformanceTest.option10      avgt    6   31.938 ± 0.323  us/op
OptionPerformanceTest.nullable25    avgt    6   26.910 ± 0.828  us/op
OptionPerformanceTest.option25      avgt    6   26.347 ± 0.113  us/op
OptionPerformanceTest.nullable50    avgt    6   18.158 ± 0.119  us/op
OptionPerformanceTest.option50      avgt    6   17.688 ± 0.086  us/op
OptionPerformanceTest.nullable75    avgt    6    9.146 ± 0.198  us/op
OptionPerformanceTest.option75      avgt    6    8.844 ± 0.181  us/op
OptionPerformanceTest.nullable90    avgt    6    3.716 ± 0.022  us/op
OptionPerformanceTest.option90      avgt    6    3.599 ± 0.055  us/op
OptionPerformanceTest.nullable100   avgt    6    0.084 ± 0.001  us/op
OptionPerformanceTest.option100     avgt    6    0.087 ± 0.008  us/op
```
There is a sensible temptation to say that `Option` performs better, but the numbers are too close to claim a real advantage. So, I'm just glad to see that there is no negative performance impact caused by `Option`.
Conditions are the same as above:
```
ResultPerformanceTest.exception0    avgt    6   36.621 ± 0.045  us/op
ResultPerformanceTest.result0       avgt    6   35.215 ± 0.208  us/op
ResultPerformanceTest.exception10   avgt    6   32.651 ± 0.129  us/op
ResultPerformanceTest.result10      avgt    6   31.983 ± 0.112  us/op
ResultPerformanceTest.exception25   avgt    6   27.373 ± 0.240  us/op
ResultPerformanceTest.result25      avgt    6   26.472 ± 0.130  us/op
ResultPerformanceTest.exception50   avgt    6   18.239 ± 0.769  us/op
ResultPerformanceTest.result50      avgt    6   17.671 ± 0.101  us/op
ResultPerformanceTest.exception75   avgt    6    9.213 ± 0.597  us/op
ResultPerformanceTest.result75      avgt    6    9.019 ± 0.090  us/op
ResultPerformanceTest.exception90   avgt    6    3.705 ± 0.021  us/op
ResultPerformanceTest.result90      avgt    6    3.618 ± 0.019  us/op
ResultPerformanceTest.exception100  avgt    6    0.087 ± 0.001  us/op
ResultPerformanceTest.result100     avgt    6    0.086 ± 0.001  us/op
```
The conclusions are also the same: there is no visible performance impact caused by `Result`. Nevertheless, I should note that the benchmark puts exceptions in somewhat more convenient conditions than in real life. In real applications, exceptions are often logged, and this may trigger execution of expensive parts of the exception handling code, such as formatting the stack trace. As one can see, even without that extra cost exceptions do not outperform `Result`.
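The two error-propagation styles being compared can be sketched as follows (a hypothetical illustration; the `Result` type here is a minimal stand-in, not the PFJ one):

```java
public class ExceptionVsResultDemo {
    // Minimal stand-in for a Result type: either a value or an error message.
    record Result<T>(T value, String error) {
        static <T> Result<T> ok(T value) { return new Result<>(value, null); }
        static <T> Result<T> fail(String error) { return new Result<>(null, error); }
    }

    // Exception style: a failure captures a stack trace at construction time,
    // and in real applications a logger may then format it, which is expensive.
    static String viaException(String input) {
        if (input == null) throw new IllegalArgumentException("no input");
        return input.toUpperCase();
    }

    // Result style: a failure is just an ordinary object, no stack trace involved.
    static Result<String> viaResult(String input) {
        return input == null ? Result.fail("no input") : Result.ok(input.toUpperCase());
    }

    public static void main(String[] args) {
        System.out.println(viaResult("text").value());  // TEXT
        System.out.println(viaResult(null).error());    // no input
        try {
            viaException(null);
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());         // no input
        }
    }
}
```

The benchmark above exercises only the throw/catch path itself; adding logging to the `catch` block would shift the comparison further in favor of `Result`.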
Pragmatic Functional Java provides several advantages over idiomatic Java and does not cause any negative performance impact.
Full benchmark code is available here.