Good points here. I'm pretty sure your timer should be reporting in nanoseconds, but chrono seems to vary in what it reports depending on processor/OS specifics. I figured I'd throw this up too because I thought it would be fun.
The idea here is that initialization takes longer, but any call made afterwards can be served in about half the time, provided the following calls are < the largest seen so far. I realize that Fibonacci-sequence tasks are not ones assumed to be called multiple times, but it's an optimization technique that came to mind when I read your post.
When I ran and timed this, the first call took about as long as the standard linear solution, but subsequent monotonically increasing calls only took the time needed to compute the delta between size() and N.
All calls under size() took around 2ms to complete.
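A minimal sketch of the incremental-cache idea described above (the function name and use of a static vector are my own assumptions, not the original code): the cache persists between calls, so any argument below the current size() is a plain lookup, and a larger argument only pays for the delta.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Incrementally cached Fibonacci: the cache survives between calls,
// so repeated or smaller queries are O(1) lookups, and a larger query
// only extends the cache from its current end.
std::uint64_t fib_cached(std::size_t n)
{
    static std::vector<std::uint64_t> cache{0, 1};
    while (cache.size() <= n) {
        const std::uint64_t next =
            cache[cache.size() - 1] + cache[cache.size() - 2];
        cache.push_back(next);
    }
    return cache[n];
}
```

Note that std::uint64_t overflows past fib(93), so a production version would want a guard or a big-integer type.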
The timer reports in whatever it feels like. More boringly, it'll be measuring std::chrono::high_resolution_clock::period::den ticks per std::chrono::high_resolution_clock::period::num seconds - and helpfully it might go backwards, because high_resolution_clock is not required to be steady.
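You can inspect the tick period and steadiness directly; a small sketch (on many mainstream implementations this reports nanosecond resolution, but the standard doesn't guarantee it):

```cpp
#include <chrono>
#include <iostream>

// Report the high_resolution_clock's tick period (a std::ratio of
// seconds per tick, so period::den ticks occur per period::num seconds)
// and whether the clock is guaranteed not to go backwards.
void report_clock()
{
    using clock = std::chrono::high_resolution_clock;
    std::cout << clock::period::den << " tick(s) per "
              << clock::period::num << " second(s)\n";
    std::cout << std::boolalpha << "steady: " << clock::is_steady << '\n';
}
```

If monotonicity matters for benchmarking, std::chrono::steady_clock is the safer choice, since its is_steady is always true.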
Also, yes - incremental caching is always a good plan with any expensive calculation (or other resource that takes time to obtain). That said, I think my "variables" version is sufficiently fast that you might not need to bother unless you needed the cached values an awful lot.
I thought we might further improve performance (in my vector version, too) by calling seq.reserve(n) just before the for loop - that ought to reduce the memory allocation workload significantly - but in my tests it didn't do much.
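For context, the reserve() call in question would look something like this (a hypothetical sketch of the vector version, not the original code): reserving capacity up front means the fill loop never triggers a reallocation, though as noted, the measurable benefit may be small.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Build the first n Fibonacci numbers. seq.reserve(n) performs one
// allocation up front instead of the logarithmic series of growths
// push_back would otherwise trigger.
std::vector<std::uint64_t> fib_sequence(std::size_t n)
{
    std::vector<std::uint64_t> seq;
    seq.reserve(n);  // one allocation instead of repeated regrowth
    std::uint64_t a = 0, b = 1;
    for (std::size_t i = 0; i < n; ++i) {
        seq.push_back(a);
        const std::uint64_t next = a + b;
        a = b;
        b = next;
    }
    return seq;
}
```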
I also think the "variables" version is more than sufficient, just offering another look.
I've noticed that whenever I've used .reserve(n) with vectors nothing really seems to be gained time-wise. Perhaps I haven't used them in a scenario where it would be of real benefit.
No, I think you're right - in most cases, the allocators these days are simply good enough that the marginal benefit of reserving space isn't useful.