While solid state is a step up, I still find cases where I do not need non-volatile memory but could do with some that's faster than solid-state but doesn't have to be as fast as main RAM.
The simplest use might be for swap (virtual memory). Swap is usually a region reserved on storage, which tends to have much more space than RAM, where the least frequently accessed blocks of memory are written out to free up RAM.
Originally, on desktops, it was not uncommon to allocate a large amount of virtual memory relative to the amount of RAM. Twice the amount of RAM or more was common.
This is not safe for all workloads because persistent storage is extremely slow compared to RAM. That is one reason only the least frequently accessed memory is swapped out. If swap is too big, it ends up holding blocks that are accessed more and more often; eventually it will start to store blocks that are too frequently accessed.
Consider a server that receives a hundred requests a second. That server also has to finish a request in less than a hundredth of a second, on average, to keep up. If it ends up taking longer than that, pending requests pile up until the server has to start rejecting them.
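To make that concrete, here's a toy calculation of how fast the backlog grows. The only number taken from the text is the hundred-requests-a-second arrival rate; the 12ms service time is an invented illustration.

```python
# Toy backlog model: requests arrive at 100/s, so the server must
# average under 10ms per request to keep up. The 12ms figure below
# is an assumed example of falling slightly behind.

def queue_depth_after(seconds, arrival_rate=100, service_time=0.012):
    """Pending requests after `seconds`, given a fixed service time."""
    served_rate = 1.0 / service_time           # requests completed per second
    backlog_rate = arrival_rate - served_rate  # net queue growth per second
    return max(0.0, backlog_rate * seconds)

# At 12ms per request the server only completes ~83/s, so the queue
# grows by ~17 requests every second until something rejects work.
print(round(queue_depth_after(60)))  # -> 1000 pending after one minute
```

The point is that even a small average shortfall compounds linearly and without bound; nothing self-corrects until a limit kicks in.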
Typically this is handled by setting limits. Sometimes these are naive: a guess at how many requests to permit in the queue and how many to process concurrently.
If too many are allowed to run concurrently, they may exceed system RAM; for example, you may reach 150% RAM usage with the 50% overflow having to go into swap. The more that goes into swap, the more has to be shuttled between disk and main memory.
If no limits are set, the machine will slow to a near halt. It can queue up hours or days worth of work, if not more, because swap can be thousands or even millions of times slower than RAM.
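A rough effective-access-time model shows why even a small swap fraction is catastrophic. The latencies below are ballpark assumptions (RAM ~100ns, a spinning-disk page-in ~10ms), not measurements from the text.

```python
# Weighted-average access time when some fraction of memory accesses
# trigger a page-in from swap. Latencies are illustrative assumptions:
# RAM ~100ns, spinning-disk page fault ~10ms (10,000,000ns).

def effective_ns(swap_fraction, ram_ns=100, swap_ns=10_000_000):
    """Average memory access time given a swap-hit fraction."""
    return (1 - swap_fraction) * ram_ns + swap_fraction * swap_ns

# Even 1% of accesses hitting spinning-disk swap makes the average
# access about 1000x slower than pure RAM -- hence the "near halt".
print(effective_ns(0.01) / effective_ns(0.0))  # ~1001x slowdown
```

This is why the thrashing cliff is so sharp: the slowdown is dominated by the swap term almost as soon as it appears.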
We call main memory Random Access Memory. Persistent storage could often be called Sequential Access Memory. It's really not a good replacement for RAM except for large regions accessed very infrequently, say at least a thousandth as often ("random" is a bit misleading, as access patterns do exist).
I often thought it might be nice if motherboards offered a slot for memory a generation or so back, to recycle as swap. Even a few GB could be handy and would save a bit of write wear on solid state. Most workloads have some proportion of RAM, at least a few GB, that could be tucked away.
It should be possible to ask the kernel for memory access / swap stats and dump them. But apart from using something such as vmstat, and perhaps comparing it against benchmarks of the storage device, I've never seen a way to get intricate kernel details about this (perhaps some of it is hardware-based, i.e. MMU stats, paging stats).
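On Linux, the coarse counters that do exist live in /proc/vmstat. A sketch of pulling the swap-related ones out, parsed here from a captured sample so it runs anywhere (the counter values are made up; on a real box you would read the file twice and diff over an interval):

```python
# Parse /proc/vmstat-style "name value" lines. The sample values are
# invented for illustration; real ones come from reading /proc/vmstat.

SAMPLE = """\
pswpin 184222
pswpout 931870
pgfault 88123456
pgmajfault 120345
"""

def parse_vmstat(text):
    """Return {counter_name: value} for each 'name value' line."""
    stats = {}
    for line in text.splitlines():
        name, value = line.split()
        stats[name] = int(value)
    return stats

stats = parse_vmstat(SAMPLE)
# pswpin/pswpout count pages swapped in/out since boot; pgmajfault
# counts faults that needed I/O -- a cheap proxy for swap pressure.
print(stats["pswpout"] - stats["pswpin"])
```

These are aggregate, per-system numbers; nothing here tells you which pages are hot or how long individual swap-ins took, which is exactly the missing detail.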
Swap solutions also appear naive, to my knowledge. I've not looked at Linux kernel settings closely, and perhaps it's changed, but I do see swap-thrashing death happen and betray Linux's reputation for uptime and stability.
I'm not sure if there's a clever swap, or if the method of determining access frequency is very accurate. In an ideal world the kernel would time swaps and know when it's time to OOM (Out Of Memory) kill processes based on some criteria, to release resources or refuse to grant further ones (i.e. malloc or reservation of virtually allocated memory).
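A minimal sketch of what such a heuristic might look like: time each page-in and trigger a kill once sustained latency implies thrashing. The window size and threshold here are invented for illustration; real kernels use different mechanisms entirely.

```python
# Hypothetical swap-thrash detector: keep a sliding window of page-in
# latencies and flag when the recent average exceeds a threshold.
# Window and threshold values are assumptions, not kernel defaults.

from collections import deque

class SwapThrashDetector:
    def __init__(self, window=1000, max_avg_us=5000):
        self.samples = deque(maxlen=window)  # recent page-in latencies (us)
        self.max_avg_us = max_avg_us

    def record(self, latency_us):
        self.samples.append(latency_us)

    def should_oom_kill(self):
        """True when a full window of samples averages above threshold."""
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough evidence yet
        return sum(self.samples) / len(self.samples) > self.max_avg_us

det = SwapThrashDetector(window=10, max_avg_us=5000)
for _ in range(10):
    det.record(8000)  # sustained 8ms page-ins: clearly thrashing
print(det.should_oom_kill())  # -> True
```

The appeal of measuring latency directly is that it adapts to the device: the same policy would tolerate far more paging on fast solid state than on spinning disk.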
I suspect it may rely only on naively swapping out a page, or on crude statistics that might not be per page. Pages may get swapped back in, and only by random chance over time does swap end up filled with the least accessed pages (the ones least likely to be needed again). In that situation, swap thrashing might be worse than anticipated.
Solid state performance gives a lot more breathing room than spinning disk; it is typically a hundred times faster in the worst case (random writes). Though it might only be a moderate gain if page-flipping rates increase worse than linearly.
One extra slot or two might be a decent saving: cheap, but necessarily limited in quantity. An extra spare 8GB legacy DIMM lying around would be nice for swap, but it's also worth considering that even very old RAM is so much faster than solid state that you could probably use a lot more of it for swap.
Many people may still be shell-shocked by the performance gain solid state brings over spinning disk, especially for random access. Modern spinning disks can typically do a hundred to the low hundreds of MB/s sequential, but for random access that drops to around 0.5MB/s to 2MB/s.
While spinning disks degrade to around 0.25%-2% of their sequential speed for random workloads, solid state only degrades to around 12.5% to 20%. Solid state also tends to be around two to three times as fast over SATA, and perhaps ten or twenty times for solutions attached more directly to PCI.
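As a sanity check, the degradation percentages follow directly from the throughput figures quoted above:

```python
# Random throughput as a percentage of sequential throughput, using the
# figures quoted in the text: spinning disk ~100-200 MB/s sequential
# versus ~0.5-2 MB/s random.

def degradation(random_mbs, sequential_mbs):
    """Random speed as a percentage of sequential speed."""
    return 100.0 * random_mbs / sequential_mbs

print(degradation(0.5, 200))  # worst quoted combination: 0.25%
print(degradation(2.0, 100))  # best quoted combination: 2.0%
```

The same arithmetic with SSD numbers lands in the 12.5%-20% band, which is the whole reason solid state tolerates random workloads so much better.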
But what about versus RAM (volatile memory, non-persistent)?
In terms of performance, a stick of FPM RAM from the early 90s might significantly beat a SATA solid-state drive produced today on certain workloads (the worst case for SSDs), while being beaten on the SSDs' best case, and at least having a respectable average case.
We might also not always want to access our slower fallback memory on a page-by-page (block) basis, meaning it could have significantly less overhead for certain use cases and perform a hundred to a thousand times better for an access pattern of sparse, scattered, out-of-order small accesses.
I see 1TB solid state devices now clocking in at $125 - $200. What if I wanted 1TB but:
- I didn't need it to retain data after power loss.
- It didn't need to strictly be a block device.
- I needed it to be faster than SSD but not as fast as main RAM.
I don't know the engineering difficulties there, but it makes me wonder. Process shrink would still be desired for density, so the question might be: what's the price difference to produce slower RAM chips?
It might not have to be 1TB to provide a gain. It depends on what your base amount of main RAM is and your workload (use case). Though it would most likely want to be PCI.
PCI has the slight limitation of a speed cap per lane, adding a bit of overhead for small accesses, though it is not as extreme a divisor as storage blocks.
It may need some new but only moderately different technology or fabrication changes. I suspect, however, that high-density RAM as a third tier of memory may be a bigger opportunity now than it ever was. This is in part because the size of system RAM tends to be restricted in its growth potential, as it's also trying to be as fast as reasonably possible at the same time.
HBM is at least one example of a window that opened up, though for only a partially similar situation (a desire for RAM that is a bit closer to a block device).
The device might support striping and a hybrid block/direct system of access. For example, it might provide access by block address, or by a list of addresses for each bank. There are likely a few ways to lay it out.
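One possible shape for that hybrid interface, sketched below. Everything here (the class, method names, block size) is invented for illustration; the point is just that the same device could serve fixed-size blocks and batched small scattered reads.

```python
# Hypothetical hybrid-access device: block-style reads for bulk
# transfers, plus scatter reads against a list of addresses so that
# many small accesses cost one command rather than one block each.
# All names and sizes are illustrative assumptions.

BLOCK_SIZE = 4096

class SlowRamDevice:
    def __init__(self, size):
        self.mem = bytearray(size)  # stand-in for the device's storage

    def read_block(self, block_no):
        """Block-device style access: one fixed-size block."""
        start = block_no * BLOCK_SIZE
        return bytes(self.mem[start:start + BLOCK_SIZE])

    def read_scattered(self, addresses, width=8):
        """RAM-style access: small reads at arbitrary addresses,
        batched into a single request."""
        return [bytes(self.mem[a:a + width]) for a in addresses]

dev = SlowRamDevice(1 << 20)
dev.mem[4096:4100] = b"test"
print(dev.read_block(1)[:4])          # bulk path
print(dev.read_scattered([4096], 4))  # scatter path
```

The scatter path is what would make it more than a block device: sparse out-of-order small reads wouldn't pay the full block-transfer cost each time.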
Either way, if they can do what they did for solid state and get similar density/price ratios while not slowing it down too much then the result would be fairly decent.
This one is at least somewhat difficult in that it might require major RAM fabricators to go for it and take on an additional tier of RAM production (i.e. similar to also producing LPDDR, HDDDR). Perhaps there's a fear it might displace demand for very large amounts of main RAM (servers with 1TB+ of main RAM).
It might not be immediately useful to the common consumer though software could make use of it in certain ways, for example games with a lot of persistence (dynamic environments, being able to blow real holes in things).
For workstations and servers there are many potential use cases.
It could be used as a relatively large swap, as it would be so much faster than SSD in the worst and average cases. For bulk access it might even achieve rates that come very close to main RAM, dropping one to two orders of magnitude for suboptimal cases.
It would be great for MySQL on-disk operations, etc. It could also be used for a rather large read cache, huge redis store, etc.
It would also be very useful for RAMdisk. For example, the /tmp directory or for compiling particularly large projects.
I am fairly certain that even as a hardware RAM disk (block device) it would be well utilised and make a big difference for a lot of workloads. If it presented more like RAM, all the better.
Although various solutions for this have appeared on and off, it has never been standardised and released in a viable manner, at least that I've seen. The reasons vary, but most often it's being released at absurd prices, bad driver support, a failure to reach out to developers of software that could use it, and suspiciously poor performance that just doesn't make sense.