This is something I have been wondering about for a while. Is there a place for nondeterministic functions in a purely functional language?
I'm talking about nondeterminism as in nondeterministic algorithms, not NTMs or nondeterministic programming. A function that may return different outputs for the same input.
Traditionally, one would achieve nondeterminism with a pseudorandom function, using a seed for an RNG. The RNG's state has to be advanced on every call, ergo you have a side effect.
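To make that hidden side effect concrete, here's a minimal sketch of a pure PRNG: the function returns both the value and the next state, so the caller has to thread the seed through explicitly. (The LCG constants are the classic Numerical Recipes ones; `next_rand` is my own name for the example.)

```python
def next_rand(seed: int) -> tuple[int, int]:
    """Pure linear congruential generator: returns (value, new_seed)."""
    new_seed = (1664525 * seed + 1013904223) % 2**32
    return new_seed, new_seed

v1, s1 = next_rand(42)
v2, s2 = next_rand(s1)   # forget to pass the new seed and you repeat values

assert next_rand(42) == next_rand(42)  # referentially transparent
assert v1 != v2                        # but fresh values require threading state
```

The state-threading is exactly the "increment the seed" side effect made visible in the type: the function is pure, but only because the mutation has been pushed onto the caller.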
But let's pretend our CPU has a quantum random number generator: it can entangle and measure two qubits to generate a truly random number. To the best of my knowledge, this would be genuinely side-effect free. Alternatively, we simply don't give a shit about the underlying implementation of the random function and pretend It Just Works™.
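Contrast that with the seed-threading version above: the hypothetical hardware primitive takes nothing and returns a value, with no state for the caller to thread. A sketch, with `os.urandom` standing in for the imagined quantum source (the name `true_random` is mine):

```python
import os

def true_random() -> int:
    # Pretend this reads a hardware quantum RNG register;
    # os.urandom stands in for the imagined instruction.
    # No seed in, no new seed out -- nothing for the caller to thread.
    return int.from_bytes(os.urandom(4), "big")
```

The interesting part is the signature: `() -> int` rather than `seed -> (int, seed)`, which is precisely what makes it look like a pure function while not being one in the referential-transparency sense.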
The use of nondeterministic functions obviously breaks referential transparency. Should we care?
It does make memoization impossible for some functions, but if nondeterminism can be statically identified, the functions that could theoretically be memoized anyway are unaffected.
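One way to picture "statically identified" is a marker the memoizer can check, so caching simply skips nondeterministic functions. This is a runtime sketch in Python (a real implementation would live in the type system or an effect system; `nondeterministic` and `memoize` are hypothetical names):

```python
import functools
import random

def nondeterministic(f):
    """Mark a function as nondeterministic (stand-in for a static effect tag)."""
    f.nondeterministic = True
    return f

def memoize(f):
    """Cache results, but only for functions not marked nondeterministic."""
    if getattr(f, "nondeterministic", False):
        return f  # results may differ per call: caching would be wrong
    return functools.lru_cache(maxsize=None)(f)

@memoize
def square(x):
    return x * x          # deterministic: safely cached

@memoize
@nondeterministic
def roll(sides):
    return random.randrange(sides)  # left uncached
```

After decoration, `square` carries an `lru_cache` wrapper while `roll` is passed through untouched, which is the point of the claim: the memoizable subset of the language is exactly as memoizable as before.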
It shouldn't matter for lazy evaluation: a random 'choice' is random regardless of order, so a truly random function wouldn't care when it is called.
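A small call-by-need sketch makes the timing point visible: the draw happens whenever the thunk is first forced, and sharing means it happens at most once. (`Thunk` is my own toy model of lazy evaluation, not any particular language's implementation.)

```python
import random

class Thunk:
    """Call-by-need: the suspended computation runs at most once."""
    _UNSET = object()

    def __init__(self, compute):
        self._compute = compute
        self._value = Thunk._UNSET

    def force(self):
        if self._value is Thunk._UNSET:
            self._value = self._compute()  # first force draws the number
        return self._value

t = Thunk(lambda: random.randrange(100))  # random draw, deferred
a = t.force()
b = t.force()
assert a == b  # sharing: one draw, whenever it first happens
```

Since the value is random, it makes no observable difference whether the draw happens at binding time or at first use, which is the order-independence the paragraph claims. Sharing does mean the same thunk yields the same value on every subsequent force, so "one random value used twice" and "two random values" remain distinct programs.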
It shouldn't affect the Curry-Howard correspondence, as it doesn't change the type system.
It shouldn't really affect the correspondence with category theory either: while we may no longer have equality of values in many cases, we still have equivalence. Two random numbers are equivalent in that they are both random numbers.
However, I can't help but feel I'm missing something important here. I haven't been able to find any academic literature on the subject, but then again, I suck at finding academic literature.