Purity is not about state, it's about referential transparency. A function is pure iff (if and only if) it returns the same output given the same arguments.
A random generator takes no argument and returns a different result every time, so it is inherently impure.
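To make the contrast concrete, here is a minimal Haskell sketch (it assumes the `random` package for `randomRIO`; the names `double` and `roll` are just illustrative): the pure function lives outside `IO`, while the random roll must live in `IO` precisely because its result varies per call.

```haskell
import System.Random (randomRIO)

-- Pure: same argument, same result, every time.
double :: Int -> Int
double x = 2 * x

-- Impure: the type is IO Int, not Int, because
-- each call may return a different value.
roll :: IO Int
roll = randomRIO (1, 6)

main :: IO ()
main = do
  print (double 21)  -- always 42
  r <- roll
  print r            -- varies between runs
```

Note that the impurity is visible in the type: you cannot pretend `roll` is a plain `Int` without going through `IO`.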
Would you know of any practical arguments (besides memoization) in favor of referential transparency, insofar as it is lost by introducing non-determinism?
Reversibility, for example. Haskell has STM (software transactional memory), which works like a global variable shared between threads, but instead of a race condition when two threads try to write at the same time, the conflicting actions get rolled back and redone one after another. The type system guarantees that you cannot perform IO actions inside an STM transaction, just pure computation and TVar reads/writes. If you could have randomness in there, rolling back and redoing would not produce the same result.
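A small sketch of the point above, assuming the `stm` package (the `transfer` example is hypothetical): everything inside `atomically` has type `STM a`, so an IO action such as `print`, or a call to a random generator, simply won't type-check there, which is what makes rollback-and-retry safe.

```haskell
import Control.Concurrent.STM

-- Move one unit between two shared counters, as a single transaction.
-- Only pure computation and TVar operations are possible here;
-- inserting e.g. 'print x' would be a type error, since print is IO.
transfer :: TVar Int -> TVar Int -> STM ()
transfer from to = do
  x <- readTVar from
  writeTVar from (x - 1)
  modifyTVar' to (+ 1)

main :: IO ()
main = do
  a <- newTVarIO 10
  b <- newTVarIO 0
  atomically (transfer a b)
  va <- readTVarIO a
  vb <- readTVarIO b
  print (va, vb)  -- (9,1)
```

Because `transfer` is deterministic given the TVar contents it reads, the runtime is free to discard and re-run the transaction on conflict without anyone observing the difference.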
That's true, and yet in a way it isn't. The new computation "may as well" have been the old one: it is still equally correct, because true randomness has no hidden state that the rollback and redo could have disturbed.
This is all still ignoring the de-facto pseudorandom / deterministic nature of most of our machines.
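This last point can be made concrete: a seeded PRNG, which is what most machines actually give us, is a pure function, since the "randomness" is just deterministic state threaded through explicitly. A sketch, again assuming the `random` package (the name `rollPure` is illustrative):

```haskell
import System.Random (StdGen, mkStdGen, randomR)

-- A seeded generator is referentially transparent:
-- it returns the value together with the next generator state.
rollPure :: StdGen -> (Int, StdGen)
rollPure = randomR (1, 6)

main :: IO ()
main = do
  let g = mkStdGen 2024
      (a, _) = rollPure g
      (b, _) = rollPure g
  print (a == b)  -- True: same seed, same result
```

So the impurity of "a random generator" is really a property of where the seed comes from, not of the generator itself.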
That's an excellent point.