DEV Community

Discussion on: Random Can "Break" Your App

Raphael Habereder

I love this topic. Randomness in programming has fascinated me for a long time. It really challenges my belief that "computers do things exactly as they are told to, they can't be random".

I haven't taken a deep look under the hood at what exactly happens, so I very much appreciate this post and am hoping for part 2 soon. The Wikipedia article and its sources about /dev/random in Linux have already fried my brain a few times, so I am excited to see where your post goes :D

Adam Nathaniel Davis

I'll be getting much more practical (diving into some of the problems with "randomness" that I found in the Spotify app) and much less theoretical (like, the basis for computer "randomness" itself). But it is a fascinating subject and I may dive into it again in future articles.

Your belief that "computers do things exactly as they are told to, they can't be random" is pretty much... spot-on correct. For all their sophistication, there are some things that modern computers still just can't do. One of those things is: "Hey, computer. Give me a random number between 1 and 100." As basic as that sounds, there's no part of a microchip dedicated to generating random noise/numbers/whatever that returns independent values every single time it's invoked.

So how do computers create "randomness"? Well, the key lies in what they use as a seed. (I referenced this near the top of this article.) As long as the machine knows where to grab a seed that is constantly changing, it can then do all sorts of standard machinations to mutate that seed into something that looks like a random value. The simplest and most obvious seed that's available to a computer is the system time.
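
To make that concrete, here's a rough sketch in Python - a toy linear congruential generator, not the actual algorithm any real language or runtime uses - where the seed is pulled from the system clock and then mutated on every call:

```python
import time

# Toy pseudo-random generator, for illustration only.
# The seed comes from the system clock (constantly changing),
# and every call mutates it to produce the next "random-looking" value.
seed = time.time_ns()

def next_random(low, high):
    global seed
    # Classic linear congruential step: multiply, add, wrap around.
    seed = (seed * 6364136223846793005 + 1442695040888963407) % 2**64
    return low + seed % (high - low + 1)

print(next_random(1, 100))  # "Give me a random number between 1 and 100."
print(next_random(1, 100))  # a different value, because the seed was mutated
```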

If we look at "regular" time, as humans typically use it, there doesn't seem to be much that's "random". We know that 4:19:59 will shortly be followed by 4:20:00, then 4:20:01 - and so on. But computers can measure time in tiny fragments - microseconds - and if we extend our timestamps to include them, our values start to look a whole lot more... "random". This is true even if we write a basic program that grabs, say, three timestamps, each accurate to the microsecond. Even though it feels to us as though those timestamps were created in such short succession that they essentially happened at the same time, all three of them will actually be distinct.
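
You can see that for yourself in a couple of lines of Python (assuming your OS clock reports microseconds, which any modern one does):

```python
import datetime

# Grab three timestamps in immediate succession. To a human they all
# happen "at the same time" - but at microsecond precision, each one
# is (almost always) a distinct value.
stamps = [datetime.datetime.now() for _ in range(3)]
for stamp in stamps:
    print(stamp)  # only the last few microsecond digits differ
```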

Once the computer has a unique(ish), random(ish) value, it can then perform any number of transformations on the value to make it look even more unique and even more random. This is like the process of creating a hash. You can start with any input - of any size - and get back a standard-sized string that looks to the typical user like it's random gobbledygook. Of course, it's not random gobbledygook, is it?
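
A quick sketch of that hashing behavior in Python, using SHA-256 (any hash function would demonstrate the same point):

```python
import hashlib

# Any input, of any size, comes back as a fixed-size string that looks
# like random gobbledygook - but it isn't random at all:
print(hashlib.sha256(b"hello").hexdigest())
print(hashlib.sha256(b"a much, much longer input string...").hexdigest())
print(hashlib.sha256(b"hello").hexdigest())  # identical to the first line
```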

When you create a hash, if you start with the same value (the "seed"), then you'll always get the same hash - every single time. Similarly, in pseudo-randomness, if you start with the exact same "seed", you'll get the same "random" value - every time. But computers spoof this by using values that are constantly changing (like, tiny fractions of the system time) to generate a series of unique seeds. And because most people sitting at a keyboard have no way of ever capturing the exact microsecond when their "random" number is generated, the resulting value is - to them, at least - random.
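
Python's built-in random module is an easy way to demonstrate this (under the hood it's a pseudo-random generator - the Mersenne Twister - so a fixed seed replays the exact same "random" sequence):

```python
import random

# Same seed in, same "random" numbers out - every single time.
random.seed(12345)
print([random.randint(1, 100) for _ in range(5)])

random.seed(12345)
print([random.randint(1, 100) for _ in range(5)])  # identical to the line above

# With no explicit seed, Python falls back to a constantly-changing source
# (OS entropy, or failing that, the system time), which is why the results
# normally *look* unpredictable.
random.seed()
print([random.randint(1, 100) for _ in range(5)])  # different on every run
```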

For applications that cannot settle for pseudo-randomness (think: cryptography), there are many other approaches whereby the computer can grab some kind of random value to use as a seed. These can include measuring the varying resistance inside the microprocessor itself - or the heat that's generated by the processor. Basically, any measurement of changing conditions, when carried out to a sufficient number of decimal places, essentially becomes "random".
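
In practice, that kind of entropy gets collected by the operating system (the /dev/random and /dev/urandom devices the Wikipedia article above covers, on Linux) and exposed to programs through dedicated APIs. A minimal Python sketch:

```python
import os
import secrets

# Cryptographic randomness: drawn from the OS entropy pool rather than
# from a predictable seed that an attacker could replay.
print(os.urandom(16).hex())        # 16 random bytes, different on every call
print(secrets.randbelow(100) + 1)  # a number from 1 to 100, suitable for secrets
```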

FWIW, the problem of "true" randomness isn't confined to computers. We often use a coin flip as an example of a "random" event. But if you could analyze, in real-time, ALL of the forces that have suddenly been placed on that coin (e.g., launch velocity, angular velocity, wind resistance, air viscosity, the hardness of the surface on which it will land, etc...), then you should be able to calculate, with absolute certainty, whether it will land on heads or tails. We only call the coin flip "random" because it's random to us - because we don't have the ability to measure all of those factors in real time. So the result is, effectively, "random". In other words, a coin flip is a real-life demonstration of pseudo-randomness.

Raphael Habereder

I love this reply, thank you very much for taking the time to explain this at such length!
I'm already excited for what you've got in store for us :)

Josh

What really baked my noodle was when I realized that pseudorandom numbers are the only kind that exist in the universe.

Adam Nathaniel Davis

Absolutely. Randomness (or the lack thereof) is a fascinating subject for probability nerds like myself.