Horia Coman

Posted on • Originally published at horia141.com

Average Number Of Machine Failures In A Datacenter Given Just The Mean Time To Failure

I’m currently reading Designing Data-Intensive Applications. Quite early in the book an interesting application of basic probability theory pops up. The author says “Hard disks are reported as having a mean time to failure (MTTF) of about 10 to 50 years. Thus, on a storage cluster with 10,000 disks, we should expect on average one disk to die per day”. The point being made is that even with a component which is quite reliable, you will see a lot of issues at scale. Large-scale distributed systems therefore need to acknowledge this fact and be resilient to such failures. I think there’s some neat math behind the statement itself, irrespective of its sobering effect on engineers. So let’s see how we would go about deriving it.

The first thing to do is lay some groundwork for modeling the problem. Let’s say we have N computers. A computer can be in one of two states: working or failed. All computers start up as working, but can later fail. Failure is temporary, however, and a machine which fails will be quickly fixed. We are interested in the behavior of these computers with regard to failures across time. We consider time as flowing in discrete steps, much like it would in a turn-based game. If a computer fails at a given time t, we can assume it will be fixed by time t+1 (where it can fail again, of course).

Failure is a random phenomenon. At the individual computer level we can say that a given machine on a given day will fail with a certain probability p. The whole ensemble of computers then has a probabilistic behavior. It is more complex, however, and we’ll study part of that behavior later.

We need to make a series of simplifying assumptions. First, we consider the probability of failure to be equal amongst all the computers. Second, this probability is independent of any external parameters, such as time, previous failures, or the position of the computer in the set. Therefore the single parameter p can describe the whole ensemble’s behaviour. Note that this is not an entirely realistic assumption. Some failures are correlated. If a whole rack in a datacenter loses power, then all of the machines in that rack will fail. The fact that the machines share some property (being in the same rack) means that their failures are correlated, or not independent. Other machines just have hidden faults which cause abnormally high failure rates after a point: if you’ve seen one failure from such a machine, you’re likely to see another shortly. However, the model is decent enough to capture some good insights.

With this in mind, we can say that a failure for computer i at time t can be modelled as a random variable X_{i,t} with a Bernoulli distribution, with its single parameter set to p. For all i and t we get the same model, and these random variables are independent and identically distributed.
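To make this concrete, here is a minimal sketch in Python (my own illustration, not from the book): a single X_{i,t} sample is just a biased coin flip with probability p.

```python
import random

def machine_fails_today(p: float) -> int:
    """Sample X_{i,t}: 1 if this machine fails on this day, 0 otherwise."""
    return 1 if random.random() < p else 0

# One draw with a placeholder failure probability.
print(machine_fails_today(0.0001))  # almost always 0, very rarely 1
```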

At this point the question from the beginning can be stated as “what is the expected number of failures on a certain day t, given that the probability of one machine failing is p?”. To figure this out, we need to look at T_{t} - the number of failures on that day. The distribution of T_{t} is the same for all values of t. We can write T_{t} = X_{1,t} + X_{2,t} + \cdots + X_{N,t} since each X_{i,t} produces either a 1 or a 0, depending on whether the machine is in the failed or the working state. The resulting model for T_{t} is the Binomial distribution with parameters N and p. The expected or average number of machine failures on a given day is just Np.
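As a sanity check on the Np figure, we can sum N such Bernoulli samples for a simulated day and average over many days; the empirical mean should land close to Np. A rough sketch in pure Python, using the example values from the next paragraph:

```python
import random

N = 10_000   # number of machines
p = 0.0001   # per-machine, per-day failure probability

def failures_on_one_day(n: int, p: float) -> int:
    """Sample T_t: how many of the n machines fail on a given day."""
    return sum(1 for _ in range(n) if random.random() < p)

# Average over many simulated days; should be close to N * p = 1.0.
days = 1_000
average = sum(failures_on_one_day(N, p) for _ in range(days)) / days
print(f"expected: {N * p:.2f}, simulated: {average:.3f}")
```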

So, for 10000 machines and an arbitrarily chosen low p=0.0001 we’d expect to see one failure a day. We can use Wolfram Alpha to get some more insights. The graph is quite bad unfortunately, but there’s some good stuff as well. For example, the probability of at least one machine failure on a given day (a machine failure is what the distribution’s terminology usually calls a “success”) is 63.21\% \approx 2/3. And one sample of the number of failures across consecutive days is given as 1 | 1 | 0 | 1 | 0 | 2 | 1 | 0 | 1 | 1 | 1 | 1 | 0 | 0 | 0. So on some days we even see 2 failures, and on others none at all.
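The “at least one failure” figure falls straight out of the binomial model: the only way to have zero failures is for every machine to survive the day, which happens with probability (1-p)^N. A quick check:

```python
N = 10_000
p = 0.0001

# Probability that no machine fails on a given day: all N machines survive.
p_no_failure = (1 - p) ** N
# Probability of at least one failure; for N * p = 1 this is close to 1 - 1/e.
p_at_least_one = 1 - p_no_failure
print(f"P(at least one failure) = {p_at_least_one:.4f}")  # ~0.6321
```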

The last step is to obtain p from the MTTF. We’ll focus on a single machine i, but look at it across time. The machine starts up in a functioning state, and given its relatively high reliability, chugs along for a number of days F_{i}. However, at F_{i}+1 it stops functioning for the first time. Since this is a random process, F_{i} is itself a random variable. A good model for F_{i}+1, the day of the first failure, is the geometric distribution with its parameter set to p, which is “the probability distribution of the number X of Bernoulli trials needed to get one success”. The mean of this distribution is 1/p. But this is also our MTTF, if we express it in days. So we can compute p = 1 / \text{days}(MTTF).
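We can also check the mean-of-the-geometric claim by simulation: run a machine day by day until its first failure, record how long that took, and average over many such runs. A sketch, reusing the earlier test value of p (it takes a few seconds, since each run lasts around 1/p simulated days):

```python
import random

def days_until_first_failure(p: float) -> int:
    """Count the Bernoulli trials (days) up to and including the first failure."""
    days = 1
    while random.random() >= p:
        days += 1
    return days

p = 0.0001  # the earlier test value; the mean should come out near 1/p = 10,000 days
runs = 1_000
mean_days = sum(days_until_first_failure(p) for _ in range(runs)) / runs
print(f"theoretical mean: {1 / p:.0f} days, simulated: {mean_days:.0f} days")
```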

So, for our initial MTTF of 10 years, we get p = 1 / 3650 = 0.00027, which is almost three times larger than our original test value of 0.0001. So with our model and data we’d actually expect 2.7 failures a day. Wolfram Alpha has more insights into the geometric distribution’s behaviour with these parameters. One sample of times to failure is given as 2259 | 8792 | 2066 | 3770 | 1166 | 1179 | 1428 | 1720. So, again, we can see there’s some variance. Some failures occur within three or four years, while others take more than 20.

A special note is warranted here. If you pay attention to the sample of failure times, you’ll notice that some are quite large, which should raise some eyebrows. Indeed, because of our earlier assumptions about independence in time, we missed out on a lot of extra complexity in the failure behaviours of these computers. Failure modelling and reliability engineering are a more complex topic, and more precise results can be obtained.

Finally, let’s put things together to get a nice formula. Starting with a given MTTF in years, provided by the manufacturer, we can compute the probability of failure on a single day, under all our model’s assumptions as p = 1/\text{days}(MTTF). From this, we can compute the average number of failures in a day as T = Np = N/\text{days}(MTTF). Which is quite neat and hopefully easy to remember.
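Written as code, the whole derivation is a one-liner; the function name and the 365-days-per-year rounding below are my own choices:

```python
def expected_daily_failures(num_machines: int, mttf_years: float) -> float:
    """Average number of machine failures per day, for N machines with a given MTTF."""
    mttf_days = mttf_years * 365
    return num_machines / mttf_days

# The book's example: 10,000 disks with a 10-year MTTF.
print(expected_daily_failures(10_000, 10))  # ~2.74 failures per day
```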
