Gottipati Gautam
Accuracy - A performance measure to evaluate your model.


By performance measure of a model, I mean knowing how well our model (classification model or regression model) performs on test data or live data.

Performance of a model is always measured on test data, not training data or validation data.

Performance measure is also called a Performance metric.

Amongst all the available performance measures, accuracy is the easiest to understand.

So, let's directly dive into the point ...

What is ACCURACY

The accuracy of a model is defined as the ratio of the number of points classified correctly to the total number of points.

Let's compare this with the marks you get in school. Suppose you scored 89 marks out of a total of 100; this means that out of 100 questions, you answered 89 correctly. So your accuracy, in this case, is 89/100.

Accuracy = (Number of points correctly classified) / (Total number of points) ... (Equation 1)
From the above equation, we can draw a trivial conclusion: the number of points correctly classified can be at most the total number of points. So if a model classifies all the points correctly, we say it is 100% accurate, or its accuracy is 1, which is a rare thing to happen. If it does happen, you should definitely check whether there is any overfitting or some other problem with your model XD; if there is nothing like that, then kudos...

NOTE: The accuracy of a model always lies between 0 and 1. 0 means worst, 1 means awesome.
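To make Equation 1 concrete, here is a minimal sketch in Python (the function name and sample labels are mine, just for illustration):

```python
def accuracy(y_true, y_pred):
    """Fraction of points whose predicted label matches the actual label."""
    correct = sum(1 for actual, predicted in zip(y_true, y_pred) if actual == predicted)
    return correct / len(y_true)

# A perfect model scores 1; a model that gets everything wrong scores 0.
print(accuracy([+1, -1, +1], [+1, -1, +1]))  # 1.0
print(accuracy([+1, -1, +1], [-1, +1, -1]))  # 0.0
```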

Example :

Consider a model (for simplicity, let's think of a KNN model) which takes values X as input and gives Y as output, where Y is +1 or -1.
Assume that we have already trained our model and are now testing it on test data.

[Table: input X, actual label Y, and predicted label Ŷ for four test points]

Consider the above dataset, where X is the input given to our model, Y is the actual value, and
Ŷ is the predicted value.
From the table, it is clear that we have a total of 4 points. Out of the 4 points, points 1, 2, and 4 have been predicted correctly, and point 3 is predicted wrong.
So when we check for accuracy

Total number of points = 4

Total number of points classified or predicted correctly = 3

From Equation 1:
Accuracy = 3 / 4 = 0.75
Hurray! We have finally learned how to find the accuracy of our model. 0.75 is a decent accuracy score, but it is not great. Training the model with more data may further improve its performance. Improving the accuracy is out of the scope of this blog...

So now that you have understood the mathematical part of how the accuracy metric works, the coding part is very easy. Sklearn already provides us with a function named accuracy_score. Refer to this.
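For instance, using the four test points from the example above (the exact labels here are illustrative, chosen so that point 3 is the mispredicted one):

```python
from sklearn.metrics import accuracy_score

y_actual    = [+1, -1, +1, -1]  # Y: actual labels (illustrative values)
y_predicted = [+1, -1, -1, -1]  # Y^: point 3 is predicted wrong

print(accuracy_score(y_actual, y_predicted))  # 0.75
```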

Accuracy applies to classification models; for regression models, where the output is a continuous value, other metrics such as mean squared error are used instead.

Now that we have learned our first performance measure, one question might arise: is this metric applicable in all cases?

Well, the answer is NO.

There are some cases where applying accuracy might mislead us.

  1. When we have imbalanced data.
  2. When our model returns probability scores rather than class labels.



1. When we have imbalanced data.


Consider an example where we have 100 values of X. Out of the 100 corresponding values of Y, 90 are +1 and the remaining 10 are -1. Such data is called imbalanced data, where one class (here +1) occurs in much higher numbers than the other class (here -1).

Let's suppose our model is so weakly trained that it only returns +1 irrespective of the input.

So now when we calculate its accuracy

Accuracy = 90 / 100 = 0.9

So even though our model is not able to predict values correctly, we still get an accuracy of 0.9, which looks very good, but actually it is not.
Using accuracy as your measure for such an imbalanced dataset might mislead you.
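A quick sketch of the same scenario, using the accuracy_score function shown earlier:

```python
from sklearn.metrics import accuracy_score

# 90 points of class +1 and 10 of class -1: imbalanced data
y_actual = [+1] * 90 + [-1] * 10

# A weakly trained model that predicts +1 no matter what the input is
y_predicted = [+1] * 100

print(accuracy_score(y_actual, y_predicted))  # 0.9 -- looks good, but the model is useless
```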



2. When our model returns probability scores rather than class labels.


In short, a probability score tells us the probability that x belongs to class A. In order to get an exact class label, we set a certain probability as a threshold (let's assume 0.5). Any value greater than or equal to the threshold belongs to class A; otherwise, it belongs to the other class.
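As a small sketch (the function name is mine, not from any library), converting a probability score into a class label looks like this:

```python
def to_label(probability, threshold=0.5):
    """Convert a probability score for class +1 into a class label."""
    return +1 if probability >= threshold else -1

print(to_label(0.9))  # +1
print(to_label(0.3))  # -1
```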

Let's consider two models, M1 and M2, which return probability scores as output.

[Table: input X, actual label Y, probability scores M1 and M2, and predicted labels Ŷ_M1 and Ŷ_M2]

In the above table, X is the input value, Y is the actual value, M1 and M2 are the probability scores predicted by model 1 and model 2, and Ŷ_M1 and Ŷ_M2 are the class labels predicted by models 1 and 2 using those probability scores.

We have set a threshold of 0.5. Anything equal to or above 0.5 belongs to +1 and anything below belongs to -1.

Looking at the class labels predicted by the two models, Ŷ_M1 and Ŷ_M2 are identical, so the accuracy of both models is the same. But if we look at the probability scores of M1 and M2...

[Table: probability scores of M1 and M2 side by side]

M1 seems to predict better than M2, because M1 predicts with more confidence, whereas the probability scores of M2 are so close to 0.5 (the threshold value) that it is not able to properly separate the two classes +1 and -1.
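Here is an illustrative sketch of that situation (the scores are made up, but they mirror the pattern in the table: M1's scores sit far from the threshold, M2's barely cross it):

```python
from sklearn.metrics import accuracy_score

y_actual  = [+1, -1, +1, -1]          # illustrative actual labels
scores_m1 = [0.95, 0.05, 0.90, 0.10]  # M1: confident scores, far from 0.5
scores_m2 = [0.55, 0.45, 0.52, 0.48]  # M2: scores barely on the right side of 0.5

def to_labels(scores, threshold=0.5):
    """Threshold probability scores into class labels."""
    return [+1 if s >= threshold else -1 for s in scores]

print(accuracy_score(y_actual, to_labels(scores_m1)))  # 1.0
print(accuracy_score(y_actual, to_labels(scores_m2)))  # 1.0 -- same accuracy, yet M2 is far less confident
```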

NOTE: Accuracy does not take probability scores as input; it only takes the predicted class labels.

So for such models, which give probability scores as output, we use another type of performance metric like log-loss, which we will see in the next blog.
