Precision v/s Recall — explained intuitively

Simral Chaudhary
2 min readJan 21, 2021

I’ve been studying and working in the industry for over four years now, and whenever I analyze my ML models, I still end up googling “precision vs. recall” because I just can’t seem to memorize it! If you’re one of those people, today is your lucky day: I’ve finally found a way to build the intuition behind these concepts and hence finally understand AND memorize their meanings!

So let’s dive right in.

As we all know, the formulas go like this:

Precision = TP / (TP + FP)

Recall = TP / (TP + FN)

But what do they really mean?

Precision is the easier one to understand. Just remember to ask yourself: how precise is your prediction? Out of all the examples you classified as positive (TP + FP), how many were actually positive (TP)? Simple, right? If you think about it, precision is calculated on your experiment, and hence looks at your predictions, not the actual data, as highlighted in the table below.
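To make this concrete, here is a minimal sketch in Python with made-up labels (1 = positive, 0 = negative). The counts and values below are just an illustrative example, not real model output:

```python
# Hypothetical ground-truth labels and model predictions (1 = positive, 0 = negative)
actual    = [1, 1, 1, 0, 0, 1, 0, 0]
predicted = [1, 1, 0, 1, 0, 1, 0, 1]

# Precision only cares about the examples *you* predicted as positive
tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)  # true positives
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)  # false positives

precision = tp / (tp + fp)
print(precision)  # 3 correct out of 5 predicted positives -> 0.6
```

Notice that the denominator (TP + FP = 5) is exactly the number of positive predictions, regardless of how many positives exist in the actual data.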

Recall, on the other hand, can be thought of like this: out of all the relevant items (all positives in the actual data), how many were recalled (classified as positive)? All positives in the actual data are the ones correctly classified as positive (TP) plus the ones incorrectly classified as negative (FN). That makes up the denominator of recall’s equation. With recall, we care about the actual data labels and how many of those were successfully recalled, i.e. predicted as positive.
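Using the same made-up labels as before, recall swaps FP for FN in the denominator, so it is anchored to the actual positives rather than the predicted ones:

```python
# Same hypothetical labels as above (1 = positive, 0 = negative)
actual    = [1, 1, 1, 0, 0, 1, 0, 0]
predicted = [1, 1, 0, 1, 0, 1, 0, 1]

# Recall cares about all *actual* positives: the ones we caught (TP)
# plus the ones we missed (FN)
tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)  # true positives
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)  # false negatives

recall = tp / (tp + fn)
print(recall)  # 3 recalled out of 4 actual positives -> 0.75
```

Here the denominator (TP + FN = 4) is the total number of actual positives, so recall measures how many of them the model managed to find.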

In the diagram above, the green line encircles the terms that go into precision, and the red line encircles the terms that go into recall.

I hope this article helped in establishing a new understanding of these performance metrics.

Good luck! :)

Simral Chaudhary

Research in NLP | Google | Walmart | Carnegie Mellon Univ | NIT Jaipur