
In machine learning, accuracy is the fraction of correct predictions: the number of correct predictions divided by the total number of predictions, often reported as a percentage.
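As a minimal sketch, the definition above can be written as a small Python helper (the function name and example labels here are illustrative, not from any particular library):

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions that match the true labels.
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0]
print(accuracy(y_true, y_pred))  # 4 correct out of 5 -> 0.8
```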

A high accuracy might suggest that a model makes correct predictions, but it can be highly misleading on an imbalanced dataset. Imagine a dataset of 100 instances: 95 are labeled blue and 5 are labeled red. A trivial model that always predicts blue is clearly useless, since it gives the same output regardless of the input, yet it nonetheless achieves an accuracy of 95%.
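The imbalanced-dataset pitfall can be reproduced directly; the snippet below uses a simple accuracy helper and the blue/red labels from the example:

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions that match the true labels.
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# 95 blue instances and 5 red instances, as in the example.
y_true = ["blue"] * 95 + ["red"] * 5

# A trivial model that always predicts blue, ignoring the input.
y_pred = ["blue"] * 100

print(accuracy(y_true, y_pred))  # -> 0.95, despite the model being useless
```

This is why metrics such as precision, recall, or a confusion matrix are usually examined alongside accuracy when classes are imbalanced.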