Evaluation Metrics
Accuracy
Accuracy is an evaluation metric used in machine learning models to measure the proportion of correct predictions made by the model. It's the number of correct predictions divided by the total number of predictions.
Accuracy is calculated as:
Accuracy = (True Positives + True Negatives) / (Total Predictions)
It's a simple and widely used metric that provides a general idea of how well a model is performing, although it can be misleading when the classes are heavily imbalanced.
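As a minimal Python sketch, accuracy can be computed directly from predicted and true labels; the label lists below are made up for illustration.

```python
# Toy example: accuracy from predicted vs. true labels (illustrative data only).
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

correct = sum(t == p for t, p in zip(y_true, y_pred))
accuracy = correct / len(y_true)
print(accuracy)  # 6 correct out of 8 predictions -> 0.75
```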
Precision
Precision is an evaluation metric used in machine learning models to measure the proportion of true positive predictions among all positive predictions made by the model. It's a measure of the model's ability to minimize false positives, which are predictions that the model classifies as positive but are actually negative.
Precision is calculated as:
Precision = True Positives / (True Positives + False Positives)
A high precision value means that the model makes few false positive predictions and can be trusted when it labels an instance as positive, while a low precision value means that many of its positive predictions are actually negative instances.
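A minimal sketch of the same calculation from true-positive and false-positive counts, reusing the made-up labels from the accuracy example.

```python
# Precision = TP / (TP + FP), computed from illustrative label lists.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
precision = tp / (tp + fp)
print(precision)  # 3 / (3 + 1) = 0.75
```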
Recall
Recall, also known as Sensitivity or True Positive Rate, is an evaluation metric used in machine learning models to measure the proportion of true positive predictions among all actual positive instances. It's a measure of the model's ability to minimize false negatives, which are actual positive instances that the model classifies as negative.
Recall is calculated as:
Recall = True Positives / (True Positives + False Negatives)
A high recall value means that the model correctly identifies most of the actual positive instances and rarely misses them, while a low recall value means that it misses a large proportion of the actual positives.
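A minimal sketch of recall from true-positive and false-negative counts, again with made-up labels.

```python
# Recall = TP / (TP + FN), using the same illustrative labels.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
recall = tp / (tp + fn)
print(recall)  # 3 / (3 + 1) = 0.75
```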
F1-Score
F1-Score is an evaluation metric used in machine learning models to balance the trade-off between precision and recall. It's the harmonic mean of precision and recall, and it's a single number summary of the model's performance.
F1-Score is calculated as:
F1-Score = 2 * (Precision * Recall) / (Precision + Recall)
A high F1-Score means that the model balances precision and recall well: it makes a high proportion of correct positive predictions with few false positives and false negatives. A low F1-Score means that the balance is poor and the model produces many false positives, many false negatives, or both.
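A small sketch of the formula, assuming the precision and recall values from the toy examples above.

```python
# F1-Score as the harmonic mean of precision and recall.
precision, recall = 0.75, 0.75  # e.g. the values from the sketches above
f1 = 2 * (precision * recall) / (precision + recall)
print(f1)  # 0.75
```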
False Positive Rate (FPR)
False Positive Rate (FPR) is an evaluation metric used in binary classification models. It's the proportion of actual negative instances that are incorrectly classified as positive, and it's also known as the fall-out rate.
FPR is calculated as:
FPR = (False Positives) / (False Positives + True Negatives)
It's a measure of the model's ability to minimize false positives, which are predictions that the model classifies as positive but are actually negative. A high FPR means that the model produces many false positive predictions, which can lead to a higher number of false alarms or unnecessary actions.
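A minimal sketch computing FPR from false-positive and true-negative counts on the same made-up labels.

```python
# FPR = FP / (FP + TN)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
fpr = fp / (fp + tn)
print(fpr)  # 1 / (1 + 3) = 0.25
```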
False Negative Rate (FNR)
False Negative Rate (FNR) is an evaluation metric used in binary classification models. It's the proportion of actual positive instances that are incorrectly classified as negative, and it's also known as the miss rate.
FNR is calculated as:
FNR = (False Negatives) / (False Negatives + True Positives)
It's a measure of the model's ability to minimize false negatives, which are predictions that the model classifies as negative but are actually positive. A high FNR means that the model produces many false negative predictions, which can lead to a higher number of missed detections or missed opportunities.
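A minimal sketch of FNR from false-negative and true-positive counts, with the same made-up labels.

```python
# FNR = FN / (FN + TP)
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fnr = fn / (fn + tp)
print(fnr)  # 1 / (1 + 3) = 0.25
```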
Mean Absolute Error (MAE)
Mean Absolute Error (MAE) is an evaluation metric used in regression models to measure the difference between the predicted values and the true values. It's a measure of the model's ability to make accurate predictions.
MAE is calculated as:
MAE = 1/n * (sum of |(True Value - Predicted Value)|)
Where n is the number of observations.
MAE is the average of the absolute differences between the true values and the predicted values, and it's a simple, easy-to-interpret metric expressed in the same units as the target variable. A low MAE value means that the predictions are close to the true values, while a high MAE value means that the predictions deviate from the true values substantially.
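A small Python sketch of MAE on a made-up set of regression targets and predictions.

```python
# MAE: mean of the absolute errors over a toy set of predictions.
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5,  0.0, 2.0, 8.0]

mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)
print(mae)  # (0.5 + 0.5 + 0.0 + 1.0) / 4 = 0.5
```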
Mean Squared Error (MSE)
Mean Squared Error (MSE) is an evaluation metric used in regression models to measure the difference between the predicted values and the true values. It's a measure of the model's ability to make accurate predictions.
MSE is calculated as:
MSE = 1/n * (sum of (True Value - Predicted Value)^2)
Where n is the number of observations.
MSE is the average of the squared differences between the true values and the predicted values. It's more sensitive to outliers than the Mean Absolute Error (MAE), as it penalizes large errors more heavily than small ones, and it's expressed in the squared units of the target variable. A low MSE value means that the predictions are close to the true values, while a high MSE value means that the predictions deviate from the true values substantially.
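The same made-up regression values as in the MAE sketch, with the errors squared before averaging.

```python
# MSE: mean of the squared errors.
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5,  0.0, 2.0, 8.0]

mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
print(mse)  # (0.25 + 0.25 + 0.0 + 1.0) / 4 = 0.375
```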
Root Mean Squared Error (RMSE)
Root Mean Squared Error (RMSE) is an evaluation metric used in regression models to measure the difference between the predicted values and the true values. It's a measure of the model's ability to make accurate predictions.
RMSE is calculated as:
RMSE = sqrt(1/n * (sum of (True Value - Predicted Value)^2))
Where n is the number of observations.
RMSE is the square root of the mean of the squared differences between the true values and the predicted values. Like MSE, it penalizes large errors more heavily than small ones, but the square root puts it back in the same units as the target variable, which makes it easier to interpret. A low RMSE value means that the predictions are close to the true values, while a high RMSE value means that the predictions deviate from the true values substantially.
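The same toy values again, taking the square root of the MSE.

```python
import math

# RMSE: square root of the mean squared error.
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5,  0.0, 2.0, 8.0]

mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)
rmse = math.sqrt(mse)
print(rmse)  # sqrt(0.375) ≈ 0.612
```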
R-Squared (Coefficient of Determination)
R-Squared (Coefficient of Determination) is an evaluation metric used in regression models to measure the proportion of the variance in the response variable that is explained by the predictor variables. It's a measure of the model's ability to make accurate predictions.
R-Squared is calculated as:
R-Squared = 1 - (SSresidual / SStotal)
Where SSresidual is the sum of the squared differences between the predicted values and the true values, and SStotal is the total sum of the squared differences between the true values and the mean of the true values.
R-Squared usually falls between 0 and 1, where 1 represents a perfect fit and 0 means the model explains none of the variance (it can even be negative when the model fits worse than simply predicting the mean). A high R-Squared value means that the model explains a large proportion of the variance in the response variable, while a low R-Squared value means that it explains little of that variance and its predictions are correspondingly less accurate.
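A minimal sketch of the R-Squared formula on the same made-up regression values.

```python
# R-Squared = 1 - SSresidual / SStotal
y_true = [3.0, -0.5, 2.0, 7.0]
y_pred = [2.5,  0.0, 2.0, 8.0]

mean_true = sum(y_true) / len(y_true)
ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))   # residual sum of squares
ss_tot = sum((t - mean_true) ** 2 for t in y_true)           # total sum of squares
r_squared = 1 - ss_res / ss_tot
print(r_squared)  # ≈ 0.949
```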
Adjusted R-Squared
Adjusted R-Squared is an evaluation metric used in multiple linear regression models to measure the proportion of the variance in the response variable that is explained by the predictor variables, while adjusting for the number of predictor variables in the model. It's a measure of the model's ability to make accurate predictions while considering the trade-off between the goodness of fit of the model and the complexity of the model.
Adjusted R-Squared is calculated as:
Adjusted R-Squared = 1 - ( (1 - R-Squared) * (n - 1) / (n - k - 1) )
Where n is the number of observations, k is the number of predictor variables, and R-Squared is the coefficient of determination.
Adjusted R-Squared is interpreted like R-Squared, where values close to 1 represent a good fit, but it increases only when a new predictor improves the model more than would be expected by chance. A high Adjusted R-Squared value means that the model explains a large proportion of the variance in the response variable even after accounting for the number of predictors, while a low Adjusted R-Squared value means that the model explains little of the variance or that extra predictors are adding complexity without improving the fit (a sign of overfitting).
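A small sketch of the adjustment, assuming illustrative values for n, k, and R-Squared.

```python
# Adjusted R-Squared = 1 - (1 - R²) * (n - 1) / (n - k - 1)
n, k = 4, 1          # observations and predictor variables (illustrative)
r_squared = 0.949    # e.g. the value from the sketch above
adj_r_squared = 1 - (1 - r_squared) * (n - 1) / (n - k - 1)
print(adj_r_squared)  # ≈ 0.924
```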
Mean Absolute Percentage Error (MAPE)
Mean Absolute Percentage Error (MAPE) is an evaluation metric used in regression models to measure the difference between the predicted values and the true values. It's a measure of the model's ability to make accurate predictions. It expresses the error as a percentage of the true value.
MAPE is calculated as:
MAPE = (1/n) * (sum of |(True Value - Predicted Value)| / True Value) * 100
Where n is the number of observations.
MAPE is the average of the absolute percentage differences between the true values and the predicted values, and it's a simple, easy-to-interpret metric. Note that it's undefined when any true value is zero and can be dominated by observations with very small true values. A low MAPE value means that the model's predictions are close to the true values in relative terms, while a high MAPE value means that the relative errors are large.
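A minimal sketch of MAPE on made-up values (all true values are non-zero).

```python
# MAPE: mean absolute percentage error, expressed as a percentage.
y_true = [100.0, 200.0, 400.0]
y_pred = [110.0, 190.0, 360.0]

mape = sum(abs(t - p) / abs(t) for t, p in zip(y_true, y_pred)) / len(y_true) * 100
print(mape)  # (0.10 + 0.05 + 0.10) / 3 * 100 ≈ 8.33
```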
Mean Average Precision (MAP)
Mean Average Precision (MAP) is an evaluation metric used in information retrieval and machine learning models to measure the average precision of a set of ranked items. It's commonly used in image retrieval and image classification tasks, where the goal is to retrieve or classify the relevant items among a set of items.
MAP is calculated as:
MAP = (1/Q) * (Σ AP(q))
Where Q is the number of queries (or images) and AP(q) is the Average Precision of query q, computed as AP = (1/R) * (Σ(P(k) * rel(k))), where R is the number of relevant items for the query, P(k) is the precision at cut-off k, and rel(k) is 1 if the item at rank k is relevant and 0 otherwise.
MAP is a single number summary of the model's ranking performance. A high MAP value means that the model is returning a high proportion of relevant items among the top-ranked results, while a low MAP value means that relevant items tend to appear lower in the ranking.
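A sketch that computes Average Precision for each ranked list of relevance flags and then averages it across queries; the relevance judgements are invented for illustration.

```python
# Average Precision (AP) for one ranked list, then MAP as the mean over queries.
def average_precision(relevance):
    """relevance: 1/0 flags for items in ranked order (illustrative)."""
    hits, precisions = 0, []
    for k, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / k)  # precision at rank k, counted at relevant ranks
    return sum(precisions) / hits if hits else 0.0

queries = [[1, 0, 1, 0, 0], [0, 1, 1, 0, 1]]  # made-up relevance judgements per query
map_score = sum(average_precision(q) for q in queries) / len(queries)
print(map_score)  # ≈ 0.711
```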
Intersection over Union (IoU)
Intersection over Union (IoU), also known as Jaccard index, is an evaluation metric used in object detection and image segmentation tasks. The goal is to evaluate the degree of overlap between the predicted bounding box or segmentation mask and the ground truth bounding box or segmentation mask.
IoU is calculated as:
IoU = (Intersection of predicted and ground truth) / (Union of predicted and ground truth)
Where the intersection is the area of overlap between the predicted and ground truth bounding boxes or segmentation masks, and the union is the total area covered by the predicted and ground truth bounding boxes or segmentation masks combined, counting the overlapping region only once.
IoU is a value between 0 and 1, where 1 represents a perfect match and 0 represents no overlap. A high IoU value means that the model is accurately predicting the position and size of the object, while a low IoU value means that the model is making inaccurate predictions.
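A minimal sketch of IoU for two axis-aligned bounding boxes given as (x1, y1, x2, y2) corner coordinates; the boxes are made up.

```python
# IoU for two axis-aligned boxes given as (x1, y1, x2, y2).
def iou(box_a, box_b):
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (clamped to zero if the boxes do not intersect).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```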
Receiver Operating Characteristic (ROC) Curve
Receiver Operating Characteristic (ROC) Curve is an evaluation metric used in binary classification tasks to evaluate the performance of a model. The goal is to evaluate the trade-off between True Positive Rate (TPR) and False Positive Rate (FPR) as the threshold of the model is varied.
A ROC Curve is a graph that plots the TPR (or Sensitivity or Recall) against the FPR (1-Specificity) at different threshold settings. The area under the ROC Curve (AUC) is a single number summary of the model's performance. AUC is a value between 0 and 1, where 1 represents a perfect model and 0.5 represents a random model.
A high AUC value means that the model achieves a high true positive rate while keeping the false positive rate low across threshold settings, while a low AUC value (close to 0.5) means that the model separates positives from negatives only slightly better than random guessing.
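A small sketch that sweeps a few thresholds over made-up scores and prints the resulting TPR/FPR pairs, i.e. the points that would be plotted on a ROC curve.

```python
# TPR and FPR at a sweep of thresholds over predicted scores (toy data).
y_true = [0, 0, 1, 1]
scores = [0.1, 0.4, 0.35, 0.8]

for thr in [0.0, 0.3, 0.5, 0.9]:
    y_pred = [1 if s >= thr else 0 for s in scores]
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    tpr = tp / (tp + fn)
    fpr = fp / (fp + tn)
    print(f"threshold={thr:.1f}  TPR={tpr:.2f}  FPR={fpr:.2f}")
```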
Detection Rate (DR)
Detection Rate (DR) is an evaluation metric used in object detection tasks to measure the performance of a model. The goal is to evaluate the proportion of objects that are correctly detected by the model.
DR is calculated as:
DR = (Number of Correctly Detected Objects) / (Total Number of Objects)
DR is a value between 0 and 1, where 1 represents a perfect model and 0 represents a poor model. A high DR value means that the model is detecting a high proportion of objects, while a low DR value means that the model is not detecting many objects.
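A minimal sketch, assuming a detection counts as correct when it matches a ground-truth object (commonly judged by an IoU threshold such as 0.5); the counts below are made up.

```python
# Detection Rate = correctly detected objects / total ground-truth objects.
correctly_detected = 42   # e.g. predictions with IoU >= 0.5 against a ground-truth box
total_objects = 50
detection_rate = correctly_detected / total_objects
print(detection_rate)  # 0.84
```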