## The Confusion Matrix

The confusion matrix is a widely used evaluation tool in machine learning for measuring the performance of a classification model. It provides a detailed overview of the model's predictions against the actual classes of the data. The confusion matrix is particularly useful for classification problems, including those with more than two classes.

The confusion matrix is organized as a table in which each row represents an actual class and each column represents a class predicted by the model. The main diagonal of the matrix contains the correct predictions (true positives and true negatives), while the off-diagonal cells represent misclassifications (false positives and false negatives).
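To make this layout concrete, here is a small sketch with made-up labels (the animal classes are invented for illustration, not taken from the article) showing that the diagonal counts the correct predictions:

```python
from sklearn.metrics import confusion_matrix

# Toy labels, purely illustrative
y_true = ["cat", "cat", "dog", "dog", "bird", "bird"]
y_pred = ["cat", "dog", "dog", "dog", "bird", "cat"]

# Rows are actual classes, columns are predicted classes
cm = confusion_matrix(y_true, y_pred, labels=["bird", "cat", "dog"])
print(cm)

# The diagonal entries count the correct predictions
print("correct predictions:", cm.trace())
```

Each off-diagonal cell tells you not only *that* the model erred, but *which* class it confused with which other class.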

Here’s how the confusion matrix works:

- **True Positives (TP):** number of cases where the model correctly predicted the positive class.
- **True Negatives (TN):** number of cases where the model correctly predicted the negative class.
- **False Positives (FP):** number of cases where the model predicted the positive class when it was actually negative (a false alarm).
- **False Negatives (FN):** number of cases where the model predicted the negative class when it was actually positive (a missed detection).

The confusion matrix can help you understand what kind of errors your model is making and which class is performing better or worse. From these values, you can calculate various evaluation metrics such as accuracy, precision, recall and F1 score.
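For a binary problem, scikit-learn's `confusion_matrix` arranges these four values as `[[TN, FP], [FN, TP]]`, so they can be unpacked directly. A minimal sketch with invented labels:

```python
from sklearn.metrics import confusion_matrix

# Toy binary labels, invented for illustration
y_true = [0, 0, 0, 0, 1, 1, 1, 1]
y_pred = [0, 0, 1, 0, 1, 1, 0, 1]

# For binary labels, confusion_matrix returns [[TN, FP], [FN, TP]]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("TN:", tn, "FP:", fp, "FN:", fn, "TP:", tp)
```

From these four counts you can derive every metric discussed in the next section.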

## The confusion matrix as an analysis tool

The confusion matrix is an important tool for evaluating the performance of a classification model in detail. In addition to calculating baseline values such as true positives, true negatives, false positives, and false negatives, you can use these values to calculate various evaluation metrics that provide a more comprehensive view of model performance. Here are some of the more common metrics calculated by the confusion matrix:

**1. Accuracy:** Accuracy measures the proportion of correct predictions out of the total number of predictions. It is the simplest metric, but it can be misleading when classes are imbalanced.

**2. Precision:** Precision represents the proportion of true positives out of all positive predictions made by the model. It measures how reliable the model is when it predicts the positive class.

**3. Recall (Sensitivity):** Recall represents the proportion of true positives out of all actual positive instances. It measures the model's ability to identify every positive instance.

**4. F1 Score:** The F1 score is the harmonic mean of precision and recall. It is useful when you want to strike a balance between precision and recall.

**5. Specificity:** Specificity represents the proportion of true negatives out of all actual negative instances. It measures how good the model is at identifying negative instances.

**6. ROC Curve (Receiver Operating Characteristic Curve):** The ROC curve is a graph showing the relationship between the true positive rate and the false positive rate as the classification threshold varies. As the threshold varies, ROC points are drawn and connected, and the area under the ROC curve (AUC) can be used as a measure of model effectiveness.
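Metrics 1–5 above follow directly from the four counts in the confusion matrix. Here is a minimal sketch with made-up counts (the numbers are illustrative, not taken from any real model):

```python
# Illustrative counts from a hypothetical binary confusion matrix
tp, tn, fp, fn = 40, 45, 5, 10

accuracy = (tp + tn) / (tp + tn + fp + fn)          # correct / all predictions
precision = tp / (tp + fp)                          # how trustworthy positive predictions are
recall = tp / (tp + fn)                             # how many actual positives were found
f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of precision and recall
specificity = tn / (tn + fp)                        # how many actual negatives were found

print(f"accuracy={accuracy:.3f} precision={precision:.3f} "
      f"recall={recall:.3f} f1={f1:.3f} specificity={specificity:.3f}")
```

Note how accuracy (0.85) hides the asymmetry that recall (0.80) and specificity (0.90) reveal, which is exactly why accuracy alone can mislead on imbalanced classes.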

These metrics can provide deeper insight into model performance than a simple accuracy percentage. It's important to select the metrics that are most relevant to your problem and to the balance between precision and recall that you need.

Remember that these metrics are useful tools for evaluating models, but you should always consider the context of the problem and the nature of your classes before drawing conclusions about model quality.

## A practical example

Here’s an example of how to use the confusion matrix in Python:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import ConfusionMatrixDisplay

# Load the Iris dataset
iris = load_iris()
X = iris.data
y = iris.target

# Divide the dataset into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create and train a classification model (Random Forest)
model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)

# View the confusion matrix
ConfusionMatrixDisplay.from_estimator(model, X_test, y_test, display_labels=iris.target_names)
plt.show()
```

In this example, we use scikit-learn's `ConfusionMatrixDisplay.from_estimator()` (which replaces the removed `plot_confusion_matrix()` function) to display the confusion matrix of a Random Forest classifier trained on the Iris dataset. The confusion matrix gives us information about the model's performance on each class and helps us identify any misclassifications.

## Evaluation of metrics from the Confusion Matrix

Here are examples of how to calculate and visualize some of the evaluation metrics using the confusion matrix in Python:

```python
import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (confusion_matrix, precision_score, recall_score,
                             f1_score, roc_curve, roc_auc_score)

# Load the Iris dataset
iris = load_iris()
X = iris.data
y = iris.target

# Divide the dataset into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Create and train a classification model (Random Forest)
model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)

# Make predictions on the test set
predictions = model.predict(X_test)

# Calculate the confusion matrix
cm = confusion_matrix(y_test, predictions)

# Calculate precision, recall, and F1 score (weighted across the three classes)
precision = precision_score(y_test, predictions, average='weighted')
recall = recall_score(y_test, predictions, average='weighted')
f1 = f1_score(y_test, predictions, average='weighted')

# Calculate the area under the ROC curve (AUC).
# Iris has three classes, so we pass the full probability matrix
# and use a one-vs-rest (OVR) strategy.
y_prob = model.predict_proba(X_test)
roc_auc = roc_auc_score(y_test, y_prob, multi_class='ovr')

# Visualize the confusion matrix as a heatmap
plt.figure(figsize=(8, 6))
plt.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
plt.title("Confusion Matrix")
plt.colorbar()
plt.xticks(np.arange(len(iris.target_names)), iris.target_names, rotation=45)
plt.yticks(np.arange(len(iris.target_names)), iris.target_names)
plt.ylabel("Actual Values")
plt.xlabel("Predictions")
plt.show()

# Visualize a one-vs-rest ROC curve for the first class (setosa)
fpr, tpr, thresholds = roc_curve(y_test == 0, y_prob[:, 0])
plt.figure(figsize=(8, 6))
plt.plot(fpr, tpr, label="ROC curve (OVR AUC = {:.2f})".format(roc_auc))
plt.plot([0, 1], [0, 1], 'k--')
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('ROC Curve')
plt.legend(loc="lower right")
plt.show()

print("Precision:", precision)
print("Recall:", recall)
print("F1 score:", f1)
print("Area under the ROC curve (AUC):", roc_auc)
```

In this example, we calculate the confusion matrix along with the precision, recall, F1 score, and AUC metrics for a Random Forest classifier trained on the Iris dataset, and we visualize the confusion matrix as a heatmap together with a one-vs-rest ROC curve. Because Iris is a three-class problem, the metrics use weighted averaging and the AUC uses a one-vs-rest strategy.