Precision-Recall Evaluation Dashboard Including Confusion Tables and ROC Curves
This web app provides a comprehensive, interactive way to analyze the performance of binary classification models using ROC and precision-recall curves, along with class separation histograms and confusion matrices.
Upload Data: Click the Upload Data File (CSV) button and select a CSV file from your computer. The file should contain two columns: actual class labels and predicted scores.
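For reference, here is a minimal sketch of a compatible file; the header names actual and predicted are illustrative assumptions, not necessarily the exact names the app requires:

```python
# Sketch of a valid input file: one column of 0/1 labels and one column
# of scores in [0, 1]. Column names here are assumptions.
import csv

rows = [
    {"actual": 1, "predicted": 0.91},
    {"actual": 0, "predicted": 0.12},
    {"actual": 1, "predicted": 0.64},
    {"actual": 0, "predicted": 0.47},
]

with open("model_scores.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=["actual", "predicted"])
    writer.writeheader()
    writer.writerows(rows)
```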
Generate Curves: After uploading the data, click the Generate Curves button. This processes the uploaded data and generates the ROC curve, precision-recall curve, class separation histogram, and confusion matrix.
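The computation behind this button can be reproduced offline. Below is a sketch using scikit-learn as a stand-in for the app's own implementation, with small hard-coded arrays in place of the uploaded columns:

```python
# Derive the same curve data from an actual/predicted column pair.
import numpy as np
from sklearn.metrics import roc_curve, precision_recall_curve, confusion_matrix

actual = np.array([1, 0, 1, 0, 1, 1, 0, 0])
scores = np.array([0.9, 0.2, 0.7, 0.4, 0.6, 0.8, 0.3, 0.5])

fpr, tpr, roc_thresholds = roc_curve(actual, scores)            # ROC curve points
precision, recall, pr_thresholds = precision_recall_curve(actual, scores)

# Confusion matrix at the default 0.5 threshold.
predicted_labels = (scores >= 0.5).astype(int)
cm = confusion_matrix(actual, predicted_labels)
print(cm)  # rows: actual class, columns: predicted class
```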
Full Screen: Click the ⛶ Full Screen button to expand the visual output area for easier viewing and interaction with the plots. Alternatively, press the “F” key on your keyboard for the same effect.
Enter Sample Size: Input a desired sample size in the Enter Sample Size field. The default is 1,000.
Generate Random Data: Click Generate Random Data & Curves to create random data of the given sample size and plot the corresponding curves and matrix.
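One plausible way to synthesize such data (the app's exact scheme is an assumption here) is to draw labels uniformly and then draw scores from class-conditional Beta distributions, so the two classes overlap but remain separable:

```python
# Synthetic evaluation data: random 0/1 labels plus scores whose
# distribution depends on the label.
import numpy as np

rng = np.random.default_rng(seed=42)
n = 1_000  # the "Enter Sample Size" value

actual = rng.integers(0, 2, size=n)
scores = np.where(
    actual == 1,
    rng.beta(5, 2, size=n),  # positives skew toward high scores
    rng.beta(2, 5, size=n),  # negatives skew toward low scores
)
```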
Enter Threshold: Input a desired threshold value in the Enter Threshold field. The default threshold is 0.5. This value sets the decision boundary for classifying predictions as positive or negative, which affects the ROC curve, precision-recall curve, class separation histogram, and confusion matrix.
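In code terms, the threshold converts continuous scores into hard 0/1 predictions, which the confusion matrix then counts; a minimal sketch:

```python
# Scores at or above the threshold become positive predictions.
import numpy as np
from sklearn.metrics import confusion_matrix

scores = np.array([0.9, 0.2, 0.7, 0.4, 0.6])
actual = np.array([1, 0, 1, 1, 0])

threshold = 0.5
predicted = (scores >= threshold).astype(int)

tn, fp, fn, tp = confusion_matrix(actual, predicted).ravel()
print(f"TP={tp} FP={fp} FN={fn} TN={tn}")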
Update Threshold: After entering a new threshold, click the Update Threshold button to regenerate the curves and matrix with the new threshold. This lets you explore how different threshold values affect the model’s performance metrics, such as precision, recall, and classification accuracy.
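A quick offline sweep illustrates the same effect the button exposes interactively; the sample arrays below are for illustration only:

```python
# How precision, recall, and accuracy shift as the threshold moves.
import numpy as np
from sklearn.metrics import precision_score, recall_score, accuracy_score

actual = np.array([1, 0, 1, 0, 1, 1, 0, 0])
scores = np.array([0.9, 0.2, 0.7, 0.4, 0.6, 0.8, 0.3, 0.5])

for threshold in (0.3, 0.5, 0.7):
    predicted = (scores >= threshold).astype(int)
    print(
        f"t={threshold}: "
        f"precision={precision_score(actual, predicted):.2f} "
        f"recall={recall_score(actual, predicted):.2f} "
        f"accuracy={accuracy_score(actual, predicted):.2f}"
    )
```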
Reset Threshold: To revert to the default threshold of 0.5, click the Reset Threshold button. This resets the threshold and updates the displayed curves and matrix accordingly, providing a baseline for comparison.
Download Data as CSV: After generating curves from either your own data or random data, download the results by clicking the Download Data as CSV button. The downloaded file contains the actuals and predictions used in the curves.
ROC Curve: This graph plots the True Positive Rate (TPR) against the False Positive Rate (FPR) across thresholds, summarizing how well the binary classifier separates the two classes.
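At any single threshold, TPR and FPR reduce to counts from the confusion matrix; a small worked example:

```python
# TPR and FPR written out from their definitions.
def tpr_fpr(tp, fp, fn, tn):
    tpr = tp / (tp + fn)  # true positive rate (recall / sensitivity)
    fpr = fp / (fp + tn)  # false positive rate (1 - specificity)
    return tpr, fpr

print(tpr_fpr(tp=40, fp=10, fn=10, tn=40))  # (0.8, 0.2)
```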
Precision-Recall Curve: This graph shows the trade-off between precision and recall for different thresholds.
Class Separation Histogram: This plot overlays the score distributions of the positive and negative classes, helping visualize how well the classes are separated.
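A matplotlib sketch of such a plot, using synthetic scores for illustration rather than the app's own rendering:

```python
# Overlaid score histograms for the two classes.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
actual = rng.integers(0, 2, size=1_000)
scores = np.where(actual == 1, rng.beta(5, 2, 1_000), rng.beta(2, 5, 1_000))

plt.hist(scores[actual == 0], bins=30, alpha=0.5, label="negative class")
plt.hist(scores[actual == 1], bins=30, alpha=0.5, label="positive class")
plt.xlabel("predicted score")
plt.ylabel("count")
plt.legend()
plt.show()
```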
Confusion Matrix: Presented as a heatmap, it illustrates the number of true positives, true negatives, false positives, and false negatives.
Interactivity: You can hover over points on the curves to see exact values at different thresholds.
Visual Aids: Annotations on the graphs will indicate key metrics like AUC (Area Under the Curve) for the ROC curve and AP (Average Precision) for the precision-recall curve.
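Both summary metrics can be computed directly with scikit-learn, shown here as an illustration rather than the app's code:

```python
# AUC summarizes the ROC curve; AP summarizes the precision-recall curve.
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

actual = np.array([1, 0, 1, 0, 1, 1, 0, 0])
scores = np.array([0.9, 0.2, 0.7, 0.4, 0.6, 0.8, 0.3, 0.5])

print("AUC:", roc_auc_score(actual, scores))
print("AP: ", average_precision_score(actual, scores))
```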
Customizable Sample Size for Random Data: You can modify the sample size for generating random data to see how different sample sizes affect the curves and matrix.
Error Handling: The app will alert you if you attempt to generate curves without uploading data or entering a valid sample size.