Analyze prediction balance with normalized MCC metrics. Review confusion counts, edge cases, and diagnostic outputs. Export findings quickly for audits, model checks, and reporting.
Raw MCC = ((TP × TN) − (FP × FN)) ÷ √((TP + FP)(TP + FN)(TN + FP)(TN + FN))
Normalized MCC = (MCC + 1) ÷ 2
This normalization maps the raw MCC range of -1 to 1 onto a simpler 0 to 1 scale.
A value near 1 means strong agreement. A value near 0.5 means neutral, chance-level performance. A value near 0 means inverse agreement (systematically wrong predictions).
| Example | TP | FP | TN | FN | Raw MCC | Normalized MCC |
|---|---|---|---|---|---|---|
| High agreement | 94 | 10 | 180 | 16 | 0.8120 | 0.9060 |
| Moderate balance | 55 | 25 | 70 | 30 | 0.3857 | 0.6928 |
| Imbalanced but strong | 18 | 20 | 960 | 2 | 0.6441 | 0.8220 |
| Weak inverse pattern | 10 | 60 | 25 | 55 | -0.5483 | 0.2258 |
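The formula and the table rows above can be reproduced with a short Python sketch (the function names are mine, not part of the calculator):

```python
from math import sqrt

def mcc(tp, fp, tn, fn):
    """Raw Matthews correlation coefficient from confusion matrix counts."""
    num = tp * tn - fp * fn
    den = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den

def normalized_mcc(tp, fp, tn, fn):
    """Rescale raw MCC from [-1, 1] to [0, 1]."""
    return (mcc(tp, fp, tn, fn) + 1) / 2

# "High agreement" row from the table
print(round(mcc(94, 10, 180, 16), 4))             # 0.812
print(round(normalized_mcc(94, 10, 180, 16), 4))  # 0.906
```

The same two functions reproduce the other rows, including the negative raw MCC of the "Weak inverse pattern" row, which normalization maps to 0.2258.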
The normalized Matthews correlation coefficient (MCC) is a strong model evaluation metric for binary classification. It uses all four confusion matrix values: true positives, true negatives, false positives, and false negatives. Many popular metrics ignore this full balance: accuracy can look strong when classes are uneven, precision can hide missed positives, and recall can hide false alarms. Normalized MCC gives a broader statistical view.
This metric is especially useful for imbalanced datasets, such as fraud detection, medical screening, and spam filtering. In these cases class sizes are often uneven, so a model that mostly guesses the majority class can still post high accuracy and mislead analysis. MCC corrects that weakness: it rewards balanced prediction behavior and penalizes one-sided guessing. The normalized version makes interpretation easier by converting the range from -1 to 1 into 0 to 1.
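This failure mode is easy to demonstrate. In the sketch below the counts are hypothetical (1,000 cases, only 50 actual positives) and a majority-leaning model earns 95% accuracy while the normalized MCC stays close to neutral:

```python
from math import sqrt

# Hypothetical imbalanced dataset: 1,000 cases, only 50 actual positives.
# The model leans heavily toward the majority (negative) class.
tp, fp, tn, fn = 5, 5, 945, 45

accuracy = (tp + tn) / (tp + fp + tn + fn)
raw_mcc = (tp * tn - fp * fn) / sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
normalized = (raw_mcc + 1) / 2

print(f"accuracy:       {accuracy:.2f}")    # accuracy:       0.95
print(f"normalized MCC: {normalized:.2f}")  # normalized MCC: 0.60
```

Accuracy alone would rate this model highly; the normalized MCC of about 0.60 reveals only weak predictive correlation.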
A score near 1 shows strong agreement. A score near 0 shows strong disagreement, the result of normalizing a negative raw MCC. A score near 0.5 shows neutral performance. This scale is easier to use on dashboards, in reports, and in stakeholder reviews: you can compare experiments faster, benchmark threshold changes clearly, and monitor model drift with better context. The calculator also displays accuracy, precision, recall, specificity, F1 score, and balanced accuracy. These supporting metrics explain why the normalized MCC moved.
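Those supporting metrics all come from the same four counts. Here is a minimal sketch (the helper name and the no-zero-denominator assumption are mine), checked against the "Moderate balance" row of the table:

```python
def supporting_metrics(tp, fp, tn, fn):
    """Companion rates often shown next to normalized MCC.

    Sketch only: assumes no zero denominators (e.g. tp + fp > 0).
    """
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)           # sensitivity, true positive rate
    specificity = tn / (tn + fp)      # true negative rate
    f1 = 2 * precision * recall / (precision + recall)
    balanced_accuracy = (recall + specificity) / 2
    return {
        "accuracy": accuracy,
        "precision": precision,
        "recall": recall,
        "specificity": specificity,
        "f1": f1,
        "balanced_accuracy": balanced_accuracy,
    }

# "Moderate balance" row from the table: TP=55, FP=25, TN=70, FN=30
m = supporting_metrics(55, 25, 70, 30)
print(round(m["precision"], 4))  # 0.6875
print(round(m["f1"], 4))         # 0.6667
```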
Decision teams also benefit from a normalized scale. It fits scorecards well and simplifies communication across technical and nontechnical groups: data scientists can still inspect the raw MCC, analysts can explain the normalized value quickly, and product teams can compare versions without confusion. That makes this calculator practical for validation, monitoring, experimentation, and ongoing quality control, and it supports cleaner benchmarking across binary model scenarios.
Use this calculator when you need reliable classification analysis. Enter the confusion matrix counts carefully, review the normalized MCC first, then inspect the raw MCC and related rates, and export the results for audits or model documentation. A consistent workflow improves reporting quality and reduces interpretation errors. For serious evaluation, do not rely on accuracy alone; pair normalized MCC with supporting diagnostic metrics for a more trustworthy statistical conclusion.
It measures binary classification quality using all four confusion matrix values. It is more balanced than accuracy, especially when class sizes are uneven.
Normalization converts the raw range from -1 to 1 into 0 to 1. This makes reporting easier and helps nontechnical readers interpret results faster.
For imbalanced datasets, yes. Accuracy can look impressive even when a model ignores the minority class. Normalized MCC is usually more informative in that situation.
A normalized score near 0.5 means the raw MCC is near 0. That usually suggests neutral or weak predictive correlation.
This happens when one side of the confusion matrix has no variability. The calculator can either show undefined values or apply neutral fallback handling.
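The neutral-fallback option can be sketched as follows. Returning 0.5 (treating the raw MCC as 0) when the denominator is zero is one common convention, not necessarily this calculator's default:

```python
from math import sqrt

def normalized_mcc_safe(tp, fp, tn, fn):
    """Normalized MCC with a neutral fallback for zero denominators.

    Assumption: an undefined raw MCC is treated as 0, so the
    normalized value falls back to 0.5. Some tools instead report
    the value as undefined.
    """
    den = (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    if den == 0:
        return 0.5  # no variability on one side -> neutral fallback
    raw = (tp * tn - fp * fn) / sqrt(den)
    return (raw + 1) / 2

# A model that never predicts positive: TP + FP = 0, so the raw MCC is undefined
print(normalized_mcc_safe(0, 0, 950, 50))  # 0.5
```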
Yes. After calculation, the page shows CSV and PDF download buttons. They export the same values shown in the result section.
The calculator requires TP, FP, TN, and FN. Model name, class labels, decimal places, and undefined handling improve reporting but do not change the confusion matrix itself.
Yes. Normalized MCC is strongest when read with precision, recall, specificity, balanced accuracy, and F1 score. Together they explain model behavior more clearly.
Important Note: All the calculators listed on this site are for educational purposes only, and we do not guarantee the accuracy of results. Please consult other sources as well.