From Eigenvector Research Documentation Wiki
Revision as of 15:59, 6 November 2020 by Donal


Purpose

Create a confusion table from a classification model, or from vectors of actual and predicted classes, where entry (i,j) is the number of samples predicted to belong to the i-th class which actually belong to the j-th class.


Synopsis

[confusiontab, classids, texttable] = confusiontable(model); % create confusion table from classifier model
[confusiontab, classids, texttable] = confusiontable(model, usecv); % create confusion table from model using CV results
[confusiontab, classids, texttable] = confusiontable(model, usecv, predrule); % create confusion table from model specifying CV and predrule
[confusiontab, classids, texttable] = confusiontable(trueClass, predClass); % create confusion table from vectors of true and pred classes


Description

Calculate the confusion table for a classification model, or from a list of actual classes and a list of predicted classes. The table has entry (i,j) = number of samples predicted to be the class with index i which are actually the class with index j. The 'most probable' predicted class is used when a model is input, or the 'strict' predicted class if predrule = 'strict' is specified. Input models must be of type PLSDA, SVMDA, KNN, or SIMCA.

The optional second parameter "usecv" specifies use of the cross-validation-based "model.detail.cvmisclassification" instead of the default self-prediction classifications "model.classification".

Input can consist of vectors of true class and predicted class instead of a model.
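As an illustration of how such a table is built from vectors of true and predicted classes, here is a minimal Python sketch (not the PLS_Toolbox implementation; the `confusion_table` helper is hypothetical). Rows index the predicted class, columns the actual class:

```python
import numpy as np

def confusion_table(true_class, pred_class, class_ids):
    """Entry (i, j) counts samples predicted as class_ids[i]
    whose actual class is class_ids[j]."""
    n = len(class_ids)
    index = {c: k for k, c in enumerate(class_ids)}
    table = np.zeros((n, n), dtype=int)
    for t, p in zip(true_class, pred_class):
        table[index[p], index[t]] += 1  # row = predicted, column = actual
    return table

# Two class-1 samples misclassified as class 2:
true_class = [1, 1, 1, 2, 2]
pred_class = [1, 2, 2, 2, 2]
tab = confusion_table(true_class, pred_class, class_ids=[1, 2])
print(tab)
# row 0 (predicted 1): [1, 0]; row 1 (predicted 2): [2, 2]
```

Column sums recover the actual class sizes, so misclassifications show up off the diagonal within each column.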

Note: Prior to version 8.5 the confusion table values were based on the "most probable" prediction classification, and this is what was reported by the "Show Confusion Matrix and Table" icon in the Analysis window. Beginning with version 8.5 the confusiontable function can report either "most probable" or "strict" prediction results, and the "Show Confusion Matrix and Table" icon in Analysis provides both: first the "most probable" classification results, labeled as:

PLSDA Classification Using Rule: Pred Most Probable

followed by the "strict" classification results, labeled as:

PLSDA Classification Using Rule: Pred Strict (using strictthreshold = 0.50)

in the case of PLSDA, for example.

The row "Predicted as Unassigned" contains only zeros for the "most probable" prediction rule, where every sample is predicted to belong to one class, but it can have non-zero entries when the "strict" prediction rule is used. In the latter case it shows how many samples of each class are not predicted to belong to any class by the "strict" rule. This happens when a sample either has no class prediction probability greater than the strictthreshold for any class, or has a class prediction probability greater than the strictthreshold for two or more classes. The unassigned value for a class shown in this table equals the sum of "Class Pred Member - unassigned" and "Class Pred Member - multiple" as shown in the Scores Plot for that class.
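The two ways a sample can end up unassigned under the "strict" rule can be sketched as follows (a Python illustration under the stated 0.50 threshold; `strict_predict` and `STRICT_THRESHOLD` are hypothetical names, not PLS_Toolbox identifiers):

```python
import numpy as np

STRICT_THRESHOLD = 0.50  # the threshold value shown in the Analysis label

def strict_predict(probs):
    """Return the predicted class index, or None (unassigned) when
    zero or more than one class probability exceeds the threshold."""
    above = np.flatnonzero(probs > STRICT_THRESHOLD)
    return int(above[0]) if len(above) == 1 else None

# One sample per outcome: assigned, none above threshold, multiple above.
probs = np.array([
    [0.80, 0.10],   # assigned to class 0
    [0.40, 0.45],   # unassigned: no probability above 0.50
    [0.60, 0.70],   # unassigned: two probabilities above 0.50
])
preds = [strict_predict(p) for p in probs]
print(preds)  # [0, None, None]
```

The "most probable" rule, by contrast, would simply take the argmax of each row, so every sample would receive a class and the "Predicted as Unassigned" row would stay zero.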


Inputs

  • model = previously generated classifier model or pred structure.
  • usecv = 0 or 1; 0 indicates the confusion matrix should be based on self-prediction results, 1 indicates it is based on cross-validation results (assuming they are available in the model).
  • trueClass = vector of numeric values indicating the true sample classes.
  • predClass = vector of numeric values indicating the predicted sample classes.
  • predrule = the classification rule used. 'mostprobable' makes predictions by choosing the class with the highest probability. 'strict' predicts that a sample belongs to a class only if its probability is greater than a specified threshold for one and only one class.


Outputs

  • confusiontab = confusion table, an nclasses x nclasses array, where cell (i,j) shows the number of samples predicted to be class i which actually were class j.
  • classids = class names (identifiers).
  • texttable = cell array containing a text representation of the confusion table. The i-th element, texttable{i}, is the i-th line of the text table. This text representation is displayed if the function is called with no output assignment.
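The line-per-element structure of texttable can be illustrated with a Python sketch that renders a table as a list of strings (the `text_table` helper and its column widths are hypothetical, not the PLS_Toolbox formatting):

```python
def text_table(table, class_ids):
    """Return one string per line, mirroring the texttable cell array:
    a header of actual-class labels, then one 'Predicted as X' row each."""
    width = 8
    lines = ["Confusion Table:"]
    lines.append(" " * 24 + "".join(f"{c:>{width}}" for c in class_ids))
    for i, c in enumerate(class_ids):
        row = "".join(f"{v:>{width}}" for v in table[i])
        lines.append(f"{'Predicted as ' + c:<24}{row}")
    return lines

lines = text_table([[10, 2], [0, 7]], ["K", "BL"])
print("\n".join(lines))
```

Joining the elements with newlines reproduces the printed table, which is why displaying texttable{i} line by line gives the same output as calling the function with no output assignment.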


Examples

Calling confusiontable with no output variables assigned, 'confusiontable(model)', displays the output:

>> confusiontable(model)
Confusion Table:                                           
                                Actual Class              
                               K       BL       SH       AN
Predicted as K                10        2        0        0
Predicted as BL                0        7        0        0
Predicted as SH                0        0       23        0
Predicted as AN                0        0        0       21
Predicted as Unassigned        0        0        0        0

See Also

confusionmatrix, plsda, svmda, knn, simca, Sample_Classification_Predictions