m = margin(Mdl,X,Y)
returns the classification margins for the classifier Mdl using the
predictor data in matrix X and the class labels
Y.

m = margin(___,Name,Value)
specifies options using one or more name-value pair arguments in addition to any of the
input argument combinations in previous syntaxes. For example, you can specify a decoding
scheme, binary learner loss function, and verbosity level.
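A minimal sketch of both syntaxes follows, using Fisher's iris data; the model and variable names are illustrative, and fitcecoc uses its default SVM binary learners.

load fisheriris
X = meas;        % 150-by-4 numeric predictor matrix
Y = species;     % cell array of class labels

Mdl = fitcecoc(X,Y);      % ECOC model with default SVM binary learners

m = margin(Mdl,X,Y);      % one margin per observation

% Name-value options, e.g., decoding scheme and binary learner loss
m2 = margin(Mdl,X,Y,'Decoding','lossweighted','BinaryLoss','hinge');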

fullPMdl and partPMdl are ClassificationPartitionedECOC models. Each model has the property Trained, a 1-by-1 cell array containing the CompactClassificationECOC model that the software trained using the corresponding training set.

Calculate the test-sample margins for each classifier. For each model, display the distribution of the margins using a boxplot.
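A sketch of these steps follows; it assumes fullPMdl and partPMdl came from crossval with a holdout partition, and that X, Y, and the predictor-subset column index idx match the (unshown) training setup.

CMdlFull = fullPMdl.Trained{1};        % compact model, all predictors
CMdlPart = partPMdl.Trained{1};        % compact model, predictor subset

testIdx = test(fullPMdl.Partition);    % logical index of test-set rows

mFull = margin(CMdlFull, X(testIdx,:),   Y(testIdx));
mPart = margin(CMdlPart, X(testIdx,idx), Y(testIdx));

boxplot([mFull mPart],'Labels',{'All Predictors','Predictor Subset'})
title('Test-Sample Margins')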

tbl — Sample data
table

Sample data, specified as a table. Each row of tbl corresponds to one
observation, and each column corresponds to one predictor variable. Optionally,
tbl can contain additional columns for the response variable
and observation weights. tbl must contain all the predictors used
to train Mdl. Multicolumn variables and cell arrays other than cell
arrays of character vectors are not allowed.

If you train Mdl using sample data contained in a
table, then the input data for margin
must also be in a table.

If you set 'Standardize',true for a template object specified in the
'Learners' name-value pair argument of fitcecoc when training
Mdl, then for the corresponding binary learner j, the software
standardizes the columns of the new predictor data using the corresponding means in
Mdl.BinaryLearners{j}.Mu and standard deviations in
Mdl.BinaryLearners{j}.Sigma.

Data Types: table

ResponseVarName — Response variable name
name of variable in tbl

Response variable name, specified as the name of a variable in tbl. If
tbl contains the response variable used to train
Mdl, then you do not need to specify
ResponseVarName.

If you specify ResponseVarName, then you must do so as a character vector
or string scalar. For example, if the response variable is stored as
tbl.y, then specify ResponseVarName as
'y'. Otherwise, the software treats all columns of
tbl, including tbl.y, as predictors.

The response variable must be a categorical, character, or string array, a logical or numeric
vector, or a cell array of character vectors. If the response variable is a character
array, then each element must correspond to one row of the array.

Data Types: char | string

X — Predictor data
numeric matrix

Predictor data, specified as a numeric matrix.

Each row of X corresponds to one observation, and each column corresponds
to one variable. The variables in the columns of X must be the same as
the variables used to train the classifier Mdl.

If you set 'Standardize',true for a template object specified in the
'Learners' name-value pair argument of fitcecoc when training
Mdl, then for the corresponding binary learner j, the software
standardizes the columns of the new predictor data using the corresponding means in
Mdl.BinaryLearners{j}.Mu and standard deviations in
Mdl.BinaryLearners{j}.Sigma.
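
For example, a minimal sketch of such a model (the SVM template is an arbitrary choice):

t = templateSVM('Standardize',true);   % standardize within each binary learner
Mdl = fitcecoc(X,Y,'Learners',t);

mu    = Mdl.BinaryLearners{1}.Mu;      % means used by binary learner 1
sigma = Mdl.BinaryLearners{1}.Sigma;   % standard deviations used by learner 1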

Y — Class labels
categorical array | character array | string array | logical vector | numeric vector | cell array of character vectors

Class labels, specified as a categorical, character, or string array, a logical or numeric
vector, or a cell array of character vectors. Y must have the same
data type as Mdl.ClassNames. (The software treats string arrays as cell arrays of character
vectors.)

Name-Value Pair Arguments

Specify optional
comma-separated pairs of Name,Value arguments. Name is
the argument name and Value is the corresponding value.
Name must appear inside quotes. You can specify several name and value
pair arguments in any order as
Name1,Value1,...,NameN,ValueN.

BinaryLoss — Binary learner loss function
built-in loss function name | function handle

Binary learner loss function, specified as the comma-separated pair consisting of
'BinaryLoss' and a built-in loss function name or function handle.

This table describes the built-in functions, where
yj is a class label for a
particular binary learner (in the set {–1,1,0}),
sj is the score for
observation j, and
g(yj,sj)
is the binary loss formula.

Value            Description         Score Domain       g(yj,sj)
'binodeviance'   Binomial deviance   (–∞,∞)             log[1 + exp(–2yjsj)]/[2log(2)]
'exponential'    Exponential         (–∞,∞)             exp(–yjsj)/2
'hamming'        Hamming             [0,1] or (–∞,∞)    [1 – sign(yjsj)]/2
'hinge'          Hinge               (–∞,∞)             max(0,1 – yjsj)/2
'linear'         Linear              (–∞,∞)             (1 – yjsj)/2
'logit'          Logistic            (–∞,∞)             log[1 + exp(–yjsj)]/[2log(2)]
'quadratic'      Quadratic           [0,1]              [1 – yj(2sj – 1)]²/2

The software normalizes binary losses so that the loss is 0.5
when yj = 0. Also, the software
calculates the mean binary loss for each class.

If you specify a custom binary loss function, it must have the form
bLoss = customFunction(M,s), where M is the
K-by-L coding matrix stored in Mdl.CodingMatrix and
s is the 1-by-L row vector of classification scores.
bLoss is the classification loss. This
scalar aggregates the binary losses for every learner in a
particular class. For example, you can use the mean binary loss
to aggregate the loss over the learners for each class.
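
For example, a sketch of a custom handle that uses the mean linear binary loss; the name customBL is arbitrary, and Mdl, X, and Y are assumed to exist:

% M is the K-by-L coding matrix (rows are classes, columns are learners);
% s is the 1-by-L score row vector for one observation.
customBL = @(M,s) mean(1 - M.*s, 2)/2;   % mean linear loss per class

m = margin(Mdl,X,Y,'BinaryLoss',customBL);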

Output Arguments

m — Classification margins
numeric column vector | numeric matrix

Classification margins, returned as a numeric column vector or numeric matrix.

If Mdl.BinaryLearners contains ClassificationLinear models, then m is an
n-by-L vector, where n is the
number of observations in X and L is the number
of regularization strengths in the linear classification models
(numel(Mdl.BinaryLearners{1}.Lambda)). The value
m(i,j) is the margin of observation i for the
model trained using regularization strength
Mdl.BinaryLearners{1}.Lambda(j).
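
A sketch of that case, assuming linear binary learners trained over three regularization strengths:

t = templateLinear('Lambda',[1e-3 1e-2 1e-1]);
Mdl = fitcecoc(X,Y,'Learners',t);

m = margin(Mdl,X,Y);
size(m)    % n-by-3: one column per element of Lambda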

In loss-based decoding [Escalera et al.], the class producing the minimum sum of the binary losses over
binary learners determines the predicted class of an observation. That is,

$$\hat{k} = \operatorname*{argmin}_{k} \sum_{j=1}^{L} |m_{kj}|\, g(m_{kj}, s_j),$$

where $m_{kj}$ is element $(k,j)$ of the coding matrix.

In loss-weighted decoding [Escalera et al.], the class producing the minimum average of the binary losses
over binary learners determines the predicted class of an observation. That is,

$$\hat{k} = \operatorname*{argmin}_{k} \frac{\sum_{j=1}^{L} |m_{kj}|\, g(m_{kj}, s_j)}{\sum_{j=1}^{L} |m_{kj}|}.$$

Allwein et al. suggest that loss-weighted decoding improves classification
accuracy by keeping loss values for all classes in the same dynamic range.
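
For example, you can compare the two schemes through the 'Decoding' name-value pair argument (a sketch, assuming a trained model Mdl):

mBased    = margin(Mdl,X,Y,'Decoding','lossbased');
mWeighted = margin(Mdl,X,Y,'Decoding','lossweighted');   % the default scheme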

This table summarizes the supported loss functions, where
yj is a class label for a particular binary
learner (in the set {–1,1,0}), sj is the score for
observation j, and
g(yj,sj) is the binary loss formula.

Value            Description         Score Domain       g(yj,sj)
'binodeviance'   Binomial deviance   (–∞,∞)             log[1 + exp(–2yjsj)]/[2log(2)]
'exponential'    Exponential         (–∞,∞)             exp(–yjsj)/2
'hamming'        Hamming             [0,1] or (–∞,∞)    [1 – sign(yjsj)]/2
'hinge'          Hinge               (–∞,∞)             max(0,1 – yjsj)/2
'linear'         Linear              (–∞,∞)             (1 – yjsj)/2
'logit'          Logistic            (–∞,∞)             log[1 + exp(–yjsj)]/[2log(2)]
'quadratic'      Quadratic           [0,1]              [1 – yj(2sj – 1)]²/2

The software normalizes binary losses such that the loss is 0.5 when
yj = 0, and aggregates using the average
of the binary learners [Allwein et al.].

Do not confuse the binary loss with the overall classification loss (specified by the
'LossFun' name-value pair argument of the loss and
predict object functions), which measures how well an ECOC classifier
performs as a whole.

Classification Margin

The classification margin is, for each observation,
the difference between the negative loss for the true class and the maximal negative loss
among the false classes. If the margins are on the same scale, then they serve as a
classification confidence measure. Among multiple classifiers, those that yield greater
margins are better.
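
You can check this definition against the negated loss matrix returned by predict; the following sketch assumes a trained model Mdl and that Y and Mdl.ClassNames are cell arrays of character vectors.

[~,negLoss] = predict(Mdl,X);            % n-by-K negated average binary losses

[~,trueIdx] = ismember(Y,Mdl.ClassNames);
mCheck = zeros(numel(trueIdx),1);
for i = 1:numel(trueIdx)
    nl = negLoss(i,:);
    t  = nl(trueIdx(i));                 % negated loss for the true class
    nl(trueIdx(i)) = -Inf;               % exclude the true class from the max
    mCheck(i) = t - max(nl);             % margin by definition
end
% mCheck should agree with margin(Mdl,X,Y) up to rounding.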

Tips

To compare the margins or edges of several ECOC classifiers, use template objects to
specify a common score transform function among the classifiers during training.
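
One way to achieve this is to train every classifier with an identical template, as in this sketch; X, Y, and the test set XTest, YTest are assumed.

t = templateSVM('Standardize',true);     % one template shared by both models

Mdl1 = fitcecoc(X,Y,'Learners',t,'Coding','onevsone');
Mdl2 = fitcecoc(X,Y,'Learners',t,'Coding','onevsall');

mean(margin(Mdl1,XTest,YTest))           % larger mean margin suggests the
mean(margin(Mdl2,XTest,YTest))           % better classifier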