Model Building and Assessment

When building a high-quality regression model, it is important
to select the right features (or predictors), tune hyperparameters
(model parameters not fit to the data), and assess model assumptions
through residual diagnostics.

You can tune hyperparameters by iterating between choosing values
for them and cross-validating a model with those choices. This process
yields multiple models, and the best model among them can be the one
that minimizes the estimated generalization error. For example, to
tune an SVM model, choose a set of box constraints and kernel scales,
cross-validate a model for each pair of values, and then compare their
10-fold cross-validated mean squared error estimates.
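The tuning loop above can be sketched in Python with scikit-learn, where the SVM box constraint corresponds to C and the kernel scale to gamma (the data and candidate values here are illustrative assumptions, not toolbox code):

```python
# Hypothetical sketch: grid of hyperparameter pairs, each evaluated by
# 10-fold cross-validation, keeping the pair with the lowest estimated
# generalization error.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(200)

# Candidate values: every (C, gamma) pair is cross-validated.
param_grid = {"C": [0.1, 1, 10], "gamma": [0.1, 1, 10]}

# Mean squared error is negated by scikit-learn convention,
# so that a larger score is better.
search = GridSearchCV(SVR(kernel="rbf"), param_grid, cv=10,
                      scoring="neg_mean_squared_error")
search.fit(X, y)

# The best model minimizes the estimated generalization error.
print(search.best_params_)
print(-search.best_score_)  # 10-fold CV mean squared error estimate
```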

Certain nonparametric regression functions in Statistics and Machine Learning Toolbox™ additionally
offer automatic hyperparameter tuning through Bayesian optimization,
grid search, or random search. However, bayesopt, the main function
for Bayesian optimization, is flexible enough for many other
applications. For more details, see Bayesian Optimization Workflow.
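Of the automatic strategies mentioned, random search is the simplest to illustrate. The following is a minimal Python/scikit-learn analogue of that idea, not toolbox code; the sampling ranges are assumptions:

```python
# Hypothetical random-search sketch: sample hyperparameters at random,
# cross-validate each candidate, and keep the best one.
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(150, 1))
y = np.sin(X).ravel() + 0.1 * rng.standard_normal(150)

best_mse, best_params = np.inf, None
for _ in range(20):
    # Sample C and gamma log-uniformly over several orders of magnitude.
    C = 10 ** rng.uniform(-2, 2)
    gamma = 10 ** rng.uniform(-2, 2)
    mse = -cross_val_score(SVR(kernel="rbf", C=C, gamma=gamma),
                           X, y, cv=5,
                           scoring="neg_mean_squared_error").mean()
    if mse < best_mse:
        best_mse, best_params = mse, {"C": C, "gamma": gamma}

print(best_params, best_mse)
```

Bayesian optimization follows the same evaluate-and-compare loop but proposes each new candidate from a probabilistic model of the objective rather than at random.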

In linear regression, the F-statistic
is the test statistic for the analysis of variance (ANOVA) approach
to testing the significance of the model or of individual components
in the model. The t-statistic is useful for making inferences
about the regression coefficients.
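These two statistics can be computed directly for a simple linear model, as in this NumPy sketch (the simulated data are an assumption for illustration):

```python
# Worked sketch: F- and t-statistics for y = b0 + b1*x fit by least
# squares, computed from first principles with NumPy.
import numpy as np

rng = np.random.default_rng(2)
n = 50
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.standard_normal(n)

# Least-squares fit with an intercept column.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

p = 1                    # number of non-intercept predictors
dfe = n - p - 1          # residual degrees of freedom
sse = resid @ resid      # error sum of squares
ssr = ((X @ beta - y.mean()) ** 2).sum()  # regression sum of squares
mse = sse / dfe

# ANOVA F-statistic: does the model explain significant variation?
F = (ssr / p) / mse

# t-statistics: beta_j divided by its standard error.
cov = mse * np.linalg.inv(X.T @ X)
t = beta / np.sqrt(np.diag(cov))

print(F, t)
```

With a single predictor, the F-statistic equals the square of the slope's t-statistic, which is a useful sanity check on the computation.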