final def ==(arg0: Any): Boolean

final def asInstanceOf[T0]: T0

If false, the algorithm will pass trees to executors to match instances with nodes.
If true, the algorithm will cache node IDs for each instance.
Caching can speed up training of deeper trees. Users can set how often the cache
is checkpointed, or disable checkpointing, by setting checkpointInterval.
(default = false)

Param for the checkpoint interval (>= 1), or -1 to disable checkpointing. E.g., 10 means that the cache will be checkpointed every 10 iterations. Note: this setting is ignored if the checkpoint directory is not set in the SparkContext.
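The checkpointing schedule described above can be sketched in plain Scala. The helper name `shouldCheckpoint` is hypothetical (not part of the Spark API); it only illustrates the documented semantics: an interval of -1 disables checkpointing, and an interval of k checkpoints every k iterations.

```scala
// Hypothetical helper illustrating checkpointInterval semantics.
// checkpointInterval >= 1: checkpoint every `checkpointInterval` iterations.
// checkpointInterval == -1: checkpointing disabled.
def shouldCheckpoint(iteration: Int, checkpointInterval: Int): Boolean =
  checkpointInterval >= 1 && iteration % checkpointInterval == 0
```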

Copies param values from this instance to another instance for params shared by them.

Copies param values from this instance to another instance for params shared by them.

This handles default Params and explicitly set Params separately.
Default Params are copied from and to defaultParamMap, and explicitly set Params are
copied from and to paramMap.
Warning: This implicitly assumes that this Params instance and the target instance
share the same set of default Params.

to

the target instance, which should work with the same set of default Params as this
source instance

Extracts the embedded default param values and user-supplied values, and then merges them with
extra values from input into a flat param map, where the latter value is used if there exist
conflicts, i.e., with ordering:
default param values < user-supplied values < extra.

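The merge precedence above can be sketched with plain Maps standing in for Spark's ParamMap (the function and parameter names here are hypothetical). In Scala, `++` lets later maps win on key conflicts, which gives exactly the ordering default < user-supplied < extra:

```scala
// Sketch of the flat-map merge: later maps override earlier ones on conflict.
def mergeParamMaps(
    defaults: Map[String, Any],
    userSupplied: Map[String, Any],
    extra: Map[String, Any]): Map[String, Any] =
  defaults ++ userSupplied ++ extra
```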
Each feature's importance is the average of its importance across all trees in the ensemble.
The importance vector is normalized to sum to 1. This method is suggested by Hastie et al.
(Hastie, Tibshirani, Friedman. "The Elements of Statistical Learning, 2nd Edition." 2001.)
and follows the implementation from scikit-learn.

def logWarning(msg: ⇒ String, throwable: Throwable): Unit

def logWarning(msg: ⇒ String): Unit

Maximum number of bins used for discretizing continuous features and for choosing how to split
on features at each node. More bins give higher granularity.
Must be >= 2 and >= number of categories in any categorical feature.
(default = 32)

Minimum number of instances each child must have after split.
If a split causes the left or right child to have fewer than minInstancesPerNode,
the split will be discarded as invalid.
Should be >= 1.
(default = 1)

Raw prediction for each possible label.
The meaning of a "raw" prediction may vary between algorithms, but it intuitively gives
a measure of confidence in each possible label (where larger = more confident).
This internal method is used to implement transform() and output rawPredictionCol.

returns

vector where element i is the raw prediction for label i.
This raw prediction may be any real number, where a larger value indicates greater
confidence for that label.
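As a sketch of how such a vector yields a label when no thresholds are set, the predicted label is simply the index with the largest raw score (the helper name is hypothetical; Spark's internal method differs):

```scala
// Predicted label = index of the largest raw-prediction score.
def raw2prediction(raw: Array[Double]): Int =
  raw.indices.maxBy(i => raw(i))
```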

Param for the column name for predicted class conditional probabilities. Note: not all models output well-calibrated probability estimates! These probabilities should be treated as confidences, not precise probabilities.

final def synchronized[T0](arg0: ⇒ T0): T0

Param for thresholds in multi-class classification used to adjust the probability of predicting each class. The array must have length equal to the number of classes, with values > 0, except that at most one value may be 0. The class with the largest value p/t is predicted, where p is the original probability of that class and t is the class's threshold.
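The p/t rule above can be sketched in plain Scala (the function name is hypothetical). A class whose threshold is 0 has an infinite p/t ratio and therefore always wins, which is why at most one zero threshold is allowed:

```scala
// Predict the class maximizing p/t; a zero threshold makes that class win outright.
def predictWithThresholds(probs: Array[Double], thresholds: Array[Double]): Int = {
  require(probs.length == thresholds.length, "one threshold per class")
  val scaled = probs.zip(thresholds).map {
    case (_, 0.0) => Double.PositiveInfinity
    case (p, t)   => p / t
  }
  scaled.indices.maxBy(i => scaled(i))
}
```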

Check transform validity and derive the output schema from the input schema.

We check validity for interactions between parameters during transformSchema and
raise an exception if any parameter value is invalid. Parameter value checks which
do not depend on other parameters are handled by Param.validate().