Member Typedef Documentation

Command observer that interacts with the ITK-VNL cost-function adaptor in order to generate iteration events. This makes it possible to overcome the limitation of VNL optimizers, which do not offer callbacks for every iteration.

Scale type. This array defines the scales to be applied to the parameters before they are evaluated in the cost function. This makes it possible to map the parameters into a more convenient space; in particular, it is used to normalize parameter spaces in which some parameters have a different dynamic range.
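The scaling idea can be sketched as follows. This is a minimal illustration, not the actual ITK-VNL adaptor code; the helper name `ApplyScales` is hypothetical, and it assumes a simple element-wise multiplication of each parameter by its scale factor:

```cpp
#include <cstddef>
#include <vector>

// Illustrative sketch (hypothetical helper, not the ITK API): map the
// optimizer's parameters into the space seen by the cost function by
// multiplying each parameter by its corresponding scale factor, so that
// parameters with very different dynamic ranges become comparable.
std::vector<double> ApplyScales(const std::vector<double>& parameters,
                                const std::vector<double>& scales)
{
    std::vector<double> scaled(parameters.size());
    for (std::size_t i = 0; i < parameters.size(); ++i)
    {
        scaled[i] = parameters[i] * scales[i];
    }
    return scaled;
}
```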

Member Function Documentation

Allow people to add/remove/invoke observers (callbacks) on any ITK object. This is an implementation of the subject/observer design pattern. An observer is added by specifying an event to respond to and an itk::Command to execute. It returns an unsigned long tag which can be used later to remove the event or retrieve the command. The memory for the Command becomes the responsibility of this object, so do not pass the same instance of a command to two different objects.
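The tag-based subject/observer mechanism can be sketched in plain C++. This is a self-contained illustration of the pattern only, with simplified names; it is not the ITK implementation (which uses itk::Command objects and event classes rather than strings and lambdas):

```cpp
#include <functional>
#include <map>
#include <string>
#include <utility>

// Minimal subject/observer sketch (illustrative, not the ITK implementation):
// observers register a callback for a named event and receive a numeric tag
// that can later be used to remove them, mirroring AddObserver/RemoveObserver.
class Subject
{
public:
    unsigned long AddObserver(const std::string& event, std::function<void()> command)
    {
        const unsigned long tag = m_NextTag++;
        m_Observers[tag] = {event, std::move(command)};
        return tag;
    }

    void RemoveObserver(unsigned long tag)
    {
        m_Observers.erase(tag);
    }

    void InvokeEvent(const std::string& event)
    {
        for (auto& entry : m_Observers)
        {
            if (entry.second.first == event)
            {
                entry.second.second();
            }
        }
    }

private:
    unsigned long m_NextTag = 0;
    std::map<unsigned long, std::pair<std::string, std::function<void()>>> m_Observers;
};
```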

static void itk::LightObject::BreakOnError() [static, inherited]

This method is called when itkExceptionMacro executes. It allows the debugger to break on error.

Create an object from an instance, potentially deferring to a factory. This method allows you to create an instance of an object that is exactly the same type as the referring object. This is useful in cases where an object has been cast back to a base class.

Return cached values. These methods have the advantage of not triggering a recomputation of the metric value, but they have the disadvantage of returning a value that may not correspond to the current parameters. For GUI update purposes this method is a good option; for mathematical validation you should call GetValue() instead.
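The trade-off between the cached and recomputed accessors can be sketched as follows. The class and member names here are hypothetical stand-ins for illustration; the real metric computation is of course far more expensive than this toy function:

```cpp
// Illustrative sketch (hypothetical names): GetValue() recomputes the metric
// and refreshes the cache, while GetCachedValue() cheaply returns the last
// computed value, which may be stale relative to the current parameters.
class CachedMetric
{
public:
    double GetValue(double parameter)
    {
        // Stand-in for an expensive metric evaluation.
        m_CachedValue = (parameter - 2.0) * (parameter - 2.0);
        return m_CachedValue;
    }

    double GetCachedValue() const
    {
        return m_CachedValue;  // cheap, but possibly out of date
    }

private:
    double m_CachedValue = 0.0;
};
```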

Get the command associated with the given tag. NOTE: This returns a pointer to a Command, but it is safe to assign this to a Command::Pointer. Since Command inherits from LightObject, at this point in the code only a pointer or a reference to the Command can be used.

Methods to define whether the cost function will be maximized or minimized. By default the VNL amoeba optimizer is only a minimizer. Maximization is implemented here by notifying the CostFunctionAdaptor, which in turn multiplies the function value and its derivative by -1.0.
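The negation trick described above can be sketched in a few lines. This is an illustration of the general technique, not the adaptor's actual code, and the helper name `AdaptForMinimizer` is hypothetical:

```cpp
#include <functional>

// Illustrative sketch (hypothetical helper): a pure minimizer can maximize
// f by minimizing -f. When maximization is requested, wrap the function so
// its value (and, in the real adaptor, its derivative) is multiplied by -1.
std::function<double(double)> AdaptForMinimizer(std::function<double(double)> f,
                                                bool maximize)
{
    if (!maximize)
    {
        return f;
    }
    return [f](double x) { return -f(x); };
}
```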

virtual void itk::SingleValuedNonLinearVnlOptimizer::MaximizeOn() [virtual, inherited]

Methods to define whether the cost function will be maximized or minimized. By default the VNL amoeba optimizer is only a minimizer. Maximization is implemented here by notifying the CostFunctionAdaptor, which in turn multiplies the function value and its derivative by -1.0.

void itk::SingleValuedNonLinearVnlOptimizer::MinimizeOff(void) [inline, inherited]

Methods to define whether the cost function will be maximized or minimized. By default the VNL amoeba optimizer is only a minimizer. Maximization is implemented here by notifying the CostFunctionAdaptor, which in turn multiplies the function value and its derivative by -1.0.

Set/Get the gradient convergence tolerance. This is a positive real number that determines the accuracy with which the solution is to be found. The optimization terminates when ||G|| < gtol * max(1, ||X||), where ||.|| denotes the Euclidean norm.
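The stopping rule can be written out directly. This is a minimal sketch of the criterion as stated above; the helper names `EuclideanNorm` and `HasConverged` are illustrative, not part of the ITK API:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Euclidean norm ||v|| = sqrt(sum of squares).
double EuclideanNorm(const std::vector<double>& v)
{
    double sumOfSquares = 0.0;
    for (double x : v)
    {
        sumOfSquares += x * x;
    }
    return std::sqrt(sumOfSquares);
}

// Illustrative check of the stopping rule ||G|| < gtol * max(1, ||X||),
// where G is the gradient and X the current parameter vector.
bool HasConverged(const std::vector<double>& gradient,
                  const std::vector<double>& parameters,
                  double gtol)
{
    return EuclideanNorm(gradient) <
           gtol * std::max(1.0, EuclideanNorm(parameters));
}
```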

Set/Get the line search accuracy. This is a positive real number with a default value of 0.9, which controls the accuracy of the line search. If the function and gradient evaluations are inexpensive with respect to the cost of the iterations, it may be advantageous to set the value to a small value (say 0.1).

virtual void itk::SingleValuedNonLinearVnlOptimizer::SetMaximize(bool _arg) [virtual, inherited]

Methods to define whether the cost function will be maximized or minimized. By default the VNL amoeba optimizer is only a minimizer. Maximization is implemented here by notifying the CostFunctionAdaptor, which in turn multiplies the function value and its derivative by -1.0.
