Non-linear least-square method.
#include <ql/math/optimization/leastsquare.hpp>
Public Member Functions

NonLinearLeastSquare (Constraint &c, Real accuracy=1e-4, Size maxiter=100)
    Constructor.
NonLinearLeastSquare (Constraint &c, Real accuracy, Size maxiter, const ext::shared_ptr< OptimizationMethod > &om)
    Constructor taking a user-supplied optimization method.
~NonLinearLeastSquare ()
    Destructor.
Array & perform (LeastSquareProblem &lsProblem)
    Solve the least-square problem using the given optimization method.
void setInitialValue (const Array &initialValue)
    Set the initial guess for the optimization.
Array & results ()
    Return the results.
Real residualNorm () const
    Return the least-square residual norm.
Real lastValue () const
    Return the last function value.
Integer exitFlag () const
    Return the exit flag.
Integer iterationsNumber () const
    Return the number of iterations performed.
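A minimal usage sketch follows. It is not taken from the QuantLib sources: the LinearFitProblem class, the sample data, and the tolerances are illustrative assumptions; only the NonLinearLeastSquare and LeastSquareProblem interfaces come from this header. It fits a straight line \( \phi(x,t) = x_0 + x_1 t \) to target data \( b \):

    #include <ql/math/optimization/leastsquare.hpp>
    #include <ql/math/optimization/constraint.hpp>
    #include <iostream>
    #include <utility>

    using namespace QuantLib;

    // Hypothetical example problem: phi(x,t) = x[0] + x[1]*t
    class LinearFitProblem : public LeastSquareProblem {
      public:
        LinearFitProblem(Array t, Array b) : t_(std::move(t)), b_(std::move(b)) {}
        // size of the target vector
        Size size() override { return b_.size(); }
        // fill the target vector b and the model values phi(x, t_i)
        void targetAndValue(const Array& x, Array& target, Array& fct2fit) override {
            target = b_;
            for (Size i = 0; i < b_.size(); ++i)
                fct2fit[i] = x[0] + x[1] * t_[i];
        }
        // additionally fill the Jacobian of the model values w.r.t. x
        void targetValueAndGradient(const Array& x, Matrix& grad,
                                    Array& target, Array& fct2fit) override {
            targetAndValue(x, target, fct2fit);
            for (Size i = 0; i < b_.size(); ++i) {
                grad[i][0] = 1.0;    // d phi / d x[0]
                grad[i][1] = t_[i];  // d phi / d x[1]
            }
        }
      private:
        Array t_, b_;
    };

    int main() {
        Array t(4), b(4);
        t[0] = 0.0; t[1] = 1.0; t[2] = 2.0; t[3] = 3.0;
        b[0] = 1.1; b[1] = 2.9; b[2] = 5.2; b[3] = 6.8;  // roughly 1 + 2t
        LinearFitProblem problem(t, b);

        NoConstraint c;                              // unconstrained fit
        NonLinearLeastSquare nlls(c, 1e-8, 1000);    // accuracy, max iterations
        nlls.setInitialValue(Array(2, 0.0));         // starting guess, set before perform()
        Array x = nlls.perform(problem);

        std::cout << "intercept: " << x[0] << ", slope: " << x[1]
                  << ", residual norm: " << nlls.residualNorm() << std::endl;
        return 0;
    }

Note that the Jacobian passed back from targetValueAndGradient holds the derivatives of the fitted model values, one row per data point and one column per parameter.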
Non-linear least-square method.
Using a given optimization algorithm (the default is conjugate gradient), this class solves
\[ \min \{\, r(x) : x \in \mathbb{R}^n \,\} \]
where \( r(x) = \|f(x)\|^2 \) is the squared Euclidean norm of \( f(x) \) for some vector-valued function \( f \) from \( \mathbb{R}^n \) to \( \mathbb{R}^m \),
\[ f = (f_1, \ldots, f_m), \]
with \( f_i(x) = b_i - \phi(x,t_i) \), where \( b \) is the vector of target data and \( \phi \) is a scalar model function.
Assuming \( f \) is differentiable, the gradient of \( r \) is
\[ \nabla r(x) = 2\, f'(x)^T f(x), \]
where \( f'(x) \) is the \( m \times n \) Jacobian matrix of \( f \).
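For reference, this follows from the chain rule applied component-wise (a standard calculation, not specific to this class): for \( j = 1, \ldots, n \),
\[ \frac{\partial r}{\partial x_j}(x) = \sum_{i=1}^{m} 2\, f_i(x)\, \frac{\partial f_i}{\partial x_j}(x) = 2\, \big( f'(x)^T f(x) \big)_j. \]
Since \( f_i(x) = b_i - \phi(x,t_i) \), the Jacobian entries are \( \partial f_i / \partial x_j = -\,\partial \phi(x,t_i) / \partial x_j \), i.e. the negative of the derivatives of the model function.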