MagicPlot uses the iterative Levenberg–Marquardt nonlinear least squares curve fitting algorithm, which is widely used in data analysis software. MagicPlot's implementation of the Levenberg–Marquardt algorithm is optimised for use with multi-core processors. MagicPlot has successfully passed testing with the NIST Nonlinear Regression datasets (see our report).
The fit procedure iteratively varies the parameters βk of the fit function f(x, β1, …, βp) to minimize the residual sum of squares (RSS, χ2):

RSS = Σ wi · (yi − f(xi, β1, …, βp))²

Here xi and yi are the data points inside the fit interval, wi are the weights (see below), and the sum runs over all used data points.

An initial guess for the parameters has to be provided to start the minimization. On each fit iteration the new guess of the parameters is computed from the partial derivatives ∂f/∂βk of the fit function, evaluated at the current parameter values for each x value.
The partial derivatives are computed using explicit formulas (for some predefined fit functions) or numerically, with finite differences (for custom equations).
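For illustration, here is a minimal sketch of one damped (Levenberg–Marquardt style) parameter update with a finite-difference Jacobian. It is not MagicPlot's actual implementation: the step size h, the fixed damping factor and all function names are assumptions, and a real implementation adapts the damping factor after each iteration depending on whether the RSS decreased.

```python
import numpy as np

def jacobian_fd(f, x, beta, h=1e-7):
    """Finite-difference partial derivatives df/dbeta_k of the fit function at every x value."""
    J = np.empty((x.size, beta.size))
    f0 = f(x, beta)
    for k in range(beta.size):
        shifted = beta.copy()
        shifted[k] += h
        J[:, k] = (f(x, shifted) - f0) / h
    return J

def lm_step(f, x, y, w, beta, lam=1e-3):
    """One damped Gauss-Newton (Levenberg-Marquardt style) update of the parameters beta."""
    r = y - f(x, beta)                                    # residuals
    J = jacobian_fd(f, x, beta)                           # partial derivatives for current parameters
    alpha = J.T @ (w[:, None] * J)                        # weighted normal matrix
    g = J.T @ (w * r)                                     # weighted gradient of RSS (up to a factor of -2)
    alpha_damped = alpha + lam * np.diag(np.diag(alpha))  # damp the diagonal
    return beta + np.linalg.solve(alpha_damped, g)

# Usage: fit y = b0 * exp(-b1 * x) starting from an initial guess
f = lambda x, b: b[0] * np.exp(-b[1] * x)
x = np.linspace(0.0, 5.0, 50)
y = f(x, np.array([2.0, 0.7])) + 0.01 * np.random.randn(x.size)
beta = np.array([1.0, 1.0])                               # initial guess for the parameters
for _ in range(20):
    beta = lm_step(f, x, y, np.ones_like(x), beta)
print(beta)                                               # approaches [2.0, 0.7]
```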
MagicPlot can weight the y values based on the y errors si; in this case each squared residual in the sum above is weighted with wi = 1/si².
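As a small illustration, the weighted residual sum of squares from the formula above can be computed like this (function and argument names are placeholders):

```python
import numpy as np

def weighted_rss(y, y_fit, s):
    """Weighted RSS (chi-squared): each residual is divided by its standard y error s_i."""
    w = 1.0 / s**2                     # weights computed from the evaluated y errors
    return np.sum(w * (y - y_fit)**2)
```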
In the Fit Plot Properties dialog (Plot Data tab) you can set one of the following methods to evaluate the standard y errors si:
After each iteration except the first, MagicPlot evaluates the deviation decrement D, which shows how much the residual sum of squares (RSS) on the current iteration relatively differs from that on the previous iteration.
The iterative fit procedure stops when one of two conditions is met: the deviation decrement D falls below the minimum allowable value, or the number of iterations reaches the maximum.
You can change the minimum allowable deviation decrement and the maximum number of iterations in the Fitting tab of the MagicPlot Preferences.
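A sketch of this stopping logic, assuming the deviation decrement is the relative decrease of the RSS between successive iterations (the exact expression MagicPlot uses is not shown here); the default threshold and iteration limit are placeholders:

```python
def run_fit(step, rss, beta, min_decrement=1e-9, max_iterations=100):
    """Iterate until the deviation decrement D is small enough or the iteration limit is reached."""
    prev_rss = None
    for _ in range(max_iterations):
        beta = step(beta)                          # one Levenberg-Marquardt iteration
        cur_rss = rss(beta)
        if prev_rss is not None:                   # D is evaluated after each iteration except the first
            D = (prev_rss - cur_rss) / cur_rss     # assumed definition: relative RSS decrease
            if D < min_decrement:
                break                              # RSS no longer decreases noticeably
        prev_rss = cur_rss
    return beta
```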
In the table below you can find the formulas which MagicPlot uses to calculate the fit parameters and the values in the Fit Report tab. Because the names of these quantities vary between sources (books and software), alternative names of the same parameter are listed in the Note column.
Parameter Name | Symbol | Formula | Note |
---|---|---|---|
Original Data and Fit Model Properties | | | |
Number of used data points | n | — | This is the number of data points inside the specified fit Interval. |
Fit parameters | β1, …, βp | — | For peak-like functions (Gauss, Lorentz) these parameters are amplitude, position and half width at half maximum. Only parameters with an unset Lock checkbox are taken into account. |
Number of fit function parameters | p | — | This is the total number of unlocked parameters of the fit curves which are summed to build the fit model. |
Degrees of freedom | | n − p | |
Estimated mean of data | ȳ | (1/n) · Σ yi | |
Estimated variance of data | | | Not used by the fit algorithm, given for comparison only. |
Data total sum of squares (TSS, SST) | TSS | | Also known as: sum of squares about the mean. |
Fit Result | | | |
Residual sum of squares (RSS) | RSS, χ2 | Σ wi · (yi − f(xi, β1, …, βp))² | This value is minimized during the fit to find the optimal fit function parameters. Also known as: 'chi-squared', sum of squared residuals (SSR), error sum of squares (ESS), sum of squares due to error (SSE). |
Reduced χ2 | | RSS / (n − p) | The advantage of the reduced chi-squared is that it already normalizes for the number of data points and the model (fit function) complexity. Also known as: mean square error (MSE), residual mean square. |
Residual standard deviation | s | √(RSS / (n − p)) | Also known as: root mean square of the error (root MSE). |
Coefficient of determination | R2 | 1 − RSS / TSS | R2 is equal to one if the fit is perfect and approaches zero if the fit describes the data no better than the mean. This is a biased estimate of the population R2: it never decreases when additional fit parameters (fit curves) are added, even if they are irrelevant. |
Adjusted R2 | | | Adjusted R2 (or degrees of freedom adjusted R-square) is a slightly modified version of R2, designed to penalize excess fit parameters which do not add to the explanatory power of the regression. This statistic is always smaller than R2, can decrease as you add new fit curves or parameters, and can even be negative for poorly fitting models. |
Covariance matrix of parameters βk | C | | Here α is the matrix of partial derivatives of the fit function with respect to the parameters βm and βn, which is also used by the fitting algorithm to compute the parameters for the next iteration. |
Standard deviation of parameters βk (std. dev.) | | √Ckk | These values are displayed in the Std. Dev. column of the parameters table. |
Correlation matrix of parameters βk | | Cmn / √(Cmm · Cnn) | This matrix shows whether the parameters are linked. The values lie in the range -1…1, and the diagonal elements are always 1. If two parameters are linked, the corresponding matrix value is close to 1: changing the first parameter compensates for a change of the second one, so the fitting algorithm cannot distinguish between them. |
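To make the table concrete, the sketch below computes the main Fit Report quantities from the data, the fitted values, the weights and a given parameter covariance matrix. It follows the standard definitions listed above rather than MagicPlot's source code; in particular, the weighted total sum of squares and the adjusted R2 expression are the common textbook forms and may differ in detail.

```python
import numpy as np

def fit_report(y, y_fit, w, cov):
    """Fit report statistics from data y, fitted values y_fit, weights w and covariance matrix cov."""
    n = y.size                                    # number of used data points
    p = cov.shape[0]                              # number of unlocked fit parameters
    dof = n - p                                   # degrees of freedom
    rss = np.sum(w * (y - y_fit) ** 2)            # residual sum of squares (chi-squared)
    tss = np.sum(w * (y - np.mean(y)) ** 2)       # total sum of squares about the mean
    reduced_chi2 = rss / dof                      # reduced chi-squared (mean square error)
    s = np.sqrt(reduced_chi2)                     # residual standard deviation (root MSE)
    r2 = 1.0 - rss / tss                          # coefficient of determination
    adj_r2 = 1.0 - (rss / dof) / (tss / (n - 1))  # degrees-of-freedom adjusted R^2
    std_dev = np.sqrt(np.diag(cov))               # standard deviations of the parameters
    corr = cov / np.outer(std_dev, std_dev)       # correlation matrix, diagonal elements are 1
    return {"RSS": rss, "reduced chi2": reduced_chi2, "s": s, "R2": r2,
            "adjusted R2": adj_r2, "std dev": std_dev, "correlation": corr}
```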