MagicPlot uses the iterative Levenberg–Marquardt nonlinear least squares curve fitting algorithm, which is widely used in data analysis software.
The fit procedure iteratively varies the parameters β_{k} of the fit function f(x, β_{1}, …, β_{p}) to minimize the residual sum of squares (RSS, χ^{2}):

RSS = χ^{2} = ∑_{i=1}^{n} w_{i} (y_{i} − f(x_{i}, β_{1}, …, β_{p}))^{2}

here:
* (x_{i}, y_{i}), i = 1, …, n are the data points,
* w_{i} are the weights of the y values (see below).
The calculation of the new parameter guess on each fit iteration is based on the partial derivatives of the fit function with respect to the fit parameters, evaluated at the current parameter values for each x value:

∂f(x_{i}, β_{1}, …, β_{p}) / ∂β_{k}
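MagicPlot evaluates these derivatives internally; as a minimal sketch (not MagicPlot's actual implementation), the matrix of partial derivatives can be approximated numerically by central differences. The `jacobian` helper and the Gaussian peak below are hypothetical examples:

```python
import numpy as np

def jacobian(f, x, beta, h=1e-6):
    """Central-difference approximation of df(x, beta)/dbeta_k
    for every data point x_i and every parameter beta_k.
    Returns an (n_points, n_params) matrix."""
    beta = np.asarray(beta, dtype=float)
    J = np.empty((len(x), len(beta)))
    for k in range(len(beta)):
        step = np.zeros_like(beta)
        step[k] = h * max(1.0, abs(beta[k]))
        J[:, k] = (f(x, beta + step) - f(x, beta - step)) / (2 * step[k])
    return J

# Hypothetical peak function: amplitude a, position x0, half width dx (HWHM)
def gauss(x, beta):
    a, x0, dx = beta
    return a * np.exp(-np.log(2) * ((x - x0) / dx) ** 2)

x = np.linspace(-3, 3, 7)
J = jacobian(gauss, x, [1.0, 0.0, 1.0])
print(J.shape)  # one row per data point, one column per parameter: (7, 3)
```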
To start minimization, you have to provide an initial guess for the parameters.
MagicPlot can use weighting of the y values based on the y errors s_{i}:

w_{i} = C / s_{i}^{2}

here C is a normalizing coefficient (chosen so that the sum of the w_{i} is equal to one):

C = 1 / ∑_{i} (1 / s_{i}^{2})
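A minimal sketch of this normalized weighting (assuming, as above, that w_{i} ∝ 1/s_{i}^{2} and the weights sum to one; the `normalized_weights` helper is hypothetical):

```python
import numpy as np

def normalized_weights(s):
    """Weights w_i = C / s_i**2 with C chosen so that sum(w_i) == 1."""
    s = np.asarray(s, dtype=float)
    inv_var = 1.0 / s ** 2
    return inv_var / inv_var.sum()

w = normalized_weights([0.1, 0.2, 0.4])
print(w, w.sum())  # smallest error gets the largest weight; weights sum to 1
```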
In the Fit Plot Properties dialog (Plot Data tab) you can set one of the following methods to evaluate the standard y errors s_{i}:
After each iteration except the first, MagicPlot evaluates the deviation decrement D:

D = (RSS_{previous} − RSS_{current}) / RSS_{current}

The deviation decrement shows how much the residual sum of squares (RSS) on the current iteration relatively differs from that on the previous iteration.
The iterative fit procedure stops when at least one of the following two conditions is met:
* the deviation decrement D becomes less than the minimum allowable deviation decrement, or (and)
* the maximum number of iterations is reached.
You can change the minimum allowable deviation decrement and the maximum number of iterations on the Fitting tab of the MagicPlot Preferences dialog.
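The stopping logic described above can be sketched as follows. This is a simplified illustration, not MagicPlot's actual code; `min_decrement` and `max_iterations` stand in for the two Preferences settings, and `step` represents one parameter update (e.g. a Levenberg–Marquardt step):

```python
import numpy as np

def fit_loop(step, beta0, rss, min_decrement=1e-9, max_iterations=100):
    """Iterate until the relative RSS decrement D falls below
    min_decrement or max_iterations is reached.
    `step` computes the next parameter guess; `rss` evaluates the
    residual sum of squares for a parameter vector."""
    beta = np.asarray(beta0, dtype=float)
    prev_rss = rss(beta)
    for iteration in range(1, max_iterations + 1):
        beta = step(beta)
        cur_rss = rss(beta)
        D = (prev_rss - cur_rss) / cur_rss  # deviation decrement
        if D < min_decrement:
            break  # no significant improvement: converged
        prev_rss = cur_rss
    return beta, iteration

# Hypothetical usage: minimize (b - 2)^2 + 1 with a toy one-shot step rule
beta, n_iter = fit_loop(lambda b: b - (b - 2.0), [10.0],
                        lambda b: float((b[0] - 2.0) ** 2 + 1.0))
print(beta, n_iter)
```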
In the table below you can find the formulas which MagicPlot uses to calculate the fit parameters and the values shown in the Fit Report tab. Because the naming of these quantities differs between sources (books and software), the Note column also lists the alternative names of the same parameter.
| Parameter Name | Symbol | Formula | Note |
|---|---|---|---|
| **Original Data and Fit Model Properties** | | | |
| Number of used data points | n | — | This is the number of data points inside the specified Fit Interval. |
| Fit parameters | β_{1}, …, β_{p} | — | For peak-like functions (Gauss, Lorentz) these parameters are amplitude, position and half width at half maximum. |
| Number of fit function parameters | p | — | This is the total number of parameters of all fit curves which are summed to form the fit. |
| Degrees of freedom | DF | DF = n − p | |
| Estimated mean of data | ȳ | ȳ = (1/n) ∑ y_{i} | |
| Estimated variance of data | Var | Var = TSS / (n − 1) | Not used by the fit algorithm; given for comparison only. |
| Data total sum of squares, TSS | TSS | TSS = ∑ w_{i} (y_{i} − ȳ)^{2} | TSS is also called the sum of squares about the mean; the acronym SST is also used. |
| **Fit Result** | | | |
| Residual sum of squares, RSS | RSS, χ^{2} | RSS = ∑ w_{i} (y_{i} − f(x_{i}, β_{1}, …, β_{p}))^{2} | This value is minimized during the fit to find the optimal fit function parameters. RSS is also called the sum of squared residuals (SSR), the error sum of squares (ESS), or the sum of squares due to error (SSE). |
| Reduced χ^{2} | χ^{2}_{red} | χ^{2}_{red} = RSS / (n − p) | The advantage of the reduced chi-squared is that it normalizes for the number of data points and the model (fit function) complexity. Reduced χ^{2} is also called the mean square error (MSE) or the residual mean square. |
| Residual standard deviation | s | s = √(RSS / (n − p)) | The residual standard deviation is also called the root mean square error (Root MSE). |
| Coefficient of determination | R^{2} | R^{2} = 1 − RSS / TSS | R^{2} is equal to one if the fit is perfect and close to zero for a poor fit. It is a biased estimate of the population R^{2} and will never decrease when additional fit parameters (fit curves) are added, even if they are irrelevant. |
| Adjusted R^{2} | R^{2}_{adj} | R^{2}_{adj} = 1 − (RSS / (n − p)) / (TSS / (n − 1)) | Adjusted R^{2} (or degrees-of-freedom adjusted R-square) is a slightly modified version of R^{2}, designed to penalize excess fit parameters (fit curves) which do not add to the explanatory power of the regression. This statistic is always smaller than R^{2}, can decrease as you add new fit curves, and can even be negative for poorly fitting models. |
| Covariance matrix of parameters β_{k} | cov(β_{m}, β_{n}) | cov = χ^{2}_{red} · α^{−1} | Here α is the matrix of the products of the partial derivatives of the fit function with respect to the parameters β_{m} and β_{n} which is used for fitting: α_{mn} = ∑ w_{i} (∂f/∂β_{m}) (∂f/∂β_{n}). |
| Standard deviation of parameters β_{k}, std. dev. | σ_{k} | σ_{k} = √(cov(β_{k}, β_{k})) | |
| Correlation matrix of parameters β_{k} | — | cov(β_{m}, β_{n}) / (σ_{m} σ_{n}) | |
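As an illustrative sketch of the report quantities tabulated above (assuming equal weights and SciPy's Levenberg–Marquardt implementation rather than MagicPlot's; the Gaussian example data are hypothetical), the same statistics can be computed as:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical example: fit a Gaussian peak (amplitude a, position x0, HWHM dx)
def gauss(x, a, x0, dx):
    return a * np.exp(-np.log(2) * ((x - x0) / dx) ** 2)

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 101)
y = gauss(x, 2.0, 0.5, 1.2) + rng.normal(0, 0.05, x.size)

beta, cov = curve_fit(gauss, x, y, p0=[1.0, 0.0, 1.0])  # LM method by default
n, p = x.size, beta.size

rss = np.sum((y - gauss(x, *beta)) ** 2)        # residual sum of squares
tss = np.sum((y - y.mean()) ** 2)               # data total sum of squares
chi2_red = rss / (n - p)                        # reduced chi-squared
s = np.sqrt(chi2_red)                           # residual standard deviation
r2 = 1 - rss / tss                              # coefficient of determination
r2_adj = 1 - (rss / (n - p)) / (tss / (n - 1))  # adjusted R^2
sigma = np.sqrt(np.diag(cov))                   # std. dev. of parameters
corr = cov / np.outer(sigma, sigma)             # correlation matrix

print(r2, r2_adj)
```

Note that `curve_fit` already scales the returned covariance by the reduced chi-squared (with its default `absolute_sigma=False`), matching the cov = χ^{2}_{red} · α^{−1} convention in the table.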