Diffstat (limited to 'doc/fitting.rst')
 -rw-r--r--  doc/fitting.rst | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)
diff --git a/doc/fitting.rst b/doc/fitting.rst
index 1045653..50a05c8 100644
--- a/doc/fitting.rst
+++ b/doc/fitting.rst
@@ -129,7 +129,7 @@ Choosing Different Fitting Methods
By default, the `Levenberg-Marquardt
<https://en.wikipedia.org/wiki/Levenberg-Marquardt_algorithm>`_ algorithm is
used for fitting. While often criticized, including the fact it finds a
-*local* minima, this approach has some distinct advantages. These include
+*local* minimum, this approach has some distinct advantages. These include
being fast and well-behaved for most curve-fitting needs, and making it
easy to estimate uncertainties for and correlations between pairs of fit
variables, as discussed in :ref:`fit-results-label`.
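
A sketch of what the paragraph above describes (not part of this patch): an
alternative solver is selected through the ``method`` argument of
:func:`minimize`; ``residual`` and ``params`` stand in for an objective
function and Parameters object defined earlier in the chapter::

    from lmfit import minimize

    # default solver: Levenberg-Marquardt ('leastsq')
    out_lm = minimize(residual, params)

    # same fit, but with the Nelder-Mead simplex method instead
    out_nm = minimize(residual, params, method='nelder')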
@@ -449,7 +449,7 @@ and standard errors could be done as
print('-------------------------------')
print('Parameter Value Stderr')
for name, param in out.params.items():
- print('{:7s} {:11.5f} {:11.5f}'.format(name, param.value, param.stderr))
+ print(f'{name:7s} {param.value:11.5f} {param.stderr:11.5f}')
.. _fit-itercb-label:
@@ -476,7 +476,7 @@ be used to abort a fit.
:type resid: numpy.ndarray
:param args: Positional arguments. Must match ``args`` argument to :func:`minimize`
:param kws: Keyword arguments. Must match ``kws`` argument to :func:`minimize`
- :return: Residual array (generally ``data-model``) to be minimized in the least-squares sense.
+ :return: Iteration abort flag.
:rtype: None for normal behavior, any value like ``True`` to abort the fit.
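
A minimal sketch (not from the patch) of a callback matching the signature
documented above; the 50-iteration cutoff is an arbitrary choice for
illustration::

    def per_iteration(params, iter, resid, *args, **kws):
        """Abort the fit after an arbitrary number of iterations."""
        # returning None or False lets the fit continue;
        # any value that evaluates True aborts it
        return iter > 50

    # passed to minimize() via the ``iter_cb`` keyword, e.g.
    # out = minimize(residual, params, iter_cb=per_iteration)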
@@ -575,8 +575,6 @@ parameters, which is a similar goal to the one here.
x = np.linspace(1, 10, 250)
np.random.seed(0)
y = 3.0 * np.exp(-x / 2) - 5.0 * np.exp(-(x - 0.1) / 10.) + 0.1 * np.random.randn(x.size)
- plt.plot(x, y, 'b')
- plt.show()
Create a Parameter set for the initial guesses:
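
(The documentation's own Parameter-set code is outside this hunk. A sketch of
what such a set might look like, assuming the double-exponential model above
is parametrized by amplitudes ``a1``/``a2`` and decay times ``t1``/``t2``,
names chosen here only for illustration::

    import lmfit

    p = lmfit.Parameters()
    p.add('a1', value=4.0)
    p.add('a2', value=4.0)
    p.add('t1', value=3.0, min=0.1)   # keep decay times positive
    p.add('t2', value=3.0, min=0.1)

)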
@@ -603,9 +601,9 @@ and plotting the fit using the Maximum Likelihood solution gives the graph below
.. jupyter-execute::
- plt.plot(x, y, 'b')
- plt.plot(x, residual(mi.params) + y, 'r', label='best fit')
- plt.legend(loc='best')
+ plt.plot(x, y, 'o')
+ plt.plot(x, residual(mi.params) + y, label='best fit')
+ plt.legend()
plt.show()
Note that the fit here (for which the ``numdifftools`` package is installed)
@@ -656,9 +654,10 @@ worked as intended (as a rule of thumb the value should be between 0.2 and
.. jupyter-execute::
- plt.plot(res.acceptance_fraction, 'b')
+ plt.plot(res.acceptance_fraction, 'o')
plt.xlabel('walker')
plt.ylabel('acceptance fraction')
+ plt.show()
With the results from ``emcee``, we can visualize the posterior distributions
for the parameters using the ``corner`` package:
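
(The documentation's ``corner`` code block falls outside this hunk. As a
hedged sketch of how such a plot is typically produced from an lmfit
``emcee`` result, where ``res`` is the MinimizerResult returned by the
sampler::

    import corner

    # ``res.flatchain`` holds the flattened MCMC chain as a DataFrame;
    # ``res.var_names`` lists the varying parameter names
    corner.corner(res.flatchain, labels=res.var_names,
                  truths=list(res.params.valuesdict().values()))

)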