root_finding

Function summary

find_n_roots(func[, args, n, x0, dx, p, ...]) Find the first n zeros/roots of a function.
fixed_point_no_accel(func, x0[, args, xtol, ...]) Find a fixed point of the function (NO CONVERGENCE ACCELERATION!).

Module listing

Routines for finding zeros/roots of equations.

geotecha.mathematics.root_finding.find_n_roots(func, args=(), n=1, x0=0.001, dx=0.001, p=1.0, max_iter=2000, atol=1e-08, rtol=1e-05, debug=False, fsolve_kwargs={})[source]

Find the first n zeros/roots of a function.

Root/zero finding can be temperamental. A small dx, p=1, and a large max_iter will usually find the roots/zeros, but it may take some time.

Parameters:

func : callable f(x, *args)

A function that takes at least one argument.

args : tuple, optional

Any extra arguments to func. Default args=().

n : int, optional

Number of roots to find. Default n=1.

x0 : float, optional

An x value less than the first root. This is NOT the initial guess of fsolve! Default x0=0.001.

dx : float, optional

Initial interval length to check for root in. Default dx=0.001.

p : float, optional

Factor by which to increase dx until the first root is found. Default p=1.0.

max_iter : int, optional

Maximum number of iterations when searching for an interval containing a root. Default max_iter=2000. Note that this is not the same as the fsolve maxfev keyword argument.

atol, rtol : float, optional

Parameters passed to numpy.allclose, used to check whether the same root has been found more than once. Default atol=1e-8, rtol=1e-5.

debug : bool, optional

Print calculations to stdout. Default debug=False.

fsolve_kwargs : dict, optional

dict of kwargs to pass to scipy.optimize.fsolve. Default fsolve_kwargs={}.

Returns:

roots : 1d ndarray

Array of length n containing the first n roots of func.

Notes

Here is approximately what happens:

  • func is evaluated with x0 and args to give y0
  • x0 is incremented by dx to give x1.
  • func is evaluated with x1 and args to give y1.
  • If y0 and y1 have the same sign then there is no root between x0 and x1.
  • dx is multiplied by p and x is incremented again.
  • Successive intervals are checked until y0 and y1 have different signs, indicating that there is a root somewhere between x0 and x1.
  • scipy.optimize.fsolve is used to find the root. If it is the first root then the left end of the interval, the current x0, is used as the initial guess for fsolve. If it is a later root then the guess for fsolve is the x-axis intercept of the straight line joining (x0, y0) and (x1, y1).
  • Once a root is found, the search for the next root begins between that root and the current x1. The current dx value, the one for the interval that contained the root, is reduced by a fifth and the process repeats. (A simplified sketch of this loop is given after the list.)
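
The following is a simplified, illustrative sketch of that scan-and-refine loop. It is not the library implementation: it omits the safety checks described below and always uses the chord intercept as the fsolve guess.

import numpy as np
from scipy.optimize import fsolve

def find_n_roots_sketch(func, args=(), n=1, x0=0.001, dx=0.001, p=1.0):
    """Illustrative only: scan for sign changes, then refine each bracket with fsolve."""
    roots = []
    x_left, y_left = x0, func(x0, *args)
    step = dx
    while len(roots) < n:
        x_right = x_left + step
        y_right = func(x_right, *args)
        if np.sign(y_left) != np.sign(y_right):
            # Sign change: a root lies in [x_left, x_right].  Use the x-intercept of
            # the chord joining the interval end points as the initial guess for fsolve.
            guess = x_left - y_left * (x_right - x_left) / (y_right - y_left)
            roots.append(fsolve(func, guess, args=args)[0])
            step = step / 5.0   # shrink the step before hunting for the next root (illustrative choice)
        else:
            step *= p           # no sign change: widen the step and keep scanning
        x_left, y_left = x_right, y_right
    return np.array(roots)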

There are a few checks:

  • At least five intervals must be checked before a possible root interval is found. If fewer than five are checked, the search returns to the starting point for that root and uses a reduced dx.

  • There is a maximum limit, max_iter, on the number of iterations used for each root.

  • It is possible that the algorithm will, when searching for successive roots, find identical roots, which is wrong. In that case the current interval size, along with the current dx, is reduced and the search continues. If the same root is found five times then the root finding parameters are inappropriate and an error is raised.
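
Examples

As a minimal usage sketch (assuming the default tolerances are adequate for a function as smooth as np.cos), the first three positive roots of cos(x), which are known to be pi/2, 3*pi/2 and 5*pi/2, can be found with:

>>> import numpy as np
>>> from geotecha.mathematics.root_finding import find_n_roots
>>> roots = find_n_roots(np.cos, n=3, x0=0.001, dx=0.01, p=1.01)
>>> np.allclose(roots, [np.pi/2, 3*np.pi/2, 5*np.pi/2])
True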

geotecha.mathematics.root_finding.fixed_point_no_accel(func, x0, args=(), xtol=1e-08, maxiter=500)[source]

Find a fixed point of the function (NO CONVERGENCE ACCELERATION!).

Based on scipy.optimize.fixed_point but with no convergence acceleration (RTRW).

Given a function of one or more variables and a starting point, find a fixed-point of the function: i.e. where func(x0) == x0.

Parameters:

func : function

Function to evaluate.

x0 : array_like

Initial guess for the fixed point of the function.

args : tuple, optional

Extra arguments to func.

xtol : float, optional

Convergence tolerance, defaults to 1e-08.

maxiter : int, optional

Maximum number of iterations, defaults to 500.

Notes

Does NOT use Steffensen's method with Aitken's Del^2 convergence acceleration. See Burden and Faires, "Numerical Analysis", 5th edition, p. 80.
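
Conceptually the routine just iterates x_new = func(x, *args) until successive iterates agree to within xtol. A minimal sketch of such a plain, unaccelerated fixed-point iteration (illustrative only; the exact stopping test in the library routine may differ) is:

import numpy as np

def fixed_point_plain_sketch(func, x0, args=(), xtol=1e-8, maxiter=500):
    """Illustrative only: plain fixed-point iteration, no Aitken/Steffensen acceleration."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxiter):
        x_new = np.asarray(func(x, *args), dtype=float)
        # Simple relative-difference stopping test, guarded against division by zero.
        if np.all(np.abs(x_new - x) < xtol * (np.abs(x_new) + xtol)):
            return x_new
        x = x_new
    raise RuntimeError("Failed to converge after %d iterations" % maxiter)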

Examples

>>> import numpy as np
>>> from scipy import optimize
>>> from geotecha.mathematics.root_finding import fixed_point_no_accel
>>> def func(x, c1, c2):
...    return np.sqrt(c1/(x+c2))
>>>
>>> c1 = np.array([10,12.])
>>> c2 = np.array([3, 5.])
>>> optimize.fixed_point(func, [1.2, 1.3], args=(c1,c2))
array([ 1.4920333 ,  1.37228132])
>>> fixed_point_no_accel(func, [1.2, 1.3], args=(c1,c2))
array([ 1.4920333 ,  1.37228132])

Sometimes the iterations should converge but fail when convergence acceleration is used:

>>> def fn(n, kl, ks, i0):
...     return np.log(kl/ks/n) / np.log((i0 *n/(n - 1))) + 1
>>> fixed_point_no_accel(fn, 1.001, args=[6, 2, 3.7477525683])
1.300...
>>> optimize.fixed_point(fn, 1.001, args=[6, 2, 3.7477525683])
Traceback (most recent call last):
    ...
RuntimeError: Failed to converge after 500 iterations, value is nan