Bases: pymodelfit.builtins.DoubleGaussianModel
This model is a DoubleGaussianModel that forces one of the Gaussians to have negative amplitude and the other positive. A is the amplitude of the positive Gaussian, while B is always taken to be negative.
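As a concrete illustration, here is a minimal plain-Python sketch of such a two-component profile. The function name, the parameter defaults, and the abs()-based sign enforcement are assumptions for illustration, not the class's actual implementation:

```python
import math

def double_gaussian(x, A=1.0, B=1.0, sig1=1.0, sig2=1.0, mu1=-0.5, mu2=0.5):
    # A scales the positive peak; B's sign is forced negative, matching
    # the model's convention that B is always taken to be negative.
    g1 = abs(A) * math.exp(-((x - mu1) ** 2) / (2.0 * sig1 ** 2))
    g2 = -abs(B) * math.exp(-((x - mu2) ** 2) / (2.0 * sig2 ** 2))
    return g1 + g2
```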
x.__init__(...) initializes x; see help(type(x)) for signature
Methods
autoDualModel(x, y[, taller, wider])  Generates and fits a double-Gaussian model where one of the peaks is on top of the other and much stronger. 
chi2Data([x, y, weights, ddof])  Computes the chi-squared statistic for the data assuming this model. 
derivative(x[, dx])  Compute the derivative. 
f(x[, A, B, sig1, sig2, mu1, mu2])  
findroot(x0[, method])  Finds a root for the model (location where the model is 0). 
findval(val, x0[, method])  Finds where the model is equal to a specified value. 
fitData([x, y, fixedpars, weights, ...])  Fit the provided data using algorithms from scipy.optimize, and adjust the model parameters to match. 
getCall()  Retrieves information about the calling function. 
getCov()  Computes the covariance matrix for the last fitData() call. 
getMCMC(x, y[, priors, datamodel])  Generate a Markov chain Monte Carlo sampler for the data and model. 
integrate(lower, upper[, method, n, jac])  Compute the definite integral of this model. 
integrateCircular(lower, upper, *args, **kwargs)  Integrate this model on the 2D circle. This calls integrate() with the jacobian set for a circular profile. 
integrateSpherical(lower, upper, *args, **kwargs)  Integrate this model on the 3D sphere. This calls integrate() with the jacobian set for a spherical profile. 
inv(yval, *args, **kwargs)  Find the x value matching the requested yvalue. 
isVarnumModel()  Determines if the model represented by this class accepts a variable number of parameters (i.e. the number of parameters is set when the object is created). 
maximize(x0[, method])  Finds a local maximum for the model. 
minimize(x0[, method])  Finds a local minimum for the model. 
pixelize(xorxl[, xu, n, edge, sampling])  Generate a discretized version of the model. 
plot([lower, upper, n, clf, logplot, data, ...])  Plot the model function and possibly data and error bars with matplotlib.pyplot. 
plotResiduals([x, y, clf, logplot])  Plots the residuals of the provided data (y - model(x)) against this model. 
resampleFit([x, y, xerr, yerr, bootstrap, ...])  Estimates errors via resampling. 
residuals([x, y, retdata])  Compute residuals of the provided data against the model. 
setCall([calltype, xtrans, ytrans])  Sets the type of function evaluation to occur when the model is called. 
stdData([x, y])  Determines the standard deviation of the model from data. 
Attributes
A  int(x=0) -> int or long 
B  int(x=0) -> int or long 
data  The fitting data for this model. 
defaultIntMethod  str(object='') -> string 
defaultInvMethod  str(object='') -> string 
defaultparval  int(x=0) -> int or long 
errors  Error on the data. 
fittype  str(object=’‘) > string 
fittypes  A sequence of the available valid values for the fittype attribute. 
fixedpars  tuple() -> empty tuple 
mu1  float(x) -> floating point number 
mu2  float(x) -> floating point number 
params  A tuple of the parameter names. 
pardict  A dictionary mapping parameter names to the associated values. 
parvals  The values of the parameters in the same order as params 
rangehint  
sig1  int(x=0) -> int or long 
sig2  int(x=0) -> int or long 
weightstype  Determines the statistical interpretation of the weights in data. 
xaxisname  
yaxisname 
Generates and fits a double-Gaussian model where one of the peaks is on top of the other and much stronger. The taller and wider arguments must each be either 'A' or 'B', naming the component that is the taller or wider of the two.
Computes the chi-squared statistic for the data assuming this model.
Parameters: 


Returns:  tuple of floats (chi2, reducedchi2, p-value) 
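A sketch of the chi-squared computation under the default interpretation of weights as inverse errors. The function below is hypothetical: the real chi2Data() pulls x, y, and weights from the stored data when omitted, and also computes a p-value, which this sketch leaves out:

```python
def chi2_data(model_y, y, weights=None, ddof=0):
    # Weights act as inverse errors (the default weightstype), so each
    # term is (weight * (y - model))**2; missing weights default to 1.
    if weights is None:
        weights = [1.0] * len(y)
    chi2 = sum((w * (yi - mi)) ** 2
               for w, yi, mi in zip(weights, y, model_y))
    dof = len(y) - ddof  # degrees of freedom after the ddof correction
    return chi2, chi2 / dof
```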
The fitting data for this model. Should be either None, or a tuple(datain,dataout,weights). Note that the weights are interpreted statistically as errors based on the weightstype attribute.
Compute the derivative. This implementation numerically estimates the derivative at x using a finite-difference approximation with step size dx.
Parameters: 


Note
If overridden in a subclass for efficient analytical computation, the signature should be derivative(self,x,dx=None), such that if dx is None, the analytical computation is used, and otherwise this numerical technique is used. i.e. all subclasses should have this at the beginning:
if dx is not None:
    return FunctionModel1D.derivative(self,x,dx)
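A minimal sketch of the numerical fallback, assuming a simple forward-difference formula (the exact formula used by derivative() is not reproduced on this page, so the choice of difference scheme here is an assumption):

```python
def numeric_derivative(f, x, dx=1e-6):
    # Forward-difference estimate of df/dx at x with step dx.
    return (f(x + dx) - f(x)) / dx
```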
Error on the data. Sets the weights on data assuming the interpretation for errors given by weightstype. If data is None/missing, a TypeError will be raised.
Finds a root for the model (location where the model is 0).
Parameters: 

kwargs are passed into the method function
Returns:  the x value where the model is 0 

Finds where the model is equal to a specified value.
x0 is the location to start the search. method can be 'fmin' or 'fmin_powell' (from scipy.optimize). kwargs are passed into the scipy.optimize function.
Fit the provided data using algorithms from scipy.optimize, and adjust the model parameters to match.
The fitting technique is specified by the fittype attribute of the object, which by default can be any of the optimization types in the scipy.optimize module (except for scalar minimizers).
The full fitting output is available in lastfit attribute after this method completes.
Parameters: 


kwargs are passed into the fitting function.
Returns:  array of the best fit parameters 

Raises ModelTypeError:  
If the output of the model does not match the shape of y. 
A sequence of the available valid values for the fittype attribute. (Read-only)
Retrieves information about the calling function.
Returns:  The type of evaluation to perform when this model is called  a string like that of the type passed into setCall(), or None if the model function itself is to be called. 

Computes the covariance matrix for the last fitData() call.
Returns:  The covariance matrix with variables in the same order as params. Diagonal entries give the variance in each parameter. 

Warning
This is not guaranteed to work for custom fittypes, but will always work with the default (leastsq) fit.
Generate a Markov chain Monte Carlo sampler for the data and model. This function requires the PyMC package for the MCMC internals and sampling.
Parameters: 


Raises ValueError:  
If a prior is not provided for any parameter. 

Returns:  A pymc.MCMC object ready to sample for this model. 
Compute the definite integral of this model. This implementation numerically estimates the integral using scipy.integrate functions. The integral computed is the integral of f(x)*j(x,*params) from lower to upper, where j is the jacobian set from the jac argument.
Parameters: 


Type jac:  a callable f(x,*params) or None 
Returns:  The value of the computed definite integral. 
Note
Integration methods will store their full output to the attribute lastintegrate upon completion.
Note
If overridden in a subclass, the signature should be integrate(self,lower,upper,method=None,**kwargs). If method is anything other than None, it should fall back on this version. E.g. the following should be at the top of the overriding method:
if method is not None:
    return FunctionModel1D.integrate(self,lower,upper,method,**kwargs)
Integrate this model on the 2D circle. This calls integrate() with the jacobian set appropriately assuming the model is the radial profile for an azimuthally symmetric 2D surface density.
If a jac keyword is provided, it is taken as an additional factor to multiply the circular jacobian.
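A sketch of the circular case, assuming integrate() multiplies the model by the jacobian 2*pi*x before integrating. A fixed-step midpoint rule stands in for the scipy.integrate routines, and both function names below are hypothetical:

```python
import math

def integrate_with_jacobian(f, lower, upper, jac=None, n=1000):
    # Midpoint-rule estimate of the integral of f(x)*jac(x) over [lower, upper].
    if jac is None:
        jac = lambda x: 1.0
    h = (upper - lower) / n
    mids = (lower + (i + 0.5) * h for i in range(n))
    return sum(f(x) * jac(x) * h for x in mids)

def integrate_circular(f, lower, upper, **kwargs):
    # 2*pi*x jacobian: f is the radial profile of an azimuthally
    # symmetric 2D surface density.
    return integrate_with_jacobian(f, lower, upper,
                                   jac=lambda x: 2.0 * math.pi * x, **kwargs)
```

For a constant unit profile over [0, 1], this recovers the area of the unit disk, pi.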
Integrate this model on the 3D sphere. This calls integrate() with the jacobian set appropriately assuming the model is the radial profile for a spherically symmetric 3D density.
If a jac keyword is provided, it is taken as an additional factor to multiply the spherical jacobian.
Find the x value matching the requested yvalue. The inverse is computed using root finders from the scipy.optimize module.
Parameters:  yval (float) – the output yvalue at which to compute the inverse 

Other args and kwargs are those appropriate for the chosen root finder, except for the keyword method, which can be the name of any of the root finders from scipy.optimize. method can also be a function that should take f(g(x),*args,**kwargs) and return the x value at which g(x) is 0.
The default method depends on the input arguments as follows:
With a single starting point x0: uses scipy.optimize.newton(), starting the search at x0.
With two values a and b: uses scipy.optimize.brentq(), searching the bracketing interval [a,b] for the lower and upper edges of the search range.
Returns:  the xvalue at which the model equals the given yval 

Examples
This finds the x value of the (very simple) function y(x) = 4x + 2 at the point y=3
>>> from pymodelfit.builtins import LinearModel
>>> m = LinearModel(m=4,b=2)
>>> '%.2f'%m.inv(3)
'0.25'
These examples use Newton's, Brent's, and Ridder's methods to find the inverse of y(x) = x^2 for 2, 9, and 16, respectively (i.e. they should give sqrt(2), 3, and 4)
>>> from pymodelfit.builtins import QuadraticModel
>>> m = QuadraticModel()
>>> '%.2f'%m.inv(2,1)
'1.41'
>>> '%.2f'%m.inv(9,2,4)
'3.00'
>>> '%.2f'%m.inv(16,3,5,method='ridder')
'4.00'
All these methods require a guide for the range of x values to search. The first requires a guess, although the default guess of 0 will be assumed if the second argument is not present. (For this example, sqrt(2) and -sqrt(2) are both valid answers, so the default guess of 0 is ambiguous.)
Determines if the model represented by this class accepts a variable number of parameters (i.e. number of parameters is set when the object is created).
Returns:  True if this model has a variable number of parameters. 

Finds a local maximum for the model.
Parameters: 

kwargs are passed into the method function
Returns:  an x value where the model is a local maximum 

Finds a local minimum for the model.
Parameters: 

kwargs are passed into the method function
Returns:  an x value where the model is a local minimum 

A tuple of the parameter names. (read-only)
A dictionary mapping parameter names to the associated values.
Generate a discretized version of the model. This method integrates over the model for a number of ranges to get a 1D "pixelized" version of the model.
Parameters: 


Returns:  Integrated values for the function as an array of size n or matching xorxl. 
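A sketch of one plausible pixelization scheme: each output pixel is the model averaged over an equal-width bin. The three-sample averaging rule here is an assumption for illustration; the real pixelize() supports several sampling modes via its sampling argument:

```python
def pixelize(f, xl, xu, n=10):
    # Each pixel is the model averaged over one of n equal-width bins,
    # approximated here with three midpoint samples per bin.
    edges = [xl + (xu - xl) * i / n for i in range(n + 1)]
    out = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = (hi - lo) / 3.0
        out.append(sum(f(lo + (j + 0.5) * w) for j in range(3)) / 3.0)
    return out
```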
Plot the model function and possibly data and error bars with matplotlib.pyplot. The plot will reflect any changes applied with setCall().
Parameters: 


Additional arguments and keywords are passed into matplotlib.pyplot.plot().
Note
By default, the model is plotted over the data points. If the points should be drawn on top of the model, set the keyword argument zorder to 0.
Plots the residuals of the provided data (y - model(x)) against this model.
Parameters: 


Additional arguments and keywords are passed into matplotlib.pyplot.scatter().
Estimates errors via resampling. Uses the fitData function to fit the function many times while either using the "bootstrap" technique (resampling with replacement), Monte Carlo estimates for the error, or both to estimate the error in the fit.
Parameters: 


kwargs are passed into fitData
Returns:  (histd,cov) where histd is a dictionary mapping parameters to their histograms and cov is the covariance matrix of the parameters in parameter order. 

Note
If x, y, xerr, or yerr are provided, they do not overwrite the stored data, unlike most other methods for this class.
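A sketch of the bootstrap half of the procedure. This only draws the resampled index sets; the real resampleFit() then refits the model once per resample and histograms the resulting parameters:

```python
import random

def bootstrap_indices(npts, nsamples, seed=0):
    # Each resample draws npts indices with replacement from range(npts),
    # so some data points repeat and others are left out.
    rng = random.Random(seed)
    return [[rng.randrange(npts) for _ in range(npts)]
            for _ in range(nsamples)]
```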
Compute residuals of the provided data against the model, i.e. y - model(x).
Parameters:  

Returns:  Residuals of model from y or if retdata is True, a tuple (x,y,residuals). 
Return type:  array-like 
Sets the type of function evaluation to occur when the model is called. Changes the output of a function call to be ytrans(f(xtrans(x))), where f is set by calltype, xtrans is given by the xtrans argument, and ytrans is given by the ytrans argument.
Parameters: 


Any kwargs are passed into the function specified in calltype.
xtrans and ytrans transformation functions can accept the following values:
 None
 ‘log’
 ‘ln’
 ‘log##.#’
 ‘pow’
 ‘exp’
 ‘pow##.#’
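A sketch of how the fixed-name transforms compose, assuming the natural reading of each name ('log' as base-10 log, 'ln' as natural log, 'exp' as e**v, 'pow' as 10**v). The dispatch table and function name are hypothetical, and the 'log##.#' and 'pow##.#' forms with an arbitrary base are omitted:

```python
import math

_TRANSFORMS = {
    None: lambda v: v,
    'log': lambda v: math.log10(v),  # base-10 log (assumed reading)
    'ln': lambda v: math.log(v),     # natural log
    'exp': lambda v: math.exp(v),    # e**v
    'pow': lambda v: 10.0 ** v,      # 10**v (assumed reading)
}

def transformed_call(f, x, xtrans=None, ytrans=None):
    # Evaluate ytrans(f(xtrans(x))), mirroring how setCall() wraps the
    # model function in the requested transformations.
    return _TRANSFORMS[ytrans](f(_TRANSFORMS[xtrans](x)))
```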
Warning
There may be unintended consequences of this method due to methods using the call value instead of the default function evaluation result. You have been warned...
Determines the standard deviation of the model from data. Data can either be provided or (by default) will be taken from the stored data.
Parameters:  

Returns:  standard deviation of model from y 
Determines the statistical interpretation of the weights in data. Can be:
Weights act as inverse errors (default)
Weights act as inverse variance
Weights act as errors (non-standard: this makes points with larger error bars count more towards the fit).
Weights act as variance (non-standard: this makes points with larger error bars count more towards the fit).
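A sketch converting stored weights to error bars under each of the four interpretations listed above. The string keys used below are assumed labels for illustration, not confirmed values of the weightstype attribute:

```python
def weights_to_errors(weights, weightstype='ierror'):
    # Assumed key names; each branch matches one interpretation above.
    if weightstype == 'ierror':   # weights are inverse errors
        return [1.0 / w for w in weights]
    if weightstype == 'ivar':     # weights are inverse variance
        return [w ** -0.5 for w in weights]
    if weightstype == 'error':    # weights are errors (non-standard)
        return list(weights)
    if weightstype == 'var':      # weights are variance (non-standard)
        return [w ** 0.5 for w in weights]
    raise ValueError('unknown weightstype: %r' % weightstype)
```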