This module contains the basic network architectures.

Network type | Function | Number of layers | Supported train functions | Error function
---|---|---|---|---
Single-layer perceptron | newp | 1 | train_delta | SSE
Multi-layer perceptron | newff | >=1 | train_gd, train_gdm, train_gda, train_gdx, train_rprop, train_bfgs*, train_cg | SSE
Competitive layer | newc | 1 | train_wta, train_cwta* | SAE
LVQ | newlvq | 2 | train_lvq | MSE
Elman | newelm | >=1 | train_gdx | MSE
Hopfield | newhop | 1 | None | None
Hamming | newhem | 2 | None | None

Note: * marks the default train function.
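Each create function below returns a Net object; the defaults listed in the table can be changed before training. A minimal sketch, assuming the package is imported as neurolab and that the Net object exposes its train and error functions as the trainf and errorf attributes:

    >>> import neurolab as nl
    >>> net = nl.net.newff([[-1, 1]], [3, 1])   # multi-layer perceptron
    >>> net.trainf = nl.train.train_gd          # replace the default train_bfgs
    >>> net.errorf = nl.error.MSE()             # replace the default SSE error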

Create competitive layer (Kohonen network)
Parameters:
Returns:
    net: Net
Example:
    >>> # create network with 2 inputs and 10 neurons
    >>> net = newc([[-1, 1], [-1, 1]], 10)

Create an Elman recurrent network
Parameters:
Returns:
    net: Net
Example:
    >>> # 1 input, input range is [-1, 1], 1 output neuron, 1 layer including output layer
    >>> net = newelm([[-1, 1]], [1], [trans.PureLin()])
    >>> net.layers[0].np['w'][:] = 1  # set weight for all input neurons to 1
    >>> net.layers[0].np['b'][:] = 0  # set bias for all input neurons to 0
    >>> net.sim([[1], [1], [1], [3]])
    array([[ 1.],
           [ 2.],
           [ 3.],
           [ 6.]])

Create multilayer perceptron
Parameters:
Returns:
    net: Net
Example:
    >>> # create neural net with 2 inputs
    >>> # input range for each input is [-0.5, 0.5]
    >>> # 3 neurons in the hidden layer, 1 neuron in the output layer
    >>> # 2 layers including hidden layer and output layer
    >>> net = newff([[-0.5, 0.5], [-0.5, 0.5]], [3, 1])
    >>> net.ci
    2
    >>> net.co
    1
    >>> len(net.layers)
    2
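
A typical end-to-end use of newff (create, train with the default train_bfgs, simulate); the data here are illustrative:

    import numpy as np
    import neurolab as nl

    # illustrative data: approximate y = 0.5 * sin(x) on [-7, 7]
    x = np.linspace(-7, 7, 20)
    y = np.sin(x) * 0.5
    inp = x.reshape(len(x), 1)
    tar = y.reshape(len(y), 1)

    # 1 input in [-7, 7], 5 hidden neurons, 1 output neuron
    net = nl.net.newff([[-7, 7]], [5, 1])
    error = net.train(inp, tar, epochs=500, show=100, goal=0.02)
    out = net.sim(inp)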

Create a Hopfield recurrent network
Parameters:
Returns:
    net: Net
Example:
    >>> net = newhop([[-1, -1, -1], [1, -1, 1]])
    >>> output = net.sim([[-1, 1, -1], [1, -1, 1]])

Create a Hamming recurrent network with 2 layers
Parameters:
Returns:
    net: Net
Example:
    >>> net = newhem([[-1, -1, -1], [1, -1, 1]])
    >>> output = net.sim([[-1, 1, -1], [1, -1, 1]])

Create a learning vector quantization (LVQ) network
Parameters:
Returns:
    net: Net
Example:
    >>> # create network with 2 inputs,
    >>> # 2 layers and 10 neurons in each layer
    >>> net = newlvq([[-1, 1], [-1, 1]], 10, [0.6, 0.4])
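
A sketch of training an LVQ network on two labelled clusters; the data, the neuron split, and the one-column-per-class target coding are illustrative assumptions:

    import numpy as np
    import neurolab as nl

    # illustrative 2-D samples and their class membership (2 classes)
    inp = np.array([[-2.0, 0.0], [-1.5, 0.5], [-2.5, -0.5],
                    [ 2.0, 0.0], [ 1.5, 0.5], [ 2.5, -0.5]])
    tar = np.array([[1, 0], [1, 0], [1, 0],
                    [0, 1], [0, 1], [0, 1]])

    # 4 competitive neurons split evenly between the 2 output classes
    net = nl.net.newlvq([[-3, 3], [-1, 1]], 4, [0.5, 0.5])
    error = net.train(inp, tar, epochs=100, goal=-1)   # goal=-1: run all epochs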

Create a single-layer perceptron
Parameters:
Returns:
    net: Net
Example:
    >>> # create network with 2 inputs and 10 neurons
    >>> net = newp([[-1, 1], [-1, 1]], 10)
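
A sketch of training a single-output perceptron on logical AND with the default train_delta (data and learning rate are illustrative):

    import numpy as np
    import neurolab as nl

    inp = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
    tar = np.array([[0], [0], [0], [1]])

    # 2 inputs in [0, 1], 1 output neuron
    net = nl.net.newp([[0, 1], [0, 1]], 1)
    error = net.train(inp, tar, epochs=100, show=10, lr=0.1)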

Gradient descent backpropagation
Supported networks:
    newff (multi-layer perceptron)
Parameters:
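
Train functions are not called directly: they are assigned to net.trainf and then driven through net.train, which forwards keyword arguments such as lr. A minimal sketch with illustrative data and settings:

    import numpy as np
    import neurolab as nl

    inp = np.array([[-1.0], [-0.5], [0.0], [0.5], [1.0]])
    tar = inp ** 2                           # illustrative target: y = x^2

    net = nl.net.newff([[-1, 1]], [5, 1])
    net.trainf = nl.train.train_gd           # replace the default train_bfgs
    error = net.train(inp, tar, epochs=500, show=100, goal=0.02, lr=0.05)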

Gradient descent with momentum backpropagation
Supported networks:
    newff (multi-layer perceptron)
Parameters:

Gradient descent with adaptive learning rate
Supported networks:
    newff (multi-layer perceptron)
Parameters:

Gradient descent with momentum backpropagation and adaptive learning rate
Supported networks:
    newff (multi-layer perceptron)
Parameters:

Resilient backpropagation
Supported networks:
    newff (multi-layer perceptron)
Parameters:

Winner Take All algorithm
Supported networks:
    newc (Kohonen layer)
Parameters:
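
Both WTA variants are unsupervised, so net.train takes only the input array (the default train function of newc is train_cwta; assigning net.trainf selects plain WTA). A sketch with illustrative random data:

    import numpy as np
    import neurolab as nl

    inp = np.random.rand(100, 2)              # illustrative unlabelled 2-D points

    net = nl.net.newc([[0, 1], [0, 1]], 4)    # 4 competitive neurons
    net.trainf = nl.train.train_wta           # use plain WTA instead of the default CWTA
    error = net.train(inp, epochs=200, show=20)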

Conscience Winner Take All algorithm
Supported networks:
    newc (Kohonen layer)
Parameters:

Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, using scipy.optimize.fmin_bfgs
Supported networks:
    newff (multi-layer perceptron)
Parameters:

Newton-CG method, using scipy.optimize.fmin_ncg
Supported networks:
    newff (multi-layer perceptron)
Parameters:

Conjugate gradient algorithm, using scipy.optimize.fmin_ncg
Supported networks:
    newff (multi-layer perceptron)
Parameters:

LVQ1 train function
Supported networks:
    newlvq
Parameters:

Train with the delta rule
Supported networks:
    newp (single-layer perceptron)
Parameters:

Train error functions with derivatives
Example:
    >>> msef = MSE()
    >>> x = np.array([[1.0, 0.0], [2.0, 0.0]])
    >>> msef(x, 0)
    1.25
    >>> # calc derivative:
    >>> msef.deriv(x[0], 0)
    array([ 1., 0.])

Cross-entropy error function, for use when targets are in {0, 1}:
    C = -sum(t * log(o) + (1 - t) * log(1 - o))
Thanks to kwecht (https://github.com/kwecht).
Parameters:
    target: ndarray
        target values for network
    output: ndarray
        simulated output of network
Returns:
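
The formula can be checked directly with NumPy; this standalone sketch is independent of the class interface used by the library:

    import numpy as np

    def cross_entropy(target, output):
        # C = -sum(t * log(o) + (1 - t) * log(1 - o)), targets in {0, 1}
        return -np.sum(target * np.log(output) + (1 - target) * np.log(1 - output))

    t = np.array([1.0, 0.0])
    o = np.array([0.9, 0.1])
    print(cross_entropy(t, o))   # about 0.211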

Mean absolute error function
Parameters:
Returns:
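
A short sketch mirroring the MSE example above, assuming MAE lives in the same error module, shares the (target, output) call interface, and averages the absolute error over all elements:

    import numpy as np
    from neurolab.error import MAE

    f = MAE()
    x = np.array([[1.0, 0.0], [2.0, 0.0]])
    e = f(x, 0)   # mean of |1|, |0|, |2|, |0| = 0.75 under these assumptions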

Mean squared error function
Parameters:
Returns:
Example:
    >>> f = MSE()
    >>> x = np.array([[1.0, 0.0], [2.0, 0.0]])
    >>> f(x, 0)
    1.25

Derivative of the MSE error function
Parameters:
Returns:
Example:
    >>> f = MSE()
    >>> x = np.array([1.0, 0.0])
    >>> # calc derivative:
    >>> f.deriv(x, 0)
    array([ 1., 0.])

Sum absolute error function
Parameters:
Returns:

Transfer functions with derivatives
Example:
    >>> import numpy as np
    >>> f = TanSig()
    >>> x = np.linspace(-5, 5, 100)
    >>> y = f(x)
    >>> df_on_dy = f.deriv(x, y)  # calc derivative
    >>> f.out_minmax  # output range as [min, max]
    [-1, 1]
    >>> f.inp_active  # active input range as [min, max]
    [-2, 2]

Competitive transfer function
Parameters:
Returns:
Example:
    >>> f = Competitive()
    >>> f([-5, -0.1, 0, 0.1, 100])
    array([ 1., 0., 0., 0., 0.])
    >>> f([-5, -0.1, 0, -6, 100])
    array([ 0., 0., 0., 1., 0.])

Hard limit transfer function
Parameters:
Returns:
Example:
    >>> f = HardLim()
    >>> x = np.array([-5, -0.1, 0, 0.1, 100])
    >>> f(x)
    array([ 0., 0., 0., 1., 1.])

Symmetric hard limit transfer function
Parameters:
Returns:
Example:
    >>> f = HardLims()
    >>> x = np.array([-5, -0.1, 0, 0.1, 100])
    >>> f(x)
    array([-1., -1., -1., 1., 1.])

Logarithmic sigmoid transfer function
Parameters:
Returns:
Example:
    >>> f = LogSig()
    >>> x = np.array([-np.Inf, 0.0, np.Inf])
    >>> f(x).tolist()
    [0.0, 0.5, 1.0]

Linear transfer function
Parameters:
Returns:
Example:
    >>> import numpy as np
    >>> f = PureLin()
    >>> x = np.array([-100., 50., 10., 40.])
    >>> f(x).tolist()
    [-100.0, 50.0, 10.0, 40.0]

Saturating linear transfer function
Parameters:
Returns:
Example:
    >>> f = SatLin()
    >>> x = np.array([-5, -0.1, 0, 0.1, 100])
    >>> f(x)
    array([ 0. , 0. , 0. , 0.1, 1. ])

Linear transfer function with parametric output; may be used instead of SatLin and SatLins
Init parameters:
Parameters:
Returns:
Example:
    >>> f = SatLinPrm()
    >>> x = np.array([-5, -0.1, 0, 0.1, 100])
    >>> f(x)
    array([ 0. , 0. , 0. , 0.1, 1. ])
    >>> f = SatLinPrm(1, -1, 1)
    >>> f(x)
    array([-1. , -0.1, 0. , 0.1, 1. ])

Symmetric saturating linear transfer function
Parameters:
Returns:
Example:
    >>> f = SatLins()
    >>> x = np.array([-5, -1, 0, 0.1, 100])
    >>> f(x)
    array([-1. , -1. , 0. , 0.1, 1. ])

Soft max transfer function
Parameters:
Returns:
Example:
    >>> from numpy import floor
    >>> f = SoftMax()
    >>> floor(f([0, 1, 0.5, -0.5]) * 10)
    array([ 1., 4., 2., 1.])

Layer initialization functions

Initialize the specified properties of the layer with random numbers within given limits
Parameters:

Initialize the specified property of the layer with random numbers within given limits
Parameters:

Set all layer properties to zero
Parameters:

Nguyen-Widrow initialization function
Parameters:

Initialize weights and biases with evenly spaced (linspace) values across the active input range, not random values
This function is intended for tests
Parameters:
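
Initialization functions are not called directly: they are assigned per layer and applied by re-initializing the network. A sketch, assuming layers expose an initf attribute and the net an init() method, as in neurolab:

    import neurolab as nl

    net = nl.net.newff([[-1, 1]], [5, 1])
    net.layers[0].initf = nl.init.initnw       # Nguyen-Widrow for the hidden layer
    net.layers[1].initf = nl.init.init_zeros   # zero weights and bias for the output layer
    net.init()                                 # re-run initialization with the new functions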