Implementation of a recurrent network, including a Gauss-Newton approximation for use in Hessian-free optimization.
Based on Martens, J., & Sutskever, I. (2011). Learning recurrent neural networks with hessian-free optimization. Proceedings of the 28th International Conference on Machine Learning.
RNNet(shape, rec_layers=None, W_rec_params=None, truncation=None, **kwargs)[source]¶
Implementation of a recurrent deep network (including gradient/curvature computation).
- rec_layers (list) – indices of layers with recurrent connections (default is to make all except first and last layers recurrent)
- W_rec_params (dict) – parameters used to initialize the recurrent weights (passed to the weight initialization function)
- truncation (tuple) – a tuple (n, k): backpropagation through time is executed every n timesteps and run backwards for k steps (defaults to full backprop if None)
See FFNet for the remaining parameters.
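As a hedged illustration (not the library's internals), the effect of the truncation=(n, k) parameter can be sketched in plain Python: backprop is triggered every n timesteps, and each backward pass runs back k steps from its trigger point.

```python
def truncation_windows(n_timesteps, truncation):
    """Sketch of an (n, k) truncated-BPTT schedule (hypothetical helper,
    not part of the documented API): a backward pass is triggered every
    n timesteps and runs back k steps from each trigger point."""
    if truncation is None:
        # full backprop: one window covering the whole sequence
        return [(0, n_timesteps)]
    n, k = truncation
    windows = []
    for end in range(n, n_timesteps + 1, n):
        start = max(0, end - k)  # don't run past the start of the sequence
        windows.append((start, end))
    return windows

# e.g. a 10-step sequence with truncation=(5, 3): backward passes are
# triggered at t=5 and t=10, each running back 3 steps
print(truncation_windows(10, (5, 3)))  # [(2, 5), (7, 10)]
```

This is why truncation trades gradient accuracy for memory and compute: timesteps outside the windows never receive backpropagated error.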
forward(inputs, params=None, deriv=False, init_activations=None, init_state=None)[source]¶
Compute layer activations for given input and parameters.
- inputs (ndarray) – input vectors (passed to the first layer)
- params (ndarray) – parameter vector (weights) for the network (defaults to the network's current weights if None)
- deriv (bool) – if True, also compute the derivative of the activations
- init_activations (list) – initial values for the activations in each layer
- init_state (list) – initial values for the internal state of any stateful nonlinearities
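As a minimal, self-contained sketch (plain NumPy, not the library's implementation) of what a recurrent forward pass computes: each layer's activation at time t depends on the current input and on that layer's activation at t-1 through the recurrent weights, with init_activations supplying the t=0 state.

```python
import numpy as np

def recurrent_forward(inputs, W_in, W_rec, init_activation=None):
    """Hypothetical single-layer recurrent forward pass:
    h_t = tanh(x_t @ W_in + h_{t-1} @ W_rec),
    where inputs has shape (n_timesteps, n_inputs)."""
    n_timesteps = inputs.shape[0]
    n_hidden = W_rec.shape[0]
    # initial activation defaults to zeros, mirroring init_activations=None
    h = init_activation if init_activation is not None else np.zeros(n_hidden)
    activations = np.zeros((n_timesteps, n_hidden))
    for t in range(n_timesteps):
        h = np.tanh(inputs[t] @ W_in + h @ W_rec)
        activations[t] = h
    return activations

# tiny example: 3 timesteps, 2 inputs, 2 hidden units
acts = recurrent_forward(np.ones((3, 2)), np.zeros((2, 2)), np.zeros((2, 2)))
print(acts.shape)  # (3, 2)
```

With zero weights every activation is tanh(0) = 0, which makes the shape and recurrence easy to check by hand.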