Wrappers

WrapperBase

class numpy_ml.neural_nets.wrappers.wrappers.WrapperBase(wrapped_layer)[source]

An abstract base class for all Wrapper instances.

forward(z, **kwargs)[source]

Overridden by subclasses.

backward(out, **kwargs)[source]

Overridden by subclasses.
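
forward and backward define the wrapper contract. As a minimal sketch (not part of the library), a custom subclass might simply delegate both calls to the wrapped layer; depending on the library version, additional abstract hooks may also need to be overridden:

    from numpy_ml.neural_nets.wrappers.wrappers import WrapperBase

    class PassThrough(WrapperBase):
        """Hypothetical wrapper that delegates directly to the base layer."""

        def __init__(self, wrapped_layer):
            super().__init__(wrapped_layer)
            self.layer = wrapped_layer  # keep an explicit handle on the base layer

        def forward(self, z, **kwargs):
            # no-op wrapper: pass the input straight to the base layer
            return self.layer.forward(z, **kwargs)

        def backward(self, out, **kwargs):
            # route the upstream gradient back through the base layer
            return self.layer.backward(out, **kwargs)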

trainable[source]

Whether the base layer's parameters are currently trainable (False if the layer has been frozen).

parameters[source]

A dictionary of the base layer's parameters.

hyperparameters[source]

A dictionary of the base layer’s hyperparameters.

derived_variables[source]

A dictionary of the intermediate values computed during layer training.

gradients[source]

A dictionary of the current layer parameter gradients.

act_fn[source]

The activation function for the base layer.

X[source]

The collection of layer inputs.
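
As a quick, hedged illustration of these properties, the sketch below wraps a FullyConnected layer in Dropout (FullyConnected and its n_out argument are assumptions about numpy_ml.neural_nets.layers, not guarantees of this class):

    from numpy_ml.neural_nets.layers import FullyConnected
    from numpy_ml.neural_nets.wrappers import Dropout

    fc = FullyConnected(n_out=16)     # assumed base layer
    wrapper = Dropout(fc, p=0.5)

    print(wrapper.trainable)          # True until freeze() is called
    print(wrapper.hyperparameters)    # the base layer's hyperparameter dict
    print(wrapper.parameters.keys())  # the base layer's parameter names
    print(wrapper.act_fn)             # the base layer's activation function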

freeze()[source]

Freeze the base layer’s parameters at their current values so they can no longer be updated.

unfreeze()[source]

Unfreeze the base layer’s parameters so they can be updated.
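
Continuing the sketch above, toggling the frozen state might look as follows (assuming trainable reports True for an unfrozen layer):

    wrapper.freeze()              # parameter values are now fixed
    assert not wrapper.trainable

    wrapper.unfreeze()            # parameters may be updated again
    assert wrapper.trainable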

flush_gradients()[source]

Erase all of the wrapper’s and the base layer’s derived variables and gradients.

update(lr)[source]

Update the base layer’s parameters using the accrued gradients and layer optimizer. Flush all gradients once the update is complete.
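
A single training step chains forward, backward, and update. The sketch below continues the wrapper above; X and dLdY are placeholder arrays:

    Y = wrapper.forward(X)         # forward pass; caches derived variables
    dLdX = wrapper.backward(dLdY)  # backward pass; accrues parameter gradients
    wrapper.update(0.01)           # apply the optimizer step, then flush gradients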

set_params(summary_dict)[source]

Set the base layer parameters from a dictionary of values.

Parameters: summary_dict (dict) – A dictionary of layer parameters and hyperparameters. If a required parameter or hyperparameter is not included in summary_dict, this method will use the value returned by the layer’s summary() method.
Returns: layer (Layer object) – The newly-initialized layer.
summary()[source]

Return a dict of the layer parameters, hyperparameters, and ID.
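
Taken together, summary() and set_params() support a simple save/restore round trip, sketched here under the assumption that the dict returned by summary() can be passed back unchanged:

    config = wrapper.summary()             # parameters, hyperparameters, and ID
    restored = wrapper.set_params(config)  # rebuild a layer from the dict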

Dropout

class numpy_ml.neural_nets.wrappers.Dropout(wrapped_layer, p)[source]

Bases: numpy_ml.neural_nets.wrappers.wrappers.WrapperBase

A dropout regularization wrapper.

Notes

During training, a dropout layer zeroes each element of the layer input with probability p and scales the activation by 1 / (1 - p), reflecting the fact that, on average, only (1 - p) * N units are active on any training pass. At test time, the layer does not adjust the input at all (i.e., it simply computes the identity function). See the sketch after the parameter list below.

Parameters:
  • wrapped_layer (Layer instance) – The layer to apply dropout to.
  • p (float in [0, 1)) – The dropout probability during training.
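
The scaling described in the notes is commonly called “inverted dropout”. Independent of any wrapped layer, a minimal numpy sketch of the training-time computation:

    import numpy as np

    rng = np.random.RandomState(0)
    p = 0.5                          # dropout probability
    X = rng.randn(4, 8)              # minibatch: 4 examples, 8 features

    mask = rng.rand(*X.shape) >= p   # keep each element with probability 1 - p
    Y_train = (X * mask) / (1 - p)   # zero dropped units, rescale the survivors
    Y_test = X                       # identity at test time
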
forward(X, retain_derived=True)[source]

Compute the layer output with dropout for a single minibatch.

Parameters:
  • X (ndarray of shape (n_ex, n_in)) – Layer input, representing the n_in-dimensional features for a minibatch of n_ex examples.
  • retain_derived (bool) – Whether to retain the variables calculated during the forward pass for use later during backprop. If False, the layer will not be expected to backprop wrt. this input. Default is True.
Returns:

Y (ndarray of shape (n_ex, n_out)) – Layer output for each of the n_ex examples.
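
For example, a hedged forward pass through a Dropout-wrapped FullyConnected layer (the base layer and its n_out argument are assumptions):

    import numpy as np
    from numpy_ml.neural_nets.layers import FullyConnected
    from numpy_ml.neural_nets.wrappers import Dropout

    drop = Dropout(FullyConnected(n_out=16), p=0.5)

    X = np.random.randn(32, 8)  # n_ex = 32 examples, n_in = 8 features
    Y = drop.forward(X)         # shape (32, 16); input elements dropped w.p. 0.5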

backward(dLdy, retain_grads=True)[source]

Backprop from the base layer’s outputs to inputs.

Parameters:
  • dLdy (ndarray of shape (n_ex, n_out) or list of arrays) – The gradient(s) of the loss wrt. the layer output(s).
  • retain_grads (bool) – Whether to include the intermediate parameter gradients computed during the backward pass in the final parameter update. Default is True.
Returns:

dLdX (ndarray of shape (n_ex, n_in) or list of arrays) – The gradient of the loss wrt. the layer input(s) X.
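
Continuing the forward example above, the backward pass returns the gradient wrt. the input and accrues parameter gradients on the base layer, after which update can be applied:

    dLdY = np.ones_like(Y)      # placeholder upstream gradient, shape (32, 16)
    dLdX = drop.backward(dLdY)  # shape (32, 8); gradient wrt. the layer input
    drop.update(0.01)           # hypothetical learning rate; flushes gradients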