GClasses
GClasses::GLayerClassic Class Reference

#include <GNeuralNet.h>

Inheritance diagram for GClasses::GLayerClassic:
GClasses::GNeuralNetLayer → GClasses::GLayerClassic → GClasses::GLayerSoftMax

Public Member Functions

 GLayerClassic (size_t inputs, size_t outputs, GActivationFunction *pActivationFunction=NULL)
 General-purpose constructor. Takes ownership of pActivationFunction. If pActivationFunction is NULL, then GActivationTanH is used. More...
 
 GLayerClassic (GDomNode *pNode)
 Deserializing constructor. More...
 
 ~GLayerClassic ()
 
virtual void activate ()
 Applies the activation function to the net vector to compute the activation vector. More...
 
virtual double * activation ()
 Returns the activation values from the most recent call to feedForward(). More...
 
GActivationFunction * activationFunction ()
 Returns a pointer to the activation function used in this layer. More...
 
virtual void backPropError (GNeuralNetLayer *pUpStreamLayer, size_t inputStart=0)
 Backpropagates the error from this layer into the upstream layer's error vector. (Assumes that the error in this layer has already been computed and deactivated. The error this computes is with respect to the output of the upstream layer.) More...
 
void backPropErrorSingleOutput (size_t output, double *pUpStreamError)
 Backpropagates the error from a single output node to a hidden layer. (Assumes that the error in the output node has already been deactivated. The error this computes is with respect to the output of the upstream layer.) More...
 
double * bias ()
 Returns the bias vector of this layer. More...
 
const double * bias () const
 Returns the bias vector of this layer. More...
 
double * biasDelta ()
 Returns a buffer used to store delta values for each bias in this layer. More...
 
virtual void computeError (const double *pTarget)
 Computes the error terms associated with the output of this layer, given a target vector. (Note that this is the error of the output, not the error of the weights. To obtain the error term for the weights, deactivateError must be called.) More...
 
void computeErrorSingleOutput (double target, size_t output)
 This is the same as computeError, except that it only computes the error of a single unit. More...
 
void contractWeights (double factor, bool contractBiases)
 Contracts all the weights. (Assumes contractive error terms have already been set.) More...
 
virtual void copyBiasToNet ()
 Copies the bias vector into the net vector. More...
 
void copySingleNeuronWeights (size_t source, size_t dest)
 Copies the incoming weights and bias of one neuron to another within this layer. More...
 
virtual void copyWeights (GNeuralNetLayer *pSource)
 Copy the weights from pSource to this layer. (Assumes pSource is the same type of layer.) More...
 
virtual size_t countWeights ()
 Returns the number of double-precision elements necessary to serialize the weights of this layer into a vector. More...
 
virtual void deactivateError ()
 Multiplies each element in the error vector by the derivative of the activation function. This results in the error having meaning with respect to the weights, instead of the output. (Assumes the error for this layer has already been computed.) More...
 
void deactivateErrorSingleOutput (size_t output)
 Same as deactivateError, but only applies to a single unit in this layer. More...
 
virtual void diminishWeights (double amount, bool regularizeBiases)
 Diminishes all the weights (that is, moves them in the direction toward 0) by the specified amount. More...
 
virtual void dropConnect (GRand &rand, double probOfDrop)
 Randomly sets some of the weights to 0. (The dropped weights are restored when you call updateWeightsAndRestoreDroppedOnes.) More...
 
virtual void dropOut (GRand &rand, double probOfDrop)
 Randomly sets the activation of some units to 0. More...
 
virtual double * error ()
 Returns a buffer used to store error terms for each unit in this layer. More...
 
void feedForwardToOneOutput (const double *pIn, size_t output, bool inputBias)
 Feeds a vector forward through this layer to compute only the one specified output value. More...
 
void feedForwardWithInputBias (const double *pIn)
 Feeds a vector forward through this layer. Uses the first value in pIn as an input bias. More...
 
virtual void feedIn (const double *pIn, size_t inputStart, size_t inputCount)
 Feeds a portion of the inputs through the weights and updates the net. More...
 
void getWeightsSingleNeuron (size_t outputNode, double *&weights)
 Gets the weights and bias of a single neuron. More...
 
virtual size_t inputs ()
 Returns the number of values expected to be fed as input into this layer. More...
 
virtual void maxNorm (double max)
 Scales weights if necessary such that the magnitude of the weights (not including the bias) feeding into each unit is <= max. More...
 
double * net ()
 Returns the net vector (that is, the values computed before the activation function was applied) from the most recent call to feedForward(). More...
 
virtual size_t outputs ()
 Returns the number of nodes or units in this layer. More...
 
virtual void perturbWeights (GRand &rand, double deviation, size_t start=0, size_t count=INVALID_INDEX)
 Perturbs the weights that feed into the specified units with Gaussian noise. start specifies the first unit whose incoming weights are perturbed. count specifies the maximum number of units whose incoming weights are perturbed. The default values for these parameters apply the perturbation to all units. More...
 
void regularizeWeights (double factor, double power)
 Adjusts each weight to w = w - factor * pow(w, power). If power is 1, this is the same as calling scaleWeights. If power is 0, this is the same as calling diminishWeights. More...
 
virtual void renormalizeInput (size_t input, double oldMin, double oldMax, double newMin=0.0, double newMax=1.0)
 Adjusts weights such that values in the new range will result in the same behavior that previously resulted from values in the old range. More...
 
virtual void resetWeights (GRand &rand)
 Initialize the weights with small random values. More...
 
virtual void resize (size_t inputs, size_t outputs, GRand *pRand=NULL, double deviation=0.03)
 Resizes this layer. If pRand is non-NULL, then it preserves existing weights when possible and initializes any others to small random values. More...
 
virtual void scaleUnitIncomingWeights (size_t unit, double scalar)
 Scale weights that feed into the specified unit. More...
 
virtual void scaleUnitOutgoingWeights (size_t input, double scalar)
 Scale weights that feed into this layer from the specified input. More...
 
virtual void scaleWeights (double factor, bool scaleBiases)
 Multiplies all the weights in this layer by the specified factor. More...
 
virtual GDomNode * serialize (GDom *pDoc)
 Marshal this layer into a DOM. More...
 
void setWeightsSingleNeuron (size_t outputNode, const double *weights)
 Sets the weights and bias of a single neuron. More...
 
void setWeightsToIdentity (size_t start=0, size_t count=(size_t)-1)
 Sets the weights of this layer to make it weakly approximate the identity function. start specifies the first unit whose incoming weights will be adjusted. count specifies the maximum number of units whose incoming weights are adjusted. More...
 
double * slack ()
 Returns a vector used to specify slack terms for each unit in this layer. More...
 
void transformWeights (GMatrix &transform, const double *pOffset)
 Transforms the weights of this layer by the specified transformation matrix and offset vector. transform should be the pseudoinverse of the transform applied to the inputs. pOffset should be the negation of the offset added to the inputs after the transform, or the transformed offset that is added before the transform. More...
 
virtual const char * type ()
 Returns the type of this layer. More...
 
virtual double unitIncomingWeightsL1Norm (size_t unit)
 Compute the L1 norm (sum of absolute values) of weights feeding into the specified unit. More...
 
virtual double unitIncomingWeightsL2Norm (size_t unit)
 Compute the L2 norm (sum of squares) of weights feeding into the specified unit. More...
 
virtual double unitOutgoingWeightsL1Norm (size_t input)
 Compute the L1 norm (sum of absolute values) of weights feeding into this layer from the specified input. More...
 
virtual double unitOutgoingWeightsL2Norm (size_t input)
 Compute the L2 norm (sum of squares) of weights feeding into this layer from the specified input. More...
 
virtual void updateBias (double learningRate, double momentum)
 Updates the bias of this layer by gradient descent. (Assumes the error has already been computed and deactivated.) More...
 
virtual void updateWeights (const double *pUpStreamActivation, size_t inputStart, size_t inputCount, double learningRate, double momentum)
 Updates the weights that feed into this layer (not including the bias) by gradient descent. (Assumes the error has already been computed and deactivated.) More...
 
virtual void updateWeightsAndRestoreDroppedOnes (const double *pUpStreamActivation, size_t inputStart, size_t inputCount, double learningRate, double momentum)
 This is a special weight update method for use with drop-connect. It updates the weights, and restores the weights that were previously dropped by a call to dropConnect. More...
 
void updateWeightsSingleNeuron (size_t outputNode, const double *pUpStreamActivation, double learningRate, double momentum)
 Updates the weights and bias of a single neuron. (Assumes the error has already been computed and deactivated.) More...
 
virtual size_t vectorToWeights (const double *pVector)
 Deserialize from a vector to the weights in this layer. Return the number of elements consumed. More...
 
GMatrix & weights ()
 Returns a reference to the weights matrix of this layer. More...
 
virtual size_t weightsToVector (double *pOutVector)
 Serialize the weights in this layer into a vector. Return the number of elements written. More...
 
- Public Member Functions inherited from GClasses::GNeuralNetLayer
 GNeuralNetLayer ()
 
virtual ~GNeuralNetLayer ()
 
void feedForward (const double *pIn)
 Feeds in the bias and pIn, then computes the activation of this layer. More...
 
virtual void feedIn (GNeuralNetLayer *pUpStreamLayer, size_t inputStart)
 Feeds the previous layer's activation into this layer. (Implementations for specialized hardware may override this method to avoid shuttling the previous layer's activation back to host memory.) More...
 
GMatrix * feedThrough (const GMatrix &data)
 Feeds a matrix through this layer, one row at-a-time, and returns the resulting transformed matrix. More...
 
virtual void updateWeights (GNeuralNetLayer *pUpStreamLayer, size_t inputStart, double learningRate, double momentum)
 Refines the weights by gradient descent. More...
 
virtual void updateWeightsAndRestoreDroppedOnes (GNeuralNetLayer *pUpStreamLayer, size_t inputStart, double learningRate, double momentum)
 Refines the weights by gradient descent, and restores any weights that were dropped by a previous call to dropConnect. More...
 
virtual bool usesGPU ()
 Returns true iff this layer does its computations in parallel on a GPU. More...
 

Protected Attributes

GMatrix m_bias
 
GMatrix m_delta
 
GActivationFunction * m_pActivationFunction
 
GMatrix m_weights
 

Friends

class GNeuralNet
 

Additional Inherited Members

- Static Public Member Functions inherited from GClasses::GNeuralNetLayer
static GNeuralNetLayer * deserialize (GDomNode *pNode)
 Unmarshals the specified DOM node into a layer object. More...
 
- Protected Member Functions inherited from GClasses::GNeuralNetLayer
GDomNode * baseDomNode (GDom *pDoc)
 

Constructor & Destructor Documentation

GClasses::GLayerClassic::GLayerClassic ( size_t  inputs,
size_t  outputs,
GActivationFunction *  pActivationFunction = NULL 
)

General-purpose constructor. Takes ownership of pActivationFunction. If pActivationFunction is NULL, then GActivationTanH is used.

GClasses::GLayerClassic::GLayerClassic ( GDomNode *  pNode)

Deserializing constructor.

GClasses::GLayerClassic::~GLayerClassic ( )

Member Function Documentation

virtual void GClasses::GLayerClassic::activate ( )
virtual

Applies the activation function to the net vector to compute the activation vector.

Implements GClasses::GNeuralNetLayer.

Reimplemented in GClasses::GLayerSoftMax.
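
As an illustrative, self-contained sketch (not the library's actual implementation), applying the default tanh activation function elementwise to a net vector looks like this:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Elementwise tanh activation: activation[i] = tanh(net[i]).
// GActivationTanH is the default activation for GLayerClassic.
std::vector<double> activateTanh(const std::vector<double>& net)
{
    std::vector<double> activation(net.size());
    for(size_t i = 0; i < net.size(); i++)
        activation[i] = std::tanh(net[i]);
    return activation;
}
```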

virtual double* GClasses::GLayerClassic::activation ( )
inlinevirtual

Returns the activation values from the most recent call to feedForward().

Implements GClasses::GNeuralNetLayer.

GActivationFunction* GClasses::GLayerClassic::activationFunction ( )
inline

Returns a pointer to the activation function used in this layer.

virtual void GClasses::GLayerClassic::backPropError ( GNeuralNetLayer *  pUpStreamLayer,
size_t  inputStart = 0 
)
virtual

Backpropagates the error from this layer into the upstream layer's error vector. (Assumes that the error in this layer has already been computed and deactivated. The error this computes is with respect to the output of the upstream layer.)

Implements GClasses::GNeuralNetLayer.

void GClasses::GLayerClassic::backPropErrorSingleOutput ( size_t  output,
double *  pUpStreamError 
)

Backpropagates the error from a single output node to a hidden layer. (Assumes that the error in the output node has already been deactivated. The error this computes is with respect to the output of the upstream layer.)

double* GClasses::GLayerClassic::bias ( )
inline

Returns the bias vector of this layer.

const double* GClasses::GLayerClassic::bias ( ) const
inline

Returns the bias vector of this layer.

double* GClasses::GLayerClassic::biasDelta ( )
inline

Returns a buffer used to store delta values for each bias in this layer.

virtual void GClasses::GLayerClassic::computeError ( const double *  pTarget)
virtual

Computes the error terms associated with the output of this layer, given a target vector. (Note that this is the error of the output, not the error of the weights. To obtain the error term for the weights, deactivateError must be called.)

Implements GClasses::GNeuralNetLayer.

void GClasses::GLayerClassic::computeErrorSingleOutput ( double  target,
size_t  output 
)

This is the same as computeError, except that it only computes the error of a single unit.

void GClasses::GLayerClassic::contractWeights ( double  factor,
bool  contractBiases 
)

Contracts all the weights. (Assumes contractive error terms have already been set.)

virtual void GClasses::GLayerClassic::copyBiasToNet ( )
virtual

Copies the bias vector into the net vector.

Implements GClasses::GNeuralNetLayer.

void GClasses::GLayerClassic::copySingleNeuronWeights ( size_t  source,
size_t  dest 
)
virtual

Copies the incoming weights and bias of the source neuron to the dest neuron.

Reimplemented from GClasses::GNeuralNetLayer.

virtual void GClasses::GLayerClassic::copyWeights ( GNeuralNetLayer *  pSource)
virtual

Copy the weights from pSource to this layer. (Assumes pSource is the same type of layer.)

Implements GClasses::GNeuralNetLayer.

virtual size_t GClasses::GLayerClassic::countWeights ( )
virtual

Returns the number of double-precision elements necessary to serialize the weights of this layer into a vector.

Implements GClasses::GNeuralNetLayer.

virtual void GClasses::GLayerClassic::deactivateError ( )
virtual

Multiplies each element in the error vector by the derivative of the activation function. This results in the error having meaning with respect to the weights, instead of the output. (Assumes the error for this layer has already been computed.)

Implements GClasses::GNeuralNetLayer.

Reimplemented in GClasses::GLayerSoftMax.

void GClasses::GLayerClassic::deactivateErrorSingleOutput ( size_t  output)

Same as deactivateError, but only applies to a single unit in this layer.

virtual void GClasses::GLayerClassic::diminishWeights ( double  amount,
bool  regularizeBiases 
)
virtual

Diminishes all the weights (that is, moves them in the direction toward 0) by the specified amount.

Implements GClasses::GNeuralNetLayer.

virtual void GClasses::GLayerClassic::dropConnect ( GRand &  rand,
double  probOfDrop 
)
virtual

Randomly sets some of the weights to 0. (The dropped weights are restored when you call updateWeightsAndRestoreDroppedOnes.)

Implements GClasses::GNeuralNetLayer.

virtual void GClasses::GLayerClassic::dropOut ( GRand &  rand,
double  probOfDrop 
)
virtual

Randomly sets the activation of some units to 0.

Implements GClasses::GNeuralNetLayer.

virtual double* GClasses::GLayerClassic::error ( )
inlinevirtual

Returns a buffer used to store error terms for each unit in this layer.

Implements GClasses::GNeuralNetLayer.

void GClasses::GLayerClassic::feedForwardToOneOutput ( const double *  pIn,
size_t  output,
bool  inputBias 
)

Feeds a vector forward through this layer to compute only the one specified output value.

void GClasses::GLayerClassic::feedForwardWithInputBias ( const double *  pIn)

Feeds a vector forward through this layer. Uses the first value in pIn as an input bias.

virtual void GClasses::GLayerClassic::feedIn ( const double *  pIn,
size_t  inputStart,
size_t  inputCount 
)
virtual

Feeds a portion of the inputs through the weights and updates the net.

Implements GClasses::GNeuralNetLayer.

void GClasses::GLayerClassic::getWeightsSingleNeuron ( size_t  outputNode,
double *&  weights 
)
virtual

Gets the weights and bias of a single neuron.

Reimplemented from GClasses::GNeuralNetLayer.

virtual size_t GClasses::GLayerClassic::inputs ( )
inlinevirtual

Returns the number of values expected to be fed as input into this layer.

Implements GClasses::GNeuralNetLayer.

virtual void GClasses::GLayerClassic::maxNorm ( double  max)
virtual

Scales weights if necessary such that the magnitude of the weights (not including the bias) feeding into each unit is <= max.

Implements GClasses::GNeuralNetLayer.
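
The constraint can be sketched for a single unit's incoming-weight vector as follows (illustrative only; the library operates on its internal weights matrix):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Scales one unit's incoming weights (bias excluded) so their L2 magnitude
// does not exceed max; leaves them unchanged if already within bounds.
void maxNormSketch(std::vector<double>& incoming, double max)
{
    double sumSq = 0.0;
    for(double w : incoming)
        sumSq += w * w;
    double mag = std::sqrt(sumSq);
    if(mag > max)
    {
        double scale = max / mag;
        for(double& w : incoming)
            w *= scale;
    }
}
```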

double* GClasses::GLayerClassic::net ( )
inline

Returns the net vector (that is, the values computed before the activation function was applied) from the most recent call to feedForward().

virtual size_t GClasses::GLayerClassic::outputs ( )
inlinevirtual

Returns the number of nodes or units in this layer.

Implements GClasses::GNeuralNetLayer.

virtual void GClasses::GLayerClassic::perturbWeights ( GRand &  rand,
double  deviation,
size_t  start = 0,
size_t  count = INVALID_INDEX 
)
virtual

Perturbs the weights that feed into the specified units with Gaussian noise. start specifies the first unit whose incoming weights are perturbed. count specifies the maximum number of units whose incoming weights are perturbed. The default values for these parameters apply the perturbation to all units.

Implements GClasses::GNeuralNetLayer.

void GClasses::GLayerClassic::regularizeWeights ( double  factor,
double  power 
)

Adjusts each weight to w = w - factor * pow(w, power). If power is 1, this is the same as calling scaleWeights. If power is 0, this is the same as calling diminishWeights.
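
A literal transcription of the documented formula (the library may handle signs differently for zero or even powers; this is only a numeric illustration):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Applies w = w - factor * pow(w, power) to each weight.
void regularizeWeightsSketch(std::vector<double>& weights,
                             double factor, double power)
{
    for(double& w : weights)
        w -= factor * std::pow(w, power);
}
```

With power 1, each weight becomes w * (1 - factor), i.e. a uniform scaling; with power 0, a constant factor is subtracted from each weight.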

virtual void GClasses::GLayerClassic::renormalizeInput ( size_t  input,
double  oldMin,
double  oldMax,
double  newMin = 0.0,
double  newMax = 1.0 
)
virtual

Adjusts weights such that values in the new range will result in the same behavior that previously resulted from values in the old range.

Implements GClasses::GNeuralNetLayer.
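
Since the remapping of an input range is linear, preserving the net value w*x + b requires scaling the weight by (oldMax - oldMin) / (newMax - newMin) and compensating in the bias. A self-contained sketch of that algebra for a single weight (the library applies it across the whole weights matrix):

```cpp
#include <cassert>
#include <cmath>

struct UnitWeights { double w; double b; }; // hypothetical helper type

// Remaps one input's weight and the bias so a value at the same relative
// position in [newMin, newMax] produces the same net it did in [oldMin, oldMax]:
//   w' = w * (oldMax - oldMin) / (newMax - newMin)
//   b' = b + w * oldMin - w' * newMin
UnitWeights renormalizeInputSketch(UnitWeights u, double oldMin, double oldMax,
                                   double newMin, double newMax)
{
    double scale = (oldMax - oldMin) / (newMax - newMin);
    double w2 = u.w * scale;
    double b2 = u.b + u.w * oldMin - w2 * newMin;
    return {w2, b2};
}
```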

virtual void GClasses::GLayerClassic::resetWeights ( GRand &  rand)
virtual

Initialize the weights with small random values.

Implements GClasses::GNeuralNetLayer.

virtual void GClasses::GLayerClassic::resize ( size_t  inputs,
size_t  outputs,
GRand *  pRand = NULL,
double  deviation = 0.03 
)
virtual

Resizes this layer. If pRand is non-NULL, then it preserves existing weights when possible and initializes any others to small random values.

Implements GClasses::GNeuralNetLayer.

virtual void GClasses::GLayerClassic::scaleUnitIncomingWeights ( size_t  unit,
double  scalar 
)
virtual

Scale weights that feed into the specified unit.

Implements GClasses::GNeuralNetLayer.

virtual void GClasses::GLayerClassic::scaleUnitOutgoingWeights ( size_t  input,
double  scalar 
)
virtual

Scale weights that feed into this layer from the specified input.

Implements GClasses::GNeuralNetLayer.

virtual void GClasses::GLayerClassic::scaleWeights ( double  factor,
bool  scaleBiases 
)
virtual

Multiplies all the weights in this layer by the specified factor.

Implements GClasses::GNeuralNetLayer.

virtual GDomNode* GClasses::GLayerClassic::serialize ( GDom *  pDoc)
virtual

Marshal this layer into a DOM.

Implements GClasses::GNeuralNetLayer.

void GClasses::GLayerClassic::setWeightsSingleNeuron ( size_t  outputNode,
const double *  weights 
)
virtual

Sets the weights and bias of a single neuron.

Reimplemented from GClasses::GNeuralNetLayer.

void GClasses::GLayerClassic::setWeightsToIdentity ( size_t  start = 0,
size_t  count = (size_t)-1 
)

Sets the weights of this layer to make it weakly approximate the identity function. start specifies the first unit whose incoming weights will be adjusted. count specifies the maximum number of units whose incoming weights are adjusted.

double* GClasses::GLayerClassic::slack ( )
inline

Returns a vector used to specify slack terms for each unit in this layer.

void GClasses::GLayerClassic::transformWeights ( GMatrix &  transform,
const double *  pOffset 
)

Transforms the weights of this layer by the specified transformation matrix and offset vector. transform should be the pseudoinverse of the transform applied to the inputs. pOffset should be the negation of the offset added to the inputs after the transform, or the transformed offset that is added before the transform.

virtual const char* GClasses::GLayerClassic::type ( )
inlinevirtual

Returns the type of this layer.

Implements GClasses::GNeuralNetLayer.

Reimplemented in GClasses::GLayerSoftMax.

virtual double GClasses::GLayerClassic::unitIncomingWeightsL1Norm ( size_t  unit)
virtual

Compute the L1 norm (sum of absolute values) of weights feeding into the specified unit.

Implements GClasses::GNeuralNetLayer.

virtual double GClasses::GLayerClassic::unitIncomingWeightsL2Norm ( size_t  unit)
virtual

Compute the L2 norm (sum of squares) of weights feeding into the specified unit.

Implements GClasses::GNeuralNetLayer.
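
The two incoming-weight norms above can be sketched directly from their documented definitions. Note that the L2 variant, as documented, returns the sum of squares, i.e. the squared L2 norm, with no square root taken:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// L1 norm as documented: sum of absolute values.
double weightsL1Norm(const std::vector<double>& w)
{
    double sum = 0.0;
    for(double v : w)
        sum += std::fabs(v);
    return sum;
}

// "L2 norm" as documented here: sum of squares (no square root).
double weightsL2Norm(const std::vector<double>& w)
{
    double sum = 0.0;
    for(double v : w)
        sum += v * v;
    return sum;
}
```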

virtual double GClasses::GLayerClassic::unitOutgoingWeightsL1Norm ( size_t  input)
virtual

Compute the L1 norm (sum of absolute values) of weights feeding into this layer from the specified input.

Implements GClasses::GNeuralNetLayer.

virtual double GClasses::GLayerClassic::unitOutgoingWeightsL2Norm ( size_t  input)
virtual

Compute the L2 norm (sum of squares) of weights feeding into this layer from the specified input.

Implements GClasses::GNeuralNetLayer.

virtual void GClasses::GLayerClassic::updateBias ( double  learningRate,
double  momentum 
)
virtual

Updates the bias of this layer by gradient descent. (Assumes the error has already been computed and deactivated.)

Implements GClasses::GNeuralNetLayer.

virtual void GClasses::GLayerClassic::updateWeights ( const double *  pUpStreamActivation,
size_t  inputStart,
size_t  inputCount,
double  learningRate,
double  momentum 
)
virtual

Updates the weights that feed into this layer (not including the bias) by gradient descent. (Assumes the error has already been computed and deactivated.)

Implements GClasses::GNeuralNetLayer.

virtual void GClasses::GLayerClassic::updateWeightsAndRestoreDroppedOnes ( const double *  pUpStreamActivation,
size_t  inputStart,
size_t  inputCount,
double  learningRate,
double  momentum 
)
virtual

This is a special weight update method for use with drop-connect. It updates the weights, and restores the weights that were previously dropped by a call to dropConnect.

Implements GClasses::GNeuralNetLayer.

void GClasses::GLayerClassic::updateWeightsSingleNeuron ( size_t  outputNode,
const double *  pUpStreamActivation,
double  learningRate,
double  momentum 
)

Updates the weights and bias of a single neuron. (Assumes the error has already been computed and deactivated.)

virtual size_t GClasses::GLayerClassic::vectorToWeights ( const double *  pVector)
virtual

Deserialize from a vector to the weights in this layer. Return the number of elements consumed.

Implements GClasses::GNeuralNetLayer.

GMatrix& GClasses::GLayerClassic::weights ( )
inline

Returns a reference to the weights matrix of this layer.

virtual size_t GClasses::GLayerClassic::weightsToVector ( double *  pOutVector)
virtual

Serialize the weights in this layer into a vector. Return the number of elements written.

Implements GClasses::GNeuralNetLayer.

Friends And Related Function Documentation

friend class GNeuralNet
friend

Member Data Documentation

GMatrix GClasses::GLayerClassic::m_bias
protected

GMatrix GClasses::GLayerClassic::m_delta
protected

GActivationFunction* GClasses::GLayerClassic::m_pActivationFunction
protected

GMatrix GClasses::GLayerClassic::m_weights
protected