GClasses::GLayerSoftMax Class Reference

#include <GNeuralNet.h>

Inheritance diagram for GClasses::GLayerSoftMax:
GClasses::GNeuralNetLayer ← GClasses::GLayerClassic ← GClasses::GLayerSoftMax

Public Member Functions

 GLayerSoftMax (size_t inputs, size_t outputs)
 
 GLayerSoftMax (GDomNode *pNode)
 
virtual ~GLayerSoftMax ()
 
virtual void activate ()
 Applies the logistic activation function to the net vector to compute the activation vector, and also adjusts the weights so that the activations sum to 1. More...
 
virtual void deactivateError ()
 This method is a no-op, since cross-entropy training does not multiply by the derivative of the logistic function. More...
 
virtual const char * type ()
 Returns the type of this layer. More...
 
- Public Member Functions inherited from GClasses::GLayerClassic
 GLayerClassic (size_t inputs, size_t outputs, GActivationFunction *pActivationFunction=NULL)
 General-purpose constructor. Takes ownership of pActivationFunction. If pActivationFunction is NULL, then GActivationTanH is used. More...
 
 GLayerClassic (GDomNode *pNode)
 Deserializing constructor. More...
 
 ~GLayerClassic ()
 
virtual double * activation ()
 Returns the activation values from the most recent call to feedForward(). More...
 
GActivationFunction * activationFunction ()
 Returns a pointer to the activation function used in this layer. More...
 
virtual void backPropError (GNeuralNetLayer *pUpStreamLayer, size_t inputStart=0)
 Backpropagates the error from this layer into the upstream layer's error vector. (Assumes that the error in this layer has already been computed and deactivated. The error this computes is with respect to the output of the upstream layer.) More...
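
 For reference, this computation amounts to multiplying this layer's error vector by the transpose of the weights. A minimal stand-alone sketch, not the library code; the row-major inputs-by-outputs weight layout is an assumption:

    #include <cstddef>

    // Sketch only: propagate deactivated error to the upstream layer.
    // pWeights is assumed row-major, one row per input and one column per
    // output, so err_up[j] = sum_i pWeights[j][i] * err_down[i].
    void backPropErrorSketch(const double* pDownStreamError, double* pUpStreamError,
        const double* pWeights, std::size_t inputs, std::size_t outputs)
    {
        for(std::size_t j = 0; j < inputs; j++)
        {
            double sum = 0.0;
            for(std::size_t i = 0; i < outputs; i++)
                sum += pWeights[j * outputs + i] * pDownStreamError[i];
            pUpStreamError[j] = sum;
        }
    }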
 
void backPropErrorSingleOutput (size_t output, double *pUpStreamError)
 Backpropagates the error from a single output node to a hidden layer. (Assumes that the error in the output node has already been deactivated. The error this computes is with respect to the output of the upstream layer.) More...
 
double * bias ()
 Returns the bias vector of this layer. More...
 
const double * bias () const
 Returns the bias vector of this layer. More...
 
double * biasDelta ()
 Returns a buffer used to store delta values for each bias in this layer. More...
 
virtual void computeError (const double *pTarget)
 Computes the error terms associated with the output of this layer, given a target vector. (Note that this is the error of the output, not the error of the weights. To obtain the error term for the weights, deactivateError must be called.) More...
 
void computeErrorSingleOutput (double target, size_t output)
 This is the same as computeError, except that it only computes the error of a single unit. More...
 
void contractWeights (double factor, bool contractBiases)
 Contracts all the weights. (Assumes contractive error terms have already been set.) More...
 
virtual void copyBiasToNet ()
 Copies the bias vector into the net vector. More...
 
void copySingleNeuronWeights (size_t source, size_t dest)
 
virtual void copyWeights (GNeuralNetLayer *pSource)
 Copy the weights from pSource to this layer. (Assumes pSource is the same type of layer.) More...
 
virtual size_t countWeights ()
 Returns the number of double-precision elements necessary to serialize the weights of this layer into a vector. More...
 
void deactivateErrorSingleOutput (size_t output)
 Same as deactivateError, but only applies to a single unit in this layer. More...
 
virtual void diminishWeights (double amount, bool regularizeBiases)
 Diminishes all the weights (that is, moves them in the direction toward 0) by the specified amount. More...
 
virtual void dropConnect (GRand &rand, double probOfDrop)
 Randomly sets some of the weights to 0. (The dropped weights are restored when you call updateWeightsAndRestoreDroppedOnes.) More...
 
virtual void dropOut (GRand &rand, double probOfDrop)
 Randomly sets the activation of some units to 0. More...
 
virtual double * error ()
 Returns a buffer used to store error terms for each unit in this layer. More...
 
void feedForwardToOneOutput (const double *pIn, size_t output, bool inputBias)
 Feeds a vector forward through this layer to compute only the one specified output value. More...
 
void feedForwardWithInputBias (const double *pIn)
 Feeds a vector forward through this layer. Uses the first value in pIn as an input bias. More...
 
virtual void feedIn (const double *pIn, size_t inputStart, size_t inputCount)
 Feeds a portion of the inputs through the weights and updates the net. More...
 
void getWeightsSingleNeuron (size_t outputNode, double *&weights)
 Gets the weights and bias of a single neuron. More...
 
virtual size_t inputs ()
 Returns the number of values expected to be fed as input into this layer. More...
 
virtual void maxNorm (double max)
 Scales weights if necessary such that the magnitude of the weights (not including the bias) feeding into each unit is <= max. More...
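
 For illustration, the constraint can be pictured as follows. This is a sketch under the assumption that one unit's incoming weights are available as a contiguous array (in the layer they form a column of the weights matrix):

    #include <cmath>
    #include <cstddef>

    // Sketch only: if the L2 magnitude of a unit's incoming weights
    // exceeds max, rescale them so the magnitude equals max.
    void maxNormSketch(double* pIncoming, std::size_t inputs, double max)
    {
        double sumSquares = 0.0;
        for(std::size_t j = 0; j < inputs; j++)
            sumSquares += pIncoming[j] * pIncoming[j];
        double magnitude = std::sqrt(sumSquares);
        if(magnitude > max)
        {
            double scalar = max / magnitude;
            for(std::size_t j = 0; j < inputs; j++)
                pIncoming[j] *= scalar;
        }
    }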
 
double * net ()
 Returns the net vector (that is, the values computed before the activation function was applied) from the most recent call to feedForward(). More...
 
virtual size_t outputs ()
 Returns the number of nodes or units in this layer. More...
 
virtual void perturbWeights (GRand &rand, double deviation, size_t start=0, size_t count=INVALID_INDEX)
 Perturbs the weights that feed into the specified units with Gaussian noise. start specifies the first unit whose incoming weights are perturbed. count specifies the maximum number of units whose incoming weights are perturbed. The default values for these parameters apply the perturbation to all units. More...
 
void regularizeWeights (double factor, double power)
 Adjusts each weight to w = w - factor * pow(w, power). If power is 1, this is the same as calling scaleWeights. If power is 0, this is the same as calling diminishWeights. More...
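
 A sketch of the documented update rule. The sign handling for negative weights is an assumption, chosen so that power = 0 reduces to diminishWeights as stated:

    #include <cmath>
    #include <cstddef>

    // Sketch only: w = w - factor * pow(|w|, power) * sign(w).
    // power = 1 scales each weight by (1 - factor), like scaleWeights;
    // power = 0 moves each weight toward 0 by factor, like diminishWeights.
    void regularizeSketch(double* pWeights, std::size_t count, double factor, double power)
    {
        for(std::size_t i = 0; i < count; i++)
        {
            double step = factor * std::pow(std::fabs(pWeights[i]), power);
            pWeights[i] -= (pWeights[i] >= 0.0 ? step : -step);
        }
    }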
 
virtual void renormalizeInput (size_t input, double oldMin, double oldMax, double newMin=0.0, double newMax=1.0)
 Adjusts weights such that values in the new range will result in the same behavior that previously resulted from values in the old range. More...
 
virtual void resetWeights (GRand &rand)
 Initialize the weights with small random values. More...
 
virtual void resize (size_t inputs, size_t outputs, GRand *pRand=NULL, double deviation=0.03)
 Resizes this layer. If pRand is non-NULL, then it preserves existing weights when possible and initializes any others to small random values. More...
 
virtual void scaleUnitIncomingWeights (size_t unit, double scalar)
 Scale weights that feed into the specified unit. More...
 
virtual void scaleUnitOutgoingWeights (size_t input, double scalar)
 Scale weights that feed into this layer from the specified input. More...
 
virtual void scaleWeights (double factor, bool scaleBiases)
 Multiplies all the weights in this layer by the specified factor. More...
 
virtual GDomNode * serialize (GDom *pDoc)
 Marshall this layer into a DOM. More...
 
void setWeightsSingleNeuron (size_t outputNode, const double *weights)
 Sets the weights and bias of a single neuron. More...
 
void setWeightsToIdentity (size_t start=0, size_t count=(size_t)-1)
 Sets the weights of this layer to make it weakly approximate the identity function. start specifies the first unit whose incoming weights will be adjusted. count specifies the maximum number of units whose incoming weights are adjusted. More...
 
double * slack ()
 Returns a vector used to specify slack terms for each unit in this layer. More...
 
void transformWeights (GMatrix &transform, const double *pOffset)
 Transforms the weights of this layer by the specified transformation matrix and offset vector. transform should be the pseudoinverse of the transform applied to the inputs. pOffset should be the negation of the offset added to the inputs after the transform, or the transformed offset that is added before the transform. More...
 
virtual double unitIncomingWeightsL1Norm (size_t unit)
 Compute the L1 norm (sum of absolute values) of weights feeding into the specified unit. More...
 
virtual double unitIncomingWeightsL2Norm (size_t unit)
 Compute the L2 norm (sum of squares) of weights feeding into the specified unit. More...
 
virtual double unitOutgoingWeightsL1Norm (size_t input)
 Compute the L1 norm (sum of absolute values) of weights feeding into this layer from the specified input. More...
 
virtual double unitOutgoingWeightsL2Norm (size_t input)
 Compute the L2 norm (sum of squares) of weights feeding into this layer from the specified input. More...
 
virtual void updateBias (double learningRate, double momentum)
 Updates the bias of this layer by gradient descent. (Assumes the error has already been computed and deactivated.) More...
 
virtual void updateWeights (const double *pUpStreamActivation, size_t inputStart, size_t inputCount, double learningRate, double momentum)
 Updates the weights that feed into this layer (not including the bias) by gradient descent. (Assumes the error has already been computed and deactivated.) More...
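
 Conceptually, the update adds the outer product of the upstream activation with this layer's error terms. A sketch with momentum omitted; the sign convention assumes the error terms were computed as target minus prediction, as computeError suggests:

    #include <cstddef>

    // Sketch only: gradient-descent weight update, momentum omitted.
    // pWeights is assumed row-major, one row per input.
    void updateWeightsSketch(double* pWeights, const double* pUpStreamActivation,
        const double* pError, std::size_t inputs, std::size_t outputs, double learningRate)
    {
        for(std::size_t j = 0; j < inputs; j++)
            for(std::size_t i = 0; i < outputs; i++)
                pWeights[j * outputs + i] += learningRate * pError[i] * pUpStreamActivation[j];
    }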
 
virtual void updateWeightsAndRestoreDroppedOnes (const double *pUpStreamActivation, size_t inputStart, size_t inputCount, double learningRate, double momentum)
 This is a special weight update method for use with drop-connect. It updates the weights, and restores the weights that were previously dropped by a call to dropConnect. More...
 
void updateWeightsSingleNeuron (size_t outputNode, const double *pUpStreamActivation, double learningRate, double momentum)
 Updates the weights and bias of a single neuron. (Assumes the error has already been computed and deactivated.) More...
 
virtual size_t vectorToWeights (const double *pVector)
 Deserialize from a vector to the weights in this layer. Return the number of elements consumed. More...
 
GMatrix & weights ()
 Returns a reference to the weights matrix of this layer. More...
 
virtual size_t weightsToVector (double *pOutVector)
 Serialize the weights in this layer into a vector. Return the number of elements written. More...
 
- Public Member Functions inherited from GClasses::GNeuralNetLayer
 GNeuralNetLayer ()
 
virtual ~GNeuralNetLayer ()
 
void feedForward (const double *pIn)
 Feeds in the bias and pIn, then computes the activation of this layer. More...
 
virtual void feedIn (GNeuralNetLayer *pUpStreamLayer, size_t inputStart)
 Feeds the previous layer's activation into this layer. (Implementations for specialized hardware may override this method to avoid shuttling the previous layer's activation back to host memory.) More...
 
GMatrix * feedThrough (const GMatrix &data)
 Feeds a matrix through this layer, one row at a time, and returns the resulting transformed matrix. More...
 
virtual void updateWeights (GNeuralNetLayer *pUpStreamLayer, size_t inputStart, double learningRate, double momentum)
 Refines the weights by gradient descent. More...
 
virtual void updateWeightsAndRestoreDroppedOnes (GNeuralNetLayer *pUpStreamLayer, size_t inputStart, double learningRate, double momentum)
 Refines the weights by gradient descent. More...
 
virtual bool usesGPU ()
 Returns true iff this layer does its computations in parallel on a GPU. More...
 

Additional Inherited Members

- Static Public Member Functions inherited from GClasses::GNeuralNetLayer
static GNeuralNetLayer * deserialize (GDomNode *pNode)
 Unmarshalls the specified DOM node into a layer object. More...
 
- Protected Member Functions inherited from GClasses::GNeuralNetLayer
GDomNode * baseDomNode (GDom *pDoc)
 
- Protected Attributes inherited from GClasses::GLayerClassic
GMatrix m_bias
 
GMatrix m_delta
 
GActivationFunction * m_pActivationFunction
 
GMatrix m_weights
 

Constructor & Destructor Documentation

GClasses::GLayerSoftMax::GLayerSoftMax ( size_t inputs, size_t outputs )
GClasses::GLayerSoftMax::GLayerSoftMax ( GDomNode * pNode )
virtual GClasses::GLayerSoftMax::~GLayerSoftMax ( )
inline virtual
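
A hedged usage sketch using only members documented on this page. The GRand header path is an assumption, and in practice the layer is usually added to a GNeuralNet rather than driven directly:

    #include <GNeuralNet.h>
    #include <GRand.h>   // assumed header for GRand (not shown on this page)
    #include <cstddef>
    #include <iostream>
    using namespace GClasses;

    int main()
    {
        GLayerSoftMax layer(4, 3);          // 4 inputs, 3 softmax outputs
        GRand rand(0);
        layer.resetWeights(rand);           // small random initial weights
        double in[4] = {0.1, 0.2, 0.3, 0.4};
        layer.feedForward(in);              // bias + inputs, then activate
        double* pAct = layer.activation();  // activations sum to 1
        for(std::size_t i = 0; i < layer.outputs(); i++)
            std::cout << pAct[i] << "\n";
        return 0;
    }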

Member Function Documentation

virtual void GClasses::GLayerSoftMax::activate ( )
virtual

Applies the logistic activation function to the net vector to compute the activation vector, and also adjusts the weights so that the activations sum to 1.

Reimplemented from GClasses::GLayerClassic.
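
A simplified stand-alone sketch of the computation described above (logistic, then normalize so the activations sum to 1). The rescaling of the weights mentioned in the description is omitted here:

    #include <cmath>
    #include <cstddef>

    // Sketch only: logistic activation followed by normalization.
    void softMaxActivateSketch(const double* pNet, double* pAct, std::size_t outputs)
    {
        double sum = 0.0;
        for(std::size_t i = 0; i < outputs; i++)
        {
            pAct[i] = 1.0 / (1.0 + std::exp(-pNet[i])); // logistic
            sum += pAct[i];
        }
        if(sum > 1e-12) // guard against division by zero
        {
            for(std::size_t i = 0; i < outputs; i++)
                pAct[i] /= sum; // now the activations sum to 1
        }
    }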

virtual void GClasses::GLayerSoftMax::deactivateError ( )
inline virtual

This method is a no-op, since cross-entropy training does not multiply by the derivative of the logistic function.

Reimplemented from GClasses::GLayerClassic.
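
For context, this is the standard softmax/cross-entropy cancellation (a textbook result, not quoted from the library docs): with cross-entropy error and normalized outputs,

    E = -\sum_i t_i \ln a_i
    \quad\Rightarrow\quad
    \frac{\partial E}{\partial \mathrm{net}_i} = a_i - t_i

so the error term computed from the targets already equals the gradient with respect to the net values, and no multiplication by an activation derivative is needed.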

virtual const char* GClasses::GLayerSoftMax::type ( )
inline virtual

Returns the type of this layer.

Reimplemented from GClasses::GLayerClassic.