| Class | Description |
|---|---|
| AbstractLSTM | LSTM recurrent net, based on Graves, Supervised Sequence Labelling with Recurrent Neural Networks: http://www.cs.toronto.edu/~graves/phd.pdf |
| AbstractLSTM.Builder | |
| ActivationLayer | A simple layer that applies the specified activation function to the input activations. |
| ActivationLayer.Builder | |
| AutoEncoder | Autoencoder layer. |
| AutoEncoder.Builder | |
| BaseLayer | A neural network layer. |
| BaseLayer.Builder | |
| BaseOutputLayer | |
| BaseOutputLayer.Builder | |
| BasePretrainNetwork | |
| BasePretrainNetwork.Builder | |
| BaseRecurrentLayer | |
| BaseRecurrentLayer.Builder | |
| BaseUpsamplingLayer | Upsampling base layer. |
| BaseUpsamplingLayer.UpsamplingBuilder | |
| BatchNormalization | Batch normalization layer. See Ioffe and Szegedy, 2015, Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift: https://arxiv.org/abs/1502.03167 |
| BatchNormalization.Builder | |
| CapsuleLayer.Builder | |
| CapsuleStrengthLayer.Builder | |
| CenterLossOutputLayer | Center loss is similar to triplet loss, except that it enforces intra-class consistency and doesn't require a feed-forward pass over multiple examples. |
| CenterLossOutputLayer.Builder | |
| Cnn3DLossLayer | 3D convolutional neural network loss layer. |
| Cnn3DLossLayer.Builder | |
| CnnLossLayer | Convolutional neural network loss layer. |
| CnnLossLayer.Builder | |
| Convolution1DLayer | 1D (temporal) convolutional layer. |
| Convolution1DLayer.Builder | |
| Convolution3D | 3D convolution layer configuration. |
| Convolution3D.Builder | |
| Convolution3D.DataFormat | An optional dataFormat: "NDHWC" or "NCDHW". |
| ConvolutionLayer | 2D convolution layer (for example, spatial convolution over images). |
| ConvolutionLayer.AlgoMode | The "PREFER_FASTEST" mode picks the fastest algorithm for the specified parameters from the ConvolutionLayer.FwdAlgo, ConvolutionLayer.BwdFilterAlgo, and ConvolutionLayer.BwdDataAlgo lists, but these algorithms may be very memory intensive; if unexpected errors occur when using cuDNN, try the "NO_WORKSPACE" mode. |
| ConvolutionLayer.BaseConvBuilder | |
| ConvolutionLayer.Builder | |
| ConvolutionLayer.BwdDataAlgo | The backward data algorithm to use when ConvolutionLayer.AlgoMode is set to "USER_SPECIFIED". |
| ConvolutionLayer.BwdFilterAlgo | The backward filter algorithm to use when ConvolutionLayer.AlgoMode is set to "USER_SPECIFIED". |
| ConvolutionLayer.FwdAlgo | The forward algorithm to use when ConvolutionLayer.AlgoMode is set to "USER_SPECIFIED". |
| Deconvolution2D | 2D deconvolution layer configuration. Deconvolutions are also known as transpose convolutions or fractionally strided convolutions. |
| Deconvolution2D.Builder | |
| Deconvolution3D | 3D deconvolution layer configuration. Deconvolutions are also known as transpose convolutions or fractionally strided convolutions. |
| Deconvolution3D.Builder | |
| DenseLayer | Dense layer: a standard fully connected feed-forward layer. |
| DenseLayer.Builder | |
| DepthwiseConvolution2D | 2D depth-wise convolution layer configuration. |
| DepthwiseConvolution2D.Builder | |
| DropoutLayer | Dropout layer. |
| EmbeddingLayer | Embedding layer: feed-forward layer that expects a single integer per example as input (a class number, in range 0 to numClass-1). |
| EmbeddingLayer.Builder | |
| EmbeddingSequenceLayer | Embedding layer for sequences: feed-forward layer that expects a fixed number (inputLength) of integers/indices per example as input, each in range 0 to numClasses - 1. |
| EmbeddingSequenceLayer.Builder | |
| FeedForwardLayer | Base class for feed-forward layer configurations. |
| FeedForwardLayer.Builder | |
| GlobalPoolingLayer | Global pooling layer: used to pool over time for RNNs and over the spatial dimensions (2D pooling) for CNNs. |
| GlobalPoolingLayer.Builder | |
| GravesBidirectionalLSTM | Deprecated. Use Bidirectional instead: with the Bidirectional layer wrapper you can make any recurrent layer bidirectional, in particular GravesLSTM. Note that this layer adds the output of both directions, which translates into "ADD" mode in Bidirectional. Usage: .layer(new Bidirectional(Bidirectional.Mode.ADD, new GravesLSTM.Builder()....build())) |
| GravesBidirectionalLSTM.Builder | Deprecated. |
| GravesLSTM | Deprecated. Will eventually be removed. Use LSTM instead, which has similar prediction accuracy but supports cuDNN for faster network training on CUDA (Nvidia) GPUs. |
| GravesLSTM.Builder | Deprecated. |
| Layer | A neural network layer. |
| Layer.Builder | |
| LearnedSelfAttentionLayer | Implements dot-product self-attention with learned queries: takes RNN-style input of shape [batchSize, features, timesteps] and applies dot-product attention using learned queries. |
| LearnedSelfAttentionLayer.Builder | |
| LocallyConnected1D | SameDiff version of a 1D locally connected layer. |
| LocallyConnected1D.Builder | |
| LocallyConnected2D | SameDiff version of a 2D locally connected layer. |
| LocallyConnected2D.Builder | |
| LocalResponseNormalization | Local response normalization layer. See section 3.3 of http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf |
| LocalResponseNormalization.Builder | |
| LossLayer | A flexible output layer that applies a loss function to an input without MLP logic. |
| LossLayer.Builder | |
| LSTM | LSTM recurrent neural network layer without peephole connections. |
| LSTM.Builder | |
| NoParamLayer | |
| OutputLayer | Output layer used for training via backpropagation based on labels and a specified loss function. |
| OutputLayer.Builder | |
| PoolingType | Pooling type: MAX (output is the maximum of the input values), AVG (average of the input values), SUM (sum of the input values), PNORM (p-norm pooling). |
| PReLULayer | Parametrized Rectified Linear Unit (PReLU): f(x) = alpha * x for x < 0, f(x) = x for x >= 0. alpha has the same shape as x and is a learned parameter. |
| PReLULayer.Builder | |
| PrimaryCapsules.Builder | |
| RecurrentAttentionLayer | Implements recurrent dot-product attention: takes RNN-style input of shape [batchSize, features, timesteps] and applies dot-product attention using the hidden state as the query and all time steps as keys/values. |
| RecurrentAttentionLayer.Builder | |
| RnnLossLayer | Recurrent neural network loss layer. |
| RnnLossLayer.Builder | |
| RnnOutputLayer | A version of OutputLayer for recurrent neural networks. |
| RnnOutputLayer.Builder | |
| SelfAttentionLayer | Implements dot-product self-attention: takes RNN-style input of shape [batchSize, features, timesteps] and applies dot-product attention using each timestep as the query. |
| SelfAttentionLayer.Builder | |
| SeparableConvolution2D | 2D separable convolution layer configuration. |
| SeparableConvolution2D.Builder | |
| SpaceToBatchLayer | Space-to-batch utility layer configuration for convolutional input types. |
| SpaceToBatchLayer.Builder | |
| SpaceToDepthLayer | Space-to-channels utility layer configuration for convolutional input types. |
| SpaceToDepthLayer.Builder | |
| SpaceToDepthLayer.DataFormat | Deprecated. Use CNN2DFormat instead. |
| Subsampling1DLayer | 1D (temporal) subsampling layer, also known as a pooling layer. |
| Subsampling1DLayer.Builder | |
| Subsampling3DLayer | 3D subsampling/pooling layer for convolutional neural networks. Supports max and average pooling modes. |
| Subsampling3DLayer.BaseSubsamplingBuilder | |
| Subsampling3DLayer.Builder | |
| Subsampling3DLayer.PoolingType | |
| SubsamplingLayer | Subsampling layer, also referred to as pooling in convolutional neural nets. Supports the following pooling types: MAX, AVG, SUM, PNORM. |
| SubsamplingLayer.BaseSubsamplingBuilder | |
| SubsamplingLayer.Builder | |
| SubsamplingLayer.PoolingType | |
| Upsampling1D | Upsampling 1D layer: repeats each step size times along the temporal/sequence axis (dimension 2). For input shape [minibatch, channels, sequenceLength], output has shape [minibatch, channels, size * sequenceLength]. Example: if the input (for a single example, with channels down the page and the sequence left to right) is [A1, A2, A3] / [B1, B2, B3], then the output with size = 2 is [A1, A1, A2, A2, A3, A3] / [B1, B1, B2, B2, B3, B3]. |
| Upsampling1D.Builder | |
| Upsampling2D | Upsampling 2D layer: repeats each value (or rather, each set of depth values) size[0] times in the height dimension and size[1] times in the width dimension. |
| Upsampling2D.Builder | |
| Upsampling3D | Upsampling 3D layer: repeats each value (all channel values for each x/y/z location) size[0], size[1], and size[2] times. If the input has shape [minibatch, channels, depth, height, width], the output has shape [minibatch, channels, size[0] * depth, size[1] * height, size[2] * width]. |
| Upsampling3D.Builder | |
| ZeroPadding1DLayer | Zero padding 1D layer for convolutional neural networks. |
| ZeroPadding3DLayer | Zero padding 3D layer for convolutional neural networks. |
| ZeroPaddingLayer | Zero padding layer for convolutional neural networks (2D CNNs). |
| ZeroPaddingLayer.Builder | |
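A few usage sketches follow. They are illustrative only, not taken from this API reference: concrete sizes and hyperparameters (784, 256, 10, the Adam learning rate, the seed) are placeholder assumptions. The first shows how a DenseLayer and an OutputLayer from the table combine into a trainable network:

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class MlpConfigExample {
    public static void main(String[] args) {
        // DenseLayer: standard fully connected feed-forward layer.
        // OutputLayer: applies softmax and the loss function for backprop.
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(42)
                .updater(new Adam(1e-3))
                .list()
                .layer(new DenseLayer.Builder()
                        .nIn(784).nOut(256)
                        .activation(Activation.RELU)
                        .build())
                .layer(new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nIn(256).nOut(10)
                        .activation(Activation.SOFTMAX)
                        .build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
    }
}
```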
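ConvolutionLayer and SubsamplingLayer are typically stacked in pairs. A sketch assuming flattened 28x28 single-channel input; setInputType lets DL4J infer each layer's nIn:

```java
// Inside a method, with the usual org.deeplearning4j.nn.conf.* and
// org.nd4j.linalg.* imports (InputType, Activation, LossFunctions).
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .list()
        // 5x5 convolution producing 20 feature maps
        .layer(new ConvolutionLayer.Builder(5, 5)
                .nOut(20).stride(1, 1)
                .activation(Activation.RELU).build())
        // MAX pooling halves the spatial dimensions
        .layer(new SubsamplingLayer.Builder(PoolingType.MAX)
                .kernelSize(2, 2).stride(2, 2).build())
        .layer(new OutputLayer.Builder(LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                .nOut(10).activation(Activation.SOFTMAX).build())
        // infer nIn for each layer from the input shape
        .setInputType(InputType.convolutionalFlat(28, 28, 1))
        .build();
```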
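LSTM paired with RnnOutputLayer produces per-time-step output for sequence labelling; nFeatures and nClasses below are placeholders for the real data dimensions:

```java
int nFeatures = 16;   // placeholder: input features per time step
int nClasses = 4;     // placeholder: output classes per time step
MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
        .list()
        .layer(new LSTM.Builder()
                .nIn(nFeatures).nOut(128)
                .activation(Activation.TANH).build())
        .layer(new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                .nIn(128).nOut(nClasses)
                .activation(Activation.SOFTMAX).build())
        .build();
```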
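The deprecation note for GravesBidirectionalLSTM points to the Bidirectional wrapper. A sketch of the replacement, inside a .list() builder chain (nIn/nOut are placeholders); Mode.ADD sums the two directions' outputs, matching the old layer's behaviour:

```java
import org.deeplearning4j.nn.conf.layers.recurrent.Bidirectional;

// Wrap any recurrent layer to make it bidirectional:
.layer(new Bidirectional(Bidirectional.Mode.ADD,
        new LSTM.Builder().nIn(nIn).nOut(nOut).build()))
```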
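For EmbeddingSequenceLayer, the builder exposes the inputLength mentioned in its description; vocabSize, embeddingDim, and seqLen below are placeholder variables:

```java
// Mapping token indices (0 .. vocabSize-1) to dense vectors, one per step.
.layer(new EmbeddingSequenceLayer.Builder()
        .nIn(vocabSize)        // number of valid indices (vocabulary size)
        .nOut(embeddingDim)    // embedding vector size
        .inputLength(seqLen)   // fixed sequence length expected per example
        .build())
```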
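The ConvolutionLayer.AlgoMode note above suggests falling back from PREFER_FASTEST when cuDNN misbehaves; the mode is set per layer via cudnnAlgoMode:

```java
// If PREFER_FASTEST runs out of GPU memory under cuDNN,
// fall back to NO_WORKSPACE for this layer.
.layer(new ConvolutionLayer.Builder(3, 3)
        .nOut(64)
        .cudnnAlgoMode(ConvolutionLayer.AlgoMode.NO_WORKSPACE)
        .build())
```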
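GlobalPoolingLayer with a PoolingType collapses the time dimension of an RNN so a standard OutputLayer can classify whole sequences; a sketch with placeholder sizes:

```java
.layer(new LSTM.Builder().nIn(nFeatures).nOut(128).build())
// Pool over time: [batch, 128, timesteps] -> [batch, 128]
.layer(new GlobalPoolingLayer.Builder(PoolingType.MAX).build())
.layer(new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
        .nIn(128).nOut(nClasses)
        .activation(Activation.SOFTMAX).build())
```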
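Migrating off the deprecated GravesLSTM is a one-line swap, per its deprecation note:

```java
// Before (deprecated; peephole variant, no cuDNN support):
.layer(new GravesLSTM.Builder().nIn(nIn).nOut(nOut).build())

// After (similar prediction accuracy, cuDNN-accelerated on CUDA GPUs):
.layer(new LSTM.Builder().nIn(nIn).nOut(nOut).build())
```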
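ActivationLayer and BatchNormalization are often used together so that normalization applies to the pre-activation values, with the nonlinearity as its own simple layer; a sketch:

```java
.layer(new ConvolutionLayer.Builder(3, 3)
        .nOut(32)
        .activation(Activation.IDENTITY).build())  // no nonlinearity yet
.layer(new BatchNormalization.Builder().build())    // normalize pre-activations
.layer(new ActivationLayer.Builder()
        .activation(Activation.RELU).build())       // then apply ReLU
```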
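ZeroPaddingLayer and Upsampling2D fragments, assuming the two-argument padding builder (pad height, pad width, applied symmetrically) and a uniform upsampling size of 2:

```java
// Zero-pad height and width by 2 on each side...
.layer(new ZeroPaddingLayer.Builder(2, 2).build())
// ...then double the spatial resolution: each value (set of depth values)
// is repeated 2x along both height and width, per the Upsampling2D entry.
.layer(new Upsampling2D.Builder(2).build())
```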
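Of SubsamplingLayer's four pooling types (MAX, AVG, SUM, PNORM), PNORM is the only one needing an extra parameter; a sketch with p = 2:

```java
.layer(new SubsamplingLayer.Builder(PoolingType.PNORM)
        .pnorm(2)                      // p = 2: L2-norm pooling per window
        .kernelSize(2, 2).stride(2, 2)
        .build())
```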