Modifier and Type | Class and Description |
---|---|
class | BaseOutputLayer<LayerConfT extends BaseOutputLayer>: Output layer supporting different training objectives (loss functions), for both classification and regression. |
class | BasePretrainNetwork<LayerConfT extends BasePretrainNetwork>: Base class for any neural network used as a layer in a deep network. |
class | DropoutLayer: Layer that applies dropout to its input activations. |
class | LossLayer: A flexible output "layer" that applies a loss function to its input without any MLP logic; it adds no weights or biases. |
class | OutputLayer: Output layer combining a fully connected layer with a configurable loss function, for classification and regression. |
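The classes above are the runtime layer implementations; networks are normally assembled through the same-named configuration classes in org.deeplearning4j.nn.conf.layers. A minimal sketch of an OutputLayer with a chosen objective (the layer sizes, activation, and loss function below are illustrative assumptions, not values from the table):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction;

public class OutputLayerExample {
    public static void main(String[] args) {
        // Hypothetical sizes: 784 input features, 10 output classes.
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                .layer(new DenseLayer.Builder().nIn(784).nOut(128)
                        .activation(Activation.RELU).build())
                // OutputLayer = fully connected layer + configurable loss (objective)
                .layer(new OutputLayer.Builder(LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nIn(128).nOut(10).activation(Activation.SOFTMAX).build())
                .build();
    }
}
```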
Modifier and Type | Class and Description |
---|---|
class | Cnn3DLossLayer: 3D convolutional neural network loss layer. |
class | CnnLossLayer: Convolutional neural network loss layer. |
class | Convolution1DLayer: 1D (temporal) convolutional layer. |
class | Convolution3DLayer: 3D convolution layer implementation. |
class | ConvolutionLayer: 2D convolution layer implementation. |
class | Deconvolution2DLayer: 2D deconvolution layer implementation. |
class | Deconvolution3DLayer: 3D deconvolution layer implementation. |
class | DepthwiseConvolution2DLayer: 2D depthwise convolution layer implementation. |
class | SeparableConvolution2DLayer: 2D separable convolution layer implementation. Separable convolutions split a regular convolution operation into two simpler operations, which are usually more computationally efficient. |
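As a rough illustration of how a convolution layer from this table is typically configured (again via the same-named configuration class; the kernel size, channel counts, and stride here are illustrative assumptions):

```java
import org.deeplearning4j.nn.conf.layers.ConvolutionLayer;
import org.nd4j.linalg.activations.Activation;

public class ConvLayerExample {
    public static ConvolutionLayer conv() {
        // 5x5 kernel, 1 input channel (e.g. grayscale images), 20 output channels.
        return new ConvolutionLayer.Builder(5, 5)
                .nIn(1)
                .nOut(20)
                .stride(1, 1)
                .activation(Activation.RELU)
                .build();
    }
}
```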
Modifier and Type | Class and Description |
---|---|
class | PReLU: Parametrized Rectified Linear Unit. f(x) = alpha * x for x < 0, f(x) = x for x >= 0; alpha has the same shape as x and is a learned parameter. |
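The PReLU definition above is simple enough to state directly in code. This is a minimal elementwise sketch of the formula, not the library's implementation:

```java
public class PReLUFunction {
    // f(x) = alpha * x for x < 0, f(x) = x for x >= 0.
    // alpha has the same shape as x and is learned during training.
    static double[] prelu(double[] x, double[] alpha) {
        double[] out = new double[x.length];
        for (int i = 0; i < x.length; i++) {
            out[i] = x[i] >= 0 ? x[i] : alpha[i] * x[i];
        }
        return out;
    }
}
```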
Modifier and Type | Class and Description |
---|---|
class | AutoEncoder: Autoencoder. |
Modifier and Type | Class and Description |
---|---|
class | DenseLayer: Standard fully connected layer. |
Modifier and Type | Class and Description |
---|---|
class | ElementWiseMultiplicationLayer: Elementwise multiplication layer with weights; implements out = activationFn(input .* w + b), where w is a learnable weight vector of length nOut, ".*" is element-wise multiplication, and b is a bias vector. The input and output sizes of this layer are the same. |
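A minimal sketch of the formula above, with the activation function passed in explicitly; this is illustrative plain Java, not the layer's actual implementation:

```java
import java.util.function.DoubleUnaryOperator;

public class ElementWiseMultiplicationExample {
    // out = activationFn(input .* w + b); input, w, b, and out all have
    // the same length, since nIn == nOut for this layer.
    static double[] forward(double[] input, double[] w, double[] b,
                            DoubleUnaryOperator activationFn) {
        double[] out = new double[input.length];
        for (int i = 0; i < input.length; i++) {
            out[i] = activationFn.applyAsDouble(input[i] * w[i] + b[i]);
        }
        return out;
    }
}
```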
Modifier and Type | Class and Description |
---|---|
class | EmbeddingLayer: Feed-forward layer that expects a single integer per example as input (a class number, in range 0 to numClasses - 1). |
class | EmbeddingSequenceLayer: Embedding layer for sequences; feed-forward layer that expects a fixed-length number (inputLength) of integers/indices per example as input, each in range 0 to numClasses - 1. |
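Conceptually, an embedding layer is a table lookup: each input integer selects one row of a learned weight matrix, which is equivalent to multiplying a one-hot vector by a dense weight matrix but far cheaper. A minimal sketch, with weights standing in for the learned table:

```java
public class EmbeddingLookupExample {
    // Each input integer (0 to numClasses - 1) selects one row of the
    // weight matrix; no matrix multiply is needed.
    static double[][] embed(int[] classIndices, double[][] weights) {
        double[][] out = new double[classIndices.length][];
        for (int i = 0; i < classIndices.length; i++) {
            out[i] = weights[classIndices[i]];
        }
        return out;
    }
}
```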
Modifier and Type | Class and Description |
---|---|
class | BatchNormalization: Batch normalization layer. |
Modifier and Type | Class and Description |
---|---|
class | OCNNOutputLayer: Layer implementation for the one-class neural network (OCNN) output layer; see the OCNNOutputLayer configuration class for details. |
Modifier and Type | Class and Description |
---|---|
class | BaseRecurrentLayer<LayerConfT extends BaseRecurrentLayer> |
class | GravesBidirectionalLSTM: Bidirectional LSTM layer implementation. RNN tutorial (read this first): https://deeplearning4j.konduit.ai/models/recurrent |
class | GravesLSTM: Deprecated. Will eventually be removed; use LSTM instead, which has similar prediction accuracy but supports CuDNN for faster network training on CUDA (Nvidia) GPUs. |
class | LSTM: LSTM layer implementation. |
class | RnnLossLayer: Recurrent neural network loss layer. |
class | RnnOutputLayer: Recurrent neural network output layer. |
class | SimpleRnn: Simple ("vanilla") RNN, the simplest type of recurrent neural network layer. |
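A rough sketch of how an LSTM layer and a recurrent output layer from this table fit together in a network configuration (sizes and loss function are illustrative assumptions; the builders below come from the same-named configuration classes, not the implementations listed here):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions.LossFunction;

public class RnnExample {
    public static void main(String[] args) {
        // Hypothetical sizes: 50 input features per time step, 10 output classes.
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                .layer(new LSTM.Builder().nIn(50).nOut(100)
                        .activation(Activation.TANH).build())
                // RnnOutputLayer applies the loss at every time step of the sequence
                .layer(new RnnOutputLayer.Builder(LossFunction.MCXENT)
                        .nIn(100).nOut(10).activation(Activation.SOFTMAX).build())
                .build();
    }
}
```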
Modifier and Type | Class and Description |
---|---|
class | CenterLossOutputLayer: Center loss is similar to triplet loss, except that it enforces intraclass consistency and does not require feed-forward of multiple examples. |
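For context, this is a minimal sketch of the standard center-loss penalty (the formulation of Wen et al., 2016, on which this layer is commonly described as being based; treat the details as an assumption rather than DL4J's exact code). Each class keeps a running "center" vector, and an example's features are pulled toward the center of its own class, so intraclass consistency is enforced without feeding multiple examples forward together:

```java
public class CenterLossSketch {
    // Penalty added to the base classification loss:
    // (lambda / 2) * || features - center_of_own_class ||^2.
    // Only the example's own class center is involved, so no
    // multi-example feed-forward (as in triplet loss) is required.
    static double centerPenalty(double[] features, double[] classCenter, double lambda) {
        double sq = 0.0;
        for (int i = 0; i < features.length; i++) {
            double d = features[i] - classCenter[i];
            sq += d * d;
        }
        return 0.5 * lambda * sq;
    }
}
```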