Modifier and Type | Method and Description |
---|---|
void | DL4JArbiterStatusReportingListener.iterationDone(Model model, int iteration, int epoch) |

Modifier and Type | Method and Description |
---|---|
<T extends Model> | TaskListener.preProcess(T model, Candidate candidate) Preprocess the model, before any training has taken place. |

Modifier and Type | Method and Description |
---|---|
void | TaskListener.postProcess(Model model, Candidate candidate) Post-process the model, after any training has taken place. |
Modifier and Type | Method and Description |
---|---|
void | SystemInfoFilePrintListener.iterationDone(Model model, int iteration, int epoch) |
void | SystemInfoPrintListener.iterationDone(Model model, int iteration, int epoch) |
void | SystemInfoFilePrintListener.onBackwardPass(Model model) |
void | SystemInfoPrintListener.onBackwardPass(Model model) |
void | SystemInfoFilePrintListener.onEpochEnd(Model model) |
void | SystemInfoPrintListener.onEpochEnd(Model model) |
void | SystemInfoFilePrintListener.onEpochStart(Model model) |
void | SystemInfoPrintListener.onEpochStart(Model model) |
void | SystemInfoFilePrintListener.onForwardPass(Model model, List<INDArray> activations) |
void | SystemInfoPrintListener.onForwardPass(Model model, List<INDArray> activations) |
void | SystemInfoFilePrintListener.onForwardPass(Model model, Map<String,INDArray> activations) |
void | SystemInfoPrintListener.onForwardPass(Model model, Map<String,INDArray> activations) |
void | SystemInfoFilePrintListener.onGradientCalculation(Model model) |
void | SystemInfoPrintListener.onGradientCalculation(Model model) |
Modifier and Type | Method and Description |
---|---|
static Model | ModelGuesser.loadModelGuess(InputStream stream) Load the model from the given input stream. |
static Model | ModelGuesser.loadModelGuess(InputStream stream, File tempDirectory) As per ModelGuesser.loadModelGuess(InputStream), but (optionally) allows copying to the specified temporary directory. |
static Model | ModelGuesser.loadModelGuess(String path) Load the model from the given file path. |
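
A minimal sketch of loading a serialized network via loadModelGuess when the concrete type is not known in advance; the file path is hypothetical, and the ModelGuesser package location is an assumption to verify against your DL4J version:

```java
import org.deeplearning4j.nn.api.Model;
import org.deeplearning4j.util.ModelGuesser; // package assumed; verify for your DL4J version

public class LoadUnknownModel {
    public static void main(String[] args) throws Exception {
        // "model.zip" is a hypothetical path to any DL4J-serialized network.
        // loadModelGuess inspects the file and returns the matching Model
        // subtype (e.g. MultiLayerNetwork or ComputationGraph).
        Model model = ModelGuesser.loadModelGuess("model.zip");
        System.out.println("Loaded: " + model.getClass().getSimpleName());
    }
}
```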
Modifier and Type | Class and Description |
---|---|
class | EarlyStoppingConfiguration<T extends Model> Early stopping configuration: specifies the various configuration options for running training with early stopping. |
static class | EarlyStoppingConfiguration.Builder<T extends Model> |
interface | EarlyStoppingModelSaver<T extends Model> Interface for saving MultiLayerNetworks learned during early stopping, and retrieving them again later. |
class | EarlyStoppingResult<T extends Model> Contains the results of early stopping training: why training was terminated, the score vs. epoch, the epoch at which the best model was found, the score of the best model, and the best model (MultiLayerNetwork) itself. |

Modifier and Type | Interface and Description |
---|---|
interface | EarlyStoppingListener<T extends Model> Listener interface for early stopping training. |

Modifier and Type | Class and Description |
---|---|
class | InMemoryModelSaver<T extends Model> Saves the best (and latest) models for early stopping training to memory, for later retrieval. Note: assumes the network is cloneable via its clone() method. |

Modifier and Type | Interface and Description |
---|---|
interface | ScoreCalculator<T extends Model> Interface used to calculate a score for a neural network. |
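
Together, these types cover a typical early stopping setup. A sketch, assuming a network and train/test iterators built elsewhere:

```java
import org.deeplearning4j.earlystopping.EarlyStoppingConfiguration;
import org.deeplearning4j.earlystopping.EarlyStoppingResult;
import org.deeplearning4j.earlystopping.saver.InMemoryModelSaver;
import org.deeplearning4j.earlystopping.scorecalc.DataSetLossCalculator;
import org.deeplearning4j.earlystopping.termination.MaxEpochsTerminationCondition;
import org.deeplearning4j.earlystopping.trainer.EarlyStoppingTrainer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

public class EarlyStoppingSketch {
    public static MultiLayerNetwork bestOf(MultiLayerNetwork net,
                                           DataSetIterator train,
                                           DataSetIterator test) {
        EarlyStoppingConfiguration<MultiLayerNetwork> esConf =
                new EarlyStoppingConfiguration.Builder<MultiLayerNetwork>()
                        .epochTerminationConditions(new MaxEpochsTerminationCondition(30))
                        .scoreCalculator(new DataSetLossCalculator(test, true)) // a ScoreCalculator from the table above
                        .evaluateEveryNEpochs(1)
                        .modelSaver(new InMemoryModelSaver<>()) // keeps the best model in memory
                        .build();
        EarlyStoppingResult<MultiLayerNetwork> result =
                new EarlyStoppingTrainer(esConf, net, train).fit();
        return result.getBestModel();
    }
}
```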
Modifier and Type | Method and Description |
---|---|
protected INDArray[] | AutoencoderScoreCalculator.output(Model network, INDArray[] input, INDArray[] fMask, INDArray[] lMask) |
protected INDArray[] | DataSetLossCalculator.output(Model network, INDArray[] input, INDArray[] fMask, INDArray[] lMask) |
protected INDArray[] | VAEReconErrorScoreCalculator.output(Model network, INDArray[] input, INDArray[] fMask, INDArray[] lMask) |
protected INDArray[] | VAEReconProbScoreCalculator.output(Model network, INDArray[] input, INDArray[] fMask, INDArray[] lMask) |
protected INDArray | AutoencoderScoreCalculator.output(Model net, INDArray input, INDArray fMask, INDArray lMask) |
protected INDArray | DataSetLossCalculator.output(Model network, INDArray input, INDArray fMask, INDArray lMask) |
protected INDArray | VAEReconErrorScoreCalculator.output(Model net, INDArray input, INDArray fMask, INDArray lMask) |
protected INDArray | VAEReconProbScoreCalculator.output(Model network, INDArray input, INDArray fMask, INDArray lMask) |
protected double | AutoencoderScoreCalculator.scoreMinibatch(Model network, INDArray[] features, INDArray[] labels, INDArray[] fMask, INDArray[] lMask, INDArray[] output) |
protected double | DataSetLossCalculator.scoreMinibatch(Model network, INDArray[] features, INDArray[] labels, INDArray[] fMask, INDArray[] lMask, INDArray[] output) |
protected double | VAEReconErrorScoreCalculator.scoreMinibatch(Model network, INDArray[] features, INDArray[] labels, INDArray[] fMask, INDArray[] lMask, INDArray[] output) |
protected double | VAEReconProbScoreCalculator.scoreMinibatch(Model network, INDArray[] features, INDArray[] labels, INDArray[] fMask, INDArray[] lMask, INDArray[] output) |
protected double | AutoencoderScoreCalculator.scoreMinibatch(Model network, INDArray features, INDArray labels, INDArray fMask, INDArray lMask, INDArray output) |
protected double | VAEReconErrorScoreCalculator.scoreMinibatch(Model network, INDArray features, INDArray labels, INDArray fMask, INDArray lMask, INDArray output) |
protected double | VAEReconProbScoreCalculator.scoreMinibatch(Model net, INDArray features, INDArray labels, INDArray fMask, INDArray lMask, INDArray output) |

Modifier and Type | Class and Description |
---|---|
class | BaseIEvaluationScoreCalculator<T extends Model,U extends IEvaluation> Base score function based on an IEvaluation instance. |
class | BaseScoreCalculator<T extends Model> |

Modifier and Type | Class and Description |
---|---|
class | BaseEarlyStoppingTrainer<T extends Model> Base/abstract class for conducting early stopping training locally (single machine). |
interface | IEarlyStoppingTrainer<T extends Model> Interface for early stopping trainers. |

Modifier and Type | Field and Description |
---|---|
protected T | BaseEarlyStoppingTrainer.model |

Modifier and Type | Method and Description |
---|---|
protected void | BaseEarlyStoppingTrainer.triggerEpochListeners(boolean epochStart, Model model, int epochNum) |
Modifier and Type | Method and Description |
---|---|
List<DetectedObject> | YoloModelAdapter.apply(Model model, INDArray[] inputs, INDArray[] masks, INDArray[] labelsMasks) |

Modifier and Type | Interface and Description |
---|---|
interface | Classifier A classifier (this is for supervised learning). |
interface | Layer Interface for a layer of a neural network. |

Modifier and Type | Method and Description |
---|---|
T | ModelAdapter.apply(Model model, INDArray[] inputs, INDArray[] inputMasks, INDArray[] labelsMasks) This method invokes the model internally and converts the result to T. |
Modifier and Type | Interface and Description |
---|---|
interface | IOutputLayer Interface for output layers (those that calculate gradients with respect to a labels array). |
interface | RecurrentLayer Interface for recurrent layers. |

Modifier and Type | Class and Description |
---|---|
class | ComputationGraph A ComputationGraph network is a neural network with arbitrary (directed acyclic graph) connection structure. |
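
A minimal sketch of building a ComputationGraph; the layer names and sizes are arbitrary placeholders:

```java
import org.deeplearning4j.nn.conf.ComputationGraphConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class GraphSketch {
    public static ComputationGraph build() {
        ComputationGraphConfiguration conf = new NeuralNetConfiguration.Builder()
                .graphBuilder()
                .addInputs("in")
                .addLayer("dense", new DenseLayer.Builder()
                        .nIn(784).nOut(128).activation(Activation.RELU).build(), "in")
                .addLayer("out", new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nIn(128).nOut(10).activation(Activation.SOFTMAX).build(), "dense")
                .setOutputs("out")
                .build();
        ComputationGraph graph = new ComputationGraph(conf);
        graph.init(); // allocate parameters before use
        return graph;
    }
}
```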
Modifier and Type | Class and Description |
---|---|
class | AbstractLayer<LayerConfT extends Layer> A layer with input and output, but no parameters or gradients. |
class | ActivationLayer Activation layer: applies an activation function to the input, and the corresponding derivative to epsilon. |
class | BaseLayer<LayerConfT extends BaseLayer> A layer with parameters. |
class | BaseOutputLayer<LayerConfT extends BaseOutputLayer> Output layer with different objective co-occurrences for different objectives. |
class | BasePretrainNetwork<LayerConfT extends BasePretrainNetwork> Baseline class for any neural network used as a layer in a deep network. |
class | DropoutLayer Dropout layer. |
class | FrozenLayer For transfer learning purposes: a frozen layer wraps another DL4J layer within it. |
class | FrozenLayerWithBackprop Frozen layer that freezes the parameters of the layer it wraps, but allows backpropagation to continue. |
class | LossLayer LossLayer is a flexible output "layer" that applies a loss function to an input without MLP logic. |
class | OutputLayer Output layer with different objective co-occurrences for different objectives. |
class | RepeatVector RepeatVector layer. |
Modifier and Type | Class and Description |
---|---|
class | Cnn3DLossLayer 3D Convolutional Neural Network Loss Layer. |
class | CnnLossLayer Convolutional Neural Network Loss Layer. |
class | Convolution1DLayer 1D (temporal) convolutional layer. |
class | Convolution3DLayer 3D convolution layer implementation. |
class | ConvolutionLayer Convolution layer. |
class | Cropping1DLayer Zero cropping layer for 1D convolutional neural networks. |
class | Cropping2DLayer Zero cropping layer for convolutional neural networks. |
class | Cropping3DLayer Cropping layer for 3D convolutional neural networks. |
class | Deconvolution2DLayer 2D deconvolution layer implementation. |
class | Deconvolution3DLayer 3D deconvolution layer implementation. |
class | DepthwiseConvolution2DLayer 2D depth-wise convolution layer implementation. |
class | SeparableConvolution2DLayer 2D separable convolution layer implementation. Separable convolutions split a regular convolution operation into two simpler operations, which are usually computationally more efficient. |
class | SpaceToBatch Space-to-batch utility layer for convolutional input types. |
class | SpaceToDepth Space-to-channels utility layer for convolutional input types. |
class | ZeroPadding1DLayer Zero padding 1D layer for convolutional neural networks. |
class | ZeroPadding3DLayer Zero padding 3D layer for convolutional neural networks. |
class | ZeroPaddingLayer Zero padding layer for convolutional neural networks. |
Modifier and Type | Class and Description |
---|---|
class | Subsampling1DLayer 1D (temporal) subsampling layer. |
class | Subsampling3DLayer Subsampling 3D layer, used for downsampling a 3D convolution. |
class | SubsamplingLayer Subsampling layer. |

Modifier and Type | Class and Description |
---|---|
class | Upsampling1D 1D Upsampling layer. |
class | Upsampling2D 2D Upsampling layer. |
class | Upsampling3D 3D Upsampling layer. |

Modifier and Type | Class and Description |
---|---|
class | PReLU Parametrized Rectified Linear Unit (PReLU): f(x) = alpha * x for x < 0, f(x) = x for x >= 0, where alpha has the same shape as x and is a learned parameter. |
Modifier and Type | Class and Description |
---|---|
class | AutoEncoder Autoencoder. |

Modifier and Type | Class and Description |
---|---|
class | DenseLayer |

Modifier and Type | Class and Description |
---|---|
class | ElementWiseMultiplicationLayer Elementwise multiplication layer with weights: implements out = activationFn(input .* w + b), where w is a learnable weight vector of length nOut, ".*" is element-wise multiplication, and b is a bias vector. Note that the input and output sizes of this layer are the same. |
Modifier and Type | Class and Description |
---|---|
class | EmbeddingLayer Embedding layer: a feed-forward layer that expects a single integer per example as input (a class number, in the range 0 to numClasses - 1). |
class | EmbeddingSequenceLayer Embedding layer for sequences: a feed-forward layer that expects a fixed-length sequence (inputLength) of integers/indices per example as input, each in the range 0 to numClasses - 1. |
Modifier and Type | Class and Description |
---|---|
class | BatchNormalization Batch normalization layer. |
class | LocalResponseNormalization Deep neural net normalization approach that normalizes activations between layers ("brightness normalization"); used for nets such as AlexNet. For a^i_{x,y}, the activity of a neuron computed by applying kernel i at position (x,y) and then the ReLU nonlinearity, the response-normalized activation b^i_{x,y} is given by: unitScale = k + alpha * sum_{j=max(0, i-n/2)}^{min(N-1, i+n/2)} (a^j_{x,y})^2 and b^i_{x,y} = a^i_{x,y} * unitScale^(-beta). For the backward pass, with gy = epsilon (the deltas from the layer above) and sumPart = sum(a^j_{x,y} * gb^j_{x,y}), the input gradient is gx = gy * unitScale^(-beta) - 2 * alpha * beta * (sumPart / unitScale) * a^i_{x,y}. References: http://www.cs.toronto.edu/~fritz/absps/imagenet.pdf, https://github.com/vlfeat/matconvnet/issues/10 |
Modifier and Type | Class and Description |
---|---|
class | Yolo2OutputLayer Output (loss) layer for the YOLOv2 object detection model, based on the papers YOLO9000: Better, Faster, Stronger - Redmon & Farhadi (2016), https://arxiv.org/abs/1612.08242, and You Only Look Once: Unified, Real-Time Object Detection - Redmon et al. (2016), http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Redmon_You_Only_Look_CVPR_2016_paper.pdf. This loss function implementation is based on the YOLOv2 version of the paper. |

Modifier and Type | Class and Description |
---|---|
class | OCNNOutputLayer Layer implementation for the OCNNOutputLayer configuration; see that configuration class for details. |

Modifier and Type | Class and Description |
---|---|
class | GlobalPoolingLayer Global pooling layer: used to do pooling over time for RNNs, and 2D pooling for CNNs. |
Modifier and Type | Class and Description |
---|---|
class | BaseRecurrentLayer<LayerConfT extends BaseRecurrentLayer> |
class | BidirectionalLayer Bidirectional is a "wrapper" layer: it wraps any uni-directional RNN layer to make it bidirectional. |
class | GravesBidirectionalLSTM Bidirectional LSTM layer implementation. RNN tutorial (read this first): https://deeplearning4j.konduit.ai/models/recurrent |
class | GravesLSTM Deprecated. Will eventually be removed. Use LSTM instead, which has similar prediction accuracy but supports CuDNN for faster network training on CUDA (Nvidia) GPUs. |
class | LastTimeStepLayer LastTimeStep is a "wrapper" layer: it wraps any RNN layer, extracts the last time step during the forward pass, and returns it as a row vector (per example). |
class | LSTM LSTM layer implementation. |
class | MaskZeroLayer Masks timesteps with activation equal to the specified masking value, defaulting to 0.0. |
class | RnnLossLayer Recurrent Neural Network Loss Layer. |
class | RnnOutputLayer Recurrent Neural Network Output Layer. |
class | SimpleRnn Simple RNN, a.k.a. "vanilla" RNN: the simplest type of recurrent neural network layer. |
class | TimeDistributedLayer TimeDistributed wrapper layer. |
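
These are the layer implementations; in user code, networks are assembled from the corresponding configuration classes in org.deeplearning4j.nn.conf.layers. A sketch of a small LSTM classifier (sizes are placeholders):

```java
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.LSTM;
import org.deeplearning4j.nn.conf.layers.RnnOutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class RnnSketch {
    public static MultiLayerNetwork build() {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .list()
                .layer(new LSTM.Builder()
                        .nIn(50).nOut(100).activation(Activation.TANH).build())
                .layer(new RnnOutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nIn(100).nOut(10).activation(Activation.SOFTMAX).build())
                .build();
        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();
        return net;
    }
}
```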
Modifier and Type | Class and Description |
---|---|
class | SameDiffLayer |
class | SameDiffOutputLayer |

Modifier and Type | Class and Description |
---|---|
class | CenterLossOutputLayer Center loss is similar to triplet loss, except that it enforces intraclass consistency and doesn't require feed forward of multiple examples. |

Modifier and Type | Class and Description |
---|---|
class | MaskLayer MaskLayer applies the mask array to the forward pass activations, and backward pass gradients, passing through this layer. |

Modifier and Type | Class and Description |
---|---|
class | VariationalAutoencoder Variational Autoencoder layer. See Kingma & Welling, 2013: Auto-Encoding Variational Bayes, https://arxiv.org/abs/1312.6114. This implementation allows multiple encoder and decoder layers, the number and sizes of which can be set independently. |

Modifier and Type | Class and Description |
---|---|
class | BaseWrapperLayer Abstract wrapper layer. |
Modifier and Type | Method and Description |
---|---|
protected Model | ModelTupleStream.restoreModel(InputStream inputStream) Uses the ModelGuesser.loadModelGuess(InputStream) method. |

Modifier and Type | Field and Description |
---|---|
protected Model | ScoringModel.model |

Modifier and Type | Method and Description |
---|---|
protected Model | ScoringModel.restoreModel(InputStream inputStream) Uses the ModelGuesser.loadModelGuess(InputStream) method. |

Modifier and Type | Method and Description |
---|---|
static float | ScoringModel.outputScore(Model model, float[] modelFeatureValuesNormalized) Uses the NetworkUtils.output(Model, INDArray) method. |

Modifier and Type | Class and Description |
---|---|
class | TFOpLayerImpl |

Modifier and Type | Method and Description |
---|---|
static Model | KerasModelUtils.copyWeightsToModel(Model model, Map<String,KerasLayer> kerasLayers) Helper function to import weights from a nested Map into an existing model. |
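
copyWeightsToModel is a low-level helper; a typical import goes through KerasModelImport, as in this sketch ("model.h5" is a hypothetical path):

```java
import org.deeplearning4j.nn.modelimport.keras.KerasModelImport;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

public class KerasImportSketch {
    public static void main(String[] args) throws Exception {
        // Imports a Keras Sequential model's architecture and weights in one
        // call; the weight-copy step corresponds to the helper listed above.
        MultiLayerNetwork net =
                KerasModelImport.importKerasSequentialModelAndWeights("model.h5");
        System.out.println(net.summary());
    }
}
```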
Modifier and Type | Class and Description |
---|---|
class | MultiLayerNetwork MultiLayerNetwork is a neural network with multiple layers in a stack, and usually an output layer. |
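
A sketch of typical MultiLayerNetwork usage, assuming a built and initialized network; fit(DataSetIterator, int) is assumed available in this DL4J version:

```java
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

public class FitAndPredict {
    // net: a built and init()'d MultiLayerNetwork; train: a DataSetIterator.
    public static INDArray fitAndPredict(MultiLayerNetwork net,
                                         DataSetIterator train,
                                         INDArray features) {
        net.fit(train, 5);           // train for 5 epochs
        return net.output(features); // forward pass for inference
    }
}
```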
Modifier and Type | Class and Description |
---|---|
class | BaseMultiLayerUpdater<T extends Model> Core functionality for applying updaters to MultiLayerNetwork and ComputationGraph. |

Modifier and Type | Field and Description |
---|---|
protected T | BaseMultiLayerUpdater.network |

Modifier and Type | Method and Description |
---|---|
static Updater | UpdaterCreator.getUpdater(Model layer) |

Modifier and Type | Method and Description |
---|---|
Solver.Builder | Solver.Builder.model(Model model) |
Modifier and Type | Method and Description |
---|---|
void | BaseTrainingListener.iterationDone(Model model, int iteration, int epoch) |
abstract void | IterationListener.iterationDone(Model model, int iteration, int epoch) Deprecated. Event listener for each iteration. |
void | TrainingListener.iterationDone(Model model, int iteration, int epoch) Event listener for each iteration. |
void | BaseTrainingListener.onBackwardPass(Model model) |
void | TrainingListener.onBackwardPass(Model model) Called once per iteration (backward pass) after the gradients have been calculated and updated. Gradients are available via gradient(). |
void | BaseTrainingListener.onEpochEnd(Model model) |
void | TrainingListener.onEpochEnd(Model model) Called once at the end of each epoch, when using methods such as MultiLayerNetwork.fit(DataSetIterator), ComputationGraph.fit(DataSetIterator) or ComputationGraph.fit(MultiDataSetIterator). |
void | BaseTrainingListener.onEpochStart(Model model) |
void | TrainingListener.onEpochStart(Model model) Called once at the start of each epoch, when using methods such as MultiLayerNetwork.fit(DataSetIterator), ComputationGraph.fit(DataSetIterator) or ComputationGraph.fit(MultiDataSetIterator). |
void | BaseTrainingListener.onForwardPass(Model model, List<INDArray> activations) |
void | TrainingListener.onForwardPass(Model model, List<INDArray> activations) Called once per iteration (forward pass) for the activations (usually for a MultiLayerNetwork), only at training time. |
void | BaseTrainingListener.onForwardPass(Model model, Map<String,INDArray> activations) |
void | TrainingListener.onForwardPass(Model model, Map<String,INDArray> activations) Called once per iteration (forward pass) for the activations (usually for a ComputationGraph), only at training time. |
void | BaseTrainingListener.onGradientCalculation(Model model) |
void | TrainingListener.onGradientCalculation(Model model) Called once per iteration (backward pass) before the gradients are updated. Gradients are available via gradient(). |
void | ConvexOptimizer.updateGradientAccordingToParams(Gradient gradient, Model model, int batchSize, LayerWorkspaceMgr workspaceMgr) Update the gradient according to the configuration, such as AdaGrad, momentum, and sparsity. |
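
BaseTrainingListener provides no-op defaults for every TrainingListener callback, so a custom listener overrides only what it needs. A minimal sketch:

```java
import org.deeplearning4j.nn.api.Model;
import org.deeplearning4j.optimize.api.BaseTrainingListener;

// A custom listener only overrides the callbacks it cares about;
// BaseTrainingListener supplies no-op defaults for the rest.
public class ScoreLoggingListener extends BaseTrainingListener {
    @Override
    public void iterationDone(Model model, int iteration, int epoch) {
        System.out.println("epoch " + epoch + ", iteration " + iteration
                + ", score " + model.score());
    }
}
// Attach with: net.setListeners(new ScoreLoggingListener());
```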
Modifier and Type | Method and Description |
---|---|
protected void | FailureTestingListener.call(FailureTestingListener.CallType callType, Model model) |
protected static int | CheckpointListener.getEpoch(Model model) |
protected static int | CheckpointListener.getIter(Model model) |
protected static String | CheckpointListener.getModelType(Model model) |
protected void | EvaluativeListener.invokeListener(Model model) |
void | CheckpointListener.iterationDone(Model model, int iteration, int epoch) |
void | CollectScoresIterationListener.iterationDone(Model model, int iteration, int epoch) |
void | CollectScoresListener.iterationDone(Model model, int iteration, int epoch) |
void | ComposableIterationListener.iterationDone(Model model, int iteration, int epoch) Deprecated. |
void | EvaluativeListener.iterationDone(Model model, int iteration, int epoch) Event listener for each iteration. |
void | FailureTestingListener.iterationDone(Model model, int iteration, int epoch) |
void | PerformanceListener.iterationDone(Model model, int iteration, int epoch) |
void | ScoreIterationListener.iterationDone(Model model, int iteration, int epoch) |
void | SleepyTrainingListener.iterationDone(Model model, int iteration, int epoch) |
void | TimeIterationListener.iterationDone(Model model, int iteration, int epoch) |
void | FailureTestingListener.onBackwardPass(Model model) |
void | SleepyTrainingListener.onBackwardPass(Model model) |
void | CheckpointListener.onEpochEnd(Model model) |
void | EvaluativeListener.onEpochEnd(Model model) |
void | FailureTestingListener.onEpochEnd(Model model) |
void | SleepyTrainingListener.onEpochEnd(Model model) |
void | EvaluativeListener.onEpochStart(Model model) |
void | FailureTestingListener.onEpochStart(Model model) |
void | SleepyTrainingListener.onEpochStart(Model model) |
void | FailureTestingListener.onForwardPass(Model model, List<INDArray> activations) |
void | SleepyTrainingListener.onForwardPass(Model model, List<INDArray> activations) |
void | FailureTestingListener.onForwardPass(Model model, Map<String,INDArray> activations) |
void | SleepyTrainingListener.onForwardPass(Model model, Map<String,INDArray> activations) |
void | FailureTestingListener.onGradientCalculation(Model model) |
void | SleepyTrainingListener.onGradientCalculation(Model model) |
abstract boolean | FailureTestingListener.FailureTrigger.triggerFailure(FailureTestingListener.CallType callType, int iteration, int epoch, Model model) If true: trigger the failure. |
boolean | FailureTestingListener.And.triggerFailure(FailureTestingListener.CallType callType, int iteration, int epoch, Model model) |
boolean | FailureTestingListener.Or.triggerFailure(FailureTestingListener.CallType callType, int iteration, int epoch, Model model) |
boolean | FailureTestingListener.RandomProb.triggerFailure(FailureTestingListener.CallType callType, int iteration, int epoch, Model model) |
boolean | FailureTestingListener.TimeSinceInitializedTrigger.triggerFailure(FailureTestingListener.CallType callType, int iteration, int epoch, Model model) |
boolean | FailureTestingListener.UserNameTrigger.triggerFailure(FailureTestingListener.CallType callType, int iteration, int epoch, Model model) |
boolean | FailureTestingListener.HostNameTrigger.triggerFailure(FailureTestingListener.CallType callType, int iteration, int epoch, Model model) |
boolean | FailureTestingListener.IterationEpochTrigger.triggerFailure(FailureTestingListener.CallType callType, int iteration, int epoch, Model model) |
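
A sketch of attaching two of the listeners above to a network; the CheckpointListener.Builder options shown are assumptions to verify against your version:

```java
import java.io.File;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.optimize.listeners.CheckpointListener;
import org.deeplearning4j.optimize.listeners.ScoreIterationListener;

public class AttachListeners {
    public static void attach(MultiLayerNetwork net) {
        CheckpointListener checkpoints =
                new CheckpointListener.Builder(new File("checkpoints"))
                        .keepLast(3)         // retain only the 3 most recent checkpoints
                        .saveEveryNEpochs(1) // write a checkpoint at the end of each epoch
                        .build();
        // Log the score every 10 iterations alongside checkpointing
        net.setListeners(checkpoints, new ScoreIterationListener(10));
    }
}
```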
Modifier and Type | Method and Description |
---|---|
void | EvaluationCallback.call(EvaluativeListener listener, Model model, long invocationsCount, IEvaluation[] evaluations) |
void | ModelSavingCallback.call(EvaluativeListener listener, Model model, long invocationsCount, IEvaluation[] evaluations) |
protected void | ModelSavingCallback.save(Model model, String filename) This method saves the model. |

Modifier and Type | Field and Description |
---|---|
protected Model | BaseOptimizer.model |

Modifier and Type | Method and Description |
---|---|
static void | BaseOptimizer.applyConstraints(Model model) |
static int | BaseOptimizer.getEpochCount(Model model) |
static int | BaseOptimizer.getIterationCount(Model model) |
static void | BaseOptimizer.incrementIterationCount(Model model, int incrementBy) |
void | BaseOptimizer.updateGradientAccordingToParams(Gradient gradient, Model model, int batchSize, LayerWorkspaceMgr workspaceMgr) |
Constructor and Description |
---|
BackTrackLineSearch(Model optimizable, ConvexOptimizer optimizer) |
BackTrackLineSearch(Model layer, StepFunction stepFunction, ConvexOptimizer optimizer) |
BaseOptimizer(NeuralNetConfiguration conf, StepFunction stepFunction, Collection<TrainingListener> trainingListeners, Model model) |
ConjugateGradient(NeuralNetConfiguration conf, StepFunction stepFunction, Collection<TrainingListener> trainingListeners, Model model) |
LBFGS(NeuralNetConfiguration conf, StepFunction stepFunction, Collection<TrainingListener> trainingListeners, Model model) |
LineGradientDescent(NeuralNetConfiguration conf, StepFunction stepFunction, Collection<TrainingListener> trainingListeners, Model model) |
StochasticGradientDescent(NeuralNetConfiguration conf, StepFunction stepFunction, Collection<TrainingListener> trainingListeners, Model model) |

Modifier and Type | Method and Description |
---|---|
static long | EncodedGradientsAccumulator.getOptimalBufferSize(Model model, int numWorkers, int queueSize) |
Modifier and Type | Class and Description |
---|---|
class | EarlyStoppingParallelTrainer<T extends Model> Conducts parallel early stopping training with ParallelWrapper under the hood. |
static class | ParallelWrapper.Builder<T extends Model> |

Modifier and Type | Field and Description |
---|---|
protected T | EarlyStoppingParallelTrainer.model |
protected Model | ParallelInference.model |
protected Model | ParallelWrapper.model |
protected T | ParallelWrapper.Builder.model |
protected Model | InplaceParallelInference.ModelHolder.sourceModel |

Modifier and Type | Field and Description |
---|---|
protected BlockingQueue<Model> | InplaceParallelInference.ModelHolder.queue |
protected List<Model> | InplaceParallelInference.ModelHolder.replicas |

Modifier and Type | Method and Description |
---|---|
protected Model | InplaceParallelInference.ModelHolder.acquireModel() |
protected Model[] | InplaceParallelInference.getCurrentModelsFromWorkers() |
protected Model[] | ParallelInference.getCurrentModelsFromWorkers() Returns the models currently used by the workers. PLEASE NOTE: this method is NOT thread safe, and should not be used anywhere but tests. |
Modifier and Type | Method and Description |
---|---|
protected void | InplaceParallelInference.ModelHolder.releaseModel(Model model) |
void | InplaceParallelInference.updateModel(@NonNull Model model) |
protected void | InplaceParallelInference.ModelHolder.updateModel(@NonNull Model model) |
void | ParallelInference.updateModel(@NonNull Model model) Allows updating the Model used for inference at runtime, without resetting the queue. |

Constructor and Description |
---|
Builder(@NonNull Model model) |
ParallelWrapper(Model model, int workers, int prefetchSize) |
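
The Builder is the usual entry point rather than the raw constructor. A sketch, assuming a built network and a DataSetIterator:

```java
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.parallelism.ParallelWrapper;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

public class ParallelTrainingSketch {
    public static void train(MultiLayerNetwork net, DataSetIterator train) {
        ParallelWrapper wrapper = new ParallelWrapper.Builder<>(net)
                .workers(4)            // number of model replicas / training threads
                .prefetchBuffer(8)     // minibatches to prefetch per worker
                .averagingFrequency(3) // average parameters every 3 iterations
                .build();
        wrapper.fit(train);
    }
}
```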
Modifier and Type | Method and Description |
---|---|
Trainer | DefaultTrainerContext.create(String uuid, int threadId, Model model, int rootDevice, boolean useMDS, ParallelWrapper wrapper, WorkspaceMode mode, int averagingFrequency) Create a Trainer based on the given parameters. |
Trainer | SymmetricTrainerContext.create(String uuid, int threadId, Model model, int rootDevice, boolean useMDS, ParallelWrapper wrapper, WorkspaceMode mode, int averagingFrequency) Create a Trainer based on the given parameters. |
Trainer | TrainerContext.create(String uuid, int threadId, Model model, int rootDevice, boolean useMDS, ParallelWrapper wrapper, WorkspaceMode workspaceMode, int averagingFrequency) Create a Trainer based on the given parameters. |
void | DefaultTrainerContext.finalizeRound(Model originalModel, Model... models) |
void | SymmetricTrainerContext.finalizeRound(Model originalModel, Model... models) |
void | TrainerContext.finalizeRound(Model originalModel, Model... models) This method is called at averagingFrequency. |
void | DefaultTrainerContext.finalizeTraining(Model originalModel, Model... models) |
void | SymmetricTrainerContext.finalizeTraining(Model originalModel, Model... models) |
void | TrainerContext.finalizeTraining(Model originalModel, Model... models) This method is called at the end of training. |
void | DefaultTrainerContext.init(Model model, Object... args) Initialize the context. |
void | SymmetricTrainerContext.init(Model model, Object... args) Initialize the context. |
void | TrainerContext.init(Model model, Object... args) Initialize the context. |
Modifier and Type | Method and Description |
---|---|
Model | ParameterServerTrainer.getModel() |

Modifier and Type | Method and Description |
---|---|
Trainer | ParameterServerTrainerContext.create(String uuid, int threadId, Model model, int rootDevice, boolean useMDS, ParallelWrapper wrapper, WorkspaceMode mode, int averagingFrequency) Create a Trainer based on the given parameters. |
void | ParameterServerTrainerContext.finalizeRound(Model originalModel, Model... models) |
void | ParameterServerTrainerContext.finalizeTraining(Model originalModel, Model... models) |
void | ParameterServerTrainerContext.init(Model model, Object... args) Initialize the context. |
ParameterServerTrainer.ParameterServerTrainerBuilder | ParameterServerTrainer.ParameterServerTrainerBuilder.originalModel(Model originalModel) |
ParameterServerTrainer.ParameterServerTrainerBuilder | ParameterServerTrainer.ParameterServerTrainerBuilder.replicatedModel(Model replicatedModel) |
void | ParameterServerTrainer.updateModel(@NonNull Model model) |
Modifier and Type | Field and Description |
---|---|
protected Model | DefaultTrainer.originalModel |
protected Model | DefaultTrainer.replicatedModel |

Modifier and Type | Method and Description |
---|---|
Model | DefaultTrainer.getModel() |
Model | Trainer.getModel() The current model for the trainer. |

Modifier and Type | Method and Description |
---|---|
void | DefaultTrainer.updateModel(@NonNull Model model) |
void | Trainer.updateModel(@NonNull Model model) Update the current Model for the worker. |

Constructor and Description |
---|
SymmetricTrainer(@NonNull Model originalModel, String uuid, int threadIdx, @NonNull WorkspaceMode mode, @NonNull ParallelWrapper wrapper, boolean useMDS) |
Modifier and Type | Class and Description |
---|---|
class | BarnesHutTsne Barnes-Hut algorithm for t-SNE; uses a dual-tree approximation approach. |

Modifier and Type | Field and Description |
---|---|
protected Model | DL4jServlet.model |

Constructor and Description |
---|
Builder(@NonNull Model model) |
DL4jServlet(@NonNull Model model, @NonNull InferenceAdapter<I,O> inferenceAdapter, JsonSerializer<O> serializer, JsonDeserializer<I> deserializer) |
DL4jServlet(@NonNull Model model, @NonNull InferenceAdapter<I,O> inferenceAdapter, JsonSerializer<O> jsonSerializer, JsonDeserializer<I> jsonDeserializer, BinarySerializer<O> binarySerializer, BinaryDeserializer<I> binaryDeserializer) |
Modifier and Type | Method and Description |
---|---|
void | TrainingHook.postUpdate(DataSet minibatch, Model model) A hook method for post-update. |
void | TrainingHook.postUpdate(MultiDataSet minibatch, Model model) A hook method for post-update. |
void | TrainingHook.preUpdate(DataSet minibatch, Model model) A hook method for pre-update. |
void | TrainingHook.preUpdate(MultiDataSet minibatch, Model model) A hook method for pre-update. |

Modifier and Type | Class and Description |
---|---|
class | BaseSparkEarlyStoppingTrainer<T extends Model> Base/abstract class for conducting early stopping training via Spark, on a MultiLayerNetwork or a ComputationGraph. |

Modifier and Type | Method and Description |
---|---|
void | ParameterServerTrainingHook.postUpdate(DataSet minibatch, Model model) A hook method for post-update. |
void | ParameterServerTrainingHook.postUpdate(MultiDataSet minibatch, Model model) A hook method for post-update. |
void | ParameterServerTrainingHook.preUpdate(DataSet minibatch, Model model) A hook method for pre-update. |
void | ParameterServerTrainingHook.preUpdate(MultiDataSet minibatch, Model model) A hook method for pre-update. |

Modifier and Type | Field and Description |
---|---|
protected Model | SharedTrainingWrapper.originalModel |
Modifier and Type | Method and Description |
---|---|
void | BaseStatsListener.iterationDone(Model model, int iteration, int epoch) |
void | BaseStatsListener.onBackwardPass(Model model) |
void | BaseStatsListener.onEpochEnd(Model model) |
void | BaseStatsListener.onEpochStart(Model model) |
void | BaseStatsListener.onForwardPass(Model model, List<INDArray> activations) |
void | BaseStatsListener.onForwardPass(Model model, Map<String,INDArray> activations) |
void | BaseStatsListener.onGradientCalculation(Model model) |

Modifier and Type | Method and Description |
---|---|
void | ConvolutionalIterationListener.iterationDone(Model model, int iteration, int epoch) Event listener for each iteration. |
void | ConvolutionalIterationListener.onForwardPass(Model model, List<INDArray> activations) |
void | ConvolutionalIterationListener.onForwardPass(Model model, Map<String,INDArray> activations) |
Modifier and Type | Method and Description |
---|---|
static String | CrashReportingUtil.generateMemoryStatus(Model net, int minibatch, InputType... inputTypes) Generate a memory/system report as a String, for the specified network. |
static INDArray | NetworkUtils.output(Model model, INDArray input) Currently supports MultiLayerNetwork and ComputationGraph models. |
static Task | ModelSerializer.taskByModel(Model model) |
static void | CrashReportingUtil.writeMemoryCrashDump(@NonNull Model net, @NonNull Throwable e) Generate and write the crash dump to the crash dump root directory (by default, the working directory). |
static void | ModelSerializer.writeModel(@NonNull Model model, @NonNull File file, boolean saveUpdater) Write a model to a file. |
static void | ModelSerializer.writeModel(@NonNull Model model, @NonNull File file, boolean saveUpdater, DataNormalization dataNormalization) Write a model to a file. |
static void | ModelSerializer.writeModel(@NonNull Model model, @NonNull OutputStream stream, boolean saveUpdater) Write a model to an output stream. |
static void | ModelSerializer.writeModel(@NonNull Model model, @NonNull OutputStream stream, boolean saveUpdater, DataNormalization dataNormalization) Write a model to an output stream. |
static void | ModelSerializer.writeModel(@NonNull Model model, @NonNull String path, boolean saveUpdater) Write a model to a file path. |
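
A save-and-restore round trip for the writeModel overloads above ("net.zip" is a hypothetical path):

```java
import java.io.File;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.util.ModelSerializer;

public class SaveAndRestore {
    public static MultiLayerNetwork roundTrip(MultiLayerNetwork net) throws Exception {
        File f = new File("net.zip");
        // saveUpdater = true also persists the updater state, so training can resume later
        ModelSerializer.writeModel(net, f, true);
        return ModelSerializer.restoreMultiLayerNetwork(f);
    }
}
```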
Modifier and Type | Method and Description |
---|---|
<M extends Model> | InstantiableModel.init() |
<M extends Model> | ZooModel.initPretrained(PretrainedType pretrainedType) Returns a pretrained model for the given dataset, if available. |

Modifier and Type | Method and Description |
---|---|
Model | ZooModel.initPretrained() By default, returns a pretrained ImageNet model, if available. |

Modifier and Type | Method and Description |
---|---|
Class<? extends Model> | InstantiableModel.modelType() |

Modifier and Type | Method and Description |
---|---|
Model | LeNet.init() |
Model | SimpleCNN.init() |
Model | TextGenerationLSTM.init() |
Modifier and Type | Method and Description |
---|---|
Class<? extends Model> | AlexNet.modelType() |
Class<? extends Model> | Darknet19.modelType() |
Class<? extends Model> | FaceNetNN4Small2.modelType() |
Class<? extends Model> | InceptionResNetV1.modelType() |
Class<? extends Model> | LeNet.modelType() |
Class<? extends Model> | NASNet.modelType() |
Class<? extends Model> | ResNet50.modelType() |
Class<? extends Model> | SimpleCNN.modelType() |
Class<? extends Model> | SqueezeNet.modelType() |
Class<? extends Model> | TextGenerationLSTM.modelType() |
Class<? extends Model> | TinyYOLO.modelType() |
Class<? extends Model> | UNet.modelType() |
Class<? extends Model> | VGG16.modelType() |
Class<? extends Model> | VGG19.modelType() |
Class<? extends Model> | Xception.modelType() |
Class<? extends Model> | YOLO2.modelType() |
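
A sketch of fetching pretrained weights through the zoo API; the builder-style construction is assumed for this version (older releases construct zoo models directly):

```java
import org.deeplearning4j.nn.graph.ComputationGraph;
import org.deeplearning4j.zoo.PretrainedType;
import org.deeplearning4j.zoo.model.VGG16;

public class PretrainedSketch {
    public static void main(String[] args) throws Exception {
        // VGG16 publishes ImageNet weights, so initPretrained(IMAGENET) should
        // resolve; the weights file is downloaded and cached on first use.
        ComputationGraph vgg16 =
                VGG16.builder().build().initPretrained(PretrainedType.IMAGENET);
        System.out.println(vgg16.summary());
    }
}
```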