
Release Notes for Version 1.0.0-beta

Highlights - 1.0.0-beta Release

  • Performance and memory optimizations for DL4J

Deeplearning4J

Deeplearning4J: New Features

  • New or enhanced layers:
    • Added Cropping1D layer Link
    • Added Convolution3D, Cropping3D, UpSampling3D, ZeroPadding3D, Subsampling3D layers (all with Keras import support): Link Link
    • Added EmbeddingSequenceLayer (EmbeddingLayer for time series) Link
    • Added OCNNOutputLayer (one-class neural network) - implementation of this paper - Link
    • Added FrozenLayerWithBackprop layer Link
    • Added DepthwiseConvolution2D layer Link
  • Added ComputationGraph.output(DataSetIterator) method Link
  • Added MultiLayerNetwork/ComputationGraph.layerInputSize methods Link Link
  • Added SparkComputationGraph.feedForwardWithKey overload with feature mask support Link
  • Added MultiLayerNetwork.calculateGradients method (for easily getting parameter and input gradients, for example for some model interpretability approaches) Link Link
  • Added support to get input/activation types for each layer from configuration: ComputationGraphConfiguration.getLayerActivationTypes(InputType...), ComputationGraphConfiguration.GraphBuilder.getLayerActivationTypes(), NeuralNetConfiguration.ListBuilder.getLayerActivationTypes(), MultiLayerConfiguration.getLayerActivationTypes(InputType) methods Link
  • Evaluation.stats() now prints the confusion matrix in an easier-to-read matrix format, rather than list format Link
  • Added ModelSerializer.addObjectToFile, .getObjectFromFile and .listObjectsInFile for storing arbitrary Java objects in the same file as the saved network Link
  • Added SpatialDropout support (with Keras import support) Link
  • Added MultiLayerNetwork/ComputationGraph.fit((Multi)DataSetIterator, int numEpochs) overloads Link
  • Added performance (hardware) listeners: SystemInfoPrintListener and SystemInfoFilePrintListener Link
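
As an illustration of the new fit((Multi)DataSetIterator, int numEpochs) overloads listed above, here is a minimal sketch; the network architecture and the MNIST iterator are purely illustrative, and any DataSetIterator works the same way:

```java
import org.deeplearning4j.datasets.iterator.impl.MnistDataSetIterator;
import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class MultiEpochFitExample {
    public static void main(String[] args) throws Exception {
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .updater(new Adam(1e-3))
                .list()
                .layer(new DenseLayer.Builder().nIn(28 * 28).nOut(64).activation(Activation.RELU).build())
                .layer(new OutputLayer.Builder(LossFunctions.LossFunction.MCXENT)
                        .nIn(64).nOut(10).activation(Activation.SOFTMAX).build())
                .build();

        MultiLayerNetwork net = new MultiLayerNetwork(conf);
        net.init();

        DataSetIterator train = new MnistDataSetIterator(32, true, 12345);
        net.fit(train, 3);   // new overload: train for 3 epochs in a single call
    }
}
```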

Deeplearning4J: Bug Fixes and Optimizations

  • Performance and memory optimizations via optimizations of internal use of workspaces Link
  • The Reflections library has been entirely removed from DL4J and is no longer required for custom layer serialization/deserialization Link, Link
    • Fixes issues with custom and some Keras import layers on Android
  • RecordReaderMultiDataSetIterator will no longer try to convert unused columns to numerical values Link
  • Added new model zoo models:
    • (to do)
  • Fixes for Android compilation (removed duplicate classes, aligned versions, removed some dependencies) Link Link Link
  • Fix for RecordReaderMultiDataSetIterator where output could be incorrect for some constructors Link
  • Non-frozen layers before a frozen layer will no longer be skipped during backprop (useful for GANs and similar architectures) Link Link
  • Fixed issue where ComputationGraph topological sort may not be consistent on all platforms; could sometimes break ComputationGraphs (with multiple valid topological orderings) trained on PC and deployed on Android Link
  • Fixed issue with CuDNN batch norm using 1-decay instead of decay Link
  • deeplearning4j-cuda no longer throws exceptions if present on classpath with nd4j-native backend set to higher priority Link
  • Added RNG control for CifarDataSetIterator Link
  • WordVectorSerializer now deletes temp files immediately once done Link

Deeplearning4J: API Changes (Transition Guide): 1.0.0-alpha to 1.0.0-beta

  • WorkspaceMode.SINGLE and SEPARATE have been deprecated; use WorkspaceMode.ENABLED instead
  • Internal layer API changes: custom layers will need to be updated to the new Layer API - see built-in layers or custom layer example
  • Custom layers etc in pre-1.0.0-beta JSON (ModelSerializer) format need to be registered before they can be deserialized due to JSON format change. Built-in layers and models saved in 1.0.0-beta or later do not require this. Use NeuralNetConfiguration.registerLegacyCustomClassesForJSON(Class) for this purpose
  • IterationListener has been deprecated in favor of TrainingListener. For existing custom listeners, switch from implements TrainingListener to extends BaseTrainingListener Link
  • ExistingDataSetIterator has been deprecated; use fit(DataSetIterator, int numEpochs) method instead
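
For the custom-class registration point above, a minimal sketch; MyCustomLayer and the model file name are placeholders:

```java
import java.io.File;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.deeplearning4j.util.ModelSerializer;

public class LoadPreBetaModel {
    public static void main(String[] args) throws Exception {
        // Register custom classes once, before deserializing any pre-1.0.0-beta configuration or model
        NeuralNetConfiguration.registerLegacyCustomClassesForJSON(MyCustomLayer.class); // placeholder class

        MultiLayerNetwork net = ModelSerializer.restoreMultiLayerNetwork(new File("pre-beta-model.zip"));
        System.out.println(net.summary());
    }
}
```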

Deeplearning4J: 1.0.0-beta Known Issues

  • ComputationGraph TrainingListener onEpochStart and onEpochEnd methods are not being called correctly
  • DL4J Zoo Model FaceNetNN4Small2 model configuration is incorrect, causing issues during forward pass
  • Early stopping score calculators with values that should be maximized (accuracy, F1, etc.) are not working properly (values are minimized, not maximized). Workaround: override ScoreCalculator.calculateScore(...) and return 1.0 - super.calculateScore(...).
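
A sketch of this workaround, under the assumption that you extend whichever concrete ScoreCalculator you are currently using; MyAccuracyScoreCalculator below is a placeholder for that class:

```java
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

// Workaround: early stopping minimizes the score, so report (1 - score) for metrics
// that should be maximized (accuracy, F1, etc.).
public class MaximizingScoreCalculator extends MyAccuracyScoreCalculator { // placeholder base class
    @Override
    public double calculateScore(MultiLayerNetwork network) {
        return 1.0 - super.calculateScore(network);
    }
}
```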

Deeplearning4J: Keras Import

Deeplearning4J: Keras Import - API Changes (Transition Guide): 1.0.0-alpha to 1.0.0-beta

ND4J

ND4J: New Features

ND4J: Known Issues

  • Not all op gradients implemented for automatic differentiation
  • The vast majority of new operations added in 1.0.0-beta do NOT use the GPU yet.

ND4J: API Changes (Transition Guide): 1.0.0-alpha to 1.0.0-beta

DataVec

DataVec: New Features

  • ImageRecordReader now logs number of inferred label classes (to reduce risk of users missing a problem if something is misconfigured) Link
  • Added AnalyzeSpark.getUnique overload for multiple columns Link
  • Added performance/timing module Link

DataVec: Optimizations and Bug Fixes

  • Reduced ImageRecordReader garbage generation via buffer reuse Link
  • Fixes for Android compilation (aligned versions, removed some dependencies) Link Link
  • Removed Reflections library use in DataVec Link
  • Fix for TransformProcessRecordReader batch support Link
  • Fix for TransformProcessRecordReader with filter operations Link
  • Fixed issue with ImageRecordReader/ParentPathLabelGenerator incorrectly filtering directories containing . character(s) Link
  • ShowImageTransform now initializes frame lazily to avoid blank windows Link

DataVec: API Changes (Transition Guide): 1.0.0-alpha to 1.0.0-beta

  • DataVec ClassPathResource has been deprecated; use nd4j-common version instead Link
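
In most cases the migration is just an import change, e.g. (the resource name is illustrative; package name as used in nd4j-common at the time):

```java
// Before (deprecated DataVec class):
// import org.datavec.api.util.ClassPathResource;

// After (nd4j-common):
import java.io.File;
import org.nd4j.linalg.io.ClassPathResource;

File f = new ClassPathResource("iris.txt").getFile();
```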

Arbiter

Arbiter: New Features

  • Added LayerSpace for OCNN (one-class neural network)

Arbiter: Fixes

  • Fixed timestamp issue that could cause incorrect rendering of first model’s results in UI Link
  • Execution now waits for last model(s) to complete before returning when a termination condition is hit Link
  • As per DL4J etc: use of Reflections library has been removed entirely from Arbiter Link
  • Removed use of the Eclipse Collections library due to issues with Android compilation Link
  • Improved cleanup of completed models to reduce maximum memory requirements for training Link

RL4J

ScalNet

ND4S

Release Notes for Version 1.0.0-alpha

Highlights - 1.0.0-alpha Release

  • ND4J: Added SameDiff - Java automatic differentiation library (alpha release) with Tensorflow import (technology preview) and hundreds of new operations
  • ND4J: Added CUDA 9.0 and 9.1 support (with cuDNN), dropped support for CUDA 7.5, continued support for CUDA 8.0
  • ND4J: Native binaries (nd4j-native on Maven Central) now ship with AVX/AVX2/AVX-512 support (Windows/Linux)
  • DL4J: Large number of new layers and API improvements
  • DL4J: Keras 2.0 import support

Deeplearning4J

Deeplearning4J: New Features

  • Layers (new and enhanced)
    • Added Yolo2OutputLayer CNN layer for object detection (Link). See also DataVec’s ObjectDetectionRecordReader
    • Added support for ‘no bias’ layers via hasBias(boolean) config (DenseLayer, EmbeddingLayer, OutputLayer, RnnOutputLayer, CenterLossOutputLayer, ConvolutionLayer, Convolution1DLayer). EmbeddingLayer now defaults to no bias (Link)
    • Added support for dilated convolutions (aka ‘atrous’ convolutions) - ConvolutionLayer, SubsamplingLayer, and 1D versions thereof (Link)
    • Added Upsampling2D layer, Upsampling1D layer (Link, Link)
    • ElementWiseVertex now (additionally) supports Average and Max modes in addition to Add/Subtract/Product (Link)
    • Added SeparableConvolution2D layer (Link)
    • Added Deconvolution2D layer (aka transpose convolution, fractionally strided convolution layer) (Link)
    • Added ReverseTimeSeriesVertex (Link)
    • Added RnnLossLayer - no-parameter version of RnnOutputLayer, or RNN equivalent of LossLayer (Link)
    • Added CnnLossLayer - no-parameter CNN output layer for use cases such as segmentation, denoising, etc. (Link)
    • Added Bidirectional layer wrapper (converts any uni-directional RNN to a bidirectional RNN) (Link)
    • Added SimpleRnn layer (aka “vanilla” RNN layer) (Link)
    • Added LastTimeStep wrapper layer (wraps a RNN layer to get last time step, accounting for masking if present) (Link)
    • Added MaskLayer utility layer that simply zeros out activations on forward pass when a mask array is present (Link)
    • Added alpha-version (not yet stable) SameDiff layer support to DL4J (Note: forward pass, CPU only for now) (Link)
    • Added SpaceToDepth and SpaceToBatch layers (Link, Link)
    • Added Cropping2D layer (Link)
  • Added parameter constraints API (LayerConstraint interface), and MaxNormConstraint, MinMaxNormConstraint, NonNegativeConstraint, UnitNormConstraint implementations (Link)
  • Significant refactoring of learning rate schedules (Link)
    • Added ISchedule interface; added Exponential, Inverse, Map, Poly, Sigmoid and Step schedule implementations (Link)
    • Added support for both iteration-based and epoch-based schedules via ISchedule. Also added support for custom (user defined) schedules
    • Learning rate schedules are configured on the updaters, via the .updater(IUpdater) method (see the configuration sketch at the end of this list)
  • Added dropout API (IDropout - previously dropout was available but not a class); added Dropout, AlphaDropout (for use with self-normalizing NNs), GaussianDropout (multiplicative), GaussianNoise (additive). Added support for custom dropout types (Link)
  • Added support for dropout schedules via ISchedule interface (Link)
  • Added weight/parameter noise API (IWeightNoise interface); added DropConnect and WeightNoise (additive/multiplicative Gaussian noise) implementations (Link); dropconnect and dropout can now be used simultaneously
  • Added layer configuration alias .units(int) equivalent to .nOut(int) (Link)
  • Added ComputationGraphConfiguration GraphBuilder .layer(String, Layer, String...) alias for .addLayer(String, Layer, String...)
  • Layer index no longer required for MultiLayerConfiguration ListBuilder (i.e., .list().layer(<layer>) can now be used for configs) (Link)
  • Added MultiLayerNetwork.summary(InputType) and ComputationGraph.summary(InputType...) methods (shows layer and activation size information) (Link)
  • MultiLayerNetwork, ComputationGraph and layerwise trainable layers now track the number of epochs (Link)
  • Added deeplearning4j-ui-standalone module: uber-jar for easy launching of UI server (usage: java -jar deeplearning4j-ui-standalone-1.0.0-alpha.jar -p 9124 -r true -f c:/UIStorage.bin)
  • Weight initializations:
    • Added .weightInit(Distribution) convenience/overload (previously: required .weightInit(WeightInit.DISTRIBUTION).dist(Distribution)) (Link)
    • WeightInit.NORMAL (for self-normalizing neural networks) (Link)
    • Ones, Identity weight initialization (Link)
    • Added new distributions (LogNormalDistribution, TruncatedNormalDistribution, OrthogonalDistribution, ConstantDistribution) which can be used for weight initialization (Link)
    • RNNs: Added ability to specify weight initialization for recurrent weights separately from “input” weights (Link)
  • Added layer alias: Convolution2D (ConvolutionLayer), Pooling1D (Subsampling1DLayer), Pooling2D (SubsamplingLayer) (Link)
  • Added Spark IteratorUtils - wraps a RecordReaderMultiDataSetIterator for use in Spark network training (Link)
  • CuDNN-supporting layers (ConvolutionLayer, etc) now warn the user if using CUDA without CuDNN (Link)
  • Binary cross entropy (LossBinaryXENT) now implements clipping (1e-5 to (1 - 1e-5) by default) to avoid numerical underflow/NaNs (Link)
  • SequenceRecordReaderDataSetIterator now supports multi-label regression (Link)
  • TransferLearning FineTuneConfiguration now has methods for setting training/inference workspace modes (Link)
  • IterationListener iterationDone method now reports both current iteration and epoch count; removed unnecessary invoke/invoked methods (Link)
  • Added MultiLayerNetwork.layerSize(int), ComputationGraph.layerSize(int)/layerSize(String) to easily determine size of layers (Link)
  • Added MultiLayerNetwork.toComputationGraph() method (Link)
  • Added NetworkUtils convenience methods to easily change the learning rate of an already initialized network (Link)
  • Added MultiLayerNetwork.save(File)/.load(File) and ComputationGraph.save(File)/.load(File) convenience methods (Link)
  • Added CheckpointListener to periodically save a copy of the model during training (every N iter/epochs, every T time units) (Link)
  • Added ComputationGraph output method overloads with mask arrays (Link)
  • New LossMultiLabel loss function for multi-label classification (Link)
  • Added new model zoo models:
  • New iterators, and iterator improvements:
    • Added FileDataSetIterator, FileMultiDataSetIterator for flexibly iterating over directories of saved (Multi)DataSet objects (Link)
    • UCISequenceDataSetIterator (Link)
    • RecordReaderDataSetIterator now has builder pattern for convenience, improved javadoc (Link)
    • Added DataSetIteratorSplitter, MultiDataSetIteratorSplitter (Link, Link)
  • Added additional score functions for early stopping (ROC metrics, full set of Evaluation/Regression metrics, etc) (Link)
  • Added additional ROC and ROCMultiClass evaluation overloads for MultiLayerNetwork and ComputationGraph (Link)
  • Clarified Evaluation.stats() output to refer to “Predictions” instead of “Examples” (former is more correct for RNNs) (Link)
  • EarlyStoppingConfiguration now supports Supplier<ScoreCalculator> for use with non-serializable score calculators (Link)
  • Improved ModelSerializer exceptions when trying to load a model via wrong method (i.e., try to load ComputationGraph via restoreMultiLayerNetwork) (Link)
  • Added SparkDataValidation utility methods to validate saved DataSet and MultiDataSet on HDFS or local (Link)
  • ModelSerializer: added restoreMultiLayerNetworkAndNormalizer and restoreComputationGraphAndNormalizer methods (Link)
  • ParallelInference now has output overloads with support for input mask arrays (Link)
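
To illustrate the learning rate schedule refactoring and the new dropout API described above, a minimal configuration sketch; the StepSchedule constructor argument order (schedule type, initial value, decay rate, step) and the values shown are assumptions for illustration:

```java
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.dropout.Dropout;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.schedule.ScheduleType;
import org.nd4j.linalg.schedule.StepSchedule;

public class ScheduleAndDropoutConfig {
    public static NeuralNetConfiguration.Builder builder() {
        // Learning rate schedules are now attached to the updater (ISchedule),
        // and dropout is configured via the IDropout API.
        return new NeuralNetConfiguration.Builder()
                .updater(new Adam(new StepSchedule(ScheduleType.EPOCH, 0.01, 0.5, 10)))
                .dropOut(new Dropout(0.8)); // 0.8 = probability of retaining an activation
    }
}
```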

Deeplearning4J: Bug Fixes and Optimizations

  • Lombok is no longer included as a transitive dependency (Link)
  • ComputationGraph can now have a vertex as the output (not just layers) (Link, Link)
  • Performance improvement for J7FileStatsStorage with large amount of history (Link)
  • Fixed UI layer sizes for variational autoencoder layers (Link)
  • Fixes to avoid HDF5 library crashes (Link, Link)
  • UI Play servers switch to production (PROD) mode (Link)
  • Related to the above: users can now set the play.crypto.secret system property to manually set the Play application secret; it is randomly generated by default (Link).
  • SequenceRecordReaderDataSetIterator would apply preprocessor twice (Link)
  • Evaluation no-arg constructor could cause NaN evaluation metrics when used on Spark
  • CollectScoresIterationListener could recurse endlessly (Link)
  • Async(Multi)DataSetIterator calling reset() on underlying iterator could cause issues in some situations (Link)
  • In some cases, L2 regularization could be (incorrectly) applied to frozen layers (Link)
  • Logging fixes for NearestNeighboursServer (Link)
  • Memory optimization for BaseStatsListener (Link)
  • ModelGuesser fix for loading Keras models from streams (previously would fail) (Link)
  • Various fixes for workspaces in MultiLayerNetwork and ComputationGraph (Link, Link, Link, Link, Link, Link)
  • Fix for incorrect condition in DuplicateToTimeSeriesVertex (Link)
  • Fix for getMemoryReport exception on some valid ComputationGraph networks (Link)
  • RecordReaderDataSetIterator when used with preprocessors could cause an exception under some circumstances (Link)
  • CnnToFeedForwardPreProcessor could silently reshape invalid input, as long as the input array length matches the expected length (Link)
  • ModelSerializer temporary files would not be deleted if JVM crashes; now are deleted immediately when no longer required (Link)
  • RecordReaderMultiDataSetIterator may not add mask arrays under some circumstances, when set to ALIGN_END mode (Link)
  • ConvolutionIterationListener previously produced an IndexOutOfBoundsException when all convolution layers are frozen (Link)
  • PrecisionRecallCurve.getPointAtRecall could return a point with a correct but sub-optimal precision when multiple points had identical recall (Link)
  • Setting dropout(0) on transfer learning FineTuneConfiguration did not remove dropout if present on existing layer (Link)
  • Under some rare circumstances, Spark evaluation could lead to a NullPointerException (Link)
  • ComputationGraph: disconnected vertices were not always detected in configuration validation (Link)
  • Activation layers would not always inherit the global activation function configuration (Link)
  • RNN evaluation memory optimization: when TBPTT is configured for training, also use TBPTT-style splitting for evaluation (identical result, less memory) (Link, Link)
  • PerformanceListener is now serializable (Link)
  • ScoreIterationListener and PerformanceListener now report model iteration, not “iterations since listener creation” (Link)
  • Precision/recall curves cached values in ROC class may not be updated after merging ROC instances (Link)
  • ROC merging after evaluating a large number of examples may produce IllegalStateException (Link)
  • Added checks for invalid input indices to EmbeddingLayer (Link)
  • Fixed possible NPE when loading legacy (pre-0.9.0) model configurations from JSON (Link)
  • Fixed issues with EvaluationCalibration HTML export chart rendering (Link)
  • Fixed possible incorrect rendering of UI/StatsStorage charts with J7FileStatsStorage when used with Spark training (Link)
  • MnistDataSetIterator would not always reliably detect and automatically fix/redownload on corrupted download data (Link)
  • MnistDataSetIterator / EmnistDataSetIterator: updated download location after hosting URL change (Link, Link)
  • Fixes to propagation of thread interruptions (Link)
  • MultiLayerNetwork/ComputationGraph will no longer throw an ND4JIllegalStateException during initialization if a network contains no parameters (Link, Link)
  • Fixes for TSNE posting of data to UI for visualization (Link)
  • PerformanceListener now throws a useful exception (in constructor) on invalid frequency argument, instead of runtime ArithmeticException (Link)
  • RecordReader(Multi)DataSetIterator now throws more useful exceptions when Writable values are non-numerical (Link)
  • UI: Fixed possible character encoding issues for non-English languages when internationalization data .txt files are read from uber JARs (Link)
  • UI: Fixed UI incorrectly trying to parse non-DL4J UI resources when loading I18N data (Link)
  • Various threading fixes (Link)
  • Evaluation: no-arg methods (f1(), precision(), etc) now return the single class value for the binary case instead of the macro-averaged value; clarified values in stats() method and javadoc (Link)
  • Early stopping training: TrainingListener onEpochStart/End (etc) methods were not being called correctly (Link)
  • Fixes issue where dropout was not always applied to input of RNN layers (Link)
  • ModelSerializer: improved validation/exceptions when reading from invalid/empty/closed streams (Link)
  • ParallelInference fixes:
    • fixes for variable size inputs (variable length time series, variable size CNN inputs) when using batch mode (Link)
    • fixed: underlying model exceptions during the output method are now properly propagated back to the user (Link)
    • fixes support for ‘pre-batched’ inputs (i.e., inputs where minibatch size is > 1) (Link)
  • Memory optimization for network weight initialization via in-place random ops (Link)
  • Fixes for CuDNN with SAME mode padding (Link, Link)
  • Fix for VariationalAutoencoder builder decoder layer size validation (Link)
  • Improved K-Means throughput Link
  • Added RPForest to nearest neighbors Link

Deeplearning4J: API Changes (Transition Guide): 0.9.1 to 1.0.0-alpha

  • Default training workspace mode has been switched to SEPARATE from NONE for MultiLayerNetwork and ComputationGraph (Link)
  • Behaviour change: fit(DataSetIterator) and similar methods no longer perform layerwise pretraining followed by backprop - only backprop is performed in these methods. For pretraining, use pretrain(DataSetIterator) and pretrain(MultiDataSetIterator) methods (Link)
  • Previously deprecated updater configuration methods (.learningRate(double), .momentum(double) etc) all removed
    • To configure learning rate: use .updater(new Adam(lr)) instead of .updater(Updater.ADAM).learningRate(lr) (see the sketch at the end of this list)
    • To configure bias learning rate: use .biasUpdater(IUpdater) method
    • To configure learning rate schedules: use .updater(new Adam(ISchedule)) and similar
  • Updater configuration via enumeration (i.e., .updater(Updater)) has been deprecated; use .updater(IUpdater)
  • .regularization(boolean) config removed; functionality is now always equivalent to .regularization(true)
  • .useDropConnect(boolean) removed; use .weightNoise(new DropConnect(double)) instead
  • .iterations(int) method has been removed (was rarely used and confusing to users)
  • Multiple utility classes (in org.deeplearning4j.util) have been deprecated and/or moved to nd4j-common. Use the same class names in the nd4j-common module (package org.nd4j.util) instead.
  • DataSetIterators in DL4J have been moved from deeplearning4j-nn module to new deeplearning4j-datasets, deeplearning4j-datavec-iterators and deeplearning4j-utility-iterators modules. Packages/imports are unchanged; deeplearning4j-core pulls these in as transitive dependencies hence no user changes should be required in most cases (Link)
  • Previously deprecated .activation(String) has been removed; use .activation(Activation) or .activation(IActivation) instead
  • Layer API change: Custom layers may need to implement applyConstraints(int iteration, int epoch) method
  • Parameter initializer API change: Custom parameter initializers may need to implement isWeightParam(String) and isBiasParam(String) methods
  • RBM (Restricted Boltzmann Machine) layers have been removed entirely. Consider using VariationalAutoencoder layers as a replacement (Link)
  • GravesBidirectionalLSTM has been deprecated; use new Bidirectional(Bidirectional.Mode.ADD, new GravesLSTM.Builder()....build())) instead
  • Previously deprecated WordVectorSerializer methods have now been removed (Link)
  • Removed deeplearning4j-ui-remote-iterationlisteners module and obsolete RemoteConvolutionalIterationListener (Link)
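
A before/after sketch of the updater configuration change (the learning rates shown are arbitrary):

```java
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.nd4j.linalg.learning.config.Adam;
import org.nd4j.linalg.learning.config.Sgd;

public class UpdaterTransitionExample {
    public static NeuralNetConfiguration.Builder builder() {
        // 0.9.1 and earlier (now removed):
        //   .updater(Updater.ADAM).learningRate(1e-3)
        // 1.0.0-alpha:
        return new NeuralNetConfiguration.Builder()
                .updater(new Adam(1e-3))      // learning rate is a constructor argument of the IUpdater
                .biasUpdater(new Sgd(2e-3));  // optional: separate updater/learning rate for biases
    }
}
```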

Deeplearning4J: 1.0.0-alpha Known Issues

  • Performance on some network types may be reduced on CUDA compared to 0.9.1 (with workspaces configured). This will be addressed in the next release
  • Some issues have been noted with FP16 support on CUDA (Link)

Deeplearning4J: Keras Import

  • Keras 2 support, keeping backward compatibility for Keras 1
  • Keras 2 and 1 imports use the exact same API; the Keras version is inferred by DL4J
  • Keras unit test coverage increased by 10x, many more real-world integration tests
  • Unit tests for importing and checking layer weights
  • Leaky ReLU, ELU, SELU support for model import
  • All Keras layers can be imported with optional bias terms
  • Old deeplearning4j-keras module removed, old “Model” API removed
  • All Keras initializations (Lecun normal, Lecun uniform, ones, zeros, Orthogonal, VarianceScaling, Constant) supported
  • 1D convolution and pooling supported in DL4J and Keras model import
  • Atrous Convolution 1D and 2D layers supported in Keras model import
  • 1D Zero padding layers supported
  • Keras constraints module fully supported in DL4J and model import
  • Upsampling 1D and 2D layers in DL4J and Keras model import (including GAN examples in tests)
  • Most merge modes supported in Keras model import, Keras 2 Merge layer API supported
  • Separable Convolution 2D layer supported in DL4J and Keras model import
  • Deconvolution 2D layer supported in DL4J and Keras model import
  • Full support of Keras noise layers on import (Alpha dropout, Gaussian dropout and noise)
  • Support for SimpleRNN layer in Keras model import
  • Support for Bidirectional layer wrapper Keras model import
  • Addition of LastTimestepVertex in DL4J to support return_sequences=False for Keras RNN layers.
  • DL4J support for recurrent weight initializations and Keras import integration.
  • SpaceToBatch and BatchToSpace layers in DL4J for better YOLO support, plus end-to-end YOLO Keras import test.
  • Cropping2D support in DL4J and Keras model import

Deeplearning4J: Keras Import - API Changes (Transition Guide): 0.9.1 to 1.0.0-alpha

  • In 0.9.1 deprecated Model and ModelConfiguration have been permanently removed. Use KerasModelImport instead, which is now the only entry point for Keras model import.
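
A minimal usage sketch (the file name is a placeholder; Sequential Keras models map to MultiLayerNetwork, functional-API models to ComputationGraph):

```java
import org.deeplearning4j.nn.modelimport.keras.KerasModelImport;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;

public class KerasImportExample {
    public static void main(String[] args) throws Exception {
        MultiLayerNetwork net =
                KerasModelImport.importKerasSequentialModelAndWeights("my_keras_model.h5");
        System.out.println(net.summary());
    }
}
```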

Deeplearning4J: Keras Import - Known Issues

  • Embedding layer: In DL4J the output of an embedding layer is 2D by default, unless preprocessors are specified. In Keras the output is always 3D, but depending on specified parameters can be interpreted as 2D. This often leads to difficulties when importing Embedding layers. Many cases have been covered and issues fixed, but inconsistencies remain.
  • Batchnormalization layer: DL4J’s batch normalization layer is much more restrictive (in a good way) than Keras’ version of it. For instance, DL4J only allows normalizing the spatial dimensions of 4D convolutional inputs, while in Keras any axis can be used for normalization. Depending on the dimension ordering (NCHW vs. NHWC) and the specific configuration used by a Keras user, this can lead to expected (!) and unexpected import errors.
  • Support for importing a Keras model for training purposes in DL4J (enforceTrainingConfig == true) is still very limited and will be tackled properly for the next release.
  • Keras Merge layers: seem to work fine with the Keras functional API, but have issues when used in a Sequential model.
  • Reshape layers: can be somewhat unreliable on import. DL4J rarely has a need to explicitly reshape input beyond (inferred) standard input preprocessors. In Keras, Reshape layers are used quite often. Mapping the two paradigms can be difficult in edge cases.

ND4J

ND4J: New Features

  • Hundreds of new operations added
  • New DifferentialFunction api with automatic differentiation (see samediff section) Link
  • Technology preview of TensorFlow import added (supports 1.4.0 and up)
  • Apache Arrow serialization added supporting new tensor API Link
  • Add support for AVX/AVX2 and AVX-512 instruction sets for Windows/Linux for nd4j-native backend Link
  • NVIDIA CUDA 8/9.0/9.1 now supported
  • Workspaces improvements were introduced to ensure safety: SCOPE_PANIC profiling mode is enabled by default
  • FlatBuffers support for INDArray serde
  • Support for auto-broadcastable operations was added
  • libnd4j, the underlying C++ library, received a functionality boost: it now offers NDArray and Graph classes, and can be used as a standalone library or executable.
  • Convolution-related ops now support NHWC in addition to NCHW data format.
  • Accumulation ops now have an option to keep reduced dimensions.

ND4J: Known Issues

  • Not all op gradients implemented for automatic differentiation
  • The vast majority of new operations added in 1.0.0-alpha do NOT use the GPU yet.

ND4J: API Changes (Transition Guide): 0.9.1 to 1.0.0-alpha

ND4J - SameDiff

  • Initial tech preview Link
  • Control flow is supported with IF and WHILE primitives.

Alpha release of SameDiff auto-differentiation engine for ND4J.

Features

  • Two execution modes available: Java-driven execution, and Native execution for serialized graphs.
  • SameDiff graphs can be serialized using FlatBuffers
  • Building and running computation graphs built from SameDiff operations.
  • Graphs can run forward pass on input data and compute gradients for the backward pass.
  • Already supports many high-level layers, like dense layers, convolutions (1D-3D), deconvolutions, separable convolutions, pooling and upsampling, batch normalization, local response normalization, LSTMs and GRUs.
  • In total there are about 350 SameDiff operations available, including many basic operations used in building complex graphs.
  • Supports rudimentary import of TensorFlow and ONNX graphs for inference.
  • TFOpTests is a dedicated project for creating test resources for TensorFlow import.
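
A small illustrative sketch of building and executing a graph; the method names (e.g. execAndEndResult) reflect the alpha-era API and may change in later releases:

```java
import org.nd4j.autodiff.samediff.SDVariable;
import org.nd4j.autodiff.samediff.SameDiff;
import org.nd4j.linalg.api.ndarray.INDArray;
import org.nd4j.linalg.factory.Nd4j;

public class SameDiffSketch {
    public static void main(String[] args) {
        SameDiff sd = SameDiff.create();

        // A 3x4 input and a 4x2 weight matrix
        SDVariable in = sd.var("in", Nd4j.rand(3, 4));
        SDVariable w  = sd.var("w",  Nd4j.rand(4, 2));

        // Tiny graph: sigmoid(in * w)
        SDVariable out = sd.sigmoid(sd.mmul(in, w));

        // Java-driven forward pass
        INDArray result = sd.execAndEndResult();
        System.out.println(result);
    }
}
```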

Known Issues and Limitations

  • The vast majority of new operations added in 1.0.0-alpha do NOT use the GPU yet.
  • While many of the widely used base operations and high-level layers used in practice are supported, op coverage is still limited. The goal is to achieve feature parity with TensorFlow and fully support import of TF graphs.
  • Some of the existing ops do not have a backward pass implemented (called doDiff in SameDiff).

DataVec

DataVec: New Features

  • Added ObjectDetectionRecordReader - for use with DL4J’s Yolo2OutputLayer (Link) (also supports image transforms: Link)
  • Added ImageObjectLabelProvider, VocLabelProvider and SvhnLabelProvider (Streetview house numbers) for use with ObjectDetectionRecordReader (Link, Link)
  • Added LocalTransformExecutor for single machine execution (without Spark dependency) (Link)
  • Added ArrowRecordReader (for reading Apache Arrow format data) (Link)
  • Added RecordMapper class for conversion between RecordReader and RecordWriter (Link)
  • RecordWriter and InputSplit APIs have been improved; more flexible and support for partitioning across all writers (Link, Link, Link)
  • Added ArrowWritableRecordBatch and NDArrayRecordBatch for efficient batch storage (List<List<Writable>>) (Link, Link)
  • Added BoxImageTransform - an ImageTransform that either crops or pads without changing aspect ratio (Link)
  • TransformProcess now has executeToSequence(List<Writable>), executeSequenceToSingle(List<List<Writable>>) and executeToSequenceBatch(List<List<Writable>>) methods (Link, Link)
  • Added CSVVariableSlidingWindowRecordReader (Link)
  • ImageRecordReader: supports regression use cases for labels (previously: only classification) (Link)
  • ImageRecordReader: supports multi-class and multi-label image classification (via PathMultiLabelGenerator interface) (Link, Link)
  • DataAnalysis/AnalyzeSpark now includes quantiles (via t-digest) (Link)
  • Added AndroidNativeImageLoader.asBitmap(), Java2DNativeImageLoader.asBufferedImage() (Link)
  • Add new RecordReader / SequenceRecordReader implementations:
    • datavec-excel module and ExcelRecordReader (Link)
    • JacksonLineRecordReader (Link)
    • ConcatenatingRecordReader (Link)
  • Add new transforms:
    • TextToTermIndexSequenceTransform (Link)
    • ConditionalReplaceValueTransformWithDefault (Link)
    • GeographicMidpointReduction (Link)
  • StringToTimeTransform will try to guess the time format if a format isn’t provided (Link)
  • Improved performance for NativeImageLoader on Android (Link)
  • Added BytesWritable (Writable for byte[] data) (Link)
  • Added TransformProcess.inferCategories methods to auto-infer categories from a RecordReader (Link)
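
As a sketch of local (non-Spark) transform execution using some of the additions above; the schema, transform, and input data are illustrative, and LocalTransformExecutor is assumed to live in the datavec-local module with the execute(List, TransformProcess) signature shown:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import org.datavec.api.transform.TransformProcess;
import org.datavec.api.transform.schema.Schema;
import org.datavec.api.writable.DoubleWritable;
import org.datavec.api.writable.Text;
import org.datavec.api.writable.Writable;
import org.datavec.local.transforms.LocalTransformExecutor;

public class LocalTransformSketch {
    public static void main(String[] args) {
        Schema schema = new Schema.Builder()
                .addColumnString("name")
                .addColumnDouble("value")
                .build();

        TransformProcess tp = new TransformProcess.Builder(schema)
                .removeColumns("name")
                .build();

        List<List<Writable>> input = new ArrayList<>();
        input.add(Arrays.<Writable>asList(new Text("a"), new DoubleWritable(1.0)));

        // Execute on a single machine, without a Spark dependency
        List<List<Writable>> out = LocalTransformExecutor.execute(input, tp);
        System.out.println(out);
    }
}
```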

DataVec: Fixes

  • Lombok is no longer included as a transitive dependency (Link)
  • MapFileRecordReader and MapFileSequenceRecordReader can handle empty partitions/splits for multi-part map files (Link)
  • CSVRecordReader is now properly serializable using Java serialization (Link) and Kryo serialization (Link)
  • Writables: equality semantics have been changed: for example, now DoubleWritable(1.0) is equal to IntWritable(1) (Link)
  • NumberedFileInputSplit now supports leading zeros (Link)
  • CSVSparkTransformServer and ImageSparkTransformServer Play servers changed to production mode (Link)
  • Fix for JSON subtype info for FloatMetaData (Link)
  • Serialization fixes for JacksonRecordReader, RegexSequenceRecordReader (Link)
  • Added RecordReader.resetSupported() method (Link)
  • SVMLightRecordReader now implements nextRecord() method (Link)
  • Fix for custom reductions when using conditions (Link)
  • SequenceLengthAnalysis is now serializable (Link) and supports to/from JSON (Link)
  • Fixes for FFT functionality (Link, Link)
  • Remove use of backported java.util.functions; use ND4J functions API instead (Link)
  • Fix for transforms data quality analysis for time columns (Link)

DataVec: API Changes (Transition Guide): 0.9.1 to 1.0.0-alpha

  • Many of the util classes (in org.datavec.api.util mainly) have been deprecated or removed; use the equivalently named util classes in the nd4j-common module (Link)
  • RecordReader.next(int) method now returns List<List<Writable>> for batches, not List<Writable>. See also NDArrayRecordBatch
  • RecordWriter and SequenceRecordWriter APIs have been updated with multiple new methods
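
For example, reading a batch under the updated RecordReader.next(int) signature (the file path is a placeholder):

```java
import java.io.File;
import java.util.List;
import org.datavec.api.records.reader.impl.csv.CSVRecordReader;
import org.datavec.api.split.FileSplit;
import org.datavec.api.writable.Writable;

public class BatchReadExample {
    public static void main(String[] args) throws Exception {
        CSVRecordReader rr = new CSVRecordReader();
        rr.initialize(new FileSplit(new File("data.csv")));

        // next(int) now returns one List<Writable> per example in the batch
        List<List<Writable>> batch = rr.next(32);
        System.out.println(batch.size() + " examples read");
    }
}
```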

Arbiter

Arbiter: New Features

  • Workspace support added (Link, Link)
  • Added new layer spaces: LSTM, CenterLoss, Deconvolution2D, LossLayer, Bidirectional layer wrapper (Link, Link)
  • As per DL4J API changes: Updater configuration options (learning rate, momentum, epsilon, rho etc) have been moved to ParameterSpace instead. Updater spaces (AdamSpace, AdaGradSpace etc) introduced ([Link](https://github.com/deeplearning4j/Arbiter/pull/103))
  • As per DL4J API changes: Dropout configuration is now via ParameterSpace<IDropout>, DropoutSpace introduced (Link)
  • RBM layer spaces removed (Link)
  • ComputationGraphSpace: added layer/vertex methods with overloads for preprocessors (Link)
  • Added support to specify ‘fixed’ layers using DL4J layers directly (instead of using LayerSpaces, even for layers without hyperparameters) (Link)
  • Added LogUniformDistribution (Link)
  • Improvements to score functions; added ROC score function (Link)
  • Learning rate schedule support added (Link)
  • Add math ops for ParameterSpace<Double> and ParameterSpace<Integer> (Link)

Arbiter: Fixes

  • Fix parallel job execution (when using multiple execution threads) (Link, Link)
  • Improved logging for failed task execution (Link)
  • Fix for UI JSON serialization (Link)
  • Fix threading issues when running on CUDA and multiple execution threads (Link, Link, Link)
  • Rename saved model file to model.bin (Link)
  • Fix threading issues with non thread-safe candidates / parameter spaces (Link)
  • Lombok is no longer included as a transitive dependency (Link)

Arbiter: API Changes (Transition Guide): 0.9.1 to 1.0.0-alpha

  • As per DL4J updater API changes: old updater configuration (learningRate, momentum, etc) methods have been removed. Use .updater(IUpdater) or .updater(ParameterSpace<IUpdater>) methods instead

RL4J

  • Add support for LSTM layer to A3C
  • Fix A3C to make it actually work using new ActorCriticLoss and correct use of randomness
  • Fix cases when QLearning would fail (non-flat input, incomplete serialization, incorrect normalization)
  • Fix logic of HistoryProcessor with async algorithms and failures when preprocessing images
  • Tidy up and correct the output of statistics, also allowing the use of IterationListener
  • Fix issues preventing efficient execution with CUDA
  • Provide access to more of the internal structures with NeuralNet.getNeuralNetworks(), Policy.getNeuralNet(), and convenience constructors for Policy
  • Add MDPs for ALE (Arcade Learning Environment) and MALMO to support Atari games and Minecraft
  • Update MDP for Doom to allow using the latest version of VizDoom

ScalNet

  • First release of ScalNet Scala API, which closely resembles Keras’ API.
  • Can be built with sbt and maven.
  • Supports both Keras inspired Sequential models, corresponding to DL4J’s MultiLayerNetwork, and Model, corresponding to ComputationGraph.
  • Project structure is closely aligned to both DL4J model-import module and Keras.
  • Supports the following layers: Convolution2D, Dense, EmbeddingLayer, AvgPooling2D, MaxPooling2D, GravesLSTM, LSTM, Bidirectional layer wrapper, Flatten, Reshape. Additionally, DL4J OutputLayers are supported.

ND4S

  • Scala 2.12 support

Release Notes for Version 0.9.1

Deeplearning4J

  • Fixed issue with incorrect version dependencies in 0.9.0
  • Added EmnistDataSetIterator Link
  • Numerical stability improvements to LossMCXENT / LossNegativeLogLikelihood with softmax (should reduce NaNs with very large activations)

ND4J

  • Added runtime version checking for ND4J, DL4J, RL4J, Arbiter, DataVec Link

Known Issues

  • Deeplearning4j: Use of Evaluation class no-arg constructor (i.e., new Evaluation()) can result in accuracy/stats being reported as 0.0. Other Evaluation class constructors, and ComputationGraph/MultiLayerNetwork.evaluate(DataSetIterator) methods work as expected.
    • This also impacts Spark (distributed) evaluation: the workaround is to replace sparkNet.evaluate(testData); with sparkNet.doEvaluation(testData, 64, new Evaluation(10))[0];, where 10 is the number of classes and 64 is the evaluation minibatch size to use.
  • SequenceRecordReaderDataSetIterator applies preprocessors (such as normalization) twice to each DataSet (possible workaround: use RecordReaderMultiDataSetIterator + MultiDataSetWrapperIterator)
  • TransferLearning: ComputationGraph may incorrectly apply l1/l2 regularization (defined in FinetuneConfiguration) to frozen layers. Workaround: set 0.0 l1/l2 on FineTuneConfiguration, and required l1/l2 on new/non-frozen layers directly. Note that MultiLayerNetwork with TransferLearning appears to be unaffected.

Release Notes for Version 0.9.0

Deeplearning4J

  • Workspaces feature added (faster training performance + less memory) Link
  • SharedTrainingMaster added for Spark network training (improved performance) Link 1, Link 2
  • ParallelInference added - wrapper that serves inference requests using internal batching and queues Link
  • ParallelWrapper now able to work with gradients sharing, in addition to existing parameters averaging mode Link
  • VPTree performance significantly improved
  • CacheMode network configuration option added - improved CNN and LSTM performance at the expense of additional memory use Link
  • LSTM layer added, with CuDNN support Link (Note that the existing GravesLSTM implementation does not support CuDNN)
  • New native model zoo with pretrained ImageNet, MNIST, and VGG-Face weights Link
  • Convolution performance improvements, including activation caching
  • Custom/user defined updaters are now supported Link
  • Evaluation improvements
    • EvaluationBinary, ROCBinary classes added: for evaluation of binary multi-class networks (sigmoid + xent output layers) Link
    • Evaluation and others now have G-Measure and Matthews Correlation Coefficient support; also macro + micro-averaging support for Evaluation class metrics Link
    • ComputationGraph and SparkComputationGraph evaluation convenience methods added (evaluateROC, etc)
    • ROC and ROCMultiClass support exact calculation (previous: thresholded calculation was used) Link
    • ROC classes now support area under precision-recall curve calculation; getting precision/recall/confusion matrix at specified thresholds (via PrecisionRecallCurve class) Link
    • RegressionEvaluation, ROCBinary etc now support per-output masking (in addition to per-example/per-time-step masking)
    • EvaluationCalibration added (residual plots, reliability diagrams, histogram of probabilities) Link 1 Link 2
    • Evaluation and EvaluationBinary: now supports custom classification threshold or cost array Link
  • Optimizations: updaters, bias calculation
  • Network memory estimation functionality added. Memory requirements can be estimated from configuration without instantiating networks Link 1 Link 2
  • New loss functions:
    • Mixture density loss function Link
    • F-Measure loss function Link

ND4J

  • Workspaces feature added Link
  • Native parallel sort was added
  • New ops added: SELU/SELUDerivative, TAD-based comparisons, percentile/median, Reverse, Tan/TanDerivative, SinH, CosH, Entropy, ShannonEntropy, LogEntropy, AbsoluteMin/AbsoluteMax/AbsoluteSum, Atan2
  • New distance functions added: CosineDistance, HammingDistance, JaccardDistance

DataVec

  • MapFileRecordReader and MapFileSequenceRecordReader added Link 1 Link 2
  • Spark: Utilities to save and load JavaRDD<List<Writable>> and JavaRDD<List<List<Writable>>> data to Hadoop MapFile and SequenceFile formats Link
  • TransformProcess and Transforms now support NDArrayWritables and NDArrayWritable columns
  • Multiple new Transform classes

Arbiter

  • Arbiter UI: Link
    • UI now uses Play framework, integrates with DL4J UI (replaces Dropwizard backend). Dependency issues/clashing versions fixed.
    • Supports DL4J StatsStorage and StatsStorageRouter mechanisms (FileStatsStorage, remote UI via RemoteUIStatsStorageRouter)
    • General UI improvements (additional information, formatting fixes)

0.8.0 -> 0.9.0 Transition Notes

Deeplearning4j

  • Updater configuration methods such as .momentum(double) and .epsilon(double) have been deprecated. Instead: use .updater(new Nesterovs(0.9)) and .updater(Adam.builder().beta1(0.9).beta2(0.999).build()) etc to configure

DataVec

  • CsvRecordReader constructors: now uses characters for delimiters, instead of Strings (i.e., ‘,’ instead of “,”)

Arbiter

  • Arbiter UI is now a separate module, with Scala version suffixes: arbiter-ui_2.10 and arbiter-ui_2.11

Release Notes for Version 0.8.0

  • Added transfer learning API Link
  • Spark 2.0 support (DL4J and DataVec; see transition notes below)
  • New layers
    • Global pooling (aka “pooling over time”; usable with both RNNs and CNNs) Link
    • Center loss output layer Link
    • 1D Convolution and subsampling layers Link Link2
    • ZeroPaddingLayer Link
  • New ComputationGraph vertices
    • L2 distance vertex
    • L2 normalization vertex
  • Per-output masking is now supported for most loss functions (for per output masking, use a mask array equal in size/shape to the labels array; previous masking functionality was per-example for RNNs)
  • L1 and L2 regularization can now be configured for biases (via l1Bias and l2Bias configuration options)
  • Evaluation improvements:
    • DL4J now has an IEvaluation class (which Evaluation, RegressionEvaluation, etc. all implement; this also allows custom evaluation on Spark) Link
    • Added multi-class (one vs. all) ROC: ROCMultiClass Link
    • For both MultiLayerNetwork and SparkDl4jMultiLayer: added evaluateRegression, evaluateROC, evaluateROCMultiClass convenience methods
    • HTML export functionality added for ROC charts Link
    • TSNE re-added to new UI
    • Training UI: now usable without an internet connection (no longer relies on externally hosted fonts)
    • UI: improvements to error handling for ‘no data’ condition
  • Epsilon configuration now used for Adam and RMSProp updaters
  • Fix for bidirectional LSTMs + variable-length time series (using masking)
  • Added CnnSentenceDataSetIterator (for use with ‘CNN for Sentence Classification’ architecture) Link Link2
  • Spark + Kryo: now test serialization + throw exception if misconfigured (instead of logging an error that can be missed)
  • MultiLayerNetwork now adds default layer names if no name is specified
  • DataVec:
    • JSON/YAML support for DataAnalysis, custom Transforms etc
    • ImageRecordReader refactored to reduce garbage collection load (hence improve performance with large training sets)
    • Faster quality analysis.
  • Arbiter: added new layer types to match DL4J
    • Performance improvement for Word2Vec/ParagraphVectors tokenization & training.
  • Batched inference introduced for ParagraphVectors
  • Nd4j improvements
    • New native operations available for ND4J: firstIndex, lastIndex, remainder, fmod, or, and, xor.
    • OpProfiler NAN_PANIC & INF_PANIC now also checks result of BLAS calls.
    • Nd4j.getMemoryManager() now provides methods to tweak GC behavior.
  • An alpha version of the parameter server for Word2Vec/ParagraphVectors was introduced for Spark. Please note: it’s not recommended for production use yet.
  • Performance improvements for CNN inference

0.7.2 -> 0.8.0 Transition Notes

  • Spark versioning schemes: with the addition of Spark 2 support, the versions for Deeplearning4j and DataVec Spark modules have changed
    • For Spark 1: use <version>0.8.0_spark_1</version>
    • For Spark 2: use <version>0.8.0_spark_2</version>
    • Also note: Modules with Spark 2 support are released with Scala 2.11 support only. Spark 1 modules are released with both Scala 2.10 and 2.11 support

0.8.0 Known Issues (At Launch)

  • UI/CUDA/Linux issue: Link
  • Dirty shutdown on JVM exit is possible for CUDA backend sometimes: Link
  • Issues with RBM implementation Link
  • Keras 1D convolutional and pooling layers cannot be imported yet. Will be supported in forthcoming release.
  • Keras v2 model configurations cannot be imported yet. Will be supported in forthcoming release.

Release Notes for Version 0.7.2

  • Added variational autoencoder Link
  • Activation function refactor
    • Activation functions are now an interface Link
    • Configuration now via enumeration, not via String (see examples - Link)
    • Custom activation functions now supported Link
    • New activation functions added: hard sigmoid, randomized leaky rectified linear units (RReLU)
  • Multiple fixes/improvements for Keras model import
  • Added P-norm pooling for CNNs (option as part of SubsamplingLayer configuration)
  • Iteration count persistence: stored/persisted properly in model configuration + fixes to learning rate schedules for Spark network training
  • LSTM: gate activation function can now be configured (previously: hard-coded to sigmoid)
  • UI:
    • Added Chinese translation
    • Fixes for UI + pretrain layers
    • Added Java 7 compatible stats collection compatibility Link
    • Improvements in front-end for handling NaNs
    • Added UIServer.stop() method
    • Fixed score vs. iteration moving average line (with subsampling)
  • Solved Jaxb/Jackson issue with Spring Boot based applications
  • RecordReaderDataSetIterator now supports NDArrayWritable for the labels (set regression == true; used for multi-label classification + images, etc)
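
A sketch of the NDArrayWritable-label usage mentioned in the last bullet; the record reader and label column index are assumed, and the constructor shown is the standard regression overload:

```java
import org.datavec.api.records.reader.RecordReader;
import org.deeplearning4j.datasets.datavec.RecordReaderDataSetIterator;
import org.nd4j.linalg.dataset.api.iterator.DataSetIterator;

public class NdArrayLabelIteratorSketch {
    public static DataSetIterator build(RecordReader recordReader) {
        int batchSize = 32;
        int labelIndex = 1; // column holding the NDArrayWritable label (assumed)
        // regression == true so the NDArrayWritable is used as-is for the labels
        return new RecordReaderDataSetIterator(recordReader, batchSize, labelIndex, labelIndex, true);
    }
}
```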

0.7.1 -> 0.7.2 Transition Notes

  • Activation functions (built-in): now specified using Activation enumeration, not String (String-based configuration has been deprecated)

Release Notes for Version 0.7.1

  • RBM and AutoEncoder key fixes:
    • Ensured the visible bias is updated and applied during pretraining.
    • RBM HiddenUnit is the activation function for this layer; derivative calculations for backprop are now established according to the respective HiddenUnit.
  • RNG performance issues fixed for CUDA backend
  • OpenBLAS issues fixed for macOS, PowerPC, and Linux.
  • DataVec is back to Java 7 now.
  • Multiple minor bugs fixed for ND4J/DL4J

Release Notes for Version 0.7.0

  • UI overhaul: new training UI has considerably more information, supports persistence (saving info and loading later), Japanese/Korean/Russian support. Replaced Dropwizard with Play framework. Link
  • Import of models configured and trained using Keras
  • Added ‘Same’ padding mode for CNNs (ConvolutionMode network configuration option) Link
  • Weighted loss functions: Loss functions now support a per-output weight array (row vector)
  • ROC and AUC added for binary classifiers Link
  • Improved error messages on invalid configuration or data; improved validation on both
  • Added metadata functionality: track source of data (file, line number, etc) from data import to evaluation. Loading a subset of examples/data from this metadata is now supported. Link
  • Removed Jackson as core dependency (shaded); users can now use any version of Jackson without issue
  • Added LossLayer: version of OutputLayer that only applies loss function (unlike OutputLayer: it has no weights/biases)
  • Functionality required to build triplet embedding model (L2 vertex, LossLayer, Stack/Unstack vertices etc)
  • Reduced DL4J and ND4J ‘cold start’ initialization/start-up time
  • Pretrain default changed to false and backprop default changed to true. No longer needed to set these when setting up a network configuration unless defaults need to be changed.
  • Added TrainingListener interface (extends IterationListener). Provides access to more information/state as network training occurs Link
  • Numerous bug fixes across DL4J and ND4J
  • Performance improvements for nd4j-native & nd4j-cuda backends
  • Standalone Word2Vec/ParagraphVectors overhaul:
    • Performance improvements
    • ParaVec inference available for both PV-DM & PV-DBOW
    • Parallel tokenization support was added, to address computation-heavy tokenizers.
  • Native RNG introduced for better reproducibility within multi-threaded execution environment.
  • Additional RNG calls added: Nd4j.choice(), and BernoulliDistribution op.
  • Off-GPU storage introduced to keep large objects, such as Word2Vec models, in host memory. Available via WordVectorSerializer.loadStaticModel()
  • Two new options for performance tuning on nd4j-native backend: setTADThreshold(int) & setElementThreshold(int)

0.6.0 -> 0.7.0 Transition Notes

Notable changes for upgrading codebases based on 0.6.0 to 0.7.0:

  • UI: new UI package name is deeplearning4j-ui_2.10 or deeplearning4j-ui_2.11 (previously: deeplearning4j-ui). Scala version suffix is necessary due to Play framework (written in Scala) being used now.
  • Histogram and Flow iteration listeners deprecated. They are still functional, but using new UI is recommended Link
  • DataVec ImageRecordReader: labels are now sorted alphabetically by default before assigning an integer class index to each - previously (0.6.0 and earlier) they were ordered according to file iteration order. Use .setLabels(List) to manually specify the order if required.
  • CNNs: configuration validation is now less strict. With new ConvolutionMode option, 0.6.0 was equivalent to ‘Strict’ mode, but new default is ‘Truncate’
    • See ConvolutionMode javadoc for more details: Link
  • Xavier weight initialization change for CNNs and LSTMs: Xavier now aligns better with original Glorot paper and other libraries. Xavier weight init. equivalent to 0.6.0 is available as XAVIER_LEGACY
  • DataVec: Custom RecordReader and SequenceRecordReader classes require additional methods, for the new metadata functionality. Refer to existing record reader implementations for how to implement these methods.
  • Word2Vec/ParagraphVectors:
    • Few new builder methods:
      • allowParallelTokenization(boolean)
      • useHierarchicSoftmax(boolean)
    • Behaviour change for batchSize: the batch size is now ALSO used as the threshold for executing computational batches for skip-gram/CBOW

Release Notes for Version 0.6.0

  • Custom layer support
  • Support for custom loss functions
  • Support for compressed INDArrays, for memory saving on huge data
  • Native support for BooleanIndexing where applicable
  • Initial support for combined operations on CUDA
  • Significant performance improvements on CPU & CUDA backends
  • Better support for Spark environments using CUDA & cuDNN with multi-gpu clusters
  • New UI tools: FlowIterationListener and ConvolutionIterationListener, for better insight into processes within the NN.
  • Special IterationListener implementation for performance tracking: PerformanceListener
  • Inference implementation added for ParagraphVectors, together with option to use existing Word2Vec model
  • Significantly decreased file size of the deeplearning4j API
  • nd4j-cuda-8.0 backend is available now for cuda 8 RC
  • Added multiple new built-in loss functions
  • Custom preprocessor support
  • Performance improvements to Spark training implementation
  • Improved network configuration validation using InputType functionality

Release Notes for Version 0.5.0

  • FP16 support for CUDA
  • Better performance for multi-GPU: http://deeplearning4j.org/gpu
  • Including optional P2P memory access support
  • Normalization support for time series and images
  • Normalization support for labels
  • Removal of Canova and shift to DataVec: Javadoc, Github Repo
  • Numerous bug fixes
  • Spark improvements

Release Notes for version 0.4.0

  • Initial multi-GPU support viable for standalone and Spark.
  • Refactored the Spark API significantly
  • Added CuDNN wrapper
  • Performance improvements for ND4J
  • Introducing DataVec: Lots of new functionality for transforming, preprocessing, cleaning data. (This replaces Canova)
  • New DataSetIterators for feeding neural nets with existing data: ExistingDataSetIterator, Floats(Double)DataSetIterator, IteratorDataSetIterator
  • New learning algorithms for word2vec and paravec: CBOW and PV-DM respectively
  • New native ops for better performance: DropOut, DropOutInverted, CompareAndSet, ReplaceNaNs
  • Shadow asynchronous datasets prefetch enabled by default for both MultiLayerNetwork and ComputationGraph
  • Better memory handling with JVM GC and CUDA backend, resulting in significantly lower memory footprint

Resources

Roadmap for Fall 2016

  • ScalNet Scala API (WIP!)
  • Standard NN configuration file shared with Keras
  • CGANs
  • Model interpretability