This interface describes the communication primitives for GradientsAccumulator. PLEASE NOTE: All implementations of this interface must be thread-safe.
This class provides accumulation of gradients for both input (i.e. updates coming from the network) and output (coming from one or more models training at the same time).
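As a minimal sketch of the thread-safety requirement, the class below accumulates gradient updates arriving from several concurrently training models. It is a hypothetical illustration only: the class name `SimpleGradientsAccumulator` is invented here, and plain `double[]` arrays stand in for the INDArrays the real accumulator operates on.

```java
// Hypothetical sketch: a thread-safe accumulator for a single gradient
// vector. Plain double[] is used for illustration; the real accumulator
// works with INDArrays.
public class SimpleGradientsAccumulator {
    private final double[] sum;
    private int updates;

    public SimpleGradientsAccumulator(int length) {
        this.sum = new double[length];
    }

    // Accepts an update coming from one of the concurrently training models.
    public synchronized void accumulate(double[] gradient) {
        for (int i = 0; i < sum.length; i++) sum[i] += gradient[i];
        updates++;
    }

    // Returns the averaged gradient and resets the accumulator.
    public synchronized double[] getAndReset() {
        double[] out = new double[sum.length];
        for (int i = 0; i < sum.length; i++) {
            out[i] = updates == 0 ? 0.0 : sum[i] / updates;
            sum[i] = 0.0;
        }
        updates = 0;
        return out;
    }
}
```

Here `synchronized` methods give the mutual exclusion the contract demands; a production implementation would typically use finer-grained or lock-free mechanisms.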
This GradientsAccumulator is suited for the CUDA backend.
This MessageHandler implementation is suited mostly for debugging, but it can still be used in a production environment if you really want that.
This BlockingQueue implementation is suited only for symmetric gradient updates, and should NOT be used anywhere else.
This class provides queue-like functionality for multiple readers and multiple writers, with transparent duplication and collapsing ability.
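The "transparent duplication" idea can be sketched as a broadcast queue: each registered reader gets its own backing queue, and every published element is duplicated into all of them, so N readers each observe the full stream. The class name `BroadcastQueue` and its API are assumptions for illustration, not the actual FancyBlockingQueue interface.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch of multiple-readers duplication: every registered
// consumer owns a private queue, and publish() copies each element into
// all of them.
public class BroadcastQueue<T> {
    private final List<LinkedBlockingQueue<T>> consumers = new CopyOnWriteArrayList<>();

    // Registers a new reader and returns its private view of the stream.
    public LinkedBlockingQueue<T> register() {
        LinkedBlockingQueue<T> q = new LinkedBlockingQueue<>();
        consumers.add(q);
        return q;
    }

    // Duplicates the element into every registered reader's queue.
    public void publish(T element) {
        for (LinkedBlockingQueue<T> q : consumers) q.offer(element);
    }
}
```

`CopyOnWriteArrayList` keeps registration safe while writers iterate; the real class additionally collapses queued elements under memory pressure, which this sketch omits.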
MessageHandler implementation suited for ParallelWrapper running on a single box. PLEASE NOTE: This handler does NOT provide any network connectivity.
This class provides additional functionality on top of FancyBlockingQueue: it tracks the memory used by stored compressed INDArrays, and if their combined size becomes too big, it: a) decompresses them into a single INDArray; b) removes the original update messages; c) keeps updating the single INDArray until it gets consumed; d) once that has happened, it automatically switches back to the original behavior.
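The collapse-under-pressure behavior in steps a) through c) can be sketched as follows. This is an assumption-laden illustration: the class name `CollapsingAccumulator` is invented, a simple pending-count threshold stands in for real memory tracking, and summing `double[]` arrays stands in for decompressing and merging INDArrays.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch of the collapsing behavior: updates queue up
// individually until a threshold is crossed, then they are folded into
// one merged array that keeps absorbing further updates until consumed.
public class CollapsingAccumulator {
    private final Deque<double[]> pending = new ArrayDeque<>();
    private final int maxPending;  // stand-in for a memory-size threshold

    public CollapsingAccumulator(int maxPending) {
        this.maxPending = maxPending;
    }

    public synchronized void offer(double[] update) {
        pending.add(update);
        if (pending.size() > maxPending) collapse();
    }

    // Folds all pending updates into a single summed array, discarding
    // the original messages (steps a and b above).
    private void collapse() {
        double[] merged = pending.poll().clone();
        while (!pending.isEmpty()) {
            double[] next = pending.poll();
            for (int i = 0; i < merged.length; i++) merged[i] += next[i];
        }
        pending.add(merged);
    }

    public synchronized int pendingCount() { return pending.size(); }

    // Consuming the merged array lets the queue return to normal
    // per-message behavior (step d above).
    public synchronized double[] poll() { return pending.poll(); }
}
```

Once `poll()` drains the merged array, subsequent offers are again stored individually, mirroring the automatic switch back to the original behavior.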
Copyright © 2020. All rights reserved.