Deeplearning4j Hardware and CPU/GPU Setup

ND4J backends for GPUs and CPUs

You can choose GPUs or native CPUs for your backend linear algebra operations by changing the dependencies in your project's pom.xml file. This choice affects both ND4J and DL4J in your application.

If you have CUDA v9.2+ installed and NVIDIA-compatible hardware, then your dependency declaration will look like:

<dependency>
 <groupId>org.nd4j</groupId>
 <artifactId>nd4j-cuda-9.2</artifactId>
 <version>1.0.0-beta3</version>
</dependency>

Otherwise you will need to use the native implementation of ND4J as a CPU backend:

<dependency>
 <groupId>org.nd4j</groupId>
 <artifactId>nd4j-native</artifactId>
 <version>1.0.0-beta3</version>
</dependency>
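If you switch between the two backends often, one option is to wrap each dependency in a Maven build profile rather than editing the POM by hand each time. The sketch below is illustrative, not part of ND4J: the profile ids `cpu` and `gpu` are arbitrary names, while the artifactIds and version match the declarations above.

```xml
<!-- Illustrative sketch: select a backend with `mvn -P cpu ...` or `mvn -P gpu ...`.
     Profile ids are arbitrary; artifactIds and version are taken from the examples above. -->
<profiles>
  <profile>
    <id>cpu</id>
    <!-- CPU backend is used when no profile is specified -->
    <activation><activeByDefault>true</activeByDefault></activation>
    <dependencies>
      <dependency>
        <groupId>org.nd4j</groupId>
        <artifactId>nd4j-native</artifactId>
        <version>1.0.0-beta3</version>
      </dependency>
    </dependencies>
  </profile>
  <profile>
    <id>gpu</id>
    <dependencies>
      <dependency>
        <groupId>org.nd4j</groupId>
        <artifactId>nd4j-cuda-9.2</artifactId>
        <version>1.0.0-beta3</version>
      </dependency>
    </dependencies>
  </profile>
</profiles>
```

Building with `mvn package -P gpu` would then pull in the CUDA backend instead of the CPU one.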

System architectures

If you are developing your project across multiple operating systems or system architectures, you can append -platform to your artifactId; the -platform artifact bundles native binaries for most major platforms.

<dependency>
 ...
 <artifactId>nd4j-native-platform</artifactId>
 ...
</dependency>
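Spelled out in full, and assuming the same groupId and version as the backend artifacts above, the platform dependency would look like:

```xml
<dependency>
 <groupId>org.nd4j</groupId>
 <artifactId>nd4j-native-platform</artifactId>
 <version>1.0.0-beta3</version>
</dependency>
```

The CUDA backend has an analogous platform artifact (nd4j-cuda-9.2-platform) for multi-platform GPU builds.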

Multiple GPUs

If you have several GPUs but your system restricts your application to just one, you can enable multi-GPU support by calling the following helper as the first line of your main() method:

CudaEnvironment.getInstance().getConfiguration().allowMultiGPU(true);

CuDNN

See our page on CuDNN.

CUDA Installation

See NVIDIA's installation guides on the NVIDIA website for instructions on setting up CUDA for your platform.

API Reference

Detailed API docs for all libraries including DL4J, ND4J, DataVec, and Arbiter.

Examples

Explore sample projects and demos for DL4J, ND4J, and DataVec in multiple languages including Java and Kotlin.

Tutorials

Step-by-step tutorials for learning concepts in deep learning while using the DL4J API.

Guide

In-depth documentation on different scenarios including import, distributed training, early stopping, and GPU setup.
