Tokenization

What is Tokenization?

Tokenization is the process of breaking text down into individual words, or tokens. Word windows are also composed of tokens; Word2Vec slides a window over tokenized text to produce the training examples that are fed into a neural net.
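
As a rough illustration, the plain-Java sketch below (not a DL4J API; the example sentence and window size are assumptions) shows how a tokenized sentence can be turned into symmetric word windows around each token:

    // minimal sketch: build a symmetric word window around each token
    String[] tokens = {"the", "quick", "brown", "fox", "jumps"};
    int windowSize = 2; // assumed context size: 2 tokens on each side

    for (int i = 0; i < tokens.length; i++) {
        java.util.List<String> window = new java.util.ArrayList<>();
        for (int j = Math.max(0, i - windowSize); j <= Math.min(tokens.length - 1, i + windowSize); j++) {
            window.add(tokens[j]);
        }
        // each (center word, window) pair is one candidate training example
        System.out.println(tokens[i] + " -> " + window);
    }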

Example

Here’s an example of tokenization done with DL4J tools:

    // tokenization with lemmatization, part-of-speech tagging and sentence segmentation
    TokenizerFactory tokenizerFactory = new UimaTokenizerFactory();
    Tokenizer tokenizer = tokenizerFactory.create("mystring");

    // iterate over the tokens
    while (tokenizer.hasMoreTokens()) {
        String token = tokenizer.nextToken();
    }

    // get the whole list of tokens
    List<String> tokens = tokenizer.getTokens();

The above snippet creates a tokenizer capable of stemming.

In Word2Vec, that’s the recommended way of creating a vocabulary, because it averts vocabulary quirks such as the singular and plural of the same noun being counted as two different words.
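
To build a vocabulary this way, the tokenizer factory is passed to the Word2Vec builder. The sketch below shows one way to wire that up; the example sentences and hyperparameter values are assumptions, and exception handling (the UIMA factory constructor can throw a checked exception) is omitted for brevity:

    // feed tokenized sentences into Word2Vec so the vocabulary is built from the tokenizer's output
    TokenizerFactory tokenizerFactory = new UimaTokenizerFactory(); // may throw a checked exception; handle it in real code
    SentenceIterator sentences = new CollectionSentenceIterator(
            java.util.Arrays.asList("The dogs chased a cat.", "A dog chases cats."));

    Word2Vec vec = new Word2Vec.Builder()
            .minWordFrequency(1)   // assumed value for this toy corpus
            .layerSize(100)        // assumed embedding size
            .windowSize(5)         // assumed window size
            .iterate(sentences)
            .tokenizerFactory(tokenizerFactory)
            .build();
    vec.fit(); // builds the vocabulary and trains the vectors

With lemmatization in the pipeline, surface forms such as "dog"/"dogs" should collapse to a single vocabulary entry rather than being counted as two different words.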
