Natural language processing

HuffmanEncoder

class numpy_ml.preprocessing.nlp.HuffmanEncoder[source]
fit(text)[source]

Build a Huffman tree for the tokens in text and compute each token’s binary encoding.

Notes

In a Huffman code, tokens that occur more frequently are (generally) represented using fewer bits. Huffman codes produce the minimum expected codeword length among all methods for encoding tokens individually.

Huffman codes correspond to paths through a binary tree, with 1 corresponding to “move right” and 0 corresponding to “move left”. In contrast to standard binary trees, the Huffman tree is constructed from the bottom up. Construction begins by initializing a min-heap priority queue containing an entry for each token in the corpus, with priority equal to the token’s frequency. At each step, the two least frequent entries are removed from the queue and become the children of a parent pseudotoken whose “frequency” is the sum of the frequencies of its children. This new parent pseudotoken is added back to the priority queue, and the process repeats until only a single entry, the root of the Huffman tree, remains.

Parameters:text (list of strs or Vocabulary instance) – The tokenized text or a pretrained Vocabulary object to use for building the Huffman code.
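
A minimal sketch of the bottom-up construction described in the Notes above, using Python’s heapq. The helper function and node layout here are illustrative assumptions, not HuffmanEncoder’s internal representation.

    import heapq
    from itertools import count

    def build_huffman_codes(token_counts):
        """Toy Huffman-code builder; `token_counts` maps token -> frequency."""
        tiebreak = count()  # unique tiebreaker so heapq never compares dict nodes
        heap = [(freq, next(tiebreak), {"token": tok}) for tok, freq in token_counts.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            f1, _, left = heapq.heappop(heap)        # pop the two least frequent nodes...
            f2, _, right = heapq.heappop(heap)
            parent = {"left": left, "right": right}  # ...and make them children of a pseudotoken
            heapq.heappush(heap, (f1 + f2, next(tiebreak), parent))

        codes = {}
        def assign(node, prefix=""):
            if "token" in node:                      # leaf: record the accumulated bit-string
                codes[node["token"]] = prefix or "0"
            else:
                assign(node["left"], prefix + "0")   # 0 = move left
                assign(node["right"], prefix + "1")  # 1 = move right
        assign(heap[0][2])
        return codes

    print(build_huffman_codes({"the": 4, "cat": 2, "sat": 2, "mat": 1}))
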
transform(text)[source]

Transform the words in text into their Huffman-code representations.

Parameters:text (list of N strings) – The list of words to encode
Returns:codes (list of N binary strings) – The encoded words in text
inverse_transform(codes)[source]

Transform an encoded sequence of bit-strings back into words.

Parameters:codes (list of N binary strings) – A list of encoded bit-strings, represented as strings.
Returns:text (list of N strings) – The decoded text.
tokens[source]

A list of the unique tokens in text

codes[source]

A list with the Huffman code for each unique token in text
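
A short usage sketch based only on the methods and attributes documented above; the toy token list, and the exact bit-strings it produces, are illustrative.

    from numpy_ml.preprocessing.nlp import HuffmanEncoder

    text = ["the", "cat", "sat", "on", "the", "mat", "the"]
    enc = HuffmanEncoder()
    enc.fit(text)

    codes = enc.transform(["the", "cat"])    # binary strings; frequent tokens get shorter codes
    words = enc.inverse_transform(codes)     # round-trips back to ["the", "cat"]
    print(list(zip(enc.tokens, enc.codes)))  # each unique token paired with its Huffman code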

TFIDFEncoder

class numpy_ml.preprocessing.nlp.TFIDFEncoder(vocab=None, lowercase=True, min_count=0, smooth_idf=True, max_tokens=None, input_type='filename', filter_stopwords=True)[source]

An object for compiling and encoding the term-frequency inverse-document-frequency (TF-IDF) representation of the tokens in a text corpus.

Notes

TF-IDF is intended to reflect how important a word is to a document in a collection or corpus. For a word token w in a document d drawn from a corpus \(D = \{d_1, \ldots, d_N\}\), we have:

\[\begin{split}\text{TF}(w, d) &= \text{num. occurrences of } w \text{ in document } d \\ \text{IDF}(w, D) &= \log \frac{|D|}{|\{ d \in D: w \in d \}|}\end{split}\]
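
The encoding weights each token in each document by the product of these two quantities. A small numpy illustration of the formulas above; the variable names are assumptions rather than the encoder’s internals, and smooth_idf mirrors the class parameter described below.

    import numpy as np

    docs = [["the", "cat", "sat"], ["the", "dog", "sat", "sat"]]
    vocab = sorted({w for d in docs for w in d})

    tf = np.array([[d.count(w) for w in vocab] for d in docs], dtype=float)
    df = (tf > 0).sum(axis=0)              # |{d in D : w in d}|, per vocabulary token
    smooth_idf = True                      # mirrors the smooth_idf parameter below
    idf = np.log(len(docs) / (df + int(smooth_idf)))
    tfidf = tf * idf                       # one row per document, one column per token
    print(dict(zip(vocab, np.round(tfidf[1], 3))))
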
Parameters:
  • vocab (Vocabulary object or list-like) – An existing vocabulary to filter the tokens in the corpus against. Default is None.
  • lowercase (bool) – Whether to convert each string to lowercase before tokenization. Default is True.
  • min_count (int) – Minimum number of times a token must occur in order to be included in vocab. Default is 0.
  • smooth_idf (bool) – Whether to add 1 to the denominator of the IDF calculation to avoid divide-by-zero errors. Default is True.
  • max_tokens (int) – Only add the max_tokens most frequent tokens that occur more than min_count to the vocabulary. If None, add all tokens that occur more than min_count. Default is None.
  • input_type ({'filename', 'strings'}) – If ‘filename’, the sequence input to fit is expected to be a list of filepaths. If ‘strings’, the input is expected to be a list of lists, each sublist containing the raw strings for a single document in the corpus. Default is ‘filename’.
  • filter_stopwords (bool) – Whether to remove stopwords before encoding the words in the corpus. Default is True.
fit(corpus_seq, encoding='utf-8-sig')[source]

Compute term-frequencies and inverse document frequencies on a collection of documents.

Parameters:
  • corpus_seq (str or list of strs) – The filepath / list of filepaths / raw string contents of the document(s) to be encoded, in accordance with the input_type parameter passed to the __init__() method. Each document is expected to be a newline-separated string of text, with adjacent tokens separated by a whitespace character.
  • encoding (str) – Specifies the text encoding for corpus if input_type is ‘filename’. Common entries are either ‘utf-8’ (no byte-order mark) or ‘utf-8-sig’ (byte-order mark). Default is ‘utf-8-sig’.
transform(ignore_special_chars=True)[source]

Generate the term-frequency inverse-document-frequency encoding of a text corpus.

Parameters:ignore_special_chars (bool) – Whether to drop columns corresponding to “<eol>”, “<bol>”, and “<unk>” tokens from the final tfidf encoding. Default is True.
Returns:tfidf (numpy array of shape (D, M [- 3])) – The encoded corpus, with each row corresponding to a single document and each column corresponding to a token id. The mapping between column indices and tokens is stored in the idx2token attribute only if ignore_special_chars is False; when the special-token columns are dropped, the remaining column indices no longer align with idx2token.
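
A usage sketch based only on the signatures documented above; the toy corpus is illustrative, and the exact shape of the result depends on the vocabulary built from it.

    from numpy_ml.preprocessing.nlp import TFIDFEncoder

    corpus = [
        ["the cat sat on the mat"],   # document 1, given as raw strings
        ["the dog sat on the log"],   # document 2
    ]
    enc = TFIDFEncoder(input_type="strings", filter_stopwords=False)
    enc.fit(corpus)
    tfidf = enc.transform(ignore_special_chars=True)  # shape (D, M [- 3]), one row per document
    print(tfidf.shape)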

Vocabulary

class numpy_ml.preprocessing.nlp.Vocabulary(lowercase=True, min_count=None, max_tokens=None, filter_stopwords=True)[source]

An object for compiling and encoding the unique tokens in a text corpus.

Parameters:
  • lowercase (bool) – Whether to convert each string to lowercase before tokenization. Default is True.
  • min_count (int) – Minimum number of times a token must occur in order to be included in vocab. If None, include all tokens from corpus_fp in vocab. Default is None.
  • max_tokens (int) – Only add the max_tokens most frequent tokens that occur more than min_count to the vocabulary. If None, add all tokens that occur more than min_count. Default is None.
  • filter_stopwords (bool) – Whether to remove stopwords before encoding the words in the corpus. Default is True.
n_tokens[source]

The number of unique word tokens in the vocabulary

n_words[source]

The total number of words in the corpus

shape[source]

The number of unique word tokens in the vocabulary

most_common(n=5)[source]

Return the top n most common tokens in the corpus

words_with_count(k)[source]

Return all tokens that occur k times in the corpus

filter(words, unk=True)[source]

Filter out or replace any word in words that does not occur in the Vocabulary

Parameters:
  • words (list of strs) – A list of words to filter
  • unk (bool) – Whether to replace any out of vocabulary words in words with the <unk> token (unk = True) or skip them entirely (unk = False). Default is True.
Returns:

filtered (list of strs) – The list of words filtered against the vocabulary.

words_to_indices(words)[source]

Convert the words in words to their token indices. If a word is not in the vocabulary, return the index for the <unk> token

Parameters:words (list of strs) – A list of words to filter
Returns:indices (list of ints) – The token indices for each word in words
indices_to_words(indices)[source]

Convert the indices in indices to their word values. If an index is not in the vocabulary, return the <unk> token.

Parameters:indices (list of ints) – The list of token indices to convert into words
Returns:words (list of strs) – The word strings corresponding to each token index in indices
fit(corpus_fps, encoding='utf-8-sig')[source]

Compute the vocabulary across a collection of documents.

Parameters:
  • corpus_fps (str or list of strs) – The filepath / list of filepaths for the document(s) to be encoded. Each document is expected to be encoded as a newline-separated string of text, with adjacent tokens separated by a whitespace character.
  • encoding (str) – Specifies the text encoding for corpus. Common entries are either ‘utf-8’ (no header byte), or ‘utf-8-sig’ (header byte). Default is ‘utf-8-sig’.
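
A usage sketch based only on the attributes and methods documented above. Vocabulary.fit expects filepaths, so this example first writes a toy corpus to a temporary file.

    import tempfile
    from numpy_ml.preprocessing.nlp import Vocabulary

    with tempfile.NamedTemporaryFile("w", suffix=".txt", delete=False) as f:
        f.write("the cat sat on the mat\nthe dog sat on the log\n")
        corpus_fp = f.name

    vocab = Vocabulary(lowercase=True, filter_stopwords=False)
    vocab.fit(corpus_fp)

    print(vocab.n_tokens)                          # number of unique tokens
    print(vocab.most_common(n=3))                  # the three most frequent tokens
    print(vocab.filter(["the", "emu"], unk=True))  # OOV words replaced with <unk>
    idxs = vocab.words_to_indices(["the", "emu"])  # OOV words map to the <unk> index
    print(vocab.indices_to_words(idxs))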

Token

class numpy_ml.preprocessing.nlp.Token(word)[source]

ngrams

numpy_ml.preprocessing.nlp.ngrams(sequence, N)[source]

Return all N-grams of the elements in sequence
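
A hedged illustration of the expected behavior for N = 2; the exact container type of the returned grams may differ.

    from numpy_ml.preprocessing.nlp import ngrams

    print(ngrams(["the", "cat", "sat", "on"], 2))
    # e.g. the bigrams ("the", "cat"), ("cat", "sat"), ("sat", "on")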

remove_stop_words

numpy_ml.preprocessing.nlp.remove_stop_words(words)[source]

Remove stop words from a list of word strings

strip_punctuation

numpy_ml.preprocessing.nlp.strip_punctuation(line)[source]

Remove punctuation from a string

tokenize_chars

numpy_ml.preprocessing.nlp.tokenize_chars(line, lowercase=True, filter_punctuation=True)[source]

Split a string into individual characters, optionally lower-casing them and removing punctuation in the process

tokenize_words

numpy_ml.preprocessing.nlp.tokenize_words(line, lowercase=True, filter_stopwords=True)[source]

Split a string into individual lower-case words, optionally removing punctuation and stop-words in the process
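
A short sketch exercising the utility functions above on a toy sentence; the exact handling of edge cases (e.g., punctuation inside words) may differ from what is shown in the comments.

    from numpy_ml.preprocessing.nlp import (
        remove_stop_words, strip_punctuation, tokenize_chars, tokenize_words)

    line = "The cat sat on the mat!"
    print(strip_punctuation(line))                         # punctuation removed from the raw string
    print(tokenize_words(line, filter_stopwords=True))     # lower-cased words, stop words removed
    print(tokenize_chars(line, filter_punctuation=True))   # individual characters, punctuation removed
    print(remove_stop_words(["the", "cat", "sat"]))        # stop words filtered from a word list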