DeText: A Deep Neural Text Understanding Framework for Ranking and Classification Tasks

What is it

DeText is a Deep Text understanding framework for NLP-related ranking, classification, and language generation tasks. It leverages semantic matching, using deep neural networks, to understand member intents in search and recommender systems. As a general NLP framework, DeText can currently be applied to many tasks, including search & recommendation ranking, multi-class classification, and query understanding.

Highlights

Design principles for the DeText framework:

  • Natural language understanding powered by state-of-the-art deep neural networks
      • Automatic feature extraction with deep models
      • End-to-end training
      • Interaction modeling between ranking sources and targets
  • A general framework with great flexibility to meet the requirements of different production applications
      • Flexible deep model types
      • Multiple loss function choices
      • User-defined source/target fields
      • Configurable network structure (layer sizes and number of layers)
      • Tunable hyperparameters ...
  • Reaching a good balance between effectiveness and efficiency to meet industry requirements.

The framework

The DeText framework contains multiple components:

Word embedding layer. It converts a sequence of n words into a d by n matrix of d-dimensional word embeddings.
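A minimal sketch of this lookup in TensorFlow (illustrative only, not DeText's code; the vocabulary size and embedding dimension are made-up values):

```python
import tensorflow as tf

# Map a sequence of n token ids to a matrix of d-dimensional word embeddings.
# vocab_size and d are assumed values for illustration, not DeText defaults.
vocab_size, d = 30000, 128
embedding = tf.keras.layers.Embedding(vocab_size, d)

token_ids = tf.constant([[12, 845, 7, 2301]])  # batch of 1, n = 4 tokens
word_emb = embedding(token_ids)                # shape (1, 4, 128), i.e. n x d
```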

CNN/BERT/LSTM text encoding layer. It takes the word embedding matrix as input and maps the text into a fixed-length embedding. It is worth noting that we adopt representation-based methods over interaction-based methods. The main reason is computational complexity: interaction-based methods score every pair of source and target words, so for an m-word source and an n-word target their time complexity is at least O(mnd), one order higher than the max(O(md), O(nd)) of representation-based methods, which encode each text independently.
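To make the representation-based encoding concrete, here is a minimal CNN encoder sketch (illustrative TensorFlow code, not DeText's implementation; the filter count and window size are assumptions):

```python
import tensorflow as tf

# A 1-D convolution over the word embedding matrix followed by max pooling
# over time yields a fixed-length text embedding, independent of text length.
encoder = tf.keras.Sequential([
    tf.keras.layers.Conv1D(filters=64, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),      # (batch, n', 64) -> (batch, 64)
])

word_emb = tf.random.normal([2, 10, 128])      # 2 texts, n = 10 words, d = 128
text_emb = encoder(word_emb)                   # shape (2, 64) for any n
```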

Interaction layer. It generates deep features based on the text embeddings. Many options are provided, such as concatenation, cosine similarity, etc.
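A sketch of what such an interaction layer could compute from a source (e.g. query) embedding and a target (e.g. document) embedding; the specific feature mix here is an illustrative assumption, not DeText's API:

```python
import tensorflow as tf

# Build deep features from two fixed-length text embeddings (illustrative).
def interact(src, tgt):
    cosine = tf.reduce_sum(
        tf.nn.l2_normalize(src, axis=-1) * tf.nn.l2_normalize(tgt, axis=-1),
        axis=-1, keepdims=True)                # cosine similarity, (batch, 1)
    hadamard = src * tgt                       # element-wise product, (batch, d)
    concat = tf.concat([src, tgt], axis=-1)    # concatenation, (batch, 2d)
    return tf.concat([cosine, hadamard, concat], axis=-1)

src = tf.random.normal([2, 64])
tgt = tf.random.normal([2, 64])
deep_ftrs = interact(src, tgt)                 # shape (2, 1 + 64 + 128)
```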

Wide & Deep Feature Processing. We combine the traditional features with the interaction features (deep features) in a wide & deep fashion.

MLP layer. The MLP layer combines the wide features and the deep features to produce the final score.
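A combined sketch of these last two steps (the wide & deep concatenation plus the MLP), again illustrative TensorFlow rather than DeText's code, with assumed layer sizes:

```python
import tensorflow as tf

# Concatenate traditional ("wide") features with the deep interaction
# features and score them with a small MLP; all sizes are assumptions.
scorer = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="tanh"),
    tf.keras.layers.Dense(1),                  # final relevance/click score
])

wide = tf.random.normal([2, 10])               # e.g. 10 hand-crafted features
deep = tf.random.normal([2, 193])              # interaction features from above
score = scorer(tf.concat([wide, deep], axis=-1))   # shape (2, 1)
```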

It is an end-to-end model where all the parameters are jointly updated to optimize the click probability.

Model Flexibility

DeText is a general ranking framework that offers great flexibility for clients to build customized networks for their own use cases:

LTR/classification layer: in-house LTR loss implementations or tf-ranking LTR losses, plus multi-class classification support.

MLP layer: customizable number of layers and number of dimensions.

Interaction layer: support Cosine Similarity, Outer Product, Hadamard Product, and Concatenation.

Text embedding layer: support CNN, BERT, LSTM-Language-Model with customized parameters on filters, layers, dimensions, etc.

Continuous feature normalization: element-wise scaling, value normalization.

Categorical feature processing: modeled as entity embedding.
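As an example of entity embedding, a categorical id can be mapped to a small dense vector learned jointly with the rest of the network (an illustrative sketch with assumed sizes, not DeText's code):

```python
import tensorflow as tf

# Entity embedding for a categorical feature (illustrative; sizes assumed).
num_categories, emb_dim = 1000, 8
cat_embedding = tf.keras.layers.Embedding(num_categories, emb_dim)

cat_ids = tf.constant([3, 17])                 # one categorical id per example
cat_vec = cat_embedding(cat_ids)               # shape (2, 8), trained jointly
```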

All of these can be customized via hyper-parameters in the DeText template. Note that tf-ranking is supported in the DeText framework, i.e., users can choose the LTR losses and metrics defined in tf-ranking.

Similar projects
ICLR 2020 Trends: Better & Faster Transformers for NLP
A summary of promising directions from ICLR 2020 for better and faster pretrained transformer language models.
Rasa NLU Examples
Experimental components for Rasa NLU pipelines.
Web Mining and Information theory
Mining the Web and playing with Natural Language processing. Implementing Information retrieval System tasks. Going towards the NLP and Performing Machine ...
Zero To One For NLP
A collection of all resources for learning NLP