Haystack — Neural Question Answering At Scale
🔍 Transformers at scale for question answering & search

Introduction

The performance of modern Question Answering models (BERT, ALBERT ...) has improved drastically within the last year, enabling many new opportunities for accessing information more efficiently. However, these models are designed to find answers within rather small text passages. Haystack lets you scale QA models to large collections of documents! While QA is the focus use case for Haystack, we will address further options around neural search in the future (re-ranking, most-similar search ...).

Haystack is designed in a modular way and lets you use any model trained with FARM or Transformers.
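
As a small illustration of this modularity, the sketch below loads the same extractive QA task with two interchangeable Readers, one backed by FARM and one by Transformers. Import paths and parameter names follow the Haystack 0.x tutorials and may differ in other releases; the model names are example checkpoints from the Hugging Face model hub.

```python
# Minimal sketch: interchangeable Readers (import paths follow Haystack 0.x
# and may differ in later releases). Model names are example checkpoints
# from the Hugging Face model hub.
from haystack.reader.farm import FARMReader
from haystack.reader.transformers import TransformersReader

# A Reader backed by FARM ...
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2", use_gpu=False)

# ... or, equivalently, one backed by the Transformers library:
# reader = TransformersReader(
#     model="distilbert-base-uncased-distilled-squad",
#     tokenizer="distilbert-base-uncased",
#     use_gpu=-1,  # -1 runs on CPU
# )
```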

Core Features

  • Powerful ML models: Utilize the latest transformer-based models (BERT, ALBERT, RoBERTa ...).
  • Modular & future-proof: Easily switch to newer models once they are published.
  • Developer friendly: Easy to debug, extend and modify.
  • Scalable: Production-ready deployments via an Elasticsearch backend & REST API.
  • Customizable: Fine-tune models to your own domain & improve them continuously via user feedback (see the fine-tuning sketch below).
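
To illustrate the fine-tuning workflow, here is a small sketch based on Haystack's 0.x fine-tuning tutorial. The data directory, file name and save directory are placeholders; the training data is expected in SQuAD format.

```python
# Sketch of domain adaptation: fine-tune a pretrained Reader on your own
# SQuAD-format labels (e.g. created with Haystack Annotate).
# API follows the Haystack 0.x tutorials; paths below are placeholders.
from haystack.reader.farm import FARMReader

reader = FARMReader(model_name_or_path="distilbert-base-uncased-distilled-squad", use_gpu=True)
reader.train(
    data_dir="data/my_domain",      # directory with SQuAD-format JSON files
    train_filename="train.json",    # placeholder file name
    n_epochs=1,
    use_gpu=True,
    save_dir="my_finetuned_model",  # fine-tuned model is saved here
)
```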

Components


  1. DocumentStore: The database storing the documents for our search. We recommend Elasticsearch, but also offer more lightweight options for fast prototyping (SQL or in-memory).
  2. Retriever: Fast, simple algorithm that identifies candidate passages from a large collection of documents. Algorithms include TF-IDF, BM25, custom Elasticsearch queries, and embedding-based approaches. The Retriever helps to narrow down the scope for the Reader to smaller units of text where a given question could be answered.
  3. Reader: Powerful neural model that reads through texts in detail to find an answer. Use diverse models like BERT, RoBERTa or XLNet trained via FARM or Transformers on SQuAD-like tasks. The Reader takes multiple passages of text as input and returns top-n answers with corresponding confidence scores. You can load a pretrained model from Hugging Face's model hub or fine-tune it on your own domain data.
  4. Finder: Glues together a Reader and a Retriever as a pipeline to provide an easy-to-use question answering interface (see the pipeline sketch after this list).
  5. REST API: Exposes a simple API for running QA search, collecting feedback and monitoring requests.
  6. Haystack Annotate: Create custom QA labels. Available as a hosted version (beta), with Docker images coming soon.
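
Putting the pieces together, the sketch below wires a DocumentStore, Retriever and Reader into a Finder and asks a question. It follows the Haystack 0.x tutorials; import paths, the get_answers signature and the result keys may differ in other versions, and the Elasticsearch host and index name are assumptions.

```python
# End-to-end sketch of the retriever-reader pipeline (Haystack 0.x API).
# Assumes an Elasticsearch instance on localhost with documents already
# indexed under the "document" index.
from haystack.document_store.elasticsearch import ElasticsearchDocumentStore
from haystack.retriever.sparse import ElasticsearchRetriever
from haystack.reader.farm import FARMReader
from haystack.finder import Finder

document_store = ElasticsearchDocumentStore(host="localhost", index="document")
retriever = ElasticsearchRetriever(document_store=document_store)  # BM25 candidates
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2")

finder = Finder(reader, retriever)
prediction = finder.get_answers(
    question="Who is the father of Arya Stark?",  # example question
    top_k_retriever=10,  # passages handed to the Reader
    top_k_reader=5,      # answers returned
)
for answer in prediction["answers"]:
    print(answer["answer"], answer["probability"])
```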

Resources


Similar projects
Unsupervised Question Decomposition for Question Answering
Decompose hard (multi-hop) questions into several easier (single-hop) questions using unsupervised learning, and get better accuracy on multi-hop QA.
Differentiable Adaptive Computation Time for Visual Reasoning
DACT, a new algorithm for achieving adaptive computation time that, unlike existing approaches, is fully differentiable.
Long Form Question Answering with ELI5
A model for open domain long form question answering.