
Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks


Description

The paper examines large pre-trained language models, which store factual knowledge in their parameters but remain limited in how precisely they can access and manipulate that knowledge. The authors introduce retrieval-augmented generation (RAG) models, which combine a pre-trained parametric memory (a seq2seq generator) with a non-parametric memory (a dense vector index of documents accessed by a neural retriever). The study evaluates RAG on a range of knowledge-intensive NLP tasks and compares it with other architectures.
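To illustrate the core idea, the following is a toy sketch of retrieval augmentation, not the paper's actual model: a trivial bag-of-words retriever stands in for the dense retriever, a few hard-coded passages stand in for the document index, and the retrieved context is simply prepended to the query before it would be handed to a generator. All names and passages here are hypothetical.

```python
from collections import Counter

# Toy non-parametric memory: a few passages standing in for the
# document index used in the paper (hypothetical examples).
CORPUS = [
    "RAG combines a seq2seq generator with a dense passage retriever.",
    "The Eiffel Tower is located in Paris, France.",
    "Non-parametric memory lets a model consult up-to-date documents.",
]

def score(query: str, passage: str) -> int:
    """Bag-of-words overlap between query and passage (toy retriever)."""
    q = Counter(query.lower().split())
    p = Counter(passage.lower().split())
    return sum((q & p).values())

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the top-k passages by overlap score."""
    return sorted(CORPUS, key=lambda p: score(query, p), reverse=True)[:k]

def augment(query: str, k: int = 1) -> str:
    """Prepend retrieved passages to the query, forming the generator input."""
    context = " ".join(retrieve(query, k))
    return f"context: {context} question: {query}"

print(augment("Where is the Eiffel Tower located?"))
```

In the actual RAG models, the retriever is a learned dense encoder (and the retrieved passages are marginalized over during generation), but the overall pipeline — retrieve, then condition the generator on the retrieved text — follows this shape.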

Tags

Users of this resource

  • @tomvoelker

Comments and reviews