RAG private knowledge base¶
Retrieval-Augmented Generation (RAG) is a technique that uses information from private or proprietary data sources to enhance text generation. It combines the generative capabilities of large language models (LLMs) with the ability to retrieve relevant information from external knowledge bases, producing more accurate and contextually relevant responses.
RAG helps address key limitations of large models, such as outdated knowledge, limited contextual understanding, and uncertainty about information sources.
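The basic RAG flow is straightforward: split the private knowledge base into chunks, retrieve the chunks most similar to the user's question, and pass them to the LLM as context for generation. The sketch below illustrates this flow in plain Python with a toy bag-of-words retriever; `llm_generate` and the sample knowledge base are hypothetical placeholders standing in for whichever embedding model and locally deployed LLM backend your chosen project uses.

```python
from collections import Counter
import math

# Toy private knowledge base; in practice these would be chunks of your own documents.
KNOWLEDGE_BASE = [
    "AIBOX-1684X supports local LLM inference on the BM1684X TPU.",
    "FireflyChat is a graphical RAG application that needs no compilation.",
    "ChatDoc-TPU lets you query documents with natural language.",
]

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real deployments use a sentence-embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k knowledge-base chunks most similar to the question."""
    q = embed(question)
    ranked = sorted(KNOWLEDGE_BASE, key=lambda doc: cosine(q, embed(doc)), reverse=True)
    return ranked[:k]

def llm_generate(prompt: str) -> str:
    """Hypothetical placeholder for the locally deployed LLM."""
    return f"[LLM answer based on prompt: {prompt[:60]}...]"

def rag_answer(question: str) -> str:
    # Build a prompt that grounds the model in the retrieved context.
    context = "\n".join(retrieve(question))
    prompt = f"Answer using only the context below.\nContext:\n{context}\n\nQuestion: {question}"
    return llm_generate(prompt)

print(rag_answer("How do I chat with my documents on the TPU?"))
```

The projects introduced below implement this same retrieve-then-generate pattern with real embedding models and TPU-accelerated LLM inference.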
This section describes three RAG solutions that can be deployed on AIBOX-1684X: FireflyChat, ChatDoc-TPU, and LangChain-Chatchat-TPU.
FireflyChat project introduction¶
FireflyChat is a graphical LLM application platform developed by the Firefly team. It requires only a simple installation with no compilation, so you can quickly experience the enhancement that RAG brings to LLMs.
For details, see FireflyChat.
ChatDoc-TPU project introduction¶
ChatDoc-TPU is a document conversation tool that runs inference fully locally. Its main goal is to simplify interaction with documents and extract valuable information from them using natural language.
Project repository link: ChatDoc-TPU
LangChain-Chatchat-TPU project introduction¶
LangChain-Chatchat-TPU is a knowledge-base enhancement solution based on Langchain-Chatchat that runs inference fully locally.
Project repository link: LangChain-Chatchat-TPU