.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/Basic-RAG/BasicRAG_refine.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_Basic-RAG_BasicRAG_refine.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_Basic-RAG_BasicRAG_refine.py:


Refine Chain
=======================

This cookbook demonstrates how to use the refine chain for BasicRAG.
For more information, refer to `RAG-PIPELINES `_.

.. figure:: ../../_static/refine_chain_langchain_illustration.jpg
    :width: 800
    :alt: Refine Documents Chain Process
    :align: center

    Illustration of refine chain (Source: LangChain)

`Note that this cookbook assumes that you already have the` ``Llama-2-13b-chat`` `LLM ready.`
`For more details on how to quantize and run an LLM locally, refer to the LLM section under Getting Started.`

A conceptual sketch of the refine loop is given at the end of this page.

.. GENERATED FROM PYTHON SOURCE LINES 19-32

.. code-block:: Python


    from grag.components.multivec_retriever import Retriever
    from grag.components.vectordb.deeplake_client import DeepLakeClient
    from grag.rag.basic_rag import BasicRAG

    # Vector store client backed by the "grag" DeepLake collection
    client = DeepLakeClient(collection_name="grag")
    # Multi-vector retriever that fetches documents from the vector store
    retriever = Retriever(vectordb=client)

    # BasicRAG configured to answer using the refine documents chain
    rag = BasicRAG(model_name="Llama-2-13b-chat", doc_chain="refine", retriever=retriever)

    if __name__ == "__main__":
        # Simple loop: answer user queries until interrupted
        while True:
            query = input("Query:")
            rag(query)


.. _sphx_glr_download_auto_examples_Basic-RAG_BasicRAG_refine.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: BasicRAG_refine.ipynb <BasicRAG_refine.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: BasicRAG_refine.py <BasicRAG_refine.py>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_
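
The sketch below illustrates the refine strategy shown in the figure above; it is a minimal
conceptual example, not GRAG's internal implementation. It assumes a hypothetical
``llm(prompt: str) -> str`` callable and a list of already-retrieved document strings: an
initial answer is drafted from the first document, then revised once per remaining document.

.. code-block:: Python


    def refine_answer(llm, question: str, docs: list[str]) -> str:
        # Assumes `docs` is non-empty and `llm` is a hypothetical prompt-to-text callable.
        # Draft an initial answer from the first retrieved document.
        answer = llm(f"Context:\n{docs[0]}\n\nQuestion: {question}\nAnswer:")
        # Refine the running answer with each remaining document.
        for doc in docs[1:]:
            answer = llm(
                f"Existing answer:\n{answer}\n\n"
                f"Additional context:\n{doc}\n\n"
                f"Refine the existing answer to the question '{question}' using the "
                "additional context. If the context is not useful, return the existing answer."
            )
        return answer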