RAGFlow

English | 简体中文 | 日本語


💡 What is RAGFlow?

RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining LLM (Large Language Models) to provide truthful question-answering capabilities, backed by well-founded citations from various complex formatted data.

🌟 Key Features

🍭 "Quality in, quality out"

  • Deep document understanding-based knowledge extraction from unstructured data with complicated formats.
  • Finds the "needle in a data haystack" across data of virtually unlimited tokens.

🍱 Template-based chunking

  • Intelligent and explainable.
  • Plenty of template options to choose from.

🌱 Grounded citations with reduced hallucinations

  • Visualization of text chunking to allow human intervention.
  • Quick view of the key references and traceable citations to support grounded answers.

🍔 Compatibility with heterogeneous data sources

  • Supports Word documents, slides, Excel spreadsheets, txt files, images, scanned copies, structured data, web pages, and more.

🛀 Automated and effortless RAG workflow

  • Streamlined RAG orchestration catering to both personal use and large businesses.
  • Configurable LLMs as well as embedding models.
  • Multiple recall paired with fused re-ranking.
  • Intuitive APIs for seamless integration with business.

📌 Latest Features

  • 2024-04-26 Add file management.
  • 2024-04-19 Support conversation API (detail).
  • 2024-04-16 Add an embedding model 'bce-embedding-base_v1' from BCEmbedding.
  • 2024-04-16 Add FastEmbed, which is designed specifically for light and speedy embedding.
  • 2024-04-11 Support Xinference for local LLM deployment.
  • 2024-04-10 Add a new layout recognition model for analyzing legal documents.
  • 2024-04-08 Support Ollama for local LLM deployment.
  • 2024-04-07 Support Chinese UI.

🔎 System Architecture

🎬 Get Started

📝 Prerequisites

  • CPU >= 4 cores
  • RAM >= 16 GB
  • Disk >= 50 GB
  • Docker >= 24.0.0 & Docker Compose >= v2.26.1

    If you have not installed Docker on your local machine (Windows, Mac, or Linux), see Install Docker Engine.

🚀 Start up the server

  1. Ensure vm.max_map_count >= 262144 (more):

    To check the value of vm.max_map_count:

    $ sysctl vm.max_map_count

    If the value is less than 262144, reset vm.max_map_count to at least 262144.

    # In this case, we set it to 262144:
    $ sudo sysctl -w vm.max_map_count=262144

    This change will be reset after a system reboot. To ensure your change remains permanent, add or update the vm.max_map_count value in /etc/sysctl.conf accordingly:

    vm.max_map_count=262144
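
    As a minimal sketch, you could append the setting (if the key is not already present) and reload it without rebooting:

    $ echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
    $ sudo sysctl -p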
  2. Clone the repo:

    $ git clone https://github.com/infiniflow/ragflow.git
  3. Start up the server using the pre-built Docker images:

    $ cd ragflow/docker
    $ chmod +x ./entrypoint.sh
    $ docker compose up -d

    The core image is about 9 GB in size and may take a while to load.

  4. Check the server status once the server is up and running:

    $ docker logs -f ragflow-server

    The following output confirms a successful launch of the system:

        ____                 ______ __
       / __ \ ____ _ ____ _ / ____// /____  _      __
      / /_/ // __ `// __ `// /_   / // __ \| | /| / /
     / _, _// /_/ // /_/ // __/  / // /_/ /| |/ |/ /
    /_/ |_| \__,_/ \__, //_/    /_/ \____/ |__/|__/
                 /____/
    
    * Running on all addresses (0.0.0.0)
    * Running on http://127.0.0.1:9380
    * Running on http://x.x.x.x:9380
    INFO:werkzeug:Press CTRL+C to quit

    If you skip this confirmation step and log in to RAGFlow directly, your browser may report a network anomaly error, because RAGFlow may not be fully initialized at that point.

  5. In your web browser, enter the IP address of your server and log in to RAGFlow.

    With the default settings, you only need to enter http://IP_OF_YOUR_MACHINE (sans port number), as the default HTTP serving port 80 can be omitted.

  6. In service_conf.yaml, select the desired LLM factory in user_default_llm and update the API_KEY field with the corresponding API key.

    See ./docs/llm_api_key_setup.md for more information.
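
    A minimal sketch of what the user_default_llm section might look like (the factory name and key below are placeholders; check service_conf.yaml in the repo for the exact fields):

    user_default_llm:
      factory: 'OpenAI'        # placeholder: select your LLM factory
      api_key: 'YOUR_API_KEY'  # replace with your provider's API key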

    The show is now on!

🔧 Configurations

When it comes to system configurations, you will need to manage the following files:

  • ./docker/.env: Keeps the fundamental setups for the system.
  • ./docker/service_conf.yaml: Configures the back-end services.
  • ./docker/docker-compose.yml: The compose file used to start up the system.

You must ensure that changes to the .env file are in line with the corresponding settings in the service_conf.yaml file.

The ./docker/README file provides a detailed description of the environment settings and service configurations, and you are REQUIRED to ensure that all environment settings listed in the ./docker/README file are aligned with the corresponding configurations in the service_conf.yaml file.

To update the default HTTP serving port (80), go to docker-compose.yml and change 80:80 to <YOUR_SERVING_PORT>:80.
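
For example, to serve RAGFlow on port 8080 instead, a minimal sketch of the relevant snippet in docker-compose.yml (surrounding keys omitted):

ports:
  - 8080:80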

Updates to the system configurations require a restart of all containers to take effect:

$ docker compose up -d

🛠️ Build from source

To build the Docker images from source:

$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/
$ docker build -t infiniflow/ragflow:v0.4.0 .
$ cd docker
$ chmod +x ./entrypoint.sh
$ docker compose up -d
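
To confirm that the image was built (an optional sanity check):

$ docker images infiniflow/ragflow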

🛠️ Launch Service from Source

To launch the service from source, please follow these steps:

  1. Clone the repository

    $ git clone https://github.com/infiniflow/ragflow.git
    $ cd ragflow/
  2. Create a virtual environment (ensure Anaconda or Miniconda is installed)

    $ conda create -n ragflow python=3.11.0
    $ conda activate ragflow
    $ pip install -r requirements.txt

    If your CUDA version is greater than 12.0, run the following additional commands:

    $ pip uninstall -y onnxruntime-gpu
    $ pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
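
    If you are unsure of your CUDA version, nvcc (shipped with the CUDA toolkit) reports it:

    $ nvcc --version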
  3. Copy the entry script and configure environment variables

    $ cp docker/entrypoint.sh .
    $ vi entrypoint.sh

    Use the following commands to obtain the Python path and the ragflow project path:

    $ which python
    $ pwd

Set the output of which python as the value for PY and the output of pwd as the value for PYTHONPATH.

If LD_LIBRARY_PATH is already configured, it can be commented out.

# Adjust configurations according to your actual situation; the two export commands are newly added.
PY=${PY}
export PYTHONPATH=${PYTHONPATH}
# Optional: Add Hugging Face mirror
export HF_ENDPOINT=https://hf-mirror.com
  4. Start the base services

    $ cd docker
    $ docker compose -f docker-compose-base.yml up -d 
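
    To verify that the base services are running (the container names depend on docker-compose-base.yml):

    $ docker compose -f docker-compose-base.yml ps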
  5. Check the configuration files: ensure that the settings in docker/.env match those in conf/service_conf.yaml. The IP addresses and ports for related services in service_conf.yaml should be changed to the local machine's IP address and the ports exposed by the containers.

  6. Launch the service

    $ chmod +x ./entrypoint.sh
    $ bash ./entrypoint.sh

📚 Documentation

📜 Roadmap

See the RAGFlow Roadmap 2024.

🏄 Community

🙌 Contributing

RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to take part, please review our Contribution Guidelines first.