Local Langchain RAG

screenshot of Local Langchain RAG

This system implements a microservice architecture, combining Vuetify, LangChain, a locally hosted large language model, and a local vector database to build a RAG system.

Overview

The integration of Vuetify, LangChain, Llama 3, and Chroma offers a practical approach to building a Retrieval-Augmented Generation (RAG) system. By adopting a microservice architecture, the system is designed to handle complex tasks, such as language translation and professional knowledge Q&A, using only local resources. It is constructed from three distinct layers: a user-friendly front end, an API gateway, and a back-end service, all working seamlessly together.

Built primarily with Vue 3 and Vuetify 3 on the front end and a Python back end, the system demonstrates how to use a local large language model alongside a local vector database. It emphasizes security and performance while supporting user authentication and complex language processing in a structured manner.
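The core retrieve-then-generate flow described above can be sketched in a few lines. This is a hedged, self-contained illustration, not the project's code: a toy bag-of-words embedding and cosine similarity stand in for Chroma's embedding store, and the assembled prompt is what would ultimately be sent to the local model (e.g. via Ollama).

```python
# Toy sketch of a RAG pipeline: embed -> retrieve top-k -> build augmented prompt.
# The embedding below is a deliberate simplification standing in for a real model.
import math
from collections import Counter


def embed(text: str) -> Counter:
    """Toy embedding: a bag-of-words count vector (stand-in for a real embedder)."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0


def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]


def build_prompt(query: str, contexts: list[str]) -> str:
    """Assemble the context-augmented prompt to send to the local LLM."""
    context_block = "\n".join(f"- {c}" for c in contexts)
    return f"Answer using only this context:\n{context_block}\n\nQuestion: {query}"
```

In the real system, `embed` would call the model's embedding endpoint, `retrieve` would be a Chroma similarity query, and `build_prompt`'s output would be forwarded to the locally served LLM.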

Features

  • User Authentication: Implements OAuth2.0 and JWT standards, ensuring secure access for users.
  • Translation Component: Provides efficient language translation services using the local LLM.
  • Q&A Chat Component: Allows users to pose questions and receive responses based on professional knowledge, powered by the local LLM.
  • Local Vector Data Storage: Utilizes Chroma for effective data storage and retrieval, enhancing the performance of language tasks.
  • Service Deployment: Leverages the Ollama platform to run the open-source LLM Llama 3.1 locally, boosting operational capabilities.
  • API Gateway: Built with FastAPI to manage user authentication and request proxy forwarding efficiently.
  • Common Functionality: Features like local session management and theme switching enrich the user experience.
  • Flexible File Structure: Organizes code and resources effectively across various directories, streamlining development and maintenance.
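The user-authentication feature above relies on JWTs issued by the API gateway. The README does not include the gateway's auth code, so the following is only a sketch of the underlying HS256 JWT mechanics using the standard library; a real deployment would use a maintained library such as PyJWT behind FastAPI's OAuth2 helpers.

```python
# Sketch of HS256 JWT signing and verification with only the standard library.
# Illustrative only -- not the project's implementation.
import base64
import hashlib
import hmac
import json
import time


def _b64url(data: bytes) -> bytes:
    """Base64url-encode without padding, as required by the JWT format."""
    return base64.urlsafe_b64encode(data).rstrip(b"=")


def create_token(claims: dict, secret: str, ttl: int = 3600) -> str:
    """Sign claims into a compact JWT with an expiry ttl seconds from now."""
    header = {"alg": "HS256", "typ": "JWT"}
    payload = {**claims, "exp": int(time.time()) + ttl}
    signing_input = b".".join(
        _b64url(json.dumps(part, separators=(",", ":")).encode())
        for part in (header, payload)
    )
    sig = hmac.new(secret.encode(), signing_input, hashlib.sha256).digest()
    return (signing_input + b"." + _b64url(sig)).decode()


def verify_token(token: str, secret: str) -> dict:
    """Check the signature and expiry, returning the payload claims."""
    signing_input, _, sig = token.rpartition(".")
    expected = _b64url(
        hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    ).decode()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    payload_b64 = signing_input.split(".")[1]
    padded = payload_b64 + "=" * (-len(payload_b64) % 4)
    payload = json.loads(base64.urlsafe_b64decode(padded))
    if payload["exp"] < time.time():
        raise ValueError("token expired")
    return payload
```

In the gateway, `verify_token` would run in a dependency on each proxied request, rejecting calls before they reach the back-end service.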