This is a Retrieval-Augmented Generation (RAG) application that processes legal document PDFs, stores their text in a Chroma vector database, and answers questions with a Llama3 model running locally via Ollama.
- Upload Legal Documents: Accepts PDFs and extracts text.
- Vector Search with Chroma: Uses embeddings to store and retrieve relevant document sections.
- Local AI Processing with Llama3: Runs Llama3 locally using Ollama for fast, private inference.
- Streamlit UI: A simple web interface to interact with the AI model.
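Under the hood, the vector-search step ranks stored document chunks by embedding similarity. The toy sketch below illustrates that ranking with made-up 3-dimensional vectors; in the real application, embeddings are produced by Chroma's embedding function and the chunk names are purely illustrative:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings" for three document chunks (illustrative values only).
chunks = {
    "Clause on termination notice": [0.9, 0.1, 0.0],
    "Definition of confidential information": [0.1, 0.8, 0.2],
    "Governing law and jurisdiction": [0.0, 0.2, 0.9],
}

# Toy embedding of the user's question.
query_embedding = [0.85, 0.15, 0.05]

# Rank chunks by similarity to the query, as a vector store does internally.
best = max(chunks, key=lambda c: cosine_similarity(chunks[c], query_embedding))
print(best)  # → Clause on termination notice
```

Chroma performs this kind of nearest-neighbor lookup for you when you call its query API; the sketch only shows the principle.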
Before running the application, install the required dependencies using the provided requirements.txt file:
```shell
pip install -r requirements.txt
```

On macOS (via Homebrew):

```shell
# Install Ollama
brew install ollama

# Start Ollama service
ollama serve

# Pull the Llama3 model
ollama pull llama3
```

On Windows:

- Download and install Ollama for Windows.
- Open PowerShell and start the service:

  ```shell
  ollama serve
  ```

- Pull the Llama3 model:

  ```shell
  ollama pull llama3
  ```
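Once the model is pulled, the running `ollama serve` process exposes a local HTTP API (default port 11434) that the application can call. The sketch below only builds the JSON request body for Ollama's `/api/generate` endpoint; the live call is left commented out because it requires the server to be running:

```python
import json

def ollama_payload(prompt: str, model: str = "llama3") -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()

# To actually call the local server (requires `ollama serve` to be running):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=ollama_payload("Summarize this clause."),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["response"])
```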
Ensure ChromaDB has a directory for vector storage:

```shell
mkdir chroma
```

To start the Streamlit web application, run:

```shell
streamlit run app.py
```

To populate the database with legal documents:

```shell
python populate_database.py --reset
```

To query the AI from the command line:

```shell
python query.py "Your legal question here"
```
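Internally, a query script like `query.py` typically stitches the retrieved chunks into a single prompt before sending it to Llama3. The sketch below shows one plausible way to assemble such a prompt; the function name and prompt wording are hypothetical, not taken from this repository:

```python
def build_prompt(question: str, chunks: list[str]) -> str:
    """Assemble a RAG prompt from retrieved document chunks (illustrative)."""
    context = "\n\n---\n\n".join(chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# The resulting string would then be sent to the local Ollama server,
# e.g. via its HTTP API at http://localhost:11434 with model "llama3".
prompt = build_prompt(
    "What is the termination notice period?",
    ["Either party may terminate with 30 days written notice."],
)
print(prompt)
```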