1. Setting Up Apolo
The Apolo platform is the backbone of this workflow, providing:
- Compute Resources: GPUs for running ML models.
- Storage: To manage raw data, embeddings, and processed outputs.
- Job Management: To orchestrate the pipeline.
2. Data Preparation
Upload your sample data to Apolo:
apolo cp -r ./sample_data/ storage:visual_rag/raw-data/
The uploaded PDFs will be used to extract text and images for embedding.
3. Ingest the Data
Run the ingestion job to process PDFs and store embeddings in LanceDB:
apolo run --detach \
--no-http-auth \
--preset H100x1 \
--name ingest-data \
--http-port 80 \
--volume storage:visual_rag/cache:/root/.cache/huggingface:rw \
--volume storage:visual_rag/raw-data/:/raw-data:rw \
--volume storage:visual_rag/lancedb-data/:/lancedb-data:rw \
-e HF_TOKEN=$HF_TOKEN \
ghcr.io/kyryl-opens-ml/apolo_visual_rag:latest -- python main.py ingest-data /raw-data --table-name=demo --db-path=/lancedb-data/datastore
The ingestion process involves:
- Extracting images and text from each page of a PDF.
- Generating embeddings for these components using ColPali.
- Storing the embeddings in LanceDB.
The processed data, including embeddings and metadata, is stored in LanceDB, a vector database optimized for high-speed search and retrieval.
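For illustration, here is a minimal sketch of what such an ingestion step can look like, assuming ColPali is loaded through the colpali_engine package, pages are rendered with pdf2image, and results are written with LanceDB's Python client. The function name ingest_data, the checkpoint vidore/colpali-v1.2, and the mean-pooling step are assumptions for the sketch, not the repository's actual implementation:

```python
# Minimal ingestion sketch (illustrative; the repository's main.py may differ).
# Assumes: pip install colpali-engine pdf2image lancedb torch
from pathlib import Path

import lancedb
import torch
from colpali_engine.models import ColPali, ColPaliProcessor
from pdf2image import convert_from_path

MODEL_NAME = "vidore/colpali-v1.2"  # assumed checkpoint, not confirmed by the article
model = ColPali.from_pretrained(MODEL_NAME, torch_dtype=torch.bfloat16, device_map="cuda").eval()
processor = ColPaliProcessor.from_pretrained(MODEL_NAME)


def ingest_data(raw_dir: str, db_path: str, table_name: str) -> None:
    rows = []
    for pdf in Path(raw_dir).glob("*.pdf"):
        # Render each PDF page to a PIL image, then embed it with ColPali.
        for page_num, image in enumerate(convert_from_path(str(pdf))):
            batch = processor.process_images([image]).to(model.device)
            with torch.no_grad():
                emb = model(**batch)  # (1, n_patches, dim) multi-vector page embedding
            rows.append({
                "pdf": pdf.name,
                "page": page_num,
                # Mean-pooled to a single vector for simplicity; the real pipeline
                # may keep ColPali's multi-vector (late-interaction) embeddings.
                "vector": emb.mean(dim=1).squeeze(0).float().cpu().numpy().tolist(),
            })
    db = lancedb.connect(db_path)
    db.create_table(table_name, data=rows, mode="overwrite")


if __name__ == "__main__":
    ingest_data("/raw-data", "/lancedb-data/datastore", "demo")
```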
4. Deploy the Generative LLM
Once the data is ingested and stored in LanceDB, deploy the generative LLM server for processing multimodal queries. This server runs the Llama 3.2 Vision-Instruct model, enabling responses based on both text and visual data.
apolo run --detach \
--no-http-auth \
--preset H100x1 \
--name generation-inference \
--http-port 80 \
--volume storage:visual_rag:/models:rw \
-e HF_TOKEN=$HF_TOKEN \
ghcr.io/huggingface/text-generation-inference:2.4.0 -- --model-id meta-llama/Llama-3.2-11B-Vision-Instruct
What Happens in This Step:
- Deploying the Server: The command sets up the generative LLM server within Apolo’s infrastructure, running the meta-llama/Llama-3.2-11B-Vision-Instruct model.
- Secure Storage Integration: The model weights are accessed securely via the mounted storage:visual_rag directory.
- Multimodal Inference: The server is configured to handle multimodal queries, such as combining text and images for processing.
With this setup, your generative LLM is ready to serve multimodal queries and forms the generation backbone of the Visual RAG pipeline. The system can now combine the pages retrieved from LanceDB with user queries and use the model to generate comprehensive, accurate responses.
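To sanity-check the deployment, you can send a test request to TGI's OpenAI-compatible Messages API. In the sketch below, the base_url is a placeholder for the HTTP URL Apolo assigns to the generation-inference job, and the image URL is purely illustrative:

```python
# Hypothetical smoke test against the deployed TGI server.
from openai import OpenAI

client = OpenAI(
    base_url="https://<generation-inference-job-url>/v1",  # placeholder: your job's URL
    api_key="not-needed",
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.2-11B-Vision-Instruct",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe the chart on this page."},
            {"type": "image_url", "image_url": {"url": "https://example.com/sample-page.png"}},
        ],
    }],
    max_tokens=256,
)
print(response.choices[0].message.content)
```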
5. Query the System
With the ingestion pipeline and LLM server running, you can query the system using the ask_data function.
Here’s how it works:
1. Query Embedding: The user query is embedded using ColPali in get_query_embedding.
2. Database Search: search_db retrieves the most relevant images based on embedding similarity.
3. Response Generation: A vision-enabled LLM (e.g., Llama 3.2) processes the prompt and retrieved images via run_vision_inference, as sketched below.
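The actual implementations live in the repository, but the flow can be sketched roughly as follows. The model, processor, and client objects are the ones created in the earlier sketches, load_page_image is a hypothetical helper that turns a search hit back into a PIL image, and the mean-pooling shortcut mirrors the ingestion sketch above:

```python
# Rough sketch of the query flow; function names follow the article, but the
# bodies are illustrative, not the repository's actual implementations.
import base64
import io

import lancedb
import torch


def get_query_embedding(query: str) -> torch.Tensor:
    # Embed the text query with the same ColPali model used at ingestion time.
    batch = processor.process_queries([query]).to(model.device)
    with torch.no_grad():
        return model(**batch)  # (1, n_tokens, dim)


def search_db(query_embedding: torch.Tensor, db_path: str = "/lancedb-data/datastore",
              table_name: str = "demo", k: int = 3) -> list[dict]:
    # Retrieve the k most relevant pages (mean-pooled here to match the ingestion sketch).
    table = lancedb.connect(db_path).open_table(table_name)
    query_vec = query_embedding.mean(dim=1).squeeze(0).float().cpu().numpy()
    return table.search(query_vec).limit(k).to_list()


def run_vision_inference(query: str, images: list) -> str:
    # Send the prompt plus the retrieved page images to the TGI Messages API.
    content = [{"type": "text", "text": query}]
    for image in images:
        buf = io.BytesIO()
        image.save(buf, format="PNG")
        encoded = base64.b64encode(buf.getvalue()).decode()
        content.append({"type": "image_url",
                        "image_url": {"url": f"data:image/png;base64,{encoded}"}})
    response = client.chat.completions.create(
        model="meta-llama/Llama-3.2-11B-Vision-Instruct",
        messages=[{"role": "user", "content": content}],
        max_tokens=512,
    )
    return response.choices[0].message.content


def ask_data(query: str) -> str:
    hits = search_db(get_query_embedding(query))
    images = [load_page_image(hit) for hit in hits]  # hypothetical helper: hit -> PIL image
    return run_vision_inference(query, images)
```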
6. Visualizing the Results
To enhance usability, integrate a Streamlit-based dashboard for querying and visualizing responses. The dashboard includes:
- PDF Viewer: Displays available documents for context.
- Search Input: Allows users to submit natural language queries.
- Results Panel: Shows the retrieved images and the LLM-generated responses.

For example, querying “What is the market share by region?” retrieves visuals related to market share and generates a concise, context-aware response.
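A minimal version of such a dashboard, reusing the query-flow sketch above (the file name app.py and the exact layout are assumptions), might look like this:

```python
# app.py - hypothetical Streamlit front end for the Visual RAG pipeline.
# Reuses get_query_embedding, search_db, run_vision_inference, and the
# hypothetical load_page_image helper from the query-flow sketch.
import streamlit as st

st.title("Visual RAG over PDFs")

query = st.text_input("Ask a question about the uploaded documents")
if query:
    hits = search_db(get_query_embedding(query))
    images = [load_page_image(hit) for hit in hits]

    st.subheader("Retrieved pages")
    for hit, image in zip(hits, images):
        st.image(image, caption=f"{hit['pdf']}, page {hit['page']}")

    st.subheader("Answer")
    st.write(run_vision_inference(query, images))
```

Launch it with streamlit run app.py against the same LanceDB volume used by the ingestion job.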