Milvus RAG French Parliament

Build a RAG application with Milvus Lite, Mistral and Llama-index

In this notebook, we show how to build a Retrieval-Augmented Generation (RAG) application to interact with data from the French Parliament. It uses Ollama with Mistral for LLM operations, Llama-index for orchestration, and Milvus for vector storage.

Install Ollama

Make sure Ollama is installed and running on your laptop: https://ollama.com/
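With Ollama running, pull the Mistral model so it can be served locally. We assume the default mistral tag here, which serves Mistral-7B:

!ollama pull mistral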

Install the different dependencies

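A typical install cell for this stack looks like the following (the exact packages and versions in the original notebook may differ):

!pip install pymilvus llama-index llama-index-llms-ollama llama-index-llms-mistralai llama-index-embeddings-mistralai llama-index-vector-stores-milvus python-dotenv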

Download data

Note: Run this cell only if you haven't cloned the repository.

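Something like the following, assuming this notebook lives in the Mistral cookbook repository (the URL is an assumption; adjust it to wherever the data actually lives):

!git clone https://github.com/mistralai/cookbook.git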

Use Mistral Embedding

Make sure to create an API Key on Mistral's platform and load it as an environment variable.

In this tutorial, we load the environment variable stored in our .env file.

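A minimal sketch of these two cells, assuming the key is stored as MISTRAL_API_KEY in the .env file:

import os
from dotenv import load_dotenv
from llama_index.embeddings.mistralai import MistralAIEmbedding

# Load MISTRAL_API_KEY from the local .env file
load_dotenv()
MISTRAL_API_KEY = os.environ["MISTRAL_API_KEY"]

# Mistral Embed produces 1024-dimensional vectors
embed_model = MistralAIEmbedding(model_name="mistral-embed", api_key=MISTRAL_API_KEY)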

Prepare our data to be stored in Milvus

This code embeds the text with Mistral Embed, uses Mistral-7B for generation, and stores the resulting vectors in Milvus.

Make sure to have Ollama running on your laptop!

  • LLM: initialises the Mistral-7B model through Ollama
  • Settings: configures LlamaIndex's global settings with the Mistral LLM and the embedding model defined above
  • Vector Store: sets up a collection in Milvus to store the text embeddings, specifying the database file, collection name, and vector dimensions
  • Storage Context: wraps the Milvus vector store in a storage context

Together, this enables efficient storage and retrieval of vector embeddings for the text data; a minimal sketch of the cell follows.
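In this sketch, the database file name is taken from the log output below, while the collection name is an assumption:

from llama_index.core import Settings, StorageContext
from llama_index.llms.ollama import Ollama
from llama_index.vector_stores.milvus import MilvusVectorStore

# Mistral-7B served locally through Ollama
Settings.llm = Ollama(model="mistral", request_timeout=120.0)
# Mistral Embed model created earlier
Settings.embed_model = embed_model

# Milvus Lite collection backed by a local database file
vector_store = MilvusVectorStore(
    uri="./milvus_mistral_rag.db",
    collection_name="french_parliament",  # collection name is an assumption
    dim=1024,  # dimensionality of mistral-embed vectors
    overwrite=True,
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)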

/Users/ravitheja/Desktop/mistral/lib/python3.12/site-packages/milvus_lite/__init__.py:15: UserWarning: pkg_resources is deprecated as an API. See https://setuptools.pypa.io/en/latest/pkg_resources.html. The pkg_resources package is slated for removal as early as 2025-11-30. Refrain from using this package or pin to Setuptools<81.
  from pkg_resources import DistributionNotFound, get_distribution
2025-07-14 15:21:25,453 [DEBUG][_create_connection]: Created new connection using: async-milvus_mistral_rag.db (async_milvus_client.py:599)

Using the Mistral AI API

If you prefer not to run models locally or need more powerful models, you can use Mistral's API instead of Ollama. The API offers:

  • Access to more powerful models like mistral-large and mistral-small
  • No local GPU/CPU requirements
  • Consistent performance and reliability
  • Production-ready deployment

Make sure to create an API Key on Mistral's platform first.

from llama_index.core import Settings
from llama_index.llms.mistralai import MistralAI

# Initialize the Mistral LLM (e.g. mistral-small-latest or mistral-large-latest)
mistral_llm = MistralAI(api_key=MISTRAL_API_KEY, model="mistral-small-latest")

# Configure LlamaIndex to use Mistral as the LLM
Settings.llm = mistral_llm

The rest of the setup using Milvus would stay the same.

Process and load the Data

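A minimal sketch of these two cells; the data path is an assumption and should point at the downloaded French Parliament transcripts:

from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Read the French Parliament transcripts (hypothetical path; adjust to your data)
documents = SimpleDirectoryReader("./data").load_data()

# Chunk the documents, embed them with Mistral Embed, and store the vectors in Milvus
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)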

Finally, ask questions to our RAG system
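A sketch of the query step; the question is illustrative, and the answer below is the output from the original run:

# Build a query engine over the index and ask a question about the debates
query_engine = index.as_query_engine()
response = query_engine.query("What was discussed in this session of the French parliament?")
print(response)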

 The conversation in the French parliament centered around a motion and a method for action regarding the seventh wave of some issue. There was criticism towards the chosen method being considered as "peu efficace" (ineffective) and "très disproportionnée" (highly disproportionate). Additionally, there were comments about the parliament not acting democratically and without consulting other parties when it comes to implementing certain measures like the passe sanitaire or vaccinal. The session ended with applause from some groups, specifically LFI-NUPES.

If you like this tutorial, feel free to reach out on LinkedIn, check out Milvus and join our Discord.