Integrate Langfuse with Pydantic AI

This notebook provides a step-by-step guide on integrating Langfuse with Pydantic AI to achieve observability and debugging for your LLM applications.

About PydanticAI: PydanticAI is a Python agent framework designed to simplify the development of production-grade generative AI applications. It brings the same type-safety, ergonomic API design, and developer experience found in FastAPI to the world of GenAI app development.

What is Langfuse? Langfuse is an open-source LLM engineering platform. It offers tracing and monitoring capabilities for AI applications. Langfuse helps developers debug, analyze, and optimize their AI systems by providing detailed insights and integrating with a wide array of tools and frameworks through native integrations, OpenTelemetry, and dedicated SDKs.

Getting Started

Let's walk through a practical example of using Pydantic AI and integrating it with Langfuse for comprehensive tracing.

Step 1: Install Dependencies

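The install cell did not survive this export; a typical one would look like the following (package names assumed to match the current PyPI releases):

```shell
pip install pydantic-ai langfuse
```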

Step 2: Configure Langfuse SDK

Next, set up your Langfuse API keys. You can get these keys by signing up for a free Langfuse Cloud account or by self-hosting Langfuse. These environment variables are essential for the Langfuse client to authenticate and send data to your Langfuse project.

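The configuration cell is missing here; a minimal sketch, assuming the standard `LANGFUSE_*` environment variables and an OpenAI model for the agent (the key values below are placeholders):

```python
import os

# Replace with your own keys from your Langfuse project settings.
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."
# EU region shown; use https://us.cloud.langfuse.com for the US region.
os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com"

# Key for the LLM provider your agent will call, e.g. OpenAI.
os.environ["OPENAI_API_KEY"] = "sk-..."
```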

With the environment variables set, we can now initialize the Langfuse client. get_client() initializes the Langfuse client using the credentials provided in the environment variables.

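A sketch of the initialization cell, assuming the Langfuse Python SDK v3 `get_client()` / `auth_check()` interface:

```python
from langfuse import get_client

# Initializes the client from the LANGFUSE_* environment variables.
langfuse = get_client()

# Verify that the credentials are valid and the host is reachable.
if langfuse.auth_check():
    print("Langfuse client is authenticated and ready!")
else:
    print("Authentication failed. Please check your credentials and host.")
```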

Step 3: Initialize Pydantic AI Instrumentation

Now, we initialize the Pydantic AI Instrumentation. This automatically captures Pydantic AI operations and exports OpenTelemetry (OTel) spans to Langfuse.

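This cell was also lost in the export. Pydantic AI ships built-in OpenTelemetry instrumentation that can be enabled globally; a sketch, assuming the `Agent.instrument_all()` entry point from the Pydantic AI docs:

```python
from pydantic_ai.agent import Agent

# Enable OTel instrumentation for all Pydantic AI agents in this process.
# Spans are picked up by the tracer provider the Langfuse client configured.
Agent.instrument_all()
```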

Step 4: Basic Pydantic AI Application

Now, run your Pydantic AI agent to generate trace data that will be sent to Langfuse. In the example below, the agent is executed with a dependency value (the winning square) and natural language input. The output from the tool function is then printed.

Make sure to pass instrument=True while configuring the Agent.

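The agent cells themselves are missing from this export. The "winning square" dependency mentioned above matches the roulette example from the Pydantic AI docs; a sketch along those lines (model name and exact parameter names are assumptions):

```python
from pydantic_ai import Agent, RunContext

# instrument=True turns on OTel spans for this agent's runs.
roulette_agent = Agent(
    "openai:gpt-4o",
    deps_type=int,
    output_type=bool,
    system_prompt=(
        "Use the `roulette_wheel` function to see if the "
        "customer has won based on the number they provide."
    ),
    instrument=True,
)

@roulette_agent.tool
async def roulette_wheel(ctx: RunContext[int], square: int) -> str:
    """Check whether the square the customer bet on is the winner."""
    return "winner" if square == ctx.deps else "loser"

# The dependency value passed via deps is the winning square for this run.
success_number = 18
result = roulette_agent.run_sync(
    "Put my money on square eighteen", deps=success_number
)
print(result.output)
```

In short-lived scripts, it may help to flush the Langfuse client (e.g. `get_client().flush()`) before the process exits so all spans are exported.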

Step 5: View Traces in Langfuse

After executing the application, navigate to your Langfuse Trace Table. You will find detailed traces of the application's execution, providing insights into the LLM calls, retrieval operations, inputs, outputs, and performance metrics.

Example Trace in Langfuse
