Cookbook: LangGraph Integration

What is LangGraph?

LangGraph is an open-source framework by the LangChain team for building complex, stateful, multi-agent applications using large language models (LLMs). LangGraph includes built-in persistence to save and resume state, which enables error recovery and human-in-the-loop workflows.

Goal of this Cookbook

This cookbook demonstrates how Langfuse helps to debug, analyze, and iterate on your LangGraph application using the LangChain integration.

By the end of this cookbook, you will be able to:

  • Automatically trace your LangGraph application via the Langfuse integration
  • Monitor advanced multi-agent setups
  • Add scores (like user feedback)
  • Manage your prompts used in LangGraph with Langfuse

Initialize Langfuse

Initialize the Langfuse client with your API keys from the project settings in the Langfuse UI and add them to your environment.

Note: You need to run at least Python 3.11 (GitHub Issue).

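The setup cells might look like the following sketch; the keys shown are placeholders, not real credentials:

```python
import os

# Install the required packages first (in a notebook):
# %pip install langfuse langgraph langchain langchain_openai langchain_community

# Placeholder credentials – replace with your own keys from the Langfuse project settings
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."
os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com"  # EU region; use https://us.cloud.langfuse.com for US

# OpenAI key for the LangChain chat models used in this cookbook
os.environ["OPENAI_API_KEY"] = "sk-..."
```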

With the environment variables set, we can now initialize the Langfuse client. get_client() initializes the Langfuse client using the credentials provided in the environment variables.

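A sketch of the initialization cell; `auth_check()` is an optional connectivity check and requires valid credentials:

```python
from langfuse import get_client

# Reads LANGFUSE_PUBLIC_KEY, LANGFUSE_SECRET_KEY and LANGFUSE_HOST from the environment
langfuse = get_client()

# Optional: verify the credentials work before running the examples
if langfuse.auth_check():
    print("Langfuse client is authenticated and ready!")
else:
    print("Authentication failed. Check your credentials and host.")
```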

Example 1: Simple chat app with LangGraph

What we will do in this section:

  • Build a support chatbot in LangGraph that can answer common questions
  • Trace the chatbot's input and output using Langfuse

We will start with a basic chatbot and build a more advanced multi-agent setup in the next section, introducing key LangGraph concepts along the way.

Create Agent

Start by creating a StateGraph. A StateGraph object defines our chatbot's structure as a state machine. We will add nodes to represent the LLM and functions the chatbot can call, and edges to specify how the bot transitions between these functions.

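A minimal version of such a cell, assuming `ChatOpenAI` from `langchain_openai` as the model (any LangChain chat model works):

```python
from typing import Annotated
from typing_extensions import TypedDict

from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

class State(TypedDict):
    # `add_messages` appends new messages to the list instead of overwriting it
    messages: Annotated[list, add_messages]

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.2)

def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}

graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_edge(START, "chatbot")
graph_builder.add_edge("chatbot", END)
graph = graph_builder.compile()
```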

Add Langfuse as callback to the invocation

Now, we will add the Langfuse callback handler for LangChain to trace the steps of our application: config={"callbacks": [langfuse_handler]}

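For example, assuming the compiled `graph` from the previous cell:

```python
from langfuse.langchain import CallbackHandler

# Initialize the Langfuse callback handler for LangChain
langfuse_handler = CallbackHandler()

response = graph.invoke(
    {"messages": [{"role": "user", "content": "What is Langfuse?"}]},
    config={"callbacks": [langfuse_handler]},
)
print(response["messages"][-1].content)
```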

Trace view of chat app in Langfuse

Visualize the chat app

You can visualize the graph using the get_graph method along with one of the draw methods (e.g., draw_ascii or draw_mermaid_png).

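For example, rendering the compiled graph as a Mermaid diagram (`draw_ascii()` is a dependency-light alternative that prints to stdout):

```python
from IPython.display import Image, display

# draw_mermaid_png() renders via the Mermaid.ink web service by default,
# so it needs an internet connection
display(Image(graph.get_graph().draw_mermaid_png()))
```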

Use Langfuse with LangGraph Server

You can add Langfuse as a callback when using the LangGraph Server.

When using the LangGraph Server, the LangGraph Server handles graph invocation automatically. Therefore, you should add the Langfuse callback when declaring the graph.

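A sketch of how the graph declaration might attach the handler via `with_config`:

```python
from langfuse.langchain import CallbackHandler

langfuse_handler = CallbackHandler()

# Attach the callback at declaration time so the LangGraph Server
# traces every invocation it performs on our behalf
graph = graph_builder.compile().with_config({"callbacks": [langfuse_handler]})
```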

Example 2: Multi-agent application with LangGraph

What we will do in this section:

  • Build 2 executing agents: One research agent using the LangChain WikipediaAPIWrapper to search Wikipedia and one that uses a custom tool to get the current time.
  • Build an agent supervisor to help delegate the user questions to one of the two agents
  • Add Langfuse handler as callback to trace the steps of the supervisor and executing agents

Create tools

For this example, you build an agent to do Wikipedia research, and one agent to tell you the current time. Define the tools they will use below:

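The tool definitions might look like this (`WikipediaQueryRun` and `WikipediaAPIWrapper` come from `langchain_community`):

```python
from datetime import datetime

from langchain_community.tools import WikipediaQueryRun
from langchain_community.utilities import WikipediaAPIWrapper
from langchain_core.tools import tool

# Research tool backed by the Wikipedia API
wikipedia_tool = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())

@tool
def get_current_time() -> str:
    """Returns the current date and time."""
    return datetime.now().strftime("%Y-%m-%d %H:%M:%S")
```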

Helper utilities

Define a helper function below to simplify adding new agent worker nodes.

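A possible sketch of such a helper, which runs an agent on the shared state and attributes its last message to the worker's name:

```python
from langchain_core.messages import HumanMessage

def agent_node(state, agent, name):
    """Run `agent` on the current state and return its final answer
    as a message attributed to the worker `name`."""
    result = agent.invoke(state)
    return {"messages": [HumanMessage(content=result["messages"][-1].content, name=name)]}
```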

Create agent supervisor

The supervisor will use function calling to choose the next worker node or to finish processing.

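One way to implement the supervisor is structured output over a fixed set of routing options; the worker names below are assumptions matching this example:

```python
from typing import Literal

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI
from pydantic import BaseModel

members = ["researcher", "current_time"]

class RouteResponse(BaseModel):
    next: Literal["researcher", "current_time", "FINISH"]

supervisor_prompt = ChatPromptTemplate.from_messages([
    ("system",
     "You are a supervisor managing these workers: {members}. "
     "Given the conversation, respond with the worker to act next, "
     "or FINISH when the request is fully answered."),
    ("placeholder", "{messages}"),
]).partial(members=", ".join(members))

llm = ChatOpenAI(model="gpt-4o-mini")

def supervisor_node(state):
    # Function calling via structured output constrains the model
    # to one of the routing options
    chain = supervisor_prompt | llm.with_structured_output(RouteResponse)
    return {"next": chain.invoke(state).next}
```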

Construct graph

Now we are ready to start building the graph. Below, define the state and worker nodes using the function we just defined. Then we connect all the edges in the graph.

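A sketch of the graph construction, assuming the `llm`, tools, helper, and supervisor from the previous cells:

```python
import functools
from typing import Annotated, Sequence
from typing_extensions import TypedDict

from langchain_core.messages import BaseMessage
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import create_react_agent

class AgentState(TypedDict):
    messages: Annotated[Sequence[BaseMessage], add_messages]
    next: str

# Wrap each prebuilt ReAct agent as a worker node
research_agent = create_react_agent(llm, tools=[wikipedia_tool])
research_node = functools.partial(agent_node, agent=research_agent, name="researcher")

time_agent = create_react_agent(llm, tools=[get_current_time])
time_node = functools.partial(agent_node, agent=time_agent, name="current_time")

workflow = StateGraph(AgentState)
workflow.add_node("researcher", research_node)
workflow.add_node("current_time", time_node)
workflow.add_node("supervisor", supervisor_node)

# Workers always report back to the supervisor
for member in ["researcher", "current_time"]:
    workflow.add_edge(member, "supervisor")

# The supervisor routes to a worker or ends the run
workflow.add_conditional_edges(
    "supervisor",
    lambda state: state["next"],
    {"researcher": "researcher", "current_time": "current_time", "FINISH": END},
)
workflow.add_edge(START, "supervisor")
multi_agent_graph = workflow.compile()
```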

Add Langfuse as callback to the invocation

Add Langfuse handler as callback: config={"callbacks": [langfuse_handler]}

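For example, running both questions from the traces below through the compiled graph:

```python
from langfuse.langchain import CallbackHandler

langfuse_handler = CallbackHandler()

for question in ["How does photosynthesis work?", "What time is it?"]:
    result = multi_agent_graph.invoke(
        {"messages": [("user", question)]},
        config={"callbacks": [langfuse_handler]},
    )
    print(result["messages"][-1].content)
```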

See traces in Langfuse

Example traces in Langfuse:

  1. How does photosynthesis work?
  2. What time is it?

Trace view of multi-agent application in Langfuse

Visualize the agent

You can visualize the graph using the get_graph method along with one of the draw methods (e.g., draw_ascii or draw_mermaid_png).


Multiple LangGraph Agents

There are setups where one LangGraph agent uses one or multiple other LangGraph agents. To combine all corresponding spans into a single trace for the multi-agent execution, we can pass a custom trace_id.

First, we generate a trace_id that can be used for both agents to group the agent executions together in one Langfuse trace.

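The Langfuse SDK exposes `Langfuse.create_trace_id()` for this. A trace id is a 32-character lowercase hex string (W3C Trace Context format), so a stdlib equivalent looks like:

```python
import uuid

# Equivalent to Langfuse.create_trace_id(): a random 32-char lowercase hex string
predefined_trace_id = uuid.uuid4().hex
print(predefined_trace_id)
```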

Next, we set up the sub-agent.

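For instance, a minimal research sub-agent built with LangGraph's prebuilt ReAct agent, reusing the `llm` and Wikipedia tool from Example 2:

```python
from langgraph.prebuilt import create_react_agent

# A minimal research sub-agent that can call the Wikipedia tool
research_sub_agent = create_react_agent(llm, tools=[wikipedia_tool])
```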

Then, we define the tool that uses the research sub-agent to answer questions.

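A sketch of such a tool; it invokes the sub-agent with the Langfuse handler so the sub-agent's steps are traced as well:

```python
from langchain_core.tools import tool
from langfuse.langchain import CallbackHandler

langfuse_handler = CallbackHandler()

@tool
def langgraph_research(question: str) -> str:
    """Conduct research on a given question using the research sub-agent."""
    response = research_sub_agent.invoke(
        {"messages": [("user", question)]},
        config={"callbacks": [langfuse_handler]},
    )
    return response["messages"][-1].content
```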

Finally, set up a second, simple LangGraph agent that uses the new langgraph_research tool.

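To group both agents under the predefined trace id, one option in the Python SDK v3 is to open a span with a trace_context and invoke the main agent inside it (`main_agent` here stands for whatever top-level agent uses langgraph_research):

```python
from langfuse import get_client
from langfuse.langchain import CallbackHandler

langfuse = get_client()
langfuse_handler = CallbackHandler()

# All spans created inside this context – including the LangChain callback
# spans of both agents – are grouped under the predefined trace id
with langfuse.start_as_current_span(
    name="multi-agent-run",
    trace_context={"trace_id": predefined_trace_id},
):
    response = main_agent.invoke(
        {"messages": [("user", "What is Langfuse?")]},
        config={"callbacks": [langfuse_handler]},
    )
```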

Add scores to traces

Scores are used to evaluate single observations or entire traces. They enable you to implement custom quality checks at runtime or facilitate human-in-the-loop evaluation processes.

In the example below, we demonstrate how to score a specific span for relevance (a numeric score) and the overall trace for feedback (a categorical score). This helps in systematically assessing and improving your application.

β†’ Learn more about Custom Scores in Langfuse.

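A sketch using the v3 SDK's span context manager; the score names and values are illustrative:

```python
from langfuse import get_client
from langfuse.langchain import CallbackHandler

langfuse = get_client()
langfuse_handler = CallbackHandler()

with langfuse.start_as_current_span(name="scored-run") as span:
    graph.invoke(
        {"messages": [{"role": "user", "content": "What is Langfuse?"}]},
        config={"callbacks": [langfuse_handler]},
    )

    # Numeric score on this specific span, e.g. relevance
    span.score(name="relevance", value=0.9, data_type="NUMERIC")

    # Categorical score on the overall trace, e.g. user feedback
    span.score_trace(name="feedback", value="positive", data_type="CATEGORICAL")
```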

Manage prompts with Langfuse

Use Langfuse prompt management to effectively manage and version your prompts. We add the prompt used in this example via the SDK. In production, however, users would update and manage the prompts via the Langfuse UI instead of using the SDK.

Langfuse prompt management is basically a Prompt CMS (Content Management System). Alternatively, you can also edit and version the prompt in the Langfuse UI.

  • name: identifies the prompt in Langfuse Prompt Management
  • prompt: the prompt template, incl. {{input variables}}
  • labels: include production to immediately use the prompt as the default

In this example, we create a system prompt for an assistant that translates every user message into Spanish.

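For example, assuming the `langfuse` client from above (the prompt name is an assumption for this example):

```python
langfuse.create_prompt(
    name="translator_system-prompt",
    prompt="You are a translator that translates every input text to Spanish.",
    labels=["production"],  # immediately promote this version to production
)
```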

View prompt in Langfuse UI

Use the utility method .get_langchain_prompt() to transform the Langfuse prompt into a string that can be used in LangChain.

Context: Langfuse declares input variables in prompt templates using double brackets ({{input variable}}). LangChain uses single brackets for declaring input variables in PromptTemplates ({input variable}). The utility method .get_langchain_prompt() replaces the double brackets with single brackets. In this example, however, we don't use any variables in our prompt.

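For example:

```python
# Fetch the current production version of the prompt from Langfuse
langfuse_prompt = langfuse.get_prompt("translator_system-prompt")

# Replace {{variables}} with LangChain-style {variables}; here the
# prompt has no variables, so this is effectively the raw prompt string
system_prompt = langfuse_prompt.get_langchain_prompt()
print(system_prompt)
```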

Now we can use the new system prompt string to update our assistant.

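Rebuilding the Example 1 chatbot with the managed system prompt might look like this:

```python
from typing import Annotated
from typing_extensions import TypedDict

from langchain_core.messages import SystemMessage
from langchain_openai import ChatOpenAI
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list, add_messages]

llm = ChatOpenAI(model="gpt-4o-mini")

def chatbot(state: State):
    # Prepend the Langfuse-managed system prompt to the conversation
    messages = [SystemMessage(content=system_prompt)] + list(state["messages"])
    return {"messages": [llm.invoke(messages)]}

graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_edge(START, "chatbot")
graph_builder.add_edge("chatbot", END)
graph = graph_builder.compile()
```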

Add custom spans to a LangGraph trace

Sometimes it is helpful to add custom spans to a LangGraph trace. This GitHub discussion thread provides an example of how to do this.