Evaluate LangGraph Agents

In this tutorial, we will learn how to monitor the internal steps (traces) of LangGraph agents and evaluate their performance using Langfuse and Hugging Face Datasets.

This guide covers online and offline evaluation metrics used by teams to bring agents to production quickly and reliably. To learn more about evaluation strategies, check out our blog post.

Why AI agent evaluation is important:

  • Debugging issues when tasks fail or produce suboptimal results
  • Monitoring costs and performance in real-time
  • Improving reliability and safety through continuous feedback

Step 0: Install the Required Libraries

Below, we install the langgraph library, the langfuse SDK, and the Hugging Face datasets library.

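A minimal install cell could look like the following sketch; the original notebook may pin versions or pull in additional packages (e.g. langchain_openai for the OpenAI model bindings), so adjust as needed:

```python
%pip install langfuse langgraph langchain langchain_openai datasets
```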

Step 1: Set Environment Variables

Get your Langfuse API keys by signing up for Langfuse cloud or self-hosting Langfuse.

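A sketch of the configuration cell, using the standard Langfuse and OpenAI environment variables (replace the placeholder values with your own keys; use https://us.cloud.langfuse.com for the US data region):

```python
import os

# Get your project keys from the Langfuse project settings page
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."
os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com"  # EU region

# OpenAI key for the LLM calls made by the agent
os.environ["OPENAI_API_KEY"] = "sk-proj-..."
```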

With the environment variables set, we can now initialize the Langfuse client. get_client() initializes the Langfuse client using the credentials provided in the environment variables.

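For example, assuming the Langfuse Python SDK v3, which provides get_client() and auth_check():

```python
from langfuse import get_client

langfuse = get_client()

# Verify that the credentials and host are correct
if langfuse.auth_check():
    print("Langfuse client is authenticated and ready!")
else:
    print("Authentication failed. Please check your credentials and host.")
```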
Langfuse client is authenticated and ready!

Step 2: Test Your Instrumentation

Here is a simple Q&A agent. We run it to confirm that the instrumentation is working. If everything is set up correctly, you will see logs/spans in your observability dashboard.

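A minimal sketch of such an agent, assuming the LangChain/LangGraph integration via Langfuse's CallbackHandler (SDK v3 import path) and gpt-4o as the model; the example question is illustrative:

```python
from typing import Annotated
from typing_extensions import TypedDict

from langchain_openai import ChatOpenAI
from langfuse.langchain import CallbackHandler
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages

class State(TypedDict):
    # Conversation history; add_messages appends new messages instead of overwriting
    messages: Annotated[list, add_messages]

llm = ChatOpenAI(model="gpt-4o")

def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}

graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_edge(START, "chatbot")
graph_builder.add_edge("chatbot", END)
graph = graph_builder.compile()

# The Langfuse callback handler records every LangGraph step as a span
langfuse_handler = CallbackHandler()

response = graph.invoke(
    {"messages": [{"role": "user", "content": "What is Langfuse?"}]},
    config={"callbacks": [langfuse_handler]},
)
print(response["messages"][-1].content)
```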

Check your Langfuse Traces Dashboard to confirm that the spans and logs have been recorded.

Example trace in Langfuse:

Example trace in Langfuse

Link to the trace

Step 3: Observe and Evaluate a More Complex Agent

Now that you have confirmed your instrumentation works, let's try a more complex query so we can see how advanced metrics (token usage, latency, costs, etc.) are tracked.

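One way to build such an agent is LangGraph's prebuilt ReAct agent with a small get_weather tool. The sketch below stubs the tool with a canned string instead of calling a real weather API, and the user query is illustrative:

```python
from langchain_openai import ChatOpenAI
from langfuse.langchain import CallbackHandler
from langgraph.prebuilt import create_react_agent

def get_weather(city: str) -> str:
    """Return the current weather for a given city (stubbed for this example)."""
    return f"It's always sunny in {city}."

# Prebuilt ReAct agent: the model decides when to call the tool
agent = create_react_agent(
    ChatOpenAI(model="gpt-4o"),
    tools=[get_weather],
)

langfuse_handler = CallbackHandler()

result = agent.invoke(
    {"messages": [{"role": "user",
                   "content": "What is the weather in Berlin, and what should I pack for a weekend trip?"}]},
    config={"callbacks": [langfuse_handler]},
)
print(result["messages"][-1].content)
```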

Trace Structure

Langfuse records a trace that contains spans, which represent each step of your agent’s logic. Here, the trace contains the overall agent run and sub-spans for:

  • The tool call (get_weather)
  • The LLM calls (Responses API with 'gpt-4o')

You can inspect these to see precisely where time is spent, how many tokens are used, and so on:

Trace tree in Langfuse

Link to the trace

Online Evaluation

Online Evaluation refers to evaluating the agent in a live, real-world environment, i.e. during actual usage in production. This involves monitoring the agent’s performance on real user interactions and analyzing outcomes continuously.

We have written a guide on different evaluation techniques here.

Common Metrics to Track in Production

  1. Costs — The instrumentation captures token usage, which you can transform into approximate costs by assigning a price per token.
  2. Latency — Observe the time it takes to complete each step, or the entire run.
  3. User Feedback — Users can provide direct feedback (thumbs up/down) to help refine or correct the agent.
  4. LLM-as-a-Judge — Use a separate LLM to evaluate your agent’s output in near real-time (e.g., checking for toxicity or correctness).

Below, we show examples of these metrics.

1. Costs

Below is a screenshot showing usage for gpt-4o calls. This is useful for spotting costly steps and optimizing your agent.

Costs

Link to the trace

2. Latency

We can also see how long it took to complete each step. In the example below, the entire run took about 3 seconds, which you can break down by step. This helps you identify bottlenecks and optimize your agent.

Latency

Link to the trace

3. User Feedback

If your agent is embedded into a user interface, you can record direct user feedback (like a thumbs-up/down in a chat UI).

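A sketch of how a thumbs-up could be attached to the trace, assuming SDK v3 (start_as_current_span, get_current_trace_id, and create_score); the span name and score name are illustrative, and the graph and handler are reused from the cells above:

```python
from langfuse import get_client
from langfuse.langchain import CallbackHandler

langfuse = get_client()
langfuse_handler = CallbackHandler()

# Wrap the agent call in a span so the callback trace and the score share a trace id
with langfuse.start_as_current_span(name="langgraph-user-request"):
    response = graph.invoke(
        {"messages": [{"role": "user", "content": "What is Langfuse?"}]},
        config={"callbacks": [langfuse_handler]},
    )
    trace_id = langfuse.get_current_trace_id()

# Later, when the user clicks thumbs-up in your UI:
langfuse.create_score(
    trace_id=trace_id,
    name="user-feedback",
    value=1,
    data_type="NUMERIC",
    comment="User found this answer helpful",
)
langfuse.flush()
```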

User feedback is then captured in Langfuse:

User feedback is being captured in Langfuse

4. Automated LLM-as-a-Judge Scoring

LLM-as-a-Judge is another way to automatically evaluate your agent's output. You can set up a separate LLM call to gauge the output’s correctness, toxicity, style, or any other criteria you care about.

Workflow:

  1. You define an Evaluation Template, e.g., "Check if the text is toxic."
  2. You set a model that is used as the judge model; in this case, gpt-4o-mini.
  3. Each time your agent generates output, you pass that output to your "judge" LLM with the template.
  4. The judge LLM responds with a rating or label that you log to your observability tool.

Example from Langfuse:

LLM-as-a-Judge Evaluation Template
LLM-as-a-Judge Evaluator

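Since the evaluator is configured in the Langfuse UI, the notebook cell only needs to produce a new trace for it to score. A sketch, reusing the graph and handler from above with an illustrative question:

```python
# The server-side LLM-as-a-Judge evaluator scores this trace asynchronously
response = graph.invoke(
    {"messages": [{"role": "user", "content": "What is the capital of France?"}]},
    config={"callbacks": [langfuse_handler]},
)
print(response["messages"][-1].content)
```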

You can see that the answer in this example is judged as "not toxic".

LLM-as-a-Judge Evaluation Score

5. Observability Metrics Overview

All of these metrics can be visualized together in dashboards. This enables you to quickly see how your agent performs across many sessions and helps you to track quality metrics over time.

Observability metrics overview

Offline Evaluation

Online evaluation is essential for live feedback, but you also need offline evaluation—systematic checks before or during development. This helps maintain quality and reliability before rolling changes into production.

Dataset Evaluation

In offline evaluation, you typically:

  1. Have a benchmark dataset (with prompt and expected output pairs)
  2. Run your agent on that dataset
  3. Compare outputs to the expected results or use an additional scoring mechanism

Below, we demonstrate this approach with a Q&A dataset that contains questions and expected answers.

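A sketch of the loading step; "your-org/qa-dataset" is a placeholder for whichever Hugging Face dataset you use, assumed here to have "question" and "answer" columns:

```python
import pandas as pd
from datasets import load_dataset

# Placeholder dataset id -- replace with the Q&A dataset you want to benchmark against
dataset = load_dataset("your-org/qa-dataset", split="train")
df = pd.DataFrame(dataset)
print(df.head())
```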

Next, we create a dataset entity in Langfuse to track the runs. Then, we add each item from the dataset to Langfuse.

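A sketch using the client's create_dataset and create_dataset_item methods; the dataset name "qa_dataset" and the 30-item limit are illustrative:

```python
langfuse_dataset_name = "qa_dataset"

# Create the dataset entity in Langfuse
langfuse.create_dataset(
    name=langfuse_dataset_name,
    description="Q&A benchmark dataset for the LangGraph agent",
)

# Upload each question/expected-answer pair as a dataset item
for _, row in df.head(30).iterrows():
    langfuse.create_dataset_item(
        dataset_name=langfuse_dataset_name,
        input={"question": row["question"]},
        expected_output={"answer": row["answer"]},
    )
```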

Dataset items in Langfuse

Running the Agent on the Dataset

First, we define a task function my_task() that wraps our LangGraph agent.

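A sketch of the task function, reusing the compiled graph from above. The experiment runner is assumed to call it with an item keyword argument whose input matches the structure we uploaded:

```python
from langfuse.langchain import CallbackHandler

def my_task(*, item, **kwargs):
    """Run the LangGraph agent on one dataset item and return its answer as a string."""
    handler = CallbackHandler()
    response = graph.invoke(
        {"messages": [{"role": "user", "content": item.input["question"]}]},
        config={"callbacks": [handler]},
    )
    return response["messages"][-1].content
```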

Finally, we use the experiment runner SDK to run our task function against each dataset item. The experiment runner handles concurrent execution, automatic tracing, and evaluation.

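A sketch of the run, assuming a recent Langfuse Python SDK in which dataset objects expose a run_experiment() method (check the SDK reference for the exact signature in your version); the run name and description are illustrative:

```python
dataset = langfuse.get_dataset("qa_dataset")

result = dataset.run_experiment(
    name="LangGraph agent - gpt-4o",
    description="Baseline run of the Q&A agent on the benchmark dataset",
    task=my_task,
)

# Make sure all spans are sent before the notebook cell finishes
langfuse.flush()
```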

You can repeat this process with different agent configurations such as:

  • Models (gpt-5.1, gpt-5-mini, etc.)
  • Prompts
  • Tools (search vs. no search)
  • Agent complexity (multi-agent vs. single-agent)

Then compare them side-by-side in Langfuse. In this example, I ran the agent three times on the 30 dataset questions, using a different OpenAI model for each run. As expected, the number of correctly answered questions improves with the larger model. The correct_answer score is created by an LLM-as-a-Judge evaluator that is set up to judge the correctness of each answer against the expected answer given in the dataset.

Dataset run overview Dataset run comparison