TrueFoundry AI Gateway Integration
What is TrueFoundry? TrueFoundry is an enterprise-grade AI Gateway and control plane that lets you deploy, govern, and monitor any LLM or Gen-AI workload behind a single OpenAI-compatible API, bringing rate limiting, cost controls, observability, and on-prem support to production AI applications.
How TrueFoundry Integrates with Langfuse
TrueFoundry's AI Gateway and Langfuse combine to give you enterprise-grade observability, governance, and cost control over every LLM request, and the setup takes only minutes.
Unified OpenAI-Compatible Endpoint
Point the Langfuse OpenAI client at TrueFoundry's gateway URL. TrueFoundry routes to any supported model (OpenAI, Anthropic, self-hosted, etc.), while Langfuse transparently captures each call, with no code changes required.
End-to-End Tracing & Metrics
Langfuse delivers:
- Full request/response logs (including system messages)
- Token usage (prompt, completion, total)
- Latency breakdowns per call
- Cost analytics by model and environment
Drill into any trace in seconds to optimize performance or debug regressions.
Production-Ready Controls
TrueFoundry augments your LLM stack with:
- Rate limiting & quotas per team or user
- Budget alerts & spend caps to prevent overruns
- Scoped API keys with RBAC for dev, staging, prod
- On-prem/VPC deployment for full data sovereignty
Prerequisites
Before integrating Langfuse with TrueFoundry, ensure you have:
- TrueFoundry Account: Create a TrueFoundry account with at least one model provider configured, and generate a Personal Access Token by following the instructions in the quick start and generating tokens guides
- Langfuse Account: Sign up for a free Langfuse Cloud account or self-host Langfuse
Step 1: Install Dependencies
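The integration needs the Langfuse and OpenAI Python packages. A typical install command (package names assume the standard PyPI distributions):

```shell
pip install langfuse openai
```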
Step 2: Set Up Environment Variables
Next, set up your Langfuse API keys. You can get these keys by signing up for a free Langfuse Cloud account or by self-hosting Langfuse. These environment variables are essential for the Langfuse client to authenticate and send data to your Langfuse project.
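A minimal sketch of the required environment variables, set from Python for convenience. The key values are placeholders; copy the real ones from your Langfuse project settings and your TrueFoundry account. The `TRUEFOUNDRY_API_KEY` variable name is an illustrative choice, not a required name:

```python
import os

# Langfuse project keys (copy from your Langfuse project settings)
os.environ["LANGFUSE_PUBLIC_KEY"] = "pk-lf-..."
os.environ["LANGFUSE_SECRET_KEY"] = "sk-lf-..."
# EU region shown; use https://us.cloud.langfuse.com for the US region,
# or your own host when self-hosting Langfuse
os.environ["LANGFUSE_HOST"] = "https://cloud.langfuse.com"

# TrueFoundry Personal Access Token, used to authenticate with the gateway
os.environ["TRUEFOUNDRY_API_KEY"] = "your-truefoundry-token"
```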
Step 3: Use Langfuse OpenAI Drop-in Replacement
Use Langfuse's OpenAI-compatible client to capture and trace every request routed through the TrueFoundry AI Gateway. Detailed steps for configuring the gateway and generating virtual LLM keys are available in the TrueFoundry documentation.
Step 4: Run an Example
Step 5: See Traces in Langfuse
After running the example, log in to Langfuse to view the detailed traces, including:
- Request parameters
- Response content
- Token usage and latency metrics
- Model information routed through the TrueFoundry gateway

Note: All other features of Langfuse will work as expected, including prompt management, evaluations, custom dashboards, and advanced observability features. The TrueFoundry integration seamlessly supports the full Langfuse feature set.
Advanced Integration with Langfuse Python SDK
Enhance your observability by combining the automatic tracing with additional Langfuse features.
Using the @observe Decorator
The @observe() decorator automatically wraps your functions and adds custom attributes to traces:
Debug Mode
Enable debug logging for troubleshooting:
import logging
logging.basicConfig(level=logging.DEBUG)
Learn More
- TrueFoundry AI Gateway Introduction: https://docs.truefoundry.com/gateway/intro-to-llm-gateway
- TrueFoundry Authentication Guide: https://docs.truefoundry.com/gateway/authentication