Deploying Your First Agno Agent on TrueFoundry
In this guide, we’ll show you how to deploy an Agno agent on TrueFoundry, a platform designed to simplify AI deployment with minimal DevOps or MLOps expertise. TrueFoundry automates infrastructure management, scaling, and monitoring, so you can focus on deriving insights rather than handling deployment complexities. With just a few clicks, you can transform natural language requests into SQL queries and dynamic charts, making data exploration seamless and intelligent, with no manual querying required.
If you would like to try this out directly, visit the TrueFoundry platform and navigate to Live Demos and agno-Streamlit for a live demo of our agent workflow.
Architecture Overview
This project consists of several key components working together:
Query Agent: An AI agent powered by agno that:
- Uses GPT-4o for natural language understanding
- Generates appropriate SQL queries for ClickHouse
- Executes the SQL query against a pre-configured database
- Returns the data in tabular format as input for the visualization agent
Visualization Agent: A second AI agent that:
- Determines the most appropriate visualization type given the data
- Generates plots using matplotlib/seaborn
- Handles formatting and styling of visualizations
FastAPI Backend: RESTful API that:
- Coordinates between agents using agno
- Manages asynchronous job processing
- Serves plot images and results
Streamlit Frontend: User interface that:
- Provides an intuitive query interface
- Displays real-time processing status
- Shows interactive visualizations

Data Flow
The user submits a natural language query through Streamlit.
- The Query Agent uses agno with GPT-4o to generate a SQL query for ClickHouse
- The query is executed against the ClickHouse database
- Results are returned in tabular format as input for the Visualization Agent
- The Visualization Agent generates visualizations and returns images for display
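Put concretely, the flow above can be sketched as a simple pipeline. The helper functions below are illustrative stubs standing in for the agents, not the project's actual API:

```python
# Illustrative sketch of the data flow; these helpers are hypothetical
# stand-ins for the agents, not the project's actual API.

def generate_sql(question: str) -> str:
    # In the real app, the Query Agent (agno + GPT-4o) produces this.
    return "SELECT model, day, cost FROM usage WHERE day >= today() - 7"

def run_query(sql: str) -> list[dict]:
    # In the real app, this executes against ClickHouse.
    return [{"model": "gpt-4o", "day": "2024-01-01", "cost": 1.23}]

def visualize(rows: list[dict]) -> str:
    # In the real app, the Visualization Agent picks a chart type and
    # renders it with matplotlib/seaborn, saving an image to disk.
    assert rows, "need data to plot"
    return "plot.png"

question = "Show me the cost trends by model over the last week"
sql = generate_sql(question)
rows = run_query(sql)
image = visualize(rows)
print(image)  # plot.png
```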
Getting Started
Clone the Repository
First, navigate to the TrueFoundry Getting Started Examples repository and clone it:
git clone https://github.com/truefoundry/getting-started-examples.git
Navigate to the agno Plot Agent Directory:
cd getting-started-examples/plot_agent/agno_plot_agent
Environment Setup
Create and activate a virtual environment:
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
Install dependencies:
pip install uv
uv pip install -r requirements.txt
Configure Environment Variables
Create a .env file:
# Truefoundry LLMGateway Configuration if using Truefoundry LLM Gateway for calling models
LLM_GATEWAY_BASE_URL=your_llm_gateway_base_url_here
LLM_GATEWAY_API_KEY=your_llm_gateway_api_key_here
# OPENAI API Configuration if not using Truefoundry LLM Gateway
OPENAI_API_KEY=<your_openai_api_key_here>
CLICKHOUSE_HOST=your_clickhouse_host
CLICKHOUSE_PORT=443
CLICKHOUSE_USER=your_user
CLICKHOUSE_PASSWORD=your_password
CLICKHOUSE_DATABASE=default
AGNO_VERBOSE=true
Note: When using the TrueFoundry LLM Gateway, the model ID format should be provider-name/model-name (e.g., openai-main/gpt-4o). Make sure your .env file contains the correct LLM Gateway credentials as shown in the Environment Configuration section.

To get ClickHouse credentials, create an account on ClickHouse Cloud, sign in, and create a service. After selecting the service, click the Connect button in the left sidebar to view the credentials. You can either create a database by uploading your own files or use a predefined one.
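Once you have the credentials, you can sanity-check them from Python. The snippet below is a sketch using the clickhouse-connect driver (an assumption of this guide, installed separately with pip install clickhouse-connect); the defaults mirror the .env values above:

```python
import os

# Assemble ClickHouse connection settings from the .env values above.
clickhouse_config = {
    "host": os.getenv("CLICKHOUSE_HOST", "localhost"),
    "port": int(os.getenv("CLICKHOUSE_PORT", "443")),
    "username": os.getenv("CLICKHOUSE_USER", "default"),
    "password": os.getenv("CLICKHOUSE_PASSWORD", ""),
    "database": os.getenv("CLICKHOUSE_DATABASE", "default"),
    "secure": True,  # ClickHouse Cloud endpoints require TLS on port 443
}

def get_client():
    # Imported lazily so the config above can be inspected without the
    # driver installed; requires `pip install clickhouse-connect`.
    import clickhouse_connect
    return clickhouse_connect.get_client(**clickhouse_config)

# Example usage (requires a live ClickHouse service):
# client = get_client()
# print(client.query("SELECT 1").result_rows)
```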

Agno Agent Implementation
from agno.agent import Agent
from agno.models.openai import OpenAIChat
from plot_tools import PlotTools
from query_tools import QueryTools
import os

# SQLQueryResult and VisualizationRequest are the structured-output
# (Pydantic) models defined elsewhere in the project.

# Query Agent for SQL generation - Using TrueFoundry LLM Gateway
sql_agent: Agent = Agent(
    model=OpenAIChat(
        id="openai-main/gpt-4o",  # Format: provider-name/model-name
        api_key=os.getenv("LLM_GATEWAY_API_KEY"),
        base_url=os.getenv("LLM_GATEWAY_BASE_URL"),
    ),
    description="",
    instructions=[],
    tools=[QueryTools()],  # ClickHouse query tools
    show_tool_calls=True,
    markdown=True,
    response_model=SQLQueryResult,
    structured_outputs=True,
)

# Visualization Agent - Using TrueFoundry LLM Gateway
plot_agent: Agent = Agent(
    model=OpenAIChat(
        id="openai-main/gpt-4o",
        api_key=os.getenv("LLM_GATEWAY_API_KEY"),
        base_url=os.getenv("LLM_GATEWAY_BASE_URL"),
    ),
    description="",
    instructions=[],
    tools=[PlotTools()],
    markdown=True,
    response_model=VisualizationRequest,
    structured_outputs=True,
)
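Note that SQLQueryResult and VisualizationRequest are the structured-output schemas the agents return. The repository defines its own versions; a minimal Pydantic sketch, with illustrative field names rather than the project's actual definitions, might look like:

```python
from pydantic import BaseModel

# Hypothetical shapes for the structured outputs referenced above;
# the repository defines the real versions of these models.

class SQLQueryResult(BaseModel):
    sql: str               # the generated ClickHouse query
    explanation: str = ""  # why the agent chose this query

class VisualizationRequest(BaseModel):
    chart_type: str        # e.g. "line", "bar", "scatter"
    x_column: str
    y_column: str
    title: str = ""

result = SQLQueryResult(sql="SELECT 1")
print(result.sql)  # SELECT 1
```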
Running the Services
- Start agno Workflow:
agno run
- Start FastAPI Backend:
python api.py
- Start Streamlit UI (new terminal):
streamlit run app.py

Deployment on TrueFoundry
Prerequisites
Install TrueFoundry CLI:
pip install -U "truefoundry"
Login to TrueFoundry:
tfy login --host "https://app.truefoundry.com"
Deployment Steps
- Navigate to Deployments section in TrueFoundry.

- Click Service at the bottom.
- Select your cluster workspace.
- You can deploy from your laptop, GitHub, or Docker. If deploying from your laptop, ensure you have completed the prerequisites above.
- The TrueFoundry platform will generate a deploy.py file and add it to your project. You’ll need to edit this file to add your environment variables: find the env section in the generated file and add your credentials:
env={
    "OPENAI_API_KEY": "your_openai_api_key",
    "CLICKHOUSE_HOST": "your_clickhouse_host",
    "CLICKHOUSE_PORT": "443",
    "CLICKHOUSE_USER": "your_user",
    "CLICKHOUSE_PASSWORD": "your_password",
    "CLICKHOUSE_DATABASE": "default",
    "AGNO_VERBOSE": "true"
},
Replace placeholders with your credentials and environment configurations.
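For orientation, the env section sits inside the Service definition in the generated deploy.py. The sketch below assumes TrueFoundry's Python deploy SDK; treat the class names, parameters, and workspace FQN as placeholders and defer to the file the platform generates for you:

```python
from truefoundry.deploy import Build, Port, PythonBuild, Service

# Hedged sketch of a generated deploy.py; names and parameters are
# placeholders, the platform-generated file is authoritative.
service = Service(
    name="agno-plot-agent",
    image=Build(build_spec=PythonBuild(command="python api.py")),
    ports=[Port(port=8000)],
    env={
        "OPENAI_API_KEY": "your_openai_api_key",
        "CLICKHOUSE_HOST": "your_clickhouse_host",
        "CLICKHOUSE_PORT": "443",
        "CLICKHOUSE_USER": "your_user",
        "CLICKHOUSE_PASSWORD": "your_password",
        "CLICKHOUSE_DATABASE": "default",
        "AGNO_VERBOSE": "true",
    },
)
service.deploy(workspace_fqn="your-workspace-fqn")
```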
Testing Deployment
Send a test query:
curl -X POST \
  -H "Content-Type: application/json" \
  -d '{"query": "Show me the cost trends by model over the last week"}' \
  https://agno-plot-agent-demo-8000.aws.demo.truefoundry.cloud/query
Successful response example:
{
"job_id": "1234-abcd-5678-efgh"
}
API Endpoints
Submit a Query:
curl -X POST http://localhost:8000/query -H "Content-Type: application/json" -d '{"query": "Your query here."}'
Check Query Status:
curl -X GET http://localhost:8000/status/{job_id}
Retrieve Plot Image:
curl -X GET http://localhost:8000/plot/{job_id} > plot.png
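These endpoints can also be called from Python. The following client is a sketch using only the standard library; the class and method names are this guide's, not part of the project:

```python
import json
import urllib.request

class PlotAgentClient:
    """Minimal client for the /query, /status, and /plot endpoints."""

    def __init__(self, base_url: str = "http://localhost:8000"):
        self.base_url = base_url.rstrip("/")

    def _url(self, path: str) -> str:
        return f"{self.base_url}/{path}"

    def submit_query(self, query: str) -> str:
        # POST the natural language query; the API returns a job_id.
        req = urllib.request.Request(
            self._url("query"),
            data=json.dumps({"query": query}).encode(),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["job_id"]

    def status(self, job_id: str) -> dict:
        with urllib.request.urlopen(self._url(f"status/{job_id}")) as resp:
            return json.load(resp)

    def download_plot(self, job_id: str, path: str = "plot.png") -> str:
        # Save the rendered plot image to disk.
        with urllib.request.urlopen(self._url(f"plot/{job_id}")) as resp:
            with open(path, "wb") as f:
                f.write(resp.read())
        return path
```

A typical loop submits the query, polls status until the job completes, then downloads the plot.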
Frontend and CORS
Configure CORS in FastAPI:
from fastapi.middleware.cors import CORSMiddleware
app.add_middleware(
CORSMiddleware,
allow_origins=["*"],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
Define Environment Variable in Streamlit:
import os
FASTAPI_ENDPOINT = os.getenv("FASTAPI_ENDPOINT", "http://localhost:8000")
Post-deployment Notes
- Test API connectivity from Streamlit to FastAPI.
- Update Streamlit’s .env file with the FastAPI endpoint.
- Confirm CORS settings allow Streamlit requests.
Monitor and manage your deployment through TrueFoundry by:
- Viewing logs
- Monitoring resource usage
- Setting auto-scaling rules
- Checking backend health (/health), API documentation (/docs), and metrics at /metrics

Add traces to your agent
Tracing helps you understand what happens under the hood when an agent run is called. Using TrueFoundry’s tracing functionality, you can see the path taken, the tool calls made, the context used, and the latency incurred, all by adding a few lines of code.
First, install the Traceloop SDK:
pip install traceloop-sdk
Then add the necessary environment variables to enable tracing:
"AGNO_VERBOSE": "true",  # For detailed agno logs
"TRACELOOP_BASE_URL": "<your_host_name>/api/otel",
"TRACELOOP_HEADERS": "Authorization=Bearer%20<your_tfy_api_key>"
Finally, in the codebase where you define your agent, add these two lines to enable tracing:
from traceloop.sdk import Traceloop
Traceloop.init(app_name="agno")

With these steps, your agno agent workflow is now successfully deployed on TrueFoundry!