==> Running 'chainlit run agentic_chatbot.py --port 10000 --host 0.0.0.0'
==> No open ports detected, continuing to scan...
==> Docs on specifying a port: https://render.com/docs/web-services#port-binding
2026-03-13 01:17:56 - Anonymized telemetry enabled. See https://docs.trychroma.com/telemetry for more information.
2026-03-13 01:17:57 - Your app is available at http://0.0.0.0:10000
==> Your service is live 🎉
==>
==> ///////////////////////////////////////////////////////////
==>
==> Available at your primary URL https://ai-agents-in-a-nutshell.onrender.com
==>
==> ///////////////////////////////////////////////////////////

import os
import chainlit as cl
import dotenv
from agents import InputGuardrailTripwireTriggered, Runner, SQLiteSession
from nutrition_agent import exa_search_mcp, nutrition_agent
from openai.types.responses import ResponseTextDeltaEvent

dotenv.load_dotenv()


@cl.on_chat_start
async def on_chat_start():
    session = SQLiteSession("conversation_history")
    cl.user_session.set("agent_session", session)
    # This is the only change in this file compared to the chatbot/agentic_chatbot.py file
    await exa_search_mcp.connect()


def croissant_upsell(text: str) -> str:
    french_triggers = [
        "france",
        "french",
        "paris",
        "niçoise",
        "provence",
        "dijon",
        "bordeaux",
        "baguette",
    ]
    if any(word in text.lower() for word in french_triggers):
        text += "\n\n🥐 For only 120 calories more, may we interest you in a croissant with that?"
    return text


@cl.on_message
async def on_message(message: cl.Message):
    session = cl.user_session.get("agent_session")
    msg = cl.Message(content="")
    try:
        result = Runner.run_streamed(
            nutrition_agent,
            message.content,
            session=session,
        )
        async for event in result.stream_events():
            # Stream final message text to screen
            if event.type == "raw_response_event" and isinstance(
                event.data, ResponseTextDeltaEvent
            ):
                await msg.stream_token(token=event.data.delta)
                print(event.data.delta, end="", flush=True)
            elif (
                event.type == "raw_response_event"
                and hasattr(event.data, "item")
                and hasattr(event.data.item, "type")
                and event.data.item.type == "function_call"
                and len(event.data.item.arguments) > 0
            ):
                async with cl.Step(name=event.data.item.name, type="tool") as step:
                    step.input = event.data.item.arguments
                    print(
                        f"\nTool call: {event.data.item.name} "
                        f"with args: {event.data.item.arguments}"
                    )
        msg.content = croissant_upsell(msg.content)
        await msg.update()
    except InputGuardrailTripwireTriggered:
        msg.content = "Sorry, I can only answer food-related questions."
        await msg.update()


@cl.password_auth_callback
def auth_callback(username: str, password: str):
    if (username, password) == (
        os.getenv("CHAINLIT_USERNAME"),
        os.getenv("CHAINLIT_PASSWORD"),
    ):
        return cl.User(
            identifier="Student",
            metadata={"role": "student", "provider": "credentials"},
        )
    else:
        return None

Summary end-to-end
The environment work is part of the project, not a distraction from it. Everyone who builds real systems hits exactly what you hit: tools, deployment, secrets, ports, data, Git, and “why is this different locally vs the cloud?”. That’s the real curriculum.
Here’s a clean, brief summary you could keep with the project or share with Larry.
AI Agents in a Nutshell — Project Summary
Goal
Build and deploy a nutrition AI assistant that can:
- answer food-related questions
- retrieve calorie data from a dataset
- block off-topic questions using guardrails
- demonstrate an AI agent architecture
- run locally and deploy to the cloud (Render)
Architecture Overview
User
↓
Chainlit UI
↓
Runner (OpenAI Agents SDK)
↓
Input Guardrails
↓
Agent reasoning (LLM)
↓
Tool calls
↓
Vector database (ChromaDB)
↓
Final response
↓
Post-processing hook
↓
Response to user
Key idea:
The Runner orchestrates everything.
The agent itself is small.
agent =
instructions
+ tools
+ guardrails
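That sum can be made concrete. The following is a toy stand-in, not the real Agents SDK class, meant only to show how little state the agent itself carries; all the actual logic lives in the Runner:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    """Toy stand-in: an agent is just a bundle of configuration.

    The orchestration (model calls, tool dispatch, guardrail checks)
    happens in the Runner, which consumes this bundle.
    """
    name: str
    instructions: str
    tools: list[Callable] = field(default_factory=list)
    guardrails: list[Callable] = field(default_factory=list)

def lookup_calories(food: str) -> str:
    """Hypothetical tool; the real one queries ChromaDB."""
    return f"calories for {food}"

nutrition_agent = Agent(
    name="Nutrition Agent",
    instructions="Answer only food and calorie questions.",
    tools=[lookup_calories],
)
```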
Key Components
1. Chainlit (UI Layer)
Chainlit provides:
- chat interface
- authentication
- streaming responses
- tool visualization
It acts as the front end for the agent.
2. OpenAI Agents SDK
The Runner coordinates:
- sending prompts to the model
- executing tools
- applying guardrails
- streaming responses
It is effectively the orchestration engine.
3. Guardrails
Input guardrails restrict the system to its intended domain.
Example:
"What is the capital of France?"
Response:
I only handle nutrition and calorie information.
This prevents the model from drifting outside its purpose.
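The tripwire behaviour can be sketched without the SDK. This is a hand-rolled illustration of the concept (the real input guardrail runs an LLM classifier and raises InputGuardrailTripwireTriggered), using a crude keyword check:

```python
ALLOWED_TOPICS = ("calorie", "food", "nutrition", "diet", "meal")

class TripwireTriggered(Exception):
    """Raised when input falls outside the allowed domain."""

def input_guardrail(user_message: str) -> str:
    """Let the message through only if it looks food-related."""
    if not any(word in user_message.lower() for word in ALLOWED_TOPICS):
        raise TripwireTriggered("off-topic input")
    return user_message
```

The caller wraps the run in a try/except and replies with the canned refusal when the tripwire fires, exactly as the on_message handler in the source does.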
4. Vector Database (ChromaDB)
A dataset of food items and calories is converted into embeddings.
Each food becomes a vector in a 384-dimensional space.
Example entry:
Peanut Butter
589 calories per 100g
These vectors allow the system to perform semantic search rather than keyword search.
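Semantic search reduces to a nearest-neighbour lookup by cosine similarity between vectors. A toy sketch with hand-made 3-dimensional vectors (the real embeddings here are 384-dimensional and produced by an embedding model, not written by hand):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Pretend embeddings: similar foods point in similar directions.
index = {
    "peanut butter": [0.9, 0.1, 0.0],
    "almond butter": [0.85, 0.15, 0.0],
    "celery":        [0.0, 0.2, 0.9],
}

def semantic_search(query_vec: list[float]) -> str:
    """Return the food whose embedding is closest to the query vector."""
    return max(index, key=lambda food: cosine_similarity(query_vec, index[food]))
```

A query vector near the "nut butter" direction retrieves peanut butter even if the query text never contained that exact phrase, which is the point of searching by meaning rather than by keyword.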
5. Dataset
The nutrition dataset was built from:
- Kaggle calorie dataset
- nutrition Q&A dataset
During the build process a script runs:
load_calories.py
This:
- Reads the CSV
- Converts entries into embeddings
- Loads them into ChromaDB
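A rough sketch of those three steps (the column names and the embedding function are assumptions; a stub stands in for the real embedding model, and the final ChromaDB write is only indicated in a comment):

```python
import csv
import io

def embed(text: str) -> list[float]:
    """Stub embedding; the real loader calls an embedding model."""
    return [float(ord(c)) for c in text[:3]]

def load_calories(csv_text: str) -> list[dict]:
    """Parse the calorie CSV and attach an embedding to each food entry."""
    records = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        doc = f"{row['food']}: {row['calories']} calories per 100g"
        records.append({"id": row["food"], "document": doc, "embedding": embed(doc)})
    # In the real script, these records are written into a ChromaDB collection.
    return records

sample = "food,calories\nPeanut Butter,589\nCelery,14\n"
```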
Result:
2225 foods indexed
6. Tool Calling
The agent can call a tool when needed.
Example:
User asks:
How many calories are in peanut butter?
Flow:
Agent → tool call
Tool → vector search
Tool → returns calorie data
Agent → formats response
7. Post-Processing Hook
A custom hook modifies the final response.
Example feature:
Croissant upsell
If the response references French cuisine:
🥐 For only 120 calories more, may we interest you in a croissant with that?
This demonstrates how responses can be modified after the model finishes.
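The hook is an ordinary function, so it can be exercised on its own. Here it is redefined so the snippet is self-contained (with a shortened trigger list; the full list in the source also includes niçoise, provence, dijon, and bordeaux):

```python
def croissant_upsell(text: str) -> str:
    """Append the upsell whenever the response mentions French cuisine."""
    french_triggers = ["france", "french", "paris", "baguette"]
    if any(word in text.lower() for word in french_triggers):
        text += "\n\n🥐 For only 120 calories more, may we interest you in a croissant with that?"
    return text

print(croissant_upsell("A baguette has about 270 calories per 100g."))
```

Because the hook runs after streaming completes, the upsell appears only in the final updated message, not in the token-by-token stream.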
Local Development
Environment setup included:
- Python virtual environment
- installing dependencies
- building the Chroma database locally
- running Chainlit locally
Command used:
chainlit run agentic_chatbot.py --port 10000
Deployment (Render)
Deployment required:
Build step
pip install -r requirements.txt
python multi_agent_chatbot/load_calories.py
This ensures the vector database is created during deployment.
Start command
chainlit run agentic_chatbot.py --port 10000 --host 0.0.0.0
Required secrets
Environment variables include:
OPENAI_API_KEY
EXA_API_KEY
CHAINLIT_USERNAME
CHAINLIT_PASSWORD
CHAINLIT_AUTH_SECRET
Lessons Learned
Key practical lessons from the project:
1. AI systems are orchestration problems
The LLM is only one part.
The real system includes:
- tools
- data
- guardrails
- orchestration
- deployment
2. Vector databases enable semantic search
Embedding food descriptions into vectors allows the system to retrieve information based on meaning.
3. Guardrails are essential
Without them, agents drift outside their intended domain.
4. Deployment changes everything
Local environments and cloud environments behave differently.
Handling:
- secrets
- ports
- build steps
- dataset initialization
is part of building production systems.
Final Result
The deployed system:
- answers nutrition questions
- retrieves calorie information
- blocks off-topic requests
- performs semantic search
- demonstrates agent orchestration
- runs locally and in the cloud
Accessible at:
https://ai-agents-in-a-nutshell.onrender.com
This might take a minute to load if it hasn't been used in a while.
Chainlit provides the interface, the Runner orchestrates the workflow, the Agent reasons, the Tool retrieves nutrition data from ChromaDB, and a final post-processing hook adds the croissant upsell for French-themed responses.
AI Agents in a Nutshell — Architecture
+----------------------+
| User |
| Browser / Tester |
+----------+-----------+
|
v
+----------------------+
| Chainlit |
| UI + Auth + Chat |
+----------+-----------+
|
v
+----------------------+
| OpenAI Agents SDK |
| Runner |
| Orchestration |
+----------+-----------+
|
+-----+-----+
| |
v v
+---------+ +------------------+
| Input | | Agent |
| Guardrail| | LLM Reasoning |
+----+----+ +---------+--------+
| |
| v
| +------------+
| | Tools |
| | calorie |
| | lookup |
| +------+-----+
| |
+---------------------+
|
v
+------------------+
| ChromaDB |
| nutrition_db |
| vector search |
+--------+---------+
|
v
+------------------+
| Post-Processing |
| Croissant Upsell |
+--------+---------+
|
v
+------------------+
| Final Response |
+------------------+
Final Note
The project also demonstrated that even simple AI agents require coordination between:
- data
- tools
- models
- infrastructure
- user interfaces
Understanding how these components interact is more valuable than the individual technologies themselves.
And honestly — you should feel good about this one. You didn’t just follow a notebook. You stood the system up end-to-end.
That’s the difference between watching a demo and building a system.