In this tutorial, we take a deep dive into nanobot, the ultra-lightweight personal AI agent framework from HKUDS that packs full agent capabilities into roughly 4,000 lines of Python. Rather than simply installing and running it out of the box, we crack open the hood and manually recreate each of its core subsystems: the agent loop, tool execution, memory persistence, skills loading, session management, subagent spawning, and cron scheduling, so we understand exactly how they work. We wire everything up with OpenAI’s gpt-4o-mini as our LLM provider, enter our API key securely through the terminal (never exposing it in notebook output), and progressively build from a single tool-calling loop all the way to a multi-step research pipeline that reads and writes files, stores long-term memories, and delegates tasks to concurrent background workers. By the end, we don’t just know how to use nanobot; we understand how to extend it with custom tools, skills, and our own agent architectures.
```python
import sys
import os
import subprocess

def section(title, emoji="🔹"):
    """Pretty-print a section header."""
    width = 72
    print(f"\n{'═' * width}")
    print(f" {emoji} {title}")
    print(f"{'═' * width}\n")

def info(msg):
    print(f" ℹ️ {msg}")

def success(msg):
    print(f" ✅ {msg}")

def code_block(code):
    print(f" ┌─────────────────────────────────────────────────")
    for line in code.strip().split("\n"):
        print(f" │ {line}")
    print(f" └─────────────────────────────────────────────────")

section("STEP 1 · Installing nanobot-ai & Dependencies", "📦")
info("Installing nanobot-ai from PyPI (latest stable)...")
subprocess.check_call([
    sys.executable, "-m", "pip", "install", "-q",
    "nanobot-ai", "openai", "rich", "httpx"
])
success("nanobot-ai installed successfully!")

import importlib.metadata
nanobot_version = importlib.metadata.version("nanobot-ai")
print(f" 📌 nanobot-ai version: {nanobot_version}")

section("STEP 2 · Secure OpenAI API Key Input", "🔑")
info("Your API key will NOT be printed or stored in notebook output.")
info("It is held only in memory for this session.\n")
try:
    from google.colab import userdata
    OPENAI_API_KEY = userdata.get("OPENAI_API_KEY")
    if not OPENAI_API_KEY:
        raise ValueError("Not set in Colab secrets")
    success("Loaded API key from Colab Secrets ('OPENAI_API_KEY').")
    info("Tip: You can set this in Colab → 🔑 Secrets panel on the left sidebar.")
except Exception:
    import getpass
    OPENAI_API_KEY = getpass.getpass("Enter your OpenAI API key: ")
    success("API key captured securely via terminal input.")

os.environ["OPENAI_API_KEY"] = OPENAI_API_KEY

import openai
client = openai.OpenAI(api_key=OPENAI_API_KEY)
try:
    client.models.list()
    success("OpenAI API key validated — connection successful!")
except Exception as e:
    print(f" ❌ API key validation failed: {e}")
    print(" Please restart and enter a valid key.")
    sys.exit(1)

section("STEP 3 · Configuring nanobot for OpenAI", "⚙️")
import json
from pathlib import Path

NANOBOT_HOME = Path.home() / ".nanobot"
NANOBOT_HOME.mkdir(parents=True, exist_ok=True)
WORKSPACE = NANOBOT_HOME / "workspace"
WORKSPACE.mkdir(parents=True, exist_ok=True)
(WORKSPACE / "memory").mkdir(parents=True, exist_ok=True)

config = {
    "providers": {
        "openai": {"apiKey": OPENAI_API_KEY}
    },
    "agents": {
        "defaults": {
            "model": "openai/gpt-4o-mini",
            "maxTokens": 4096,
            "workspace": str(WORKSPACE)
        }
    },
    "tools": {
        "restrictToWorkspace": True
    }
}
config_path = NANOBOT_HOME / "config.json"
config_path.write_text(json.dumps(config, indent=2))
success(f"Config written to {config_path}")

agents_md = WORKSPACE / "AGENTS.md"
agents_md.write_text(
    "# Agent Instructions\n\n"
    "You are nanobot 🐈, an ultra-lightweight personal AI assistant.\n"
    "You are helpful, concise, and use tools when needed.\n"
    "Always explain your reasoning step by step.\n"
)
soul_md = WORKSPACE / "SOUL.md"
soul_md.write_text(
    "# Personality\n\n"
    "- Friendly and approachable\n"
    "- Technically precise\n"
    "- Uses emoji sparingly for warmth\n"
)
user_md = WORKSPACE / "USER.md"
user_md.write_text(
    "# User Profile\n\n"
    "- The user is exploring the nanobot framework.\n"
    "- They are interested in AI agent architectures.\n"
)
memory_md = WORKSPACE / "memory" / "MEMORY.md"
memory_md.write_text("# Long-term Memory\n\n_No memories stored yet._\n")
success("Workspace bootstrap files created:")
for f in [agents_md, soul_md, user_md, memory_md]:
    print(f" 📄 {f.relative_to(NANOBOT_HOME)}")

section("STEP 4 · nanobot Architecture Deep Dive", "🏗️")
info("""nanobot is organized into 7 subsystems in ~4,000 lines of code:

┌──────────────────────────────────────────────────────────┐
│                   USER INTERFACES                        │
│          CLI · Telegram · WhatsApp · Discord             │
└──────────────────┬───────────────────────────────────────┘
                   │ InboundMessage / OutboundMessage
┌──────────────────▼───────────────────────────────────────┐
│                   MESSAGE BUS                            │
│        publish_inbound() / publish_outbound()            │
└──────────────────┬───────────────────────────────────────┘
                   │
┌──────────────────▼───────────────────────────────────────┐
│                AGENT LOOP (loop.py)                      │
│  ┌─────────┐   ┌──────────┐   ┌────────────────────┐     │
│  │ Context │ → │   LLM    │ → │  Tool Execution    │     │
│  │ Builder │   │   Call   │   │  (if tool_calls)   │     │
│  └─────────┘   └──────────┘   └────────┬───────────┘     │
│       ▲                                │ loop back       │
│       │ ◄──────────────────────────────┘ until done      │
│  ┌────┴────┐   ┌──────────┐   ┌────────────────────┐     │
│  │ Memory  │   │  Skills  │   │   Subagent Mgr     │     │
│  │ Store   │   │  Loader  │   │   (spawn tasks)    │     │
│  └─────────┘   └──────────┘   └────────────────────┘     │
└──────────────────────────────────────────────────────────┘
                   │
┌──────────────────▼───────────────────────────────────────┐
│                LLM PROVIDER LAYER                        │
│   OpenAI · Anthropic · OpenRouter · DeepSeek · ...       │
└──────────────────────────────────────────────────────────┘

The Agent Loop iterates up to 40 times (configurable):
1. ContextBuilder assembles system prompt + memory + skills + history
2. LLM is called with tools definitions
3. If response has tool_calls → execute tools, append results, loop
4. If response is plain text → return as final answer
""")
```
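The message bus in this architecture is easy to skim past but central: interfaces never call the agent directly, they publish messages to the bus and consume replies from it. A minimal sketch of that pattern follows; the class and method names are assumptions modeled on the diagram, not nanobot's actual code.

```python
import asyncio
from dataclasses import dataclass

@dataclass
class InboundMessage:
    channel: str   # e.g. "cli", "telegram"
    chat_id: str
    text: str

@dataclass
class OutboundMessage:
    channel: str
    chat_id: str
    text: str

class MessageBus:
    """Two queues decouple chat interfaces from the agent loop."""
    def __init__(self):
        self.inbound: asyncio.Queue = asyncio.Queue()
        self.outbound: asyncio.Queue = asyncio.Queue()

    async def publish_inbound(self, msg: InboundMessage):
        await self.inbound.put(msg)

    async def publish_outbound(self, msg: OutboundMessage):
        await self.outbound.put(msg)

async def demo():
    bus = MessageBus()
    await bus.publish_inbound(InboundMessage("cli", "user1", "hello"))
    msg = await bus.inbound.get()          # the agent loop would consume this
    reply = OutboundMessage(msg.channel, msg.chat_id, f"echo: {msg.text}")
    await bus.publish_outbound(reply)      # an interface would deliver this
    return await bus.outbound.get()

out = asyncio.run(demo())
print(out.text)  # → echo: hello
```

Because each side only sees a queue, a Telegram gateway and the CLI can feed the same agent loop without knowing about each other, which is what makes the multi-channel design cheap.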
We set up the full foundation of the tutorial by importing the required modules, defining helper functions for clean section display, and installing the nanobot dependencies inside Google Colab. We then securely load and validate the OpenAI API key so the rest of the notebook can interact with the model without exposing credentials in the notebook output. After that, we configure the nanobot workspace, create the core bootstrap files (AGENTS.md, SOUL.md, USER.md, and MEMORY.md), and study the high-level architecture so we understand how the framework is organized before moving into implementation.
```python
section("STEP 5 · The Agent Loop — Core Concept in Action", "🔄")
info("We'll manually recreate nanobot's agent loop pattern using OpenAI.")
info("This is exactly what loop.py does internally.\n")

import json as _json
import datetime

TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "get_current_time",
            "description": "Get the current date and time.",
            "parameters": {"type": "object", "properties": {}, "required": []}
        }
    },
    {
        "type": "function",
        "function": {
            "name": "calculate",
            "description": "Evaluate a mathematical expression.",
            "parameters": {
                "type": "object",
                "properties": {
                    "expression": {
                        "type": "string",
                        "description": "Math expression to evaluate, e.g. '2**10 + 42'"
                    }
                },
                "required": ["expression"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "read_file",
            "description": "Read the contents of a file in the workspace.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {
                        "type": "string",
                        "description": "Relative file path within the workspace"
                    }
                },
                "required": ["path"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "write_file",
            "description": "Write content to a file in the workspace.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {"type": "string", "description": "Relative file path"},
                    "content": {"type": "string", "description": "Content to write"}
                },
                "required": ["path", "content"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "save_memory",
            "description": "Save a fact to the agent's long-term memory.",
            "parameters": {
                "type": "object",
                "properties": {
                    "fact": {"type": "string", "description": "The fact to remember"}
                },
                "required": ["fact"]
            }
        }
    }
]

def execute_tool(name: str, arguments: dict) -> str:
    """Execute a tool call — mirrors nanobot's ToolRegistry.execute()."""
    if name == "get_current_time":
        return datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    elif name == "calculate":
        expr = arguments.get("expression", "")
        try:
            # Restricted eval: no builtins except a safe whitelist
            result = eval(expr, {"__builtins__": {}},
                          {"abs": abs, "round": round, "min": min, "max": max})
            return str(result)
        except Exception as e:
            return f"Error: {e}"
    elif name == "read_file":
        fpath = WORKSPACE / arguments.get("path", "")
        if fpath.exists():
            return fpath.read_text()[:4000]
        return f"Error: File not found — {arguments.get('path')}"
    elif name == "write_file":
        fpath = WORKSPACE / arguments.get("path", "")
        fpath.parent.mkdir(parents=True, exist_ok=True)
        fpath.write_text(arguments.get("content", ""))
        return f"Successfully wrote {len(arguments.get('content', ''))} chars to {arguments.get('path')}"
    elif name == "save_memory":
        fact = arguments.get("fact", "")
        mem_file = WORKSPACE / "memory" / "MEMORY.md"
        existing = mem_file.read_text()
        timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M")
        mem_file.write_text(existing + f"\n- [{timestamp}] {fact}\n")
        return f"Memory saved: {fact}"
    return f"Unknown tool: {name}"

def agent_loop(user_message: str, max_iterations: int = 10, verbose: bool = True):
    """
    Recreates nanobot's AgentLoop._process_message() logic.

    The loop:
    1. Build context (system prompt + bootstrap files + memory)
    2. Call LLM with tools
    3. If tool_calls → execute → append results → loop
    4. If text response → return final answer
    """
    system_parts = []
    for md_file in ["AGENTS.md", "SOUL.md", "USER.md"]:
        fpath = WORKSPACE / md_file
        if fpath.exists():
            system_parts.append(fpath.read_text())
    mem_file = WORKSPACE / "memory" / "MEMORY.md"
    if mem_file.exists():
        system_parts.append(f"\n## Your Memory\n{mem_file.read_text()}")
    system_prompt = "\n\n".join(system_parts)

    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message}
    ]
    if verbose:
        print(f" 📨 User: {user_message}")
        print(f" 🧠 System prompt: {len(system_prompt)} chars "
              f"(from {len(system_parts)} bootstrap files)")
        print()

    for iteration in range(1, max_iterations + 1):
        if verbose:
            print(f" ── Iteration {iteration}/{max_iterations} ──")
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            tools=TOOLS,
            tool_choice="auto",
            max_tokens=2048
        )
        choice = response.choices[0]
        message = choice.message
        if message.tool_calls:
            if verbose:
                print(f" 🔧 LLM requested {len(message.tool_calls)} tool call(s):")
            messages.append(message.model_dump())
            for tc in message.tool_calls:
                fname = tc.function.name
                args = _json.loads(tc.function.arguments) if tc.function.arguments else {}
                if verbose:
                    print(f"   → {fname}({_json.dumps(args, ensure_ascii=False)[:80]})")
                result = execute_tool(fname, args)
                if verbose:
                    print(f"   ← {result[:100]}{'...' if len(result) > 100 else ''}")
                messages.append({
                    "role": "tool",
                    "tool_call_id": tc.id,
                    "content": result
                })
            if verbose:
                print()
        else:
            final = message.content or ""
            if verbose:
                print(f" 💬 Agent: {final}\n")
            return final
    return "⚠️ Max iterations reached without a final response."

print("─" * 60)
print(" DEMO 1: Time-aware calculation with tool chaining")
print("─" * 60)
result1 = agent_loop(
    "What is the current time? Also, calculate 2^20 + 42 for me."
)

print("─" * 60)
print(" DEMO 2: File creation + memory storage")
print("─" * 60)
result2 = agent_loop(
    "Write a haiku about AI agents to a file called 'haiku.txt'. "
    "Then remember that I enjoy poetry about technology."
)
```
We manually recreate the heart of nanobot by defining the tool schemas, implementing their execution logic, and building the iterative agent loop that connects the LLM to tools. We assemble the prompt from the workspace files and memory, send the conversation to the model, detect tool calls, execute them, append the results back into the conversation, and keep looping until the model returns a final answer. We then test this mechanism with practical examples that involve time lookups, calculations, file writing, and memory saving, so we can see the loop operate exactly like the internal nanobot flow.
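The whole mechanism hinges on the exact shape of the entries the loop appends to `messages`, so it helps to see one tool round laid out. Below is a hypothetical transcript; the IDs, arguments, and the `is_final` helper are illustrative, not part of nanobot.

```python
import json

# Hypothetical transcript of one tool round. The call ID and arguments are
# made up for illustration; only the role/shape of each entry matters.
messages = [
    {"role": "system", "content": "You are nanobot."},
    {"role": "user", "content": "What is 2**10?"},
    # 1) An assistant turn that requests a tool instead of answering:
    {"role": "assistant", "content": None, "tool_calls": [
        {"id": "call_1", "type": "function",
         "function": {"name": "calculate",
                      "arguments": json.dumps({"expression": "2**10"})}}]},
    # 2) The tool result, linked back via tool_call_id:
    {"role": "tool", "tool_call_id": "call_1", "content": "1024"},
    # 3) A plain-text assistant turn -> the loop terminates:
    {"role": "assistant", "content": "2**10 is 1024."},
]

def is_final(msg: dict) -> bool:
    """The loop's exit condition: an assistant turn with no tool_calls."""
    return msg["role"] == "assistant" and not msg.get("tool_calls")

print([is_final(m) for m in messages if m["role"] == "assistant"])  # → [False, True]
```

Every `tool` entry must answer a specific `tool_call_id` from the preceding assistant turn; the API rejects orphaned tool results, which is why the loop appends `message.model_dump()` before executing anything.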
```python
section("STEP 6 · Memory System — Persistent Agent Memory", "🧠")
info("""nanobot's memory system (memory.py) uses two storage mechanisms:

1. MEMORY.md     — Long-term facts (always loaded into context)
2. YYYY-MM-DD.md — Daily journal entries (loaded for recent days)

Memory consolidation runs periodically to summarize and compress old
entries, keeping the context window manageable.
""")
mem_content = (WORKSPACE / "memory" / "MEMORY.md").read_text()
print(" 📂 Current MEMORY.md contents:")
print(" ┌─────────────────────────────────────────────")
for line in mem_content.strip().split("\n"):
    print(f" │ {line}")
print(" └─────────────────────────────────────────────\n")

today = datetime.datetime.now().strftime("%Y-%m-%d")
daily_file = WORKSPACE / "memory" / f"{today}.md"
daily_file.write_text(
    f"# Daily Log — {today}\n\n"
    "- User ran the nanobot advanced tutorial\n"
    "- Explored agent loop, tools, and memory\n"
    "- Created a haiku about AI agents\n"
)
success(f"Daily journal created: memory/{today}.md")

print("\n 📁 Workspace contents:")
for item in sorted(WORKSPACE.rglob("*")):
    if item.is_file():
        rel = item.relative_to(WORKSPACE)
        size = item.stat().st_size
        print(f" {'📄' if item.suffix == '.md' else '📝'} {rel} ({size} bytes)")

section("STEP 7 · Skills System — Extending Agent Capabilities", "🎯")
info("""nanobot's SkillsLoader (skills.py) reads Markdown files from the
skills/ directory. Each skill has:
- A name and description (for the LLM to decide when to use it)
- Instructions the LLM follows when the skill is activated
- Some skills are 'always loaded'; others are loaded on demand

Let's create a custom skill and see how the agent uses it.
""")
skills_dir = WORKSPACE / "skills"
skills_dir.mkdir(exist_ok=True)

data_skill = skills_dir / "data_analyst.md"
data_skill.write_text("""# Data Analyst Skill

## Description
Analyze data, compute statistics, and provide insights from numbers.

## Instructions
When asked to analyze data:
1. Identify the data type and structure
2. Compute relevant statistics (mean, median, range, std dev)
3. Look for patterns and outliers
4. Present findings in a clear, structured format
5. Suggest follow-up questions

## Always Available
false
""")

review_skill = skills_dir / "code_reviewer.md"
review_skill.write_text("""# Code Reviewer Skill

## Description
Review code for bugs, security issues, and best practices.

## Instructions
When reviewing code:
1. Check for common bugs and logic errors
2. Identify security vulnerabilities
3. Suggest performance improvements
4. Evaluate code style and readability
5. Rate the code quality on a 1-10 scale

## Always Available
true
""")
success("Custom skills created:")
for f in skills_dir.iterdir():
    print(f" 🎯 {f.name}")

print("\n 🧪 Testing skill-aware agent interaction:")
print(" " + "─" * 56)
skills_context = "\n\n## Available Skills\n"
for skill_file in skills_dir.glob("*.md"):
    content = skill_file.read_text()
    skills_context += f"\n### {skill_file.stem}\n{content}\n"

# Prepend the skills context so the agent actually sees the skill instructions
result3 = agent_loop(
    skills_context + "\n\n"
    "Review this Python code for issues:\n\n"
    "```python\n"
    "def get_user(id):\n"
    "    query = f'SELECT * FROM users WHERE id = {id}'\n"
    "    result = db.execute(query)\n"
    "    return result\n"
    "```"
)
```
We move into the persistent memory system by inspecting the long-term memory file, creating a daily journal entry, and reviewing how the workspace evolves after earlier interactions. We then extend the agent with a skills system by creating markdown-based skill files that describe specialized behaviors such as data analysis and code review. Finally, we simulate how skill-aware prompting works by exposing these skills to the agent and asking it to review a Python function, which helps us see how nanobot can be guided through modular capability descriptions.
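The consolidation step mentioned in Step 6 is worth sketching too. The real memory.py summarizes with the LLM; the toy version below only illustrates the keep-memory-bounded idea with a simple entry cutoff, so the function name and policy are our own assumptions.

```python
def consolidate_memory(text: str, max_entries: int = 50) -> str:
    """Toy consolidation: keep the header plus the newest bullet entries.
    (nanobot summarizes old entries with the LLM; this only shows the
    'keep the file bounded' idea.)"""
    lines = text.splitlines()
    header = [l for l in lines if not l.startswith("- ")]
    entries = [l for l in lines if l.startswith("- ")]
    kept = entries[-max_entries:]          # newest entries survive verbatim
    dropped = len(entries) - len(kept)
    if dropped:
        # A real implementation would replace this marker with an LLM summary
        kept.insert(0, f"- [consolidated] {dropped} older entries summarized")
    return "\n".join(header + kept) + "\n"

sample = "# Long-term Memory\n" + "\n".join(f"- fact {i}" for i in range(60))
out = consolidate_memory(sample, max_entries=10)
print(out.splitlines()[1])  # → - [consolidated] 50 older entries summarized
```

The important design point survives the simplification: MEMORY.md is always injected into the system prompt, so its size is a direct tax on every request, and some compaction policy is mandatory.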
```python
section("STEP 8 · Custom Tool Creation — Extending the Agent", "🔧")
info("""nanobot's tool system uses a ToolRegistry with a simple interface.
Each tool needs:
- A name and description
- A JSON Schema for parameters
- An execute() method

Let's create custom tools and wire them into our agent loop.
""")
import random

CUSTOM_TOOLS = [
    {
        "type": "function",
        "function": {
            "name": "roll_dice",
            "description": "Roll one or more dice with a given number of sides.",
            "parameters": {
                "type": "object",
                "properties": {
                    "num_dice": {"type": "integer", "description": "Number of dice to roll", "default": 1},
                    "sides": {"type": "integer", "description": "Number of sides per die", "default": 6}
                },
                "required": []
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "text_stats",
            "description": "Compute statistics about a text: word count, char count, sentence count, reading time.",
            "parameters": {
                "type": "object",
                "properties": {
                    "text": {"type": "string", "description": "The text to analyze"}
                },
                "required": ["text"]
            }
        }
    },
    {
        "type": "function",
        "function": {
            "name": "generate_password",
            "description": "Generate a random secure password.",
            "parameters": {
                "type": "object",
                "properties": {
                    "length": {"type": "integer", "description": "Password length", "default": 16}
                },
                "required": []
            }
        }
    }
]

_original_execute = execute_tool

def execute_tool_extended(name: str, arguments: dict) -> str:
    if name == "roll_dice":
        n = arguments.get("num_dice", 1)
        s = arguments.get("sides", 6)
        rolls = [random.randint(1, s) for _ in range(n)]
        return f"Rolled {n}d{s}: {rolls} (total: {sum(rolls)})"
    elif name == "text_stats":
        text = arguments.get("text", "")
        words = len(text.split())
        chars = len(text)
        sentences = text.count('.') + text.count('!') + text.count('?')
        reading_time = round(words / 200, 1)
        return _json.dumps({
            "words": words,
            "characters": chars,
            "sentences": max(sentences, 1),
            "reading_time_minutes": reading_time
        })
    elif name == "generate_password":
        import string
        length = arguments.get("length", 16)
        chars = string.ascii_letters + string.digits + "!@#$%^&*"
        pwd = ''.join(random.choice(chars) for _ in range(length))
        return f"Generated password ({length} chars): {pwd}"
    return _original_execute(name, arguments)

execute_tool = execute_tool_extended
ALL_TOOLS = TOOLS + CUSTOM_TOOLS

def agent_loop_v2(user_message: str, max_iterations: int = 10, verbose: bool = True):
    """Agent loop with extended custom tools."""
    system_parts = []
    for md_file in ["AGENTS.md", "SOUL.md", "USER.md"]:
        fpath = WORKSPACE / md_file
        if fpath.exists():
            system_parts.append(fpath.read_text())
    mem_file = WORKSPACE / "memory" / "MEMORY.md"
    if mem_file.exists():
        system_parts.append(f"\n## Your Memory\n{mem_file.read_text()}")
    system_prompt = "\n\n".join(system_parts)

    messages = [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_message}
    ]
    if verbose:
        print(f" 📨 User: {user_message}")
        print()

    for iteration in range(1, max_iterations + 1):
        if verbose:
            print(f" ── Iteration {iteration}/{max_iterations} ──")
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages,
            tools=ALL_TOOLS,
            tool_choice="auto",
            max_tokens=2048
        )
        choice = response.choices[0]
        message = choice.message
        if message.tool_calls:
            if verbose:
                print(f" 🔧 {len(message.tool_calls)} tool call(s):")
            messages.append(message.model_dump())
            for tc in message.tool_calls:
                fname = tc.function.name
                args = _json.loads(tc.function.arguments) if tc.function.arguments else {}
                if verbose:
                    print(f"   → {fname}({_json.dumps(args, ensure_ascii=False)[:80]})")
                result = execute_tool(fname, args)
                if verbose:
                    print(f"   ← {result[:120]}{'...' if len(result) > 120 else ''}")
                messages.append({
                    "role": "tool",
                    "tool_call_id": tc.id,
                    "content": result
                })
            if verbose:
                print()
        else:
            final = message.content or ""
            if verbose:
                print(f" 💬 Agent: {final}\n")
            return final
    return "⚠️ Max iterations reached."

print("─" * 60)
print(" DEMO 3: Custom tools in action")
print("─" * 60)
result4 = agent_loop_v2(
    "Roll 3 six-sided dice for me, then generate a 20-character password, "
    "and finally analyze the text stats of this sentence: "
)

section("STEP 9 · Multi-Turn Conversation — Session Management", "💬")
info("""nanobot's SessionManager (session/manager.py) maintains conversation
history per session_key (format: 'channel:chat_id'). History is stored in
JSON files and loaded into context for each new message.

Let's simulate a multi-turn conversation with persistent state.
""")
```
We expand the agent’s capabilities by defining new custom tools such as dice rolling, text statistics, and password generation, and then wiring them into the tool execution pipeline. We update the executor, merge the built-in and custom tool definitions, and create a second version of the agent loop that can reason over this larger set of capabilities. We then run a demo task that forces the model to chain multiple tool invocations, demonstrating how easy it is to extend nanobot with our own functions while keeping the same overall interaction pattern.
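The chained if/elif dispatcher above works, but the registry idea the prose describes (name + description + JSON Schema + an execute method in one place) is worth sketching on its own. The decorator API below is our invention for illustration, not nanobot's actual interface.

```python
import json
import random

class ToolRegistry:
    """Minimal sketch of a tool registry: one object owns both the
    JSON-schema definitions sent to the LLM and the Python callables."""
    def __init__(self):
        self._tools = {}

    def register(self, name, description, parameters):
        def decorator(fn):
            self._tools[name] = {
                "spec": {"type": "function", "function": {
                    "name": name, "description": description,
                    "parameters": parameters}},
                "fn": fn,
            }
            return fn
        return decorator

    def definitions(self):
        """The list handed to the API's `tools=` parameter."""
        return [t["spec"] for t in self._tools.values()]

    def execute(self, name, arguments: dict) -> str:
        if name not in self._tools:
            return f"Unknown tool: {name}"
        return str(self._tools[name]["fn"](**arguments))

registry = ToolRegistry()

@registry.register("roll_die", "Roll a single die.", {
    "type": "object",
    "properties": {"sides": {"type": "integer", "default": 6}},
    "required": []})
def roll_die(sides: int = 6) -> int:
    return random.randint(1, sides)

print(registry.execute("roll_die", {"sides": 6}))
```

The payoff is that adding a tool becomes one decorated function instead of two edits (schema list plus dispatcher branch) that can silently drift apart.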
```python
class SimpleSessionManager:
    """
    Minimal recreation of nanobot's SessionManager.
    Stores conversation history and provides context continuity.
    """
    def __init__(self, workspace: Path):
        self.workspace = workspace
        self.sessions: dict[str, list[dict]] = {}

    def get_history(self, session_key: str) -> list[dict]:
        return self.sessions.get(session_key, [])

    def add_turn(self, session_key: str, role: str, content: str):
        if session_key not in self.sessions:
            self.sessions[session_key] = []
        self.sessions[session_key].append({"role": role, "content": content})

    def save(self, session_key: str):
        fpath = self.workspace / f"session_{session_key.replace(':', '_')}.json"
        fpath.write_text(_json.dumps(self.sessions.get(session_key, []), indent=2))

    def load(self, session_key: str):
        fpath = self.workspace / f"session_{session_key.replace(':', '_')}.json"
        if fpath.exists():
            self.sessions[session_key] = _json.loads(fpath.read_text())

session_mgr = SimpleSessionManager(WORKSPACE)
SESSION_KEY = "cli:tutorial_user"

def chat(user_message: str, verbose: bool = True):
    """Multi-turn chat with session persistence."""
    session_mgr.add_turn(SESSION_KEY, "user", user_message)
    system_parts = []
    for md_file in ["AGENTS.md", "SOUL.md"]:
        fpath = WORKSPACE / md_file
        if fpath.exists():
            system_parts.append(fpath.read_text())
    system_prompt = "\n\n".join(system_parts)
    history = session_mgr.get_history(SESSION_KEY)
    messages = [{"role": "system", "content": system_prompt}] + history
    if verbose:
        print(f" 👤 You: {user_message}")
        print(f" (conversation history: {len(history)} messages)")
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=messages,
        max_tokens=1024
    )
    reply = response.choices[0].message.content or ""
    session_mgr.add_turn(SESSION_KEY, "assistant", reply)
    session_mgr.save(SESSION_KEY)
    if verbose:
        print(f" 🐈 nanobot: {reply}\n")
    return reply

print("─" * 60)
print(" DEMO 4: Multi-turn conversation with memory")
print("─" * 60)
chat("Hi! My name is Alex and I'm building an AI agent.")
chat("What's my name? And what am I working on?")
chat("Can you suggest 3 features I should add to my agent?")

success("Session persisted with full conversation history!")
session_file = WORKSPACE / f"session_{SESSION_KEY.replace(':', '_')}.json"
session_data = _json.loads(session_file.read_text())
print(f" 📄 Session file: {session_file.name} ({len(session_data)} messages)")

section("STEP 10 · Subagent Spawning — Background Task Delegation", "🚀")
info("""nanobot's SubagentManager (agent/subagent.py) allows the main agent
to delegate tasks to independent background workers. Each subagent:
- Gets its own tool registry (no SpawnTool to prevent recursion)
- Runs up to 15 iterations independently
- Reports results back via the MessageBus

Let's simulate this pattern with concurrent tasks.
""")
import asyncio
import uuid

async def run_subagent(task_id: str, goal: str, verbose: bool = True):
    """
    Simulates nanobot's SubagentManager._run_subagent().
    Runs an independent LLM loop for a specific goal.
    """
    if verbose:
        print(f" 🔹 Subagent [{task_id[:8]}] started: {goal[:60]}")
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "You are a focused research assistant. "
                                          "Complete the assigned task concisely in 2-3 sentences."},
            {"role": "user", "content": goal}
        ],
        max_tokens=256
    )
    result = response.choices[0].message.content or ""
    if verbose:
        print(f" ✅ Subagent [{task_id[:8]}] done: {result[:80]}...")
    return {"task_id": task_id, "goal": goal, "result": result}

async def spawn_subagents(goals: list[str]):
    """Spawn multiple subagents concurrently — mirrors SubagentManager.spawn()."""
    tasks = []
    for goal in goals:
        task_id = str(uuid.uuid4())
        tasks.append(run_subagent(task_id, goal))
    print(f"\n 🚀 Spawning {len(tasks)} subagents concurrently...\n")
    results = await asyncio.gather(*tasks)
    return results

goals = [
    "What are the 3 key components of a ReAct agent architecture?",
    "Explain the difference between tool-calling and function-calling in LLMs.",
    "What is MCP (Model Context Protocol) and why does it matter for AI agents?",
]

try:
    # Notebooks already run an event loop; nest_asyncio lets us re-enter it
    loop = asyncio.get_running_loop()
    import nest_asyncio
    nest_asyncio.apply()
    subagent_results = asyncio.get_event_loop().run_until_complete(spawn_subagents(goals))
except RuntimeError:
    # No running loop (plain script) — start one normally
    subagent_results = asyncio.run(spawn_subagents(goals))
except ModuleNotFoundError:
    print(" ℹ️ Running subagents sequentially (install nest_asyncio for async)...\n")
    subagent_results = []
    for goal in goals:
        task_id = str(uuid.uuid4())
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "system", "content": "Complete the task concisely in 2-3 sentences."},
                {"role": "user", "content": goal}
            ],
            max_tokens=256
        )
        r = response.choices[0].message.content or ""
        print(f" ✅ Subagent [{task_id[:8]}] done: {r[:80]}...")
        subagent_results.append({"task_id": task_id, "goal": goal, "result": r})

print(f"\n 📋 All {len(subagent_results)} subagent results collected!")
for i, r in enumerate(subagent_results, 1):
    print(f"\n ── Result {i} ──")
    print(f" Goal: {r['goal'][:60]}")
    print(f" Answer: {r['result'][:200]}")
```
We simulate multi-turn conversation management by building a lightweight session manager that stores, retrieves, and persists conversation history across turns. We use that history to maintain continuity in the chat, allowing the agent to remember details from earlier in the interaction and respond more coherently and statefully. After that, we model subagent spawning by launching concurrent background tasks that each handle a focused objective, which helps us understand how nanobot can delegate parallel work to independent agent workers.
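One practical concern the session manager raises: history grows without bound, yet the whole list is sent on every turn, so something must decide how much of it fits in context. A hedged sketch of a character-budget trimmer follows; nanobot's actual loading policy may differ.

```python
def trim_history(history: list[dict], max_chars: int = 4000) -> list[dict]:
    """Keep the most recent turns whose combined content fits the budget.
    The newest turn is always kept, even if it alone exceeds the budget.
    (A sketch of the idea; counting characters stands in for counting tokens.)"""
    kept, used = [], 0
    for turn in reversed(history):          # walk newest -> oldest
        cost = len(turn.get("content") or "")
        if used + cost > max_chars and kept:
            break                           # older turns no longer fit
        kept.append(turn)
        used += cost
    return list(reversed(kept))             # restore chronological order

history = [
    {"role": "user", "content": "x" * 1500},
    {"role": "assistant", "content": "y" * 1500},
    {"role": "user", "content": "z" * 1500},
]
trimmed = trim_history(history, max_chars=3200)
print(len(trimmed))  # → 2
```

A production version would count tokens rather than characters and could summarize the dropped prefix into a single synthetic turn instead of discarding it, which is exactly where the memory consolidation idea from Step 6 reconnects with sessions.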
```python
section("STEP 11 · Scheduled Tasks — The Cron Pattern", "⏰")
info("""nanobot's CronService (cron/service.py) uses APScheduler to trigger
agent actions on a schedule. When a job fires, it creates an InboundMessage
and publishes it to the MessageBus.

Let's demonstrate the pattern with a simulated scheduler.
""")
from datetime import timedelta

class SimpleCronJob:
    """Mirrors nanobot's cron job structure."""
    def __init__(self, name: str, message: str, interval_seconds: int):
        self.id = str(uuid.uuid4())[:8]
        self.name = name
        self.message = message
        self.interval = interval_seconds
        self.enabled = True
        self.last_run = None
        self.next_run = datetime.datetime.now() + timedelta(seconds=interval_seconds)

jobs = [
    SimpleCronJob("morning_briefing", "Give me a brief morning status update.", 86400),
    SimpleCronJob("memory_cleanup", "Review and consolidate my memories.", 43200),
    SimpleCronJob("health_check", "Run a system health check.", 3600),
]

print(" 📋 Registered Cron Jobs:")
print(" ┌────────┬────────────────────┬──────────┬──────────────────────┐")
print(" │ ID     │ Name               │ Interval │ Next Run             │")
print(" ├────────┼────────────────────┼──────────┼──────────────────────┤")
for job in jobs:
    interval_str = f"{job.interval // 3600}h" if job.interval >= 3600 else f"{job.interval}s"
    print(f" │ {job.id} │ {job.name:<18} │ {interval_str:>8} │ {job.next_run.strftime('%Y-%m-%d %H:%M')} │")
print(" └────────┴────────────────────┴──────────┴──────────────────────┘")

print(f"\n ⏰ Simulating cron trigger for '{jobs[2].name}'...")
cron_result = agent_loop_v2(jobs[2].message, verbose=True)

section("STEP 12 · Full Agent Pipeline — End-to-End Demo", "🎬")
info("""Now let's run a complex, multi-step task that exercises the full
nanobot pipeline: context building → tool use → memory → file I/O.
""")
print("─" * 60)
print(" DEMO 5: Complex multi-step research task")
print("─" * 60)
complex_result = agent_loop_v2(
    "I need you to help me with a small project:\n"
    "1. First, check the current time\n"
    "2. Write a short project plan to 'project_plan.txt' about building "
    "a personal AI assistant (3-4 bullet points)\n"
    "3. Remember that my current project is 'building a personal AI assistant'\n"
    "4. Read back the project plan file to confirm it was saved correctly\n"
    "Then summarize everything you did.",
    max_iterations=15
)

section("STEP 13 · Final Workspace Summary", "📊")
print(" 📁 Complete workspace state after tutorial:\n")
total_files = 0
total_bytes = 0
for item in sorted(WORKSPACE.rglob("*")):
    if item.is_file():
        rel = item.relative_to(WORKSPACE)
        size = item.stat().st_size
        total_files += 1
        total_bytes += size
        icon = {"md": "📄", "txt": "📝", "json": "📋"}.get(item.suffix.lstrip("."), "📎")
        print(f" {icon} {rel} ({size:,} bytes)")

print("\n ── Summary ──")
print(f" Total files: {total_files}")
print(f" Total size: {total_bytes:,} bytes")
print(f" Config: {config_path}")
print(f" Workspace: {WORKSPACE}")

print("\n 🧠 Final Memory State:")
mem_content = (WORKSPACE / "memory" / "MEMORY.md").read_text()
print(" ┌─────────────────────────────────────────────")
for line in mem_content.strip().split("\n"):
    print(f" │ {line}")
print(" └─────────────────────────────────────────────")

section("COMPLETE · What's Next?", "🎉")
print("""
You've explored the core internals of nanobot! Here's what to try next:

🔹 Run the real CLI agent:   nanobot onboard && nanobot agent
🔹 Connect to Telegram:      Add a bot token to config.json and run `nanobot gateway`
🔹 Enable web search:        Add a Brave Search API key under tools.web.search.apiKey
🔹 Try MCP integration:      nanobot supports Model Context Protocol servers for external tools
🔹 Explore the source (~4K lines): https://github.com/HKUDS/nanobot
🔹 Key files to read:
   • agent/loop.py     — The agent iteration loop
   • agent/context.py  — Prompt assembly pipeline
   • agent/memory.py   — Persistent memory system
   • agent/tools/      — Built-in tool implementations
   • agent/subagent.py — Background task delegation
""")
```
We demonstrate the cron-style scheduling pattern by defining simple scheduled jobs, listing their intervals and next run times, and simulating the triggering of an automated agent task. We then run a larger end-to-end example that combines context building, tool use, memory updates, and file operations into a single multi-step workflow, so we can see the full pipeline working together in a realistic task. At the end, we inspect the final workspace state, review the stored memory, and close the tutorial with clear next steps that connect this notebook implementation to the real nanobot project and its source code.
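The fire-and-reschedule step that APScheduler performs for nanobot can be sketched in a few lines. For simplicity the jobs here are plain dicts with `next_run`/`interval` fields (a stand-in for the SimpleCronJob objects above); the function name and reschedule policy are our own assumptions.

```python
import datetime
from datetime import timedelta

def due_jobs(jobs, now):
    """Return names of jobs whose next_run has passed, and advance their
    schedules. Sketch of the fire-and-reschedule step a real scheduler
    (e.g. APScheduler) handles for you."""
    fired = []
    for job in jobs:
        if job["enabled"] and job["next_run"] <= now:
            fired.append(job["name"])
            job["last_run"] = now
            # advance in whole intervals so we never reschedule into the past
            while job["next_run"] <= now:
                job["next_run"] += timedelta(seconds=job["interval"])
    return fired

now = datetime.datetime(2025, 1, 1, 12, 0)
jobs = [
    {"name": "health_check", "interval": 3600, "enabled": True,
     "last_run": None, "next_run": now - timedelta(minutes=5)},
    {"name": "morning_briefing", "interval": 86400, "enabled": True,
     "last_run": None, "next_run": now + timedelta(hours=20)},
]
print(due_jobs(jobs, now))  # → ['health_check']
```

In nanobot the "fire" action is not a direct function call: the job text becomes an InboundMessage on the message bus, so a scheduled task flows through exactly the same agent loop as a user message.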
In conclusion, we walked through every major layer of nanobot’s architecture, from the iterative LLM-tool loop at its core to the session manager that gives our agent conversational memory across turns. We built five built-in tools, three custom tools, two skills, a session persistence layer, a subagent spawner, and a cron simulator, all while keeping everything in a single runnable script. What stands out is how nanobot proves that a production-grade agent framework doesn’t need hundreds of thousands of lines of code; the patterns we implemented here (context assembly, tool dispatch, memory consolidation, and background task delegation) are the same patterns that power far larger systems, just stripped down to their essence. We now have a working mental model of agentic AI internals and a codebase small enough to read in one sitting, which makes nanobot an ideal choice for anyone looking to build, customize, or research AI agents from the ground up.
Michal Sutter
Michal Sutter is a data science professional with a Master of Science in Data Science from the University of Padova. With a solid foundation in statistical analysis, machine learning, and data engineering, Michal excels at transforming complex datasets into actionable insights.

