GitHub repository ui-dojo exploration
The user wants me to use the librarian tool to learn about the GitHub repository https://github.com/mastra-ai/ui-dojo. This is a public GitHub repository, and the librarian is the right tool for exploring and understanding codebases on GitHub.
Perfect! Now I have a comprehensive understanding. Let me create a detailed response with all the key information about the project.
Architecture Overview
UI Dojo is a comprehensive Mastra showcase project that demonstrates integrations between Mastra (an AI orchestration framework) and three major AI UI frameworks: AI SDK, Assistant UI, and CopilotKit. It's a reference implementation that allows developers to compare different approaches to building AI-powered interfaces and choose the best fit for their needs.
Purpose & Use Case
The project serves as a learning and reference platform for developers building AI applications. Rather than providing a single solution, it showcases multiple patterns and architectural approaches side-by-side, making it ideal for:
- Understanding how different UI frameworks integrate with Mastra
- Learning AI workflow patterns (streaming, suspend/resume, multi-agent systems)
- Comparing implementation strategies for similar features
- Exploring advanced patterns like generative UIs and agent networks
Project Structure
The codebase is organized into three primary layers:
1. Frontend Layer (src/pages/ & src/components/)
Page Categories:
- Chat Examples - Three parallel implementations showing identical chat functionality with different frameworks:
  - AI SDK - Vercel's AI SDK with @mastra/ai-sdk
  - Assistant UI - Thread components with useExternalStoreRuntime() connected to Mastra's memory
  - CopilotKit - Built with CopilotKit's Chat component
- Generative User Interfaces - Custom UI components for tool responses (e.g., dynamic weather cards with proper icons and formatting)
- Workflows - Multi-step workflows like activity planning
- Agent Networks - Multiple agents coordinating through routing
- Suspend/Resume - Human-in-the-loop approval patterns
- Custom Events - Advanced event handling across agents and workflows
- Client Tools - Client-side tool execution demonstrated with all three frameworks
UI Components:
- ai-elements/ - AI-specific components:
  - Conversation, Message, Response
  - PromptInput with model selection, attachments, tools
  - Sources, Reasoning, Suggestions
  - Tool and ToolOutput displays
- ui/ - Radix UI-based components with Tailwind styling
2. Backend/Agent Layer (src/mastra/)
Agents (11 total)
- ghibliAgent - Answers Studio Ghibli questions using the ghibliFilms and ghibliCharacters tools
- weatherAgent - Provides weather information and activity suggestions
- routingAgent - Routes requests between other agents
- bgColorAgent - Manages background color changes (client tools demo)
- taskAgent, inventoryCheckAgent, orderProcessingAgent - Business logic agents
- dataAnalysisAgent, reportGenerationAgent, reportReviewAgent - Report workflow agents
- reportAgentNetwork - Coordinates the report agents for generation, then review
Tools (8 total)
- Weather, Ghibli films/characters, data analysis, report generation/review, inventory check, task management, color changes
Workflows (3 total)
- activitiesWorkflow - Multi-step workflow demonstrating streaming:
  - Step 1: Fetch weather for a location (via the Open-Meteo API)
  - Step 2: Plan activities based on the forecast using weatherAgent
- orderFulfillmentWorkflow - Order processing workflow
- approvalWorkflow - Human-in-the-loop pattern with suspend/resume:
  - Step 1: Process the initial request
  - Step 2: Request approval from the user (suspends execution)
  - Step 3: Finalize upon approval/rejection
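The suspend/resume pattern in approvalWorkflow can be sketched as a minimal state machine. This is an illustrative, framework-free sketch, not Mastra's actual API: a step returns a "suspended" marker, and the caller later re-runs it with the human's decision merged in.

```typescript
// Hypothetical sketch of suspend/resume (names are illustrative, not Mastra's API).
type StepResult<T> =
  | { status: "done"; value: T }
  | { status: "suspended"; resumeKey: string };

interface ApprovalState {
  request: string;
  approved?: boolean; // undefined until a human decides
}

// Step 1: process the initial request.
function processRequest(request: string): ApprovalState {
  return { request };
}

// Step 2: suspend until a human approves or rejects.
function requestApproval(state: ApprovalState): StepResult<ApprovalState> {
  if (state.approved === undefined) {
    return { status: "suspended", resumeKey: state.request };
  }
  return { status: "done", value: state };
}

// Step 3: finalize once a decision exists.
function finalize(state: ApprovalState): string {
  return state.approved ? `approved: ${state.request}` : `rejected: ${state.request}`;
}

// Resume path: re-run step 2 with the decision, then finalize.
function resume(state: ApprovalState, approved: boolean): string {
  const result = requestApproval({ ...state, approved });
  if (result.status !== "done") throw new Error("still suspended");
  return finalize(result.value);
}
```

In the real workflow, the suspended state would be persisted (the repo uses LibSQL storage) so the resume call can arrive on a later request.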
3. API Routes
Configured in src/mastra/index.ts:
GET /chat/:agentId → Chat with specific agent
GET /workflow/:workflowId → Execute workflow
GET /network → Network routing (ghibli + weather agents)
GET /network-custom-events → Report agent network with custom events
GET /copilotkit → CopilotKit runtime
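Routes like /chat/:agentId use path parameters to select an agent at request time. As an illustration of how such a pattern maps to a concrete path (this is a generic sketch, not the repo's routing code):

```typescript
// Generic path-pattern matcher: ":name" segments capture parameters,
// literal segments must match exactly.
function matchRoute(pattern: string, path: string): Record<string, string> | null {
  const patternParts = pattern.split("/").filter(Boolean);
  const pathParts = path.split("/").filter(Boolean);
  if (patternParts.length !== pathParts.length) return null;

  const params: Record<string, string> = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(":")) {
      params[patternParts[i].slice(1)] = pathParts[i]; // capture the parameter
    } else if (patternParts[i] !== pathParts[i]) {
      return null; // literal segment mismatch
    }
  }
  return params;
}

// matchRoute("/chat/:agentId", "/chat/weatherAgent") → { agentId: "weatherAgent" }
```

The server would then look up the captured agentId in its agent registry and stream that agent's response.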
Key Features & Patterns
1. Framework Comparison
Side-by-side implementations showing how AI SDK, Assistant UI, and CopilotKit each approach the same problem with different capabilities and trade-offs.
2. Generative UIs
Dynamic UI generation where tool responses render custom React components. Example: Weather API calls return structured data rendered as styled weather cards with dynamic icons.
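The core of the generative-UI pattern is mapping a tool's structured output to component props instead of rendering it as text. A minimal sketch, with hypothetical type and field names (the repo's actual weather card shapes may differ):

```typescript
// Hypothetical shapes for a weather tool's output and the card it feeds.
interface WeatherToolOutput {
  city: string;
  tempC: number;
  condition: "sun" | "rain" | "snow";
}

interface WeatherCardProps {
  title: string;
  temperature: string;
  icon: string;
}

// Map structured tool output to UI props; a React component would
// receive these props instead of a raw text response.
function toWeatherCard(out: WeatherToolOutput): WeatherCardProps {
  const icons = { sun: "☀️", rain: "🌧️", snow: "❄️" } as const;
  return {
    title: out.city,
    temperature: `${out.tempC}°C`,
    icon: icons[out.condition],
  };
}
```

Because the tool output is typed, the UI layer can branch on the tool name and render the matching component with confidence about the fields present.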
3. Multi-Step Workflows
Workflows coordinate multiple steps with typed inputs/outputs using Zod schemas. Activities workflow chains weather fetching → activity planning with streaming support.
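The step-chaining idea can be shown in plain TypeScript (the repo uses Zod schemas for runtime validation of the same boundaries; the stubbed data below is illustrative): each step's output type is the next step's input type, so a mismatch fails at compile time.

```typescript
// Typed step boundaries: Forecast flows from step 1 into step 2.
interface Forecast {
  location: string;
  maxTempC: number;
}

interface Plan {
  location: string;
  activities: string[];
}

// Step 1: fetch a forecast (stubbed here; the repo calls Open-Meteo).
function fetchWeather(location: string): Forecast {
  return { location, maxTempC: 24 };
}

// Step 2: plan activities from the forecast (the repo delegates to weatherAgent).
function planActivities(f: Forecast): Plan {
  const activities = f.maxTempC > 18 ? ["hiking", "picnic"] : ["museum visit"];
  return { location: f.location, activities };
}

const plan = planActivities(fetchWeather("Berlin"));
```

Zod adds to this a runtime check at each boundary, which matters when the data crosses a network or comes from an LLM.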
4. Agent Networks
Multiple agents coordinate through routing agents, allowing complex multi-agent orchestration.
5. Human-in-the-Loop
Approval workflow demonstrates suspend/resume patterns where workflows pause waiting for human approval, then resume.
6. Client Tools
Tools can execute on the client side (like changeBgColor) rather than server-side, with integration across all three UI frameworks.
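One way to picture client-side tool execution: the server's response names a tool call, and the browser dispatches it from a local registry. A hedged sketch with illustrative names (the tool here returns the CSS it would apply rather than touching the DOM, so it stays testable):

```typescript
type ClientTool = (args: Record<string, string>) => string;

// Registry of tools that run in the browser, not on the server.
const clientTools: Record<string, ClientTool> = {
  // Hypothetical stand-in for the repo's changeBgColor tool.
  changeBgColor: (args) => `background-color: ${args.color}`,
};

// Dispatch a tool call received from the agent's response.
function runClientTool(name: string, args: Record<string, string>): string {
  const tool = clientTools[name];
  if (!tool) throw new Error(`unknown client tool: ${name}`);
  return tool(args);
}
```

Each of the three UI frameworks exposes its own hook for registering such tools, but the dispatch idea is the same.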
7. Streaming & Partial Responses
Built-in support for streaming agent/workflow responses and handling partial tool execution states.
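The consuming side of streaming can be sketched with an async generator standing in for the network stream (illustrative only; the real pages consume framework-specific stream hooks):

```typescript
// Simulated token stream: each chunk arrives before the full response exists.
async function* fakeStream(): AsyncGenerator<string> {
  for (const chunk of ["Sunny ", "with ", "light ", "wind."]) {
    yield chunk;
  }
}

// Accumulate chunks; a real UI would re-render the partial text on each one.
async function collect(stream: AsyncGenerator<string>): Promise<string> {
  let text = "";
  for await (const chunk of stream) {
    text += chunk;
  }
  return text;
}
```

Partial tool-execution states work the same way: the UI renders a pending state from early chunks and swaps in the final component when the tool result chunk lands.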
Tech Stack
Frontend:
- React 19.2 + React Router 7.9
- Vite (build tool)
- Tailwind CSS + Radix UI (components)
- AI SDK, Assistant UI, CopilotKit (framework integrations)
Backend:
- Mastra 0.18.1 (AI orchestration)
- OpenAI GPT-4o-mini (LLM)
- LibSQL (in-memory storage)
- Express-based server (port 4750)
Development:
- TypeScript 5.9
- ESLint + Prettier
- Concurrent dev servers (Mastra + Vite)
Architectural Flow
graph TB
subgraph Frontend["Frontend (React + Frameworks)"]
AiSdk["AI SDK Pages"]
AssistantUI["Assistant UI Pages"]
CopilotKit["CopilotKit Pages"]
Components["Reusable Components<br/>ai-elements, ui"]
end
subgraph Backend["Mastra Backend"]
Routes["API Routes<br/>/chat, /workflow, /network"]
Agents["11 Agents"]
Workflows["3 Workflows"]
Tools["8 Tools"]
end
subgraph External["External APIs"]
OpenAI["OpenAI LLM"]
Weather["Open-Meteo Weather"]
end
AiSdk --> Routes
AssistantUI --> Routes
CopilotKit --> Routes
Routes --> Agents
Routes --> Workflows
Agents --> Tools
Workflows --> Tools
Agents --> OpenAI
Tools --> Weather
Tools --> OpenAI
The project demonstrates best practices for integrating Mastra with modern AI UI frameworks, providing working examples that developers can study, modify, and deploy for their own AI applications.
UI Dojo is a Mastra showcase project that demonstrates integrations with three AI UI frameworks: AI SDK, Assistant UI, and CopilotKit.
Key highlights:
- 11 agents (Ghibli, weather, routing, report generation, etc.)
- 3 workflows (activities, order fulfillment, approval with suspend/resume)
- 8 tools including weather, data analysis, and client-side tools
- Advanced patterns: Generative UIs, agent networks, human-in-the-loop approval, streaming
The project lets you compare how different frameworks approach the same problems side-by-side—useful for learning Mastra integration patterns and choosing the right UI framework for your needs.
The user wants me to read a thread to understand what was done there. I should use the read_thread tool to extract the information from that thread.
The main task accomplished in this thread was debugging and fixing an issue in the inspect_trace bash script related to argument parsing.
Main Task & Problem
The user was debugging the inspect_trace script which failed with Error: traceId is required even when a trace ID was provided.
Implementation Details & Code Changes
The root cause was identified as an incorrect assumption about the input format. Amp was passing arguments as traceId: value, but the script was expecting traceId=value.
The parsing logic was updated to handle the correct format:
Original (Failing) Logic Snippet:
if [[ "$line" == traceId=* ]]; then
TRACE_ID="${line#traceId=}"
fi
Fix Applied (Handling both formats):
if [[ "$line" == traceId=* ]]; then
TRACE_ID="${line#traceId=}"
elif [[ "$line" == "traceId: "* ]]; then
TRACE_ID="${line#traceId: }"
fi
Final Outcome
The implementation logic for parsing the input argument (traceId) was corrected. The thread concludes with the assistant stating: "It works! The tool now correctly parses the traceId: value format and returns the trace details."
File Context: The fix was applied to scripts/amp-toolbox/inspect_trace.
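For comparison, the same two-format parsing can be expressed in TypeScript (the actual fix lives in the bash script above; this port is only to make the accepted formats explicit):

```typescript
// Accepts both "traceId=value" and "traceId: value"; returns null otherwise.
function parseTraceId(line: string): string | null {
  if (line.startsWith("traceId=")) return line.slice("traceId=".length);
  if (line.startsWith("traceId: ")) return line.slice("traceId: ".length);
  return null;
}
```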
In that thread, we fixed a bug in the inspect_trace bash script where it couldn't parse the trace ID argument.
Problem: The script expected traceId=value format but Amp was passing traceId: value.
Fix: Updated the parsing logic in scripts/amp-toolbox/inspect_trace to handle both formats:
if [[ "$line" == traceId=* ]]; then
TRACE_ID="${line#traceId=}"
elif [[ "$line" == "traceId: "* ]]; then
TRACE_ID="${line#traceId: }"
fi
The tool now correctly parses trace IDs and returns trace details.
Instructions: "Handoff from parent thread"
Reference Thread
T-0aeaa561-662a-466f-8a32-935b83ef961e