Overview
The speaker describes the rapid five-day development process for T3 Chat, a high-performance AI chat app, detailing technical challenges, decisions, and optimizations. The devlog covers project milestones, problem-solving strategies, and performance improvements, aiming to build the fastest AI chat experience available.
Project Motivation & Initial Steps
- Frustration with existing AI chat UIs and desire for a faster, more user-friendly experience.
- Inspiration drawn from DeepSeek V3's open-source release, strong performance, and low cost.
- Initial exploration of open-source starter kits proved unsatisfactory, leading to a custom approach.
Development Breakdown by Day
Day 1: Scaffold & Core Setup
- Started with v0 scaffolding and the Vercel AI SDK; created a rough UI and streaming functionality (see the streaming sketch after this list).
- Prioritized client-side routing via React Router to decouple navigation from the server.
- Built a basic sync layer but struggled with data persistence and state management.
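The devlog doesn't show the route code itself, but a Vercel AI SDK streaming endpoint typically looks something like the sketch below (the route path, model choice, and response helper assume a recent AI SDK release and are not taken from the video):

```ts
// app/api/chat/route.ts (hypothetical path)
import { openai } from '@ai-sdk/openai';
import { streamText } from 'ai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Ask the model for a completion and stream tokens back as they arrive.
  const result = streamText({
    model: openai('gpt-4o-mini'),
    messages,
  });

  return result.toDataStreamResponse();
}
```

On the client, the SDK's `useChat` hook (or a hand-rolled fetch reader) consumes this stream and appends tokens to the UI as they arrive.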
Day 2: Team Expansion & UI Overhaul
- CTO Mark joined the project to accelerate progress.
- Major UI improvements: implemented tabs, refined chat box, and improved message handling.
- Switched from context-based state syncing to the Dexie library for local data storage on top of IndexedDB (schema sketch after this list).
- Chose the name T3 Chat after securing the domain.
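For reference, a minimal Dexie setup along these lines might look as follows; the table names and record shapes are invented for illustration, since the actual T3 Chat schema isn't shown in the devlog:

```ts
import Dexie, { type Table } from 'dexie';

// Hypothetical record shapes.
interface Thread { id: string; title: string; updatedAt: number; }
interface Message { id: string; threadId: string; role: 'user' | 'assistant'; content: string; createdAt: number; }

class ChatDB extends Dexie {
  threads!: Table<Thread, string>;
  messages!: Table<Message, string>;

  constructor() {
    super('chat-db');
    // Version 1 schema: primary key first, then indexed fields.
    this.version(1).stores({
      threads: 'id, updatedAt',
      messages: 'id, threadId, createdAt',
    });
  }
}

export const db = new ChatDB();

// Example usage: persist a message locally, then read a thread's history.
await db.messages.add({ id: crypto.randomUUID(), threadId: 't1', role: 'user', content: 'hi', createdAt: Date.now() });
const history = await db.messages.where('threadId').equals('t1').sortBy('createdAt');
```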
Day 3: SDK Frustrations & Data Layer Rebuild
- Removed remaining Vercel AI SDK dependencies, moving all client data to Dexie.
- Encountered external issues (malware false positives, support requests) and local auth complications.
- Built a basic authentication layer, which proved challenging without relying on a third-party auth provider (see the sketch after this list).
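The devlog doesn't go into the auth internals; as a rough illustration of what a hand-rolled session layer involves, here is a minimal HMAC-signed token sketch (the function names, env variable, and token format are all hypothetical):

```ts
import { createHmac, randomUUID, timingSafeEqual } from 'node:crypto';

const SECRET = process.env.SESSION_SECRET!; // assumption: secret supplied via env

// Issue an opaque, signed session token for a user id (assumes the id contains no dots).
export function createSessionToken(userId: string): string {
  const payload = `${userId}.${randomUUID()}`;
  const sig = createHmac('sha256', SECRET).update(payload).digest('base64url');
  return `${payload}.${sig}`;
}

// Verify the signature and recover the user id, or return null.
export function verifySessionToken(token: string): string | null {
  const [userId, nonce, sig] = token.split('.');
  if (!userId || !nonce || !sig) return null;
  const expected = createHmac('sha256', SECRET).update(`${userId}.${nonce}`).digest('base64url');
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  return userId;
}
```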
Day 4: Streamlining & Authentication
- Finalized authentication using OpenAuth; Clerk integration was considered but proved troublesome.
- Oscillated between frameworks (Next.js vs. Vite with React Router) before returning to Next.js.
- Set up issue tracking with Linear and enabled the React Compiler for performance (config sketch below).
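Turning on the React Compiler in Next.js is roughly a one-flag change, sketched below (this assumes a Next.js 15 release where the compiler is still behind the experimental flag, with babel-plugin-react-compiler installed):

```ts
// next.config.ts
import type { NextConfig } from 'next';

const nextConfig: NextConfig = {
  experimental: {
    // Auto-memoizes components so fewer manual memo()/useMemo() calls are needed.
    reactCompiler: true,
  },
};

export default nextConfig;
```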
Day 5: Sync Layer & Model Evaluation
- Explored and rejected third-party sync solutions (Zero, Jazz) in favor of a custom Dexie sync implementation (see the sketch after this list).
- Conducted in-depth model benchmarking; optimized for speed and cost with GPT-4o Mini on Azure.
- Added a homepage, enabled model selection, and refined local/cloud data synchronization.
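The custom sync layer itself isn't shown in the video; a minimal pull-based sketch over the Dexie tables from Day 2 might look like this (the /api/sync endpoint, payload shape, and cursor handling are assumptions):

```ts
import { db } from './db'; // the Dexie instance sketched under Day 2

// Pull changes made since the last sync cursor and apply them locally.
export async function pullSince(lastSyncedAt: number): Promise<number> {
  const res = await fetch(`/api/sync?since=${lastSyncedAt}`);
  if (!res.ok) throw new Error(`sync failed: ${res.status}`);
  const { threads, messages, serverTime } = await res.json();

  // Apply the batch atomically so readers never see a half-applied update.
  await db.transaction('rw', db.threads, db.messages, async () => {
    await db.threads.bulkPut(threads);
    await db.messages.bulkPut(messages);
  });

  return serverTime; // caller persists this as the new sync cursor
}
```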
Final Day: UI Polish & Performance Tweaks
- Overhauled UI (sidebar, input box, onboarding flow) and addressed Stripe payment issues.
- Collaborated with performance experts (e.g., Aiden from Million) to optimize rendering and chunk Markdown.
- Implemented memoization strategies to minimize unnecessary re-renders, achieving high framerates (see the sketch after this list).
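The exact chunking code isn't shown, but the general idea is to split streamed Markdown into stable chunks and memoize each one so only the still-growing tail re-renders per token; a sketch using react-markdown as a stand-in renderer:

```tsx
import { memo, useMemo } from 'react';
import ReactMarkdown from 'react-markdown';

// A memoized chunk: unchanged text means no re-render, even while the
// message keeps streaming into later chunks.
const MarkdownChunk = memo(function MarkdownChunk({ text }: { text: string }) {
  return <ReactMarkdown>{text}</ReactMarkdown>;
});

export function StreamedMessage({ content }: { content: string }) {
  // Split on blank lines so only the last chunk changes as tokens arrive.
  const chunks = useMemo(() => content.split(/\n\n+/), [content]);
  return (
    <>
      {chunks.map((chunk, i) => (
        <MarkdownChunk key={i} text={chunk} />
      ))}
    </>
  );
}
```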
Technical Challenges & Solutions
- Persistent problems with third-party libraries led to custom solutions for sync and data handling.
- Performance bottlenecks addressed by chunking Markdown and memoizing UI components.
- Stripe integration for payments required significant troubleshooting and validation (webhook-verification sketch after this list).
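As an example of the kind of validation involved, a Stripe webhook handler has to verify each event's signature before trusting it; a minimal sketch (the endpoint shape and env variable names are placeholders, not T3 Chat's actual setup):

```ts
import Stripe from 'stripe';

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY!);

export async function POST(req: Request) {
  const payload = await req.text();
  const signature = req.headers.get('stripe-signature') ?? '';

  // Reject anything that doesn't carry a valid Stripe signature.
  let event: Stripe.Event;
  try {
    event = stripe.webhooks.constructEvent(payload, signature, process.env.STRIPE_WEBHOOK_SECRET!);
  } catch {
    return new Response('Invalid signature', { status: 400 });
  }

  if (event.type === 'checkout.session.completed') {
    const session = event.data.object as Stripe.Checkout.Session;
    // Mark the purchasing user as upgraded, e.g. keyed by customer email.
    console.log('checkout completed for', session.customer_email);
  }

  return new Response('ok', { status: 200 });
}
```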
Lessons Learned & Reflections
- Local-first architecture varies dramatically between apps, often requiring custom solutions over generic libraries.
- Performance in React is achievable with focused optimization and careful data management.
- External interruptions and competing obligations (events, support) impacted development velocity.
Decisions
- Abandoned third-party sync solutions in favor of a custom Dexie-based approach.
- Committed to a local-first architecture for chat history and performance.
- Selected GPT-4o Mini on Azure as the default model for balanced speed and cost.
Action Items
- Tonight – Mark & Speaker: Finalize and rigorously test Stripe payment flow before launch.
- TBD – Speaker: Further optimize code block rendering performance, possibly by replacing Prism.
- TBD – Team: Add user-selectable model switching to the app.
- TBD – Community: Gather and address performance feedback via the Discord feedback channel.
Questions / Follow-Ups
- Solicit user feedback on performance differences between T3 Chat and alternatives.
- Monitor and improve any authentication or data sync issues reported post-launch.