The meeting focused on developing a comprehensive prompt to guide AI in transforming traditional business processes, particularly go-to-market (GTM) workflows, into AI-driven systems.
Key considerations included the need for deep context gathering, constraint setting, and distinguishing between tasks best handled by AI versus humans.
The group discussed frameworks, practical examples, and the importance of an iterative, interrogative process to thoroughly understand and segment problems before designing AI solutions.
The final output aims to produce an "AI blueprint" artifact branded to the company (Blueprint/Jordan Crawford), emphasizing modularity and adaptability as AI capabilities evolve.
Action Items
(No date specified – Jordan Crawford): Finalize and share the comprehensive mega prompt/framework for AI system design in GTM contexts.
(No date specified – Jordan Crawford): Provide sample prompts and data prompts to illustrate effective interrogation and structuring.
(No date specified – Team/Interested Parties): Research current token/output limitations for Claude Opus 4 to optimize prompt and artifact design.
(No date specified – Team/Interested Parties): Explore and incorporate relevant mental models and frameworks (e.g., inversion, the Five W's, Shane Parrish's work) into the prompt structure.
Structuring an AI System for Business Processes
Stressed the importance of giving the AI system context on the problem, sender, and receiver; context can come from various sources (CRM, user interviews, recordings, etc.).
The AI prompt should relentlessly question the user (Socratic method) to uncover full problem context before suggesting solutions.
Emphasized that the AI agent must identify its own role, verify its outputs, and structure user interaction to tailor the AI system for the given problem.
Outlined that the AI should assist in clearly segmenting tasks: distinguishing those suitable for AI (e.g., research, sorting, messaging) from those requiring humans (e.g., discernment, direct communication), as sketched below.
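A minimal sketch of this interrogate-then-segment flow, written in Python with hypothetical names (ProblemContext, interrogate, segment_tasks) chosen purely for illustration; the actual mega prompt would drive this conversationally rather than in code:

```python
from dataclasses import dataclass, field
from typing import Callable

# Context the prompt should gather about the problem, the sender, and the
# receiver before any solution is proposed (sources: CRM, interviews, recordings).
@dataclass
class ProblemContext:
    problem: str
    sender: str                 # who the output comes from (e.g., the SDR team)
    receiver: str               # who the output is for (e.g., the target persona)
    sources: list[str] = field(default_factory=list)
    answers: dict[str, str] = field(default_factory=dict)

# Socratic questions the system keeps asking until the context is complete.
INTERROGATION_QUESTIONS = [
    "What outcome does the receiver actually care about?",
    "Which data sources (CRM, interviews, recordings) describe this problem?",
    "Which steps require human discernment or direct communication?",
    "How will the AI's output be verified before it reaches the receiver?",
]

def interrogate(ctx: ProblemContext, ask_user: Callable[[str], str]) -> ProblemContext:
    """Relentlessly question the user until every question has an answer."""
    for question in INTERROGATION_QUESTIONS:
        if question not in ctx.answers:
            ctx.answers[question] = ask_user(question)
    return ctx

def segment_tasks(ctx: ProblemContext) -> dict[str, list[str]]:
    """Toy segmentation: route research/sorting/drafting to AI, judgment calls to humans."""
    ai_keywords = ("research", "sort", "draft", "message")
    tasks = [t.strip() for t in ctx.problem.split(";")]
    return {
        "ai": [t for t in tasks if any(k in t.lower() for k in ai_keywords)],
        "human": [t for t in tasks if not any(k in t.lower() for k in ai_keywords)],
    }

if __name__ == "__main__":
    ctx = ProblemContext(
        problem="research target accounts; draft first-touch messages; run the sales call",
        sender="SDR team",
        receiver="VP of Operations at mid-market logistics firms",
        sources=["CRM export", "user interview notes", "call recordings"],
    )
    canned = {"What outcome does the receiver actually care about?": "Fewer missed shipments"}
    ctx = interrogate(ctx, ask_user=lambda q: canned.get(q, "TBD - gather in next session"))
    print(segment_tasks(ctx))
```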
Principles and Frameworks for Effective AI Deployment
Creative constraint with context is crucial: AI performs best when given clear boundaries and ample contextual information.
Tasks must be broken down into discrete, well-constrained subtasks for effective AI intervention.
Use inversion and "Five W's" questioning to reveal opportunities for AI involvement and to identify where human expertise is required (see the sketch after this list).
Recommended reviewing mental models and frameworks from thought leaders (e.g., Shane Parrish) for structuring thought processes.
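A minimal sketch of how "creative constraint with context," task decomposition, and the Five W's/inversion checks could be represented as a data structure; Subtask, missing_ws, and ready_for_ai are hypothetical names invented for this illustration, not part of any agreed framework:

```python
from dataclasses import dataclass, field

# Five W's used to force context onto every subtask before it is dispatched.
FIVE_WS = ("who", "what", "when", "where", "why")

@dataclass
class Subtask:
    """A discrete, well-constrained unit of work handed to either AI or a human."""
    name: str
    owner: str                      # "ai" or "human"
    constraints: list[str]          # explicit boundaries the AI must respect
    context: dict[str, str] = field(default_factory=dict)  # Five W's answers

    def missing_ws(self) -> list[str]:
        """Inversion check: list what is still unknown rather than what is known."""
        return [w for w in FIVE_WS if not self.context.get(w)]

    def ready_for_ai(self) -> bool:
        """Creative constraint with context: dispatch only when both are in place."""
        return self.owner == "ai" and bool(self.constraints) and not self.missing_ws()

if __name__ == "__main__":
    draft = Subtask(
        name="Draft first-touch email for logistics accounts",
        owner="ai",
        constraints=["under 120 words", "reference one public trigger event", "no pricing claims"],
        context={"who": "VP Ops", "what": "intro email", "why": "book discovery call"},
    )
    print("Missing context:", draft.missing_ws())   # -> ['when', 'where']
    print("Ready for AI:", draft.ready_for_ai())    # -> False until context is complete
```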
Practical Examples
In GTM (go-to-market) settings, AI can handle account research, prospect identification, and message drafting—but not direct sales calls or high-level value communication.
AI should interrogate user goals and job functions to break them into atomic units, then recommend optimal divisions of labor between AI and human team members.
The process should yield a tailored artifact, an "AI blueprint" for the organization, detailing which tools, agents, and processes to use, along with clear instructions for interacting with them (sketched below).
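A minimal sketch of what the "AI blueprint" artifact could look like as structured data; the field names (process, owner, tools, instructions) are assumptions for illustration, not a confirmed schema:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class BlueprintEntry:
    process: str            # e.g., "Account research"
    owner: str              # "ai" or "human"
    tools: list[str]        # agents/tools named here only as placeholders
    instructions: str       # how the team interacts with this step

@dataclass
class AIBlueprint:
    organization: str
    entries: list[BlueprintEntry] = field(default_factory=list)

    def to_json(self) -> str:
        """Serialize the blueprint so it can be shared as a standalone artifact."""
        return json.dumps(asdict(self), indent=2)

if __name__ == "__main__":
    blueprint = AIBlueprint(
        organization="Example GTM team",
        entries=[
            BlueprintEntry("Account research", "ai",
                           ["research agent"], "Review the AI's account brief before outreach."),
            BlueprintEntry("Message drafting", "ai",
                           ["drafting agent"], "Edit tone and claims; never send unreviewed."),
            BlueprintEntry("Sales call", "human",
                           [], "AI supplies the brief; the rep runs the conversation."),
        ],
    )
    print(blueprint.to_json())
```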
Branding and Productization
Emphasized branding the framework and its outputs as invented by Jordan Crawford (Blueprint) to build recognition.
Mentioned "Agent 7," an AI agent building course, as a comparable output and example of productizing complex AI frameworks.
Decisions
Focus prompt/framework development on go-to-market (GTM) business systems — Given the team's expertise and the domain's relevance, the initial scope for AI system design targets sales, research, and related GTM functions.
Interrogative, context-rich approach adopted as best practice — Determined that relentless questioning and context-gathering are essential before attempting to AI-ify any business function.
Open Questions / Follow-Ups
Confirm current maximum token/output size for Claude Opus 4 and implications for artifact/prompt length.
How should the mega prompt evolve as AI capabilities change over time?
What additional frameworks or mental models should be integrated to further refine the AI system design process?