Overview
This interview with Pablo Arredondo (Co-founder of Casetext, now VP of Co-Counsel at Thomson Reuters) covers the transformative impact of AI, especially large language models, on legal research and legal practice. The discussion tracks the evolution of legal technology, product development strategies, reliability concerns, regulatory issues, and the future of AI in law.
History and Evolution of Legal Technology
- Before computers, legal research relied on citation graphs and systematically organized reference books.
- Early digitization brought keyword searching with Boolean operators, which had major limitations.
- Casetext’s initial innovations addressed citation blind spots using "soft citation" techniques inspired by other industries.
- Neural language models such as BERT improved search by capturing meaning rather than matching literal keywords.
Adoption of Large Language Models in Law
- Casetext pivoted to LLMs after seeing a GPT-4 demo, recognizing substantial improvements over prior models.
- Early LLM use flagged nuanced legal relationships (e.g., overrulings) that keyword methods missed.
- Introduction of retrieval-augmented generation (RAG) to anchor AI answers in real legal texts, addressing hallucination risks.
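The RAG idea described above can be sketched in a few lines. This is an illustrative toy, not Co-Counsel's actual pipeline: the corpus snippets, the bag-of-words "embedding", the stopword list, and the prompt template are all invented for demonstration; a production system would use learned embeddings and a real vector store.

```python
"""Minimal retrieval-augmented generation (RAG) sketch: retrieve the most
relevant passage, then build a prompt that anchors the answer in it."""
import math
import re
from collections import Counter

# Hypothetical corpus of legal passages (invented for this example).
CORPUS = {
    "smith_v_jones": "The court held that the prior rule was overruled sub silentio.",
    "doe_v_roe": "Summary judgment requires no genuine dispute of material fact.",
}

STOP = {"the", "is", "a", "an", "of", "for", "what", "that", "was", "no", "to"}

def tokenize(text: str) -> Counter:
    """Crude stand-in for a learned embedding: stopword-filtered word counts."""
    return Counter(w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP)

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the ids of the k passages most similar to the query."""
    q = tokenize(query)
    ranked = sorted(CORPUS, key=lambda cid: cosine(q, tokenize(CORPUS[cid])), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    """Instruct the model to answer only from quoted, citable passages,
    which is the mechanism RAG uses to reduce hallucination risk."""
    passages = "\n".join(f"[{cid}] {CORPUS[cid]}" for cid in retrieve(query))
    return (
        "Answer using ONLY the passages below and cite the bracketed ids.\n"
        f"{passages}\n\nQuestion: {query}"
    )
```

The key design point is that every generated claim can be traced back to a retrieved, verifiable source text rather than to the model's parametric memory.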
Product Development and User Experience
- Co-Counsel offers a chat interface and a structured results tab to manage and review discrete legal tasks.
- Skills include legal research, deposition prep, timeline creation, contract review, and policy compliance checks.
- Both litigators and transactional attorneys find value due to broad skill coverage.
Reliability, Evaluation, and Quality Control
- Casetext/Co-Counsel emphasizes rigorous internal testing frameworks, manual oversight, and user feedback loops.
- Combined manual and automated testing ensures legal outputs meet strict standards; new features undergo thorough validation before release.
- Partnerships with Thomson Reuters integrated mature editorial practices and quality assurance.
Technical Approaches and Infrastructure
- Domain-specific chunking and in-house embeddings systems were preferred for case law retrieval.
- Homegrown technical solutions are continuously evaluated against evolving open-source and commercial offerings.
- Security, privacy, and compliance (SOC 2, data deletion on demand) are central, especially for law firm clients.
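Domain-specific chunking, as mentioned above, means splitting case law along its natural structure instead of fixed-size character windows. The sketch below is a minimal illustration under that assumption (paragraph-boundary packing with a word budget); the actual in-house system is not described in the interview, and the word limit here is arbitrary.

```python
import re

def chunk_opinion(text: str, max_words: int = 120) -> list[str]:
    """Split a court opinion into retrieval chunks along paragraph
    boundaries, packing adjacent paragraphs until the word budget is
    reached, so a chunk never cuts a sentence or citation in half."""
    paragraphs = [p.strip() for p in re.split(r"\n\s*\n", text) if p.strip()]
    chunks: list[str] = []
    current: list[str] = []
    count = 0
    for para in paragraphs:
        words = len(para.split())
        # Flush the current chunk if adding this paragraph would exceed the budget.
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Keeping chunk boundaries aligned with the document's own structure preserves the context an embedding needs to represent a holding or citation accurately.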
Pricing and Value Proposition
- Co-Counsel is priced at ~$200/month per attorney, justified by the high-value, time-saving nature of legal tasks automated.
- The market has generally accepted the price, especially when compared to the cost of hiring additional legal staff.
- Pricing is intended to reflect product quality and reliability, with ongoing adjustments as models become cheaper.
Regulatory and Ethical Considerations
- Product is not marketed as a replacement for lawyers to the general public, prioritizing professional use to avoid unauthorized practice concerns.
- Existing legal rules (duties of competence, candor) mostly suffice for AI use, though regulatory discussions are ongoing.
- AI will likely be central to expanding access to legal services, but expert oversight remains crucial.
Future Directions in Legal AI
- LLMs still lack broad contextual and strategic understanding; human advocacy and original legal writing remain important.
- Potential for AI arbitration and AI-powered dispute resolution is being explored.
- Legal profession and judiciary (e.g., Chief Justice Roberts) are increasingly positive about AI’s role in improving justice delivery.
Decisions
- Pivot to LLMs: After GPT-4 demo, company shifted focus to leverage new AI capabilities.
- Restrict access to professionals: Decision to market Co-Counsel only to licensed attorneys, not the general public.
Action Items
- TBD – Product Team: Continue evaluating new embedding and RAG options for performance improvements.
- TBD – Quality Assurance: Ongoing enhancement of internal evaluation and feedback systems.
Recommendations / Advice
- Anchor AI systems in verifiable legal sources to avoid hallucinations.
- Combine automation with human review for mission-critical legal tasks.
- Legal tech builders should prioritize trust, reliability, and responsible deployment.
Questions / Follow-Ups
- Ongoing: How should product liability and right to train issues be addressed in AI law?
- Ongoing: What is the optimal balance between automated and manual evaluations for legal AI tools?
- Ongoing: Will future versions of LLMs be able to fully replace human legal reasoning and judgment?