What “Private AI” Means (in Plain English)
Private AI means the system runs on infrastructure inside your firm. No prompts are retained by a third party. Your firm controls the appliance, the logs, and the retention policy. There is no vendor API sitting between your attorneys and their documents.
The simplest test: if the internet cable is unplugged, it still works. That is the line between “private” and “hosted with privacy settings.”
- Hosted AI — your data travels to a vendor’s servers; confidentiality depends on their terms of service and security posture.
- Deployed AI — the system runs in your building; data stays inside your network; you control access, logging, and retention end-to-end.
Why Legal Teams Are Moving Now
Three signals show that legal AI has crossed from experimentation to adoption:
- CoCounsel reached 1,000,000 users — Thomson Reuters reported that one million professionals now use its AI assistant, a clear signal that legal teams are deploying AI in production, not just running pilots.
- ~240 hours/year per professional — the Thomson Reuters Future of Professionals 2025 report benchmarks AI-driven productivity gains at roughly 240 hours per year, worth about $19,000, per legal professional.
- ABA Formal Opinion 512 — the American Bar Association has issued formal ethics guidance on lawyers’ use of generative AI, covering confidentiality, competence, and supervision obligations.
The question is no longer whether to adopt AI. It is how to adopt it without compromising confidentiality or privilege.
Why RAG Is the Default Architecture for Legal AI
Retrieval-Augmented Generation (RAG) is the architecture behind most serious legal AI tools. The system retrieves relevant passages from your documents first, then generates an answer grounded in those passages. The attorney sees the answer and the sources, and can verify every claim against the original text.
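To make the flow concrete, here is a minimal sketch of the retrieve-then-generate loop in Python. Every name in it (embed, index.search, llm_generate) is an illustrative placeholder, not any particular product's API:

```python
# Minimal RAG sketch. All names here (embed, index.search, llm_generate)
# are hypothetical placeholders, not any vendor's actual API.

def answer_question(question, index, embed, llm_generate, top_k=5):
    # 1. Retrieve: find the passages most similar to the question.
    query_vector = embed(question)
    passages = index.search(query_vector, top_k=top_k)  # [(doc_id, page, text), ...]

    # 2. Ground: the prompt contains only the retrieved text, each
    #    passage tagged with its document ID and page number.
    context = "\n\n".join(
        f"[{doc_id}, p. {page}] {text}" for doc_id, page, text in passages
    )
    prompt = (
        "Answer using ONLY the passages below. Cite the bracketed "
        "source tag for every factual claim.\n\n"
        f"{context}\n\nQuestion: {question}"
    )

    # 3. Generate: the model writes an answer grounded in those passages.
    answer = llm_generate(prompt)

    # Return both, so the attorney can check every claim against a source.
    return answer, passages
```

The design point is step 2: the model sees only retrieved text, so each claim in the answer maps back to a specific document and page.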
Grounding matters because large language models hallucinate, even when fine-tuned for legal work. Stanford HAI research found that legal AI tools produce incorrect or unsupported statements in at least one of every six benchmark queries. Hallucination is not a bug that gets patched; it is a property of how generative models work.
Any responsible legal AI system must assume hallucinations will occur and design for verification. That means: page-level citations on every answer, clickable source links, and an audit trail that logs what was retrieved and what was generated.
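As a rough illustration of what such an audit trail might capture, here is a sketch assuming a simple JSON-lines log file; the field names are our own choices, not a standard:

```python
import datetime
import json

def log_query(log_path, user, query, retrieved, answer):
    """Append one audit record per query: who asked what, which passages
    were retrieved, and what the model generated."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "query": query,
        # (doc_id, page) pairs, so spot checks can locate the exact source.
        "retrieved": [{"doc_id": d, "page": p} for d, p, _ in retrieved],
        "answer": answer,
    }
    # Append mode here; true append-only enforcement belongs at the
    # storage layer (e.g., write-once media or immutable object storage).
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```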
RAG does not eliminate hallucinations. It makes them detectable. An attorney can check whether the cited passage actually supports the generated answer — the same way they would verify a junior associate’s memo.
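That verification step can be operationalized: sample logged answers at random and route them to a reviewer. A sketch, reusing the hypothetical JSON-lines log from the previous example:

```python
import json
import random

def sample_for_review(log_path, n=25, seed=0):
    """Pull a random sample of logged answers for manual spot checks:
    does each cited passage actually support the generated claim?"""
    with open(log_path, encoding="utf-8") as f:
        records = [json.loads(line) for line in f]
    cited = [r for r in records if r["retrieved"]]  # answers that cite sources
    random.Random(seed).shuffle(cited)
    return cited[:n]
```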
The Pilot Checklist: What Your IT + GC Should Measure
If you are evaluating a private AI system, these six items give your team concrete metrics — not vendor promises.
- Time saved by workflow. Measure hours saved on specific tasks: chronology generation, clause comparison tables, precedent search. Compare against your current manual baseline.
- Citation coverage rate. What percentage of generated answers include at least one source citation? Target: 100% of factual claims.
- Citation correctness (spot checks). Pull 20–30 cited answers at random. Verify that the cited passage actually supports the claim. This catches hallucinations that look well-sourced.
- Matter isolation tests. Confirm that users assigned to Matter A cannot retrieve documents from Matter B. Test with cross-matter queries; a test sketch follows this list.
- Audit export review. Export the full query log. Confirm it captures: user, timestamp, query text, retrieved sources, and generated response.
- Zero outbound egress verification. Have IT check firewall logs or run netstat on the appliance during active use. Confirm no outbound connections from core document processing services.
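Here is the matter isolation sketch referenced above, written as a pytest test. The search entry point (assumed here to be provided as a pytest fixture) and the matter_id field are hypothetical stand-ins for whatever query API and metadata the system under evaluation exposes:

```python
# Pytest sketch. `search` and `matter_id` are hypothetical stand-ins
# for the query API and metadata of the system under evaluation.
import pytest

# Phrases known to appear only in Matter B's documents.
MATTER_B_PROBES = [
    "Acme indemnification carve-out",
    "Acme settlement term sheet",
]

@pytest.mark.parametrize("query", MATTER_B_PROBES)
def test_matter_a_user_cannot_reach_matter_b(query, search):
    # A user assigned only to Matter A runs queries that would match
    # Matter B documents if isolation were broken.
    results = search(user="attorney_matter_a", query=query)
    leaked = [r for r in results if r.matter_id == "matter_b"]
    assert leaked == [], f"Matter B documents leaked: {leaked}"
```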
Where Rendex Fits
Rendex is a private AI system deployed on a Puget C132-4U, a 4U rackmount appliance installed in your server room by a field engineer. Every component runs locally: the language model, the vector database, the search index, and the web interface. There is no cloud component and no outbound network connection.
Every answer includes page-level citations. Every query is logged to an append-only audit trail. Matter-level access controls enforce ethical walls at the query layer. The system works with the internet cable unplugged.
If your firm is evaluating AI but your compliance team has blocked cloud tools, this is the architecture built for security and compliance review.
This article is provided for informational purposes only and does not constitute legal advice. Consult qualified counsel for guidance specific to your firm’s circumstances.
Private AI built for security and compliance review
Rendex runs on your hardware. Every answer is cited. No outbound egress required after setup. See it with your documents.