The Paper
In a paper published this month in AIRe Report (Lexxion), attorneys Jonathan S. Marashlian, Susan Duarte, and Fusheng Zhou of Marashlian & Donahue examine how generative AI is transforming legal service delivery, specifically in corporate engagements between outside counsel and in-house legal departments.
Their central argument: generative AI creates significant efficiency gains, but it also introduces real risks to professional judgment, confidentiality, and the core trust that defines the attorney-client relationship. The paper proposes a model the authors call "collaborative vigilance" — a framework for responsibly integrating AI while preserving the independence and quality that clients expect from their lawyers.
The authors define collaborative vigilance as a model that enables clients and attorneys to integrate AI into legal workflows while preserving "the independence, quality, and trust at the core of the attorney-client relationship." Getting there, they argue, requires law firms and legal departments alike to establish clear, ethical AI governance policies that safeguard professional responsibilities.
Why This Matters Now
This paper lands at a specific moment. The Heppner ruling in February demonstrated what happens when AI adoption outpaces governance: privilege is waived, work product is exposed, and the firm's most fundamental obligation to its client — confidentiality — fails.
The Marashlian & Donahue framework takes the next step. Where Heppner showed the consequences of ungoverned AI use, "collaborative vigilance" describes what the governance should look like.
The paper identifies several pressure points that matter for any firm considering AI adoption:
- Overreliance on AI output. Attorneys who trust AI-generated analysis without independent verification risk breaching their duty of competence.
- Confidentiality exposure. Every prompt sent to a cloud AI platform transmits client information to a third party. In corporate engagements, this may also violate contractual obligations and NDAs.
- Erosion of professional judgment. If AI becomes a substitute for legal reasoning rather than a tool that supports it, the quality of counsel degrades — and the client relationship suffers.
- Lack of institutional governance. Without clear policies, AI use becomes ad hoc. Individual attorneys make tool choices that expose the entire firm.
What "Collaborative Vigilance" Requires from Technology
The paper's recommendations are governance-level. But governance doesn't exist in a vacuum — it has to be enforceable by the technology the firm deploys. If your AI governance policy says "client data must remain confidential," but your AI tool sends every query to an external API, the policy is theater.
Here's what the "collaborative vigilance" framework actually demands from the technology layer:
- No third-party data exposure. If confidentiality is non-negotiable, prompts and documents cannot be visible to any third party — including the AI provider or the cloud operator. This rules out any shared-tenancy AI service where plaintext data touches someone else's infrastructure.
- Counsel-directed workflows. AI should operate under lawyer supervision, not as a standalone tool attorneys use without oversight. This means authentication, role-based access, and audit trails that tie every query to a specific user and matter.
- Source attribution. To avoid overreliance, attorneys need to verify AI output against source documents. The system must cite exactly where an answer came from — document, section, and page.
- Matter-level isolation. In corporate engagements, ethical walls between matters are required. The technology must enforce access controls at the matter level, not just the user level (a sketch of this enforcement pattern follows this list).
- Institutional control. The firm — not the vendor — must control the AI system. That means no vendor terms of service that grant data access, no model training on firm data, and no dependence on external availability.
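To make the workflow, attribution, and isolation requirements concrete, here is a minimal sketch of what matter-level enforcement could look like at the application layer. Every name in it (Matter, Citation, answer_query, and so on) is hypothetical, and the retrieval and generation steps are stubbed; this illustrates the pattern, not Rendex's implementation or any vendor's actual API.

```python
from dataclasses import dataclass

# Hypothetical types illustrating matter-level access control and
# source attribution. All names are invented for illustration.

@dataclass(frozen=True)
class Citation:
    document: str
    section: str
    page: int

@dataclass
class Answer:
    text: str
    citations: list[Citation]  # every answer carries its sources

@dataclass
class Matter:
    matter_id: str
    authorized_users: set[str]  # ethical wall: explicit allowlist per matter

class AccessDenied(Exception):
    pass

def authorize(user_id: str, matter: Matter) -> None:
    """Enforce the ethical wall before any query touches matter documents."""
    if user_id not in matter.authorized_users:
        raise AccessDenied(f"{user_id} is not on matter {matter.matter_id}")

def answer_query(user_id: str, matter: Matter, question: str) -> Answer:
    # 1. The matter-level check runs before retrieval, not after.
    authorize(user_id, matter)
    # 2. Retrieval and generation are stubbed; the point is that the
    #    Answer type forces a document/section/page citation, so the
    #    attorney can independently verify the output.
    return Answer(
        text="<model output for: " + question + ">",
        citations=[Citation(document="engagement_letter.pdf",
                            section="4.2", page=7)],
    )

if __name__ == "__main__":
    m = Matter("2024-017", authorized_users={"jsmith"})
    print(answer_query("jsmith", m, "What are the indemnity terms?").citations)
    try:
        answer_query("unassigned_user", m, "Same question")
    except AccessDenied as err:
        print("blocked:", err)
```

The ordering is the design choice that matters: the authorization check runs before retrieval, so documents from a walled-off matter never reach the model's context at all.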
The Gap Between Policy and Implementation
Most firms drafting AI governance policies today face a structural problem: the tools available to them can't enforce the policies they're writing.
Shared-tenancy legal AI platforms, even "enterprise" ones, route your data through someone else's inference service. They may offer encryption in transit and contractual confidentiality provisions, but the provider can still read plaintext prompts and documents on its own servers, under its own terms. After Heppner, that distinction matters in court.
The authors' call for "clear, ethical AI governance policies" is correct. But a policy without technical enforcement is a liability memo waiting to happen. The question for managing partners is: does your AI infrastructure actually support the governance framework your GC is drafting?
Where Rendex Fits
Rendex was built for exactly this scenario. Every component runs inside a dedicated Azure confidential VM sealed by AMD SEV-SNP memory encryption and an NVIDIA H100 TEE: the language model, the vector database, the document index, and the query engine. Not even the cloud operator can read plaintext inside the enclave. No outbound egress required after setup.
When an attorney queries Rendex, the question goes to a model running inside the firm's sealed enclave. The answer cites the exact source document. The query is logged to an append-only audit trail. Matter-level permissions enforce ethical walls at the infrastructure layer, not just the policy layer.
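What does an append-only audit trail mean in practice? One common construction is a hash chain, where each log record embeds the hash of the record before it, so any retroactive edit or deletion is detectable. The sketch below illustrates that generic pattern with invented names; whether Rendex uses this exact scheme is an assumption made for illustration, not a documented fact.

```python
import hashlib
import json
import time

# Illustrative hash-chained audit log: each record embeds the hash of
# the previous record, so silently editing or deleting an entry breaks
# the chain. This is one common way to make a log tamper-evident; it is
# shown here as a generic pattern, not any product's actual mechanism.

class AuditLog:
    GENESIS = "0" * 64

    def __init__(self) -> None:
        self._entries: list[dict] = []

    def append(self, user_id: str, matter_id: str, query: str) -> dict:
        prev = self._entries[-1]["hash"] if self._entries else self.GENESIS
        record = {
            "ts": time.time(),
            "user": user_id,
            "matter": matter_id,
            "query": query,
            "prev": prev,
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["hash"] = hashlib.sha256(payload).hexdigest()
        self._entries.append(record)
        return record

    def verify(self) -> bool:
        """Recompute the chain; False means the log was altered."""
        prev = self.GENESIS
        for rec in self._entries:
            body = {k: v for k, v in rec.items() if k != "hash"}
            if body["prev"] != prev:
                return False
            payload = json.dumps(body, sort_keys=True).encode()
            if hashlib.sha256(payload).hexdigest() != rec["hash"]:
                return False
            prev = rec["hash"]
        return True

if __name__ == "__main__":
    log = AuditLog()
    log.append("jsmith", "2024-017", "Summarize the indemnity clause.")
    log.append("jsmith", "2024-017", "List termination triggers.")
    print(log.verify())              # True
    log._entries[0]["query"] = "x"   # simulate after-the-fact tampering
    print(log.verify())              # False
```

A hash chain makes tampering evident, not impossible; in production it would typically be paired with write-once storage or an externally anchored chain head.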
This is what "collaborative vigilance" looks like when it's implemented as technology rather than written as policy. The governance framework Marashlian & Donahue describe is sound. The question is whether your firm's AI architecture can support it.
This article is provided for informational purposes only and does not constitute legal advice. Consult qualified counsel for guidance specific to your firm's circumstances.
Governance your GC can actually enforce
Rendex runs inside a hardware-sealed Azure confidential enclave. Every answer is cited. Every query is logged. No outbound egress required after setup.