Colorado Steps Up on AI:
New Advisory Committee Sparks Dialogue at CJI’s CLE
By Thorvald Nelson,
CJI Board Chair and Partner at Holland & Hart LLP
and Joseph Kennedy,
CJI Board Member and Founder of RainMaker Strategy Development (RSD)
Colorado is making sure the state’s legal system keeps pace with the rapid changes spurred by AI while putting best practices and guardrails in place. In September, Chief Justice Monica Márquez announced the creation of the Legal Technology Advisory Committee, a group tasked with helping legal professionals, judges, and litigants navigate the fast-moving world of AI while making sure the public is protected from AI’s worst faults.
Leading the charge is Judge Lino Lipinsky of the Colorado Court of Appeals, a board member emeritus of the Colorado Judicial Institute (CJI). As chair, he will guide the committee in developing a practical, ethics-focused AI guidance document for legal professionals, judges, and members of the public. Because AI is not a fleeting trend, said Judge Lipinsky, “all legal professionals and judicial officers must understand how to harness AI’s benefits and avoid its risks, while protecting our justice system’s fairness and integrity.”
In the announcement, Chief Justice Márquez emphasized the urgency of this work: “As AI evolves, our legal community must have the right tools and knowledge to use its capabilities responsibly. The creation of the Legal Technology Advisory Committee will help legal professionals, the judiciary, and litigants keep pace with the rapid AI technology changes that affect our legal system and how people interact with the courts.” The membership of the committee will include judicial officers, clerks of court, legal practitioners, and subject matter experts.
The committee’s charge reflects growing concerns about AI hallucinations, deepfakes, bias, and the risk of inadvertently disclosing privileged information and attorney work product. At the same time, it recognizes AI’s potential to improve efficiency, reduce costs, and expand access to justice.
CJI addressed these growing challenges and opportunities at its recent CLE, AI Today – What the Legal Community Needs to Know Right Now. The panel discussion featured Judge Lipinsky alongside United States Magistrate Judge Maritza Dominguez Braswell and Katina Banks, a Knowledge Attorney with the event’s host, Gibson Dunn. The panel delivered a clear message: AI is here, and legal professionals must engage now to shape its responsible use.
All three speakers emphasized that AI is already revolutionizing legal practice and court operations, but successful integration depends on strong governance, ethics, and human oversight. Start small, they said, set guardrails, train your teams, and align with Colorado’s Rules of Professional Conduct. The speakers also noted the importance of keeping abreast of new AI technologies, particularly AI tools designed for the legal profession.
The panel agreed that ignoring AI is no longer a viable professional strategy. AI adoption is accelerating, with 80 percent of professionals expecting a transformative impact, yet only 30 percent of law firms actively use AI. Courts and firms are moving from curiosity to implementation: courts are drafting AI policies, and firms are embedding AI into day-to-day workflows. The risks remain real, however, with hallucinations, bias, confidentiality breaches, and deepfakes threatening trust and integrity.
Practical Uses in Firms Today
Reporting that firms with a clear AI strategy and leadership buy-in are 2.5 times more likely to see strong returns, Banks outlined how firms are already leveraging AI:
- Legal research: AI-assisted tools (e.g., Westlaw’s Deep Research) speed up analysis and help identify key issues.
- Drafting & summarizing: Tools like Harvey and CoCounsel generate first drafts, summarize pleadings, and create deposition transcripts and redlines.
- Contract review & diligence: AI can automate checklist compliance, extract material changes, and compare templates.
- Workflow automation: Automating chronologies, timelines, and checklists for litigation and transactions can speed up workflow.
Governance & Ethics
Judge Lipinsky connected AI use to specific sections of the Colorado Rules of Professional Conduct:
- Competence (Rule 1.1): Legal professionals must understand the benefits and risks of the technology available to them.
- Candor (Rule 3.3): Submitting hallucinated citations or AI-generated deepfake exhibits violates the duty of candor.
- Client communication (Rule 1.4): Consider disclosing AI use, especially if tools are open or self-learning.
- Fees (Rule 1.5): Reasonableness of the time spent on legal tasks will be judged against AI-enabled efficiencies.
- Confidentiality (Rule 1.6): Avoid uploading privileged material or work product to open systems.
- Supervision (Rule 5.1): Senior legal professionals must train and monitor their staff.
Judge Lipinsky shared cautionary AI tales, such as Mata v. Avianca, where a lawyer was sanctioned for fake citations from ChatGPT, and Colorado’s own Al-Hamim v. Star Hearthstone, LLC, in which a division of the Colorado Court of Appeals warned that the inclusion of AI hallucinations in future court filings could trigger sanctions. Beyond hallucinations, Judge Lipinsky cautioned, other AI-related threats loom, including deepfakes and the “liar’s dividend,” where individuals can challenge authentic evidence by claiming it is AI-generated.
AI Across an Industry
Magistrate Judge Dominguez Braswell wrapped up the evening with a look at the big picture:
- AI adoption is growing exponentially, with more than 900 million users worldwide.
- The courts are experimenting with docket management, public comment analysis, and even arbitration.
- Risks span societal (job displacement, trust), organizational (change management), and personal (over-reliance, cognitive decline).
However, she cautioned her audience not to let the risks cause paralysis. They have the opportunity, she reminded them, to engage with the debate now to shape AI’s responsible use.
The initial guidance document from Judge Lipinsky’s Legal Technology Advisory Committee is due by October 2026. The committee will begin by considering the issues that the document should address, such as confidentiality, bias, and the growing risks of AI-generated hallucinations and deepfakes.
SIDE BAR: The AI Today Panel’s Action Plan
- Start small: Pilot tools before organization-wide rollout.
- Adopt an AI acceptable-use policy: Define what’s allowed, what’s prohibited.
- Mandatory training: Teach prompt design, risk awareness, and tool limitations.
- Documented human review: Never rely solely on AI output.
- Update engagement letters: Consider disclosing AI use and seeking client consent.
- Align with Colorado’s ethics rules: Pay special attention to confidentiality and fees.
