From “Do We Have a Policy?” to “Are We Governing It?”
A 60-minute, board-ready framework for governing AI acceptable use
The moment in the boardroom
The question usually lands near the end of the agenda.
Someone asks: “Do we have a policy for staff using AI tools like ChatGPT?”
Management says yes, and a 7-page PDF appears in the board portal.
Most trustees skim the first page, file it under “IT handled it,” and move on.
The problem: a policy PDF is not the same as governance.
Boards are being asked to “approve AI acceptable-use policies” without a way to see where AI is in-bounds, who owns the risk, or how they’ll know if the policy is working.
What boards need isn’t more language.
They need a simple, repeatable framework they can use to govern AI acceptable use in a single 60-minute meeting.
The problem: Policy ≠ governance
Across universities, nonprofits, and healthcare systems, three gaps show up again and again:
Policies are written as IT artifacts, not governance tools.
They describe tools and technical controls, but they don’t make it obvious what staff can do, what requires approval, and what is prohibited.
Boards approve once, then never see evidence.
After the initial vote, there is no cadence, no owner in front of the board, and no recurring dashboard. AI use drifts; oversight doesn’t.
The “hard middle” is missing.
Most policies skip Level 2 (BOUNDARIES) on the maturity ladder—clear boundaries by use case and data type—and jump straight to Level 4 aspirations (“strategic, innovative AI”). Without that middle work, governance becomes impossible.
The result is a false sense of security.
You feel like you have governance because a policy exists, but you don’t actually have rules, roles, or receipts.
A simple board-ready lens: Rules. Roles. Receipts.
Boards remember simple nouns. Think of AI acceptable-use governance through three lenses:
Rules (Boundaries) – What’s allowed, under what conditions, and what’s out-of-bounds?
Roles (Ownership & cadence) – Who owns this and how does governance run between board meetings?
Receipts (Evidence & learning) – How will we know whether the policy is being followed and whether it needs to change?
Apply these lenses to any acceptable-use policy and you’ll immediately see whether you have governance—or just a document.
Lens 1: Rules (Boundaries)
Question: What’s allowed, under what conditions, and what’s out-of-bounds?
A board-ready policy makes boundaries obvious:
Allowed: everyday uses that are in-bounds (e.g., drafting non-confidential emails, brainstorming ideas, summarizing public information).
Conditional: uses that require approvals or controls (e.g., working with internal but non-sensitive data, using specific vendor tools).
Prohibited: uses that are simply not allowed (e.g., putting protected student records, protected health information, or donor financial data into consumer tools).
At Level 2 (BOUNDARIES) on the AI Use Policy Maturity Ladder, every use case and data type your staff cares about can be mapped to one of those three buckets.
If your current policy doesn’t do that, it’s hard for staff to comply and impossible for the board to tell what risk it has actually approved.
Board prompts:
“Show us the Allowed / Conditional / Prohibited table by data type and use case.”
“Which parts of the maturity ladder are we explicitly targeting in the next 12 months?”
Most mission-driven institutions should target Level 2 (Boundaries) within 30–60 days: a clear, written map of what’s in-bounds and what isn’t, with examples staff actually recognize.
Lens 2: Roles (Ownership & cadence)
Question: Who owns this and how does it run between board meetings?
AI acceptable-use governance fails when it becomes “everyone’s job” or “IT’s job.”
Boards should expect to see:
A named policy owner at the executive level (with time budgeted, not “in spare hours”).
A documented escalation path for questions and exceptions.
A clear governance cadence:
Monthly: owner reviews questions, exceptions, and any incidents.
Quarterly: leadership team reviews and adjusts; prepares one-page board dashboard.
Annually: board or committee receives a deeper briefing and approves policy updates.
This is where the policy becomes part of your operating system, not just an attachment.
Board prompts:
“Who is the single executive owner of AI acceptable use, and what percentage of their role is this?”
“What is the monthly and quarterly cadence for reviewing use, exceptions, and incidents?”
“When will this come back to the board, and in what format?”
Most institutions can get to this level of ownership and cadence in the same 30–60 day window as the Level 2 boundaries work—if it’s made a clear deliverable.
Lens 3: Receipts (Evidence & learning)
Question: How will we know whether the policy is being followed and whether it needs to change?
Boards don’t need exhaustive logs or surveillance.
They need a short, decision-oriented view they can see every quarter.
A practical starting dashboard fits on one page and includes:
Exceptions: Number of exception requests this quarter; how many were approved; a short list of why.
Questions: Number of policy questions received; whether they’re increasing or decreasing; most common themes.
Training: Percentage of staff who have completed basic AI acceptable-use training.
Incidents: Any policy violations or data-related incidents (yes/no, with simple categories—not a full report).
Tools: Changes to the “approved tools” list (what’s new, what was retired, and why).
That’s enough for a board to exercise real oversight: Are staff using the policy? Is leadership learning and adapting? Is risk moving up or down?
Board prompts:
“Show us a one-page quarterly dashboard so we can see exceptions, questions, training, incidents, and tool changes at a glance.”
“If that one page changed significantly next quarter, would we know what decisions to revisit?”
A 60-minute agenda your board can actually use
If you chair a governance, risk, or academic affairs committee, here’s a simple agenda you can drop straight into your next meeting:
10 minutes – What’s happening
Where AI is already being used across the institution; current level on the maturity ladder.
15 minutes – Rules (Boundaries)
Walk through the Allowed / Conditional / Prohibited view. Confirm target: Level 2 within 30–60 days.
15 minutes – Roles (Ownership & cadence)
Name the executive owner and confirm the monthly and quarterly governance rhythm.
15 minutes – Receipts (Evidence & learning)
Review or approve the design of a one-page quarterly dashboard.
5 minutes – Decision
Approve: Level 2 acceptable-use policy, named owner, and quarterly reporting cadence.
That’s it. One meeting, one decision, and a clear path to responsible AI use.
30/60/90: If I were your COO
If I were sitting in the COO seat at your institution, here’s how I’d run this:
Next 30 days
Mark up your existing AI or technology-use policy against Rules, Roles, Receipts and highlight gaps in red.
Draft a simple Allowed / Conditional / Prohibited table by data type and use case, using language your staff actually uses.
Identify and appoint a single executive owner for AI acceptable use (with an explicit time allocation).
Next 60 days
Bring a Level 2 policy draft and the three-lens summary to the cabinet and then to the board/committee for review and approval.
Define the monthly executive cadence and quarterly dashboard format; no tooling changes are required to start—this can be manual at first.
Prepare the 60-minute committee agenda and timeline.
Next 90 days
Run one full monthly governance cycle and bring the first one-page dashboard to the board or committee.
Capture what you learned (types of questions, exception patterns, training gaps) and adjust the policy accordingly.
Confirm your 6–12 month target: move from Level 2 (Boundaries) to Level 3 (Enablement) with approved tools, training, and workflow integration.
The move from Level 0 to Level 2 (no policy → clear boundaries + ownership) is a 90-day project for most mission-driven institutions.
The move from Level 2 to Level 3 (Enablement) is the next 6–12 months of work.
What to say at your next board or committee meeting
You can adapt this language directly:
Our staff are already using AI tools in their daily work. Rather than treat this as a technology project, we’re treating it as a governance question.
Over the next 60 days, we’re moving to a Level 2 acceptable-use policy that makes our boundaries explicit—what’s allowed, what’s conditional, and what’s prohibited—with a named executive owner and a quarterly dashboard.
We’ll use a simple framework: Rules, Roles, and Receipts. Tonight, we’re asking you to endorse that framework and the path to Level 2, and we’ll return next quarter with evidence of how it’s working.
That’s a board-ready message: specific, time-bound, and governable.
📥 Supporting artifacts
This post comes with two short tools you can download and use immediately.
Board AI Use Policy Brief (One-Pager)
A single page that explains where you are, where you’re going, and how you’ll govern AI acceptable use. Use it in board packets and committee agendas, or forward it internally.
Operator Implementation Checklist (One-Pager)
A practical checklist for your COO / CIO / policy owner to stand up Rules, Roles, and Receipts in 90 days. Organized by the three lenses with checkboxes and concrete steps.
Navigate the Series
✓ Post 1: The Acceptable-Use Crisis (foundation: the maturity ladder and policy template)
✓ Post 2: From Policy to Governance (you are here: Rules, Roles, Receipts framework)
→ Post 3: AI Oversight Beyond Acceptable Use (coming Week 5: portfolio, vendors, risk categories, strategy)
→ Post 4: Data Boundaries (coming Week 7)
→ Post 5: Building AI Capacity Without a CAIO (coming Week 9)
→ Post 6: Mission Impact Metrics (coming Week 11)
If you’re navigating this now
If your board just asked “Do we have a policy?” and you’re not sure how to answer in governance terms, you’re in the right place.
The Rules / Roles / Receipts framework gives you a way to structure the conversation in your cabinet, then bring a clean 60-minute agenda to your board.
Most institutions can move from “we have a PDF” to “we have governance” in 90 days.
Starting Spring 2026, I’m available for:
Short board briefings (60–90 minutes): walk your committee through Rules, Roles, and Receipts with your specific policy.
Governance sprints (4–8 weeks): audit your current policy, draft a Level 2 version, set up your monthly/quarterly cadence, and train your executive owner.
Contact: kirk@kizata.com
Or reply to this email and I’ll send a one-page sprint overview.
Forward this to your board
Staff AI adoption is outpacing governance. This post gives you a three-lens framework (Rules, Roles, Receipts) and a board-ready checklist to move from “do we have a policy?” to “are we governing it?” in 90 days.
This is Post 2 in the “AI Under Governance” series.
Share this post.
About this series: “AI Under Governance for Mission-Driven Institutions” is a 6-post series on the governance and operating challenges leaders face as they scale AI use responsibly. Each post includes a framework, a downloadable artifact, and a 90-day action plan.
About the author: Kirk Tramble helps mission-driven institutions build AI governance systems under real constraints—no dedicated AI budgets, no Chief AI Officers, no corporate resources.
Practical AI governance for mission-driven institutions
© 2026 Kirk Tramble | operatornotes.substack.com

