MINIMUM VIABLE ACCEPTABLE-USE POLICY FOR AI TOOLS
A One-Page Template for Mission-Driven Institutions
Organization: _________________________________
Effective Date: _________________________________
Last Revised: _________________________________
Policy Owner: _________________________________ (Cabinet member)
Purpose
This policy establishes guidelines for responsible use of artificial intelligence (AI) tools by [Institution] staff, faculty, students, and authorized users. The policy balances innovation and productivity with institutional risk management, data stewardship, and mission integrity.
Scope
This policy applies to all institutional use of AI tools, including but not limited to:
· General-purpose AI systems (ChatGPT, Claude, Gemini, Copilot, etc.)
· Specialized AI applications (design, coding, analysis tools)
· AI embedded in institutional systems (learning management, student information, financial systems)
Exclusion: The procurement, configuration, and administration of AI systems formally approved and deployed by IT/leadership are outside this policy's scope; individual use of those systems remains subject to the data-handling rules below.
Definitions
Confidential Data: Student records, patient/client information, donor records, financial information, personnel data, research data, intellectual property.
Institutional Data: Data created or maintained by [Institution], including student/client demographics, academic records, institutional communications.
Approved AI Tools: Tools pre-screened by IT/Compliance for security and privacy standards. [List or link to current list; update quarterly.]
Consumer AI Tools: Publicly available AI tools not reviewed by IT (ChatGPT, Claude free tier, Gemini, etc.).
Acceptable Use
Staff, faculty, and students may use AI tools for:
· Brainstorming and ideation
· Drafting initial emails, documents, or outlines (requiring human review before institutional use)
· Summarizing or extracting key points from documents
· Writing assistance and editing suggestions
· Coding assistance and debugging
· Data analysis and visualization (with non-confidential data)
· Administrative process optimization (scheduling, planning)
· Translation and language support
· Other uses that don’t violate prohibitions below
Important: AI-generated content must be reviewed, edited, and approved by a staff or faculty member before it represents [Institution] or is used for any institutional purpose.
Conditional Use (Requires Approval)
Staff, faculty, and students must obtain written approval before using AI for:
· Creating assessments, rubrics, or grading criteria (academic integrity review required)
· Analyzing student/client/patient data or outcomes
· Creating institutional communications (admissions letters, financial aid notifications, recruitment materials)
· Developing curricula or course materials that will be used or shared
· Conducting or publishing research
· Making recommendations that affect students/clients/employees (decision support only; a human decision-maker is required)
Approval Process: Contact [designated person/title] with:
· What AI tool you want to use
· What data you’ll input
· What output or decision you're aiming for
· How and where you'll use the result
Response time: 2 business days.
Prohibited Use
Staff, faculty, and students may NOT use AI tools to:
· Enter confidential institutional data (student records, patient data, donor info, personnel data, financial information) into consumer AI tools
· Generate code, content, or recommendations for high-stakes decisions without human review and approval
· Make admissions, financial aid, or personnel decisions based solely on AI output
· Generate academic assessments without faculty oversight and institutional approval
· Share institutional intellectual property, trade secrets, or proprietary information
· Circumvent institutional security or access controls
· Represent AI-generated work as human-created without disclosure
· Violate applicable privacy laws, accreditation standards, or professional ethics
Guidance by Role
Faculty
· You may use AI to brainstorm course design, generate discussion questions, and edit course materials (provided you review and approve all content).
· You must clearly communicate to students your policies on AI use in coursework before the course begins.
· You should monitor submitted work for patterns suggesting undisclosed AI use; if concerning, follow academic integrity procedures.
· You may use AI to assist grading (generating rubrics, identifying patterns), but final grades are your professional judgment.
Academic and Administrative Staff
· You may use AI for productivity (summarizing emails, drafting documents, brainstorming process improvements).
· You must not put student/client/patient records, donor information, or financial data into consumer AI without approval from [Compliance/Title].
· If you analyze data with AI, save and document the process (input, prompt, output) in case of audit.
Students
· You may use AI as a learning tool (to brainstorm ideas, understand concepts, revise drafts).
· You must follow your instructor’s policy on AI use for coursework. If unclear, ask.
· You must disclose if you used AI to generate text, code, or images that you submit as your work.
· Using AI without disclosure, where prohibited by your instructor, is academic dishonesty.
IT and Technical Staff
· You are responsible for security and compliance of approved institutional AI systems.
· Support faculty and staff in using approved tools responsibly.
· Escalate suspected policy violations (confidential data in consumer tools, etc.) to [designated contact].
Data Boundaries Quick Reference
Data Type                                                          | Consumer AI Tools (ChatGPT Free, Claude Web, etc.) | Approved Enterprise Tools | Institutional Systems
Public (published materials, general info)                         | ✓ Allowed    | ✓ Allowed    | ✓ Allowed
Internal (non-confidential institutional docs)                     | Ask first    | ✓ Allowed    | ✓ Allowed
Confidential (student records, health data, donor info, financial) | ✗ Prohibited | Case-by-case | ✓ Allowed
Default Rule: If you’re unsure whether data is confidential, assume it is. Ask before using.
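For institutions that want to embed these rules in an intake form or internal tooling, the matrix and default rule above can be sketched as a simple lookup. This is an illustrative sketch only: the category names, tool tiers, and the `check_use` function are hypothetical examples, not part of any institutional system, and any real implementation should be defined by IT/Compliance.

```python
# Illustrative sketch of the Data Boundaries Quick Reference as a lookup.
# All names here (categories, tiers, check_use) are hypothetical examples.

POLICY_MATRIX = {
    # (data_category, tool_tier): ruling
    ("public", "consumer"): "allowed",
    ("public", "enterprise"): "allowed",
    ("public", "institutional"): "allowed",
    ("internal", "consumer"): "ask_first",
    ("internal", "enterprise"): "allowed",
    ("internal", "institutional"): "allowed",
    ("confidential", "consumer"): "prohibited",
    ("confidential", "enterprise"): "case_by_case",
    ("confidential", "institutional"): "allowed",
}


def check_use(data_category: str, tool_tier: str) -> str:
    """Return the policy ruling for a data category / tool tier pair.

    Implements the policy's default rule: if the data category is
    unknown or unclassified, treat it as confidential.
    """
    if data_category not in {"public", "internal", "confidential"}:
        data_category = "confidential"  # assume confidential when unsure
    return POLICY_MATRIX[(data_category, tool_tier)]
```

Note how the default branch encodes the policy's "assume it is confidential" rule, so unclassified data can never slip into a consumer tool.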
Reporting and Support
Question about the policy?
Contact: [Name/Title] at [email] or [Slack channel]
Response time: 1 business day
Suspected violation?
Report to: [Compliance/Designated Person] at [email] or [Anonymous Report Form Link]
All reports will be reviewed confidentially.
Need training on responsible AI use?
Check [Learning Portal Link] for self-paced modules, or request a department workshop.
Consequences of Violation
First violation: Warning + required training.
Repeated violations: Further disciplinary action per [Institution] HR policy.
Serious violation (major data breach, misrepresentation): Suspension of AI tool access and escalation to [Title].
Review and Update
This policy will be reviewed quarterly by [designated committee] and updated as:
· Technology capabilities evolve
· Institutional risk landscape changes
· Feedback from staff, faculty, and legal/compliance indicates gaps
Next review date: [Date + 3 months]
Acknowledgment
All staff and faculty are expected to:
· Read this policy and understand their role
· Ask questions if anything is unclear
· Report suspected violations
· Model responsible AI use for students and colleagues
For Higher Ed: This policy acknowledgment may be embedded in faculty/staff onboarding or annual compliance training.
For Nonprofits: This policy acknowledgment should be signed by board members and key staff, then referenced in employee handbooks.
Questions for Cabinet Customization
Before finalizing, your cabinet should decide:
1. Approved Tool List: Which tools has IT vetted as meeting our security/privacy standards? [Create and link a list]
2. Confidential Data Definition: What specific data categories require special protection at our institution? (Expand the list above)
3. Escalation: Who is the designated “approve AI use request” person? [Name/Title]
4. Violation Authority: Who investigates and responds to violations? [Name/Title]
5. Training: Will AI literacy be required for all staff? For faculty? When? [Timeline]
6. Academic Integrity Specifics: Do your accreditors (SACSCOC, HLC, etc.) or athletic associations require AI use disclosure? [Check and add]
Implementation Checklist
· Cabinet approves policy (1 meeting, 30 min)
· Legal/Compliance reviews (30 min call)
· IT confirms approved tools list is current
· One-page summary created for staff (5 min)
· Communication plan drafted (email, all-hands, department heads)
· Support contact and reporting form set up
· Training developed or identified (online modules, live workshop, written guidance)
· Policy published on institutional intranet/portal
· Staff/faculty notified and given 2 weeks to read
· Q&A session offered (optional but recommended)
· Quarterly review date added to calendar
Sample Staff Communication (One-Pager)
Subject: New AI Tool Policy – Effective [Date]
[Institution] is committing to responsible AI use. Effective [Date], we have an Acceptable-Use Policy for AI tools. Here's what you need to know:
What’s Allowed: Using AI for brainstorming, drafting, summarizing, coding help—as long as you review and approve the output before it becomes institutional work.
What Requires Approval: Using AI with student/client/patient data; creating assessments; analyzing outcomes. Just ask [Contact].
What’s Not Allowed: Putting confidential data (student records, donor info, personnel data) into consumer AI tools like ChatGPT or Claude without approval.
Questions: Slack [Contact] or email [Email]. Response within 1 business day.
Training: Sign up for [Training Link] to learn more, or ask your department head.
Why This Matters: AI is powerful and useful. This policy keeps us and our data safe while letting you benefit from AI tools. Questions? Ask.
This template is designed for use by mission-driven institutions (universities, nonprofits, healthcare systems, public agencies). Customize the bracketed sections for your context. For assistance, contact [Your Company/Consultant Info].
Last updated: December 2025