Free Resource

AI Acceptable Use Policy Template

A complete, opinionated AI Acceptable Use Policy you can copy, customize, and deploy in your business. Drafted by Unió Digital from the actual policies we run for Arizona construction, mining, and healthcare clients. Free to use.

What Is an AI Acceptable Use Policy?

An AI Acceptable Use Policy (AI AUP) is a written document that defines which AI tools employees may use at work, what data may not be entered into AI prompts, how AI-generated output should be reviewed and labeled, and what the consequences are for policy violations. A good AI AUP is short enough to read in 5 minutes, specific enough to enforce, and reviewed quarterly because the AI landscape changes faster than annual policy cycles can track.

How to Use This Template

  1. Copy the policy below into your company's policy management system (or a Word doc).
  2. Fill in the bracketed [PLACEHOLDERS] with your specifics: company name, sanctioned-tool list, contact for questions.
  3. Pick the vertical addendum at the bottom that fits your industry (mining, construction, healthcare). Append it to the base policy.
  4. Get legal review. This template is an operational starting point, not legal advice. Your attorney should confirm the language fits your jurisdiction, contracts, and regulatory obligations.
  5. Distribute and train. Add the policy to your employee handbook. Roll out a 30-minute training session on what changed and why. Schedule a quarterly review.

Template starts below. Copy from here.

[COMPANY NAME] AI Acceptable Use Policy

Effective Date: [DATE] · Version: 1.0 · Owner: [POLICY OWNER ROLE] · Review Cadence: Quarterly

1. Purpose

This policy defines how employees, contractors, and other authorized users of [COMPANY NAME] systems may use artificial intelligence (AI) tools while performing work for the company. It exists to capture the productivity benefits of AI while protecting customer data, contractual obligations, regulatory compliance, and the company's competitive position.

2. Scope

This policy applies to all employees, contractors, consultants, and other authorized users (collectively, "Users") who access [COMPANY NAME] data or systems for work purposes, on company-owned devices or personal devices used for work. It covers all generative AI tools (large language models, image generators, code assistants), AI-augmented productivity tools, and AI agents or assistants regardless of vendor.

3. Sanctioned AI Tools

Users may use the following AI tools for work purposes when accessed via the user's [COMPANY NAME] account or company-licensed subscription:

  • Microsoft 365 Copilot (in-app productivity within Word, Excel, Outlook, Teams, PowerPoint).
  • [ENTERPRISE AI ASSISTANT] for general-purpose writing, analysis, and research. Examples: OpenAI ChatGPT Enterprise, Anthropic Claude for Work (Team or Enterprise), Gemini for Google Workspace.
  • [CUSTOM TOOLS] approved by the [POLICY OWNER ROLE] in writing.

Use of any AI tool not on this list (including consumer-tier ChatGPT, free Claude, free Gemini, character.ai, and similar) is prohibited for work purposes. Personal use of consumer AI tools on personal devices and personal time is not regulated by this policy.

4. Data That May Not Be Entered into AI Prompts

Regardless of which AI tool is in use, the following data may never be entered into AI prompts, attached to AI conversations, or used as AI training data:

  • Personally identifiable information (PII) of customers, employees, or third parties, including Social Security numbers, driver's license numbers, financial account numbers, and dates of birth.
  • Protected Health Information (PHI) under HIPAA, except where the AI vendor has executed a Business Associate Agreement with [COMPANY NAME].
  • Credit card numbers and cardholder data.
  • Authentication credentials including passwords, API keys, access tokens, and private keys.
  • Confidential business information including unannounced pricing, contract terms during active negotiation, M&A activity, board materials, and unfiled patent disclosures.
  • Customer or employee communications labeled or marked as confidential or privileged, including attorney-client privileged material.
  • Any data marked as Confidential or Restricted under [COMPANY NAME]'s data classification scheme.
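
Where your tooling allows a hook before prompts leave the user's machine, a lightweight screening pass can catch the machine-detectable categories above. The Python sketch below is illustrative only: the `screen_prompt` helper and its patterns are our assumptions, and pattern matching cannot detect confidential business terms or privileged material, so it supplements training and review rather than replacing them.

```python
import re

# Illustrative pre-submission screen for a few machine-detectable
# categories from Section 4. A safety net, not an enforcement mechanism.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key": re.compile(r"\b(sk|pk|ghp|AKIA)[A-Za-z0-9_\-]{16,}\b"),
    "card_candidate": re.compile(r"\b(?:\d[ -]?){13,19}\b"),
}

def luhn_ok(digits: str) -> bool:
    """Luhn checksum: filters card-number candidates from random digit runs."""
    nums = [int(c) for c in digits if c.isdigit()]
    total = 0
    for i, n in enumerate(reversed(nums)):
        if i % 2 == 1:
            n *= 2
            if n > 9:
                n -= 9
        total += n
    return total % 10 == 0

def screen_prompt(text: str) -> list[str]:
    """Return the prohibited-data categories detected in a draft prompt."""
    hits = []
    for name, pattern in PATTERNS.items():
        for match in pattern.finditer(text):
            if name == "card_candidate" and not luhn_ok(match.group()):
                continue  # digit run that fails the checksum; likely not a card
            hits.append(name)
            break
    return hits

if __name__ == "__main__":
    draft = "Customer SSN is 123-45-6789, card 4111 1111 1111 1111."
    print(screen_prompt(draft))  # ['ssn', 'card_candidate']
```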

5. AI-Generated Content Review and Labeling

AI tools generate plausible-sounding output that is not always accurate. Users are responsible for reviewing AI-generated content for accuracy, completeness, and appropriateness before relying on it for business decisions, customer communications, or legal documents. AI is a productivity assistant, not a decision-maker.

AI-generated content used in customer-facing material, regulatory filings, or formal documents must be:

  • Reviewed and edited by a human with subject-matter expertise.
  • Labeled with metadata indicating AI involvement and stored apart from source-of-truth records (for example, in an "AI Working Documents" folder separate from authoritative records).
  • Validated against authoritative sources before publication.

6. Account Ownership and Logging

Sanctioned AI tool accounts must be provisioned through [COMPANY NAME] IT under a company-managed identity. Personal accounts may not be used to process company data. AI tool conversations and prompts are subject to company audit logging where the underlying tool supports it. Users should have no expectation of privacy regarding work-related AI conversations conducted under company accounts.

7. Vendor Onboarding

AI tools added to the sanctioned list must complete the [COMPANY NAME] vendor risk assessment, including:

  • Data Processing Agreement (DPA) execution.
  • Subprocessor list review.
  • Data residency confirmation.
  • Training opt-out documentation.
  • For HIPAA-covered scope: Business Associate Agreement (BAA) execution.
  • For PCI-covered scope: confirmation that the vendor does not process cardholder data, or appropriate PCI scope documentation.

8. Customer-Facing AI Features

If [COMPANY NAME] deploys AI features that are visible to customers (for example, an AI chatbot, automated proposal generator, or AI-assisted customer service), those features must be reviewed for prompt injection, data exfiltration, and bias before launch. Customer-facing AI must be disclosed where required by law (state-specific AI disclosure laws) and in customer-facing privacy notices.

9. Incident Response

Users who become aware of a potential AI-related incident (data exposure to an unsanctioned AI tool, prompt injection attempt, suspected manipulation of AI-generated output, vendor breach affecting [COMPANY NAME] data) must report it to [INCIDENT REPORTING CONTACT] within 24 hours of discovery.

10. Training and Awareness

All Users will complete annual AI literacy and policy training. New employees will complete the training during onboarding. Training is updated quarterly as the AI tool landscape evolves.

11. Enforcement

Violations of this policy may result in disciplinary action up to and including termination of employment, contractor relationship, or system access. [COMPANY NAME] reserves the right to monitor AI tool usage on company systems and take appropriate action.

12. Policy Review and Updates

This policy is reviewed quarterly by the [POLICY OWNER ROLE] and updated as needed. Material changes require sign-off from [APPROVAL ROLE]. The current version of the policy is available at [POLICY LOCATION].

Questions: contact [POLICY OWNER ROLE] at [POLICY OWNER EMAIL].

Template ends. Vertical addenda below.

Vertical Addenda

Append the addendum that fits your industry to the base policy above.

Addendum A: Construction General Contractors

The following data categories are explicitly added to the "may not enter" list in Section 4 for general contractors:

  • Active bid pricing, bid sheets, and competitive estimating data.
  • Subcontractor confidential pricing, certificates of insurance with PII, and W-9 information.
  • Contract terms during active negotiation, including GMP language and owner-rep amendments.
  • OSHA 300 logs, MSHA Part 50 reports, or other safety records identifying specific employees.
  • Project files marked as confidential by the project owner or AIA contract.

Submittal documents and RFIs supplied by external parties may carry hidden prompt-injection content. Users must process these through sanctioned AI tools that include output validation rather than pasting them directly into a conversational AI without review. Read more: AI Risks for Construction GCs.
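
To illustrate the risk pattern, the sketch below flags instruction-like strings in inbound document text before anyone pastes it into an AI conversation. The heuristic and its keyword list are hypothetical assumptions of ours, not a feature of any named tool, and keyword matching is easily bypassed; sanctioned tools with output validation remain the actual control.

```python
import re

# Hypothetical heuristic: flag instruction-like phrasing hidden in inbound
# submittal or RFI text. Trivially bypassed, so this demonstrates the risk
# rather than defending against it.
SUSPECT = re.compile(
    r"(ignore (all |any )?(previous|prior) instructions"
    r"|disregard the above"
    r"|you are now"
    r"|system prompt)",
    re.IGNORECASE,
)

def flag_injection_candidates(document_text: str) -> list[str]:
    """Return lines of an inbound document containing instruction-like text."""
    return [line.strip() for line in document_text.splitlines()
            if SUSPECT.search(line)]
```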

Addendum B: Mining Operators (MSHA-Regulated)

The following data categories are explicitly added to the "may not enter" list in Section 4 for MSHA-regulated mining operators:

  • MSHA Part 50 accident, injury, and illness reports (Form 7000-1) and quarterly employment reports (Form 7000-2).
  • Miner training records (Part 46 / Part 48 certifications and refresher records).
  • Hazard communication records and exposure data identifying specific miners.
  • Equipment safety records, electrical permissibility documentation, and ventilation surveys.
  • Incident root-cause analyses with miner identification.

AI-generated derivative content (summaries, gap analyses, draft reports) must be stored in a separate, clearly labeled folder ("AI Working Documents") apart from source-of-truth Part 50 records. Each AI-generated artifact must include metadata identifying the model used, the prompt, and the date generated, so MSHA inspectors can distinguish source records from derived analyses. Read more: MSHA Documentation and AI.
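
One minimal way to satisfy the metadata requirement, assuming a JSON sidecar per artifact is acceptable to your records process (the folder layout and field names below are our assumptions, not MSHA requirements):

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Sketch: write a JSON sidecar next to each AI-generated artifact so that
# derived analyses stay distinguishable from source-of-truth Part 50 records.
WORKING_DIR = Path("AI Working Documents")

def save_with_provenance(filename: str, content: str,
                         model: str, prompt: str) -> Path:
    """Store an AI-generated artifact plus a sidecar recording its provenance."""
    WORKING_DIR.mkdir(exist_ok=True)
    artifact = WORKING_DIR / filename
    artifact.write_text(content, encoding="utf-8")
    sidecar = artifact.parent / (artifact.name + ".provenance.json")
    sidecar.write_text(json.dumps({
        "model": model,        # model used
        "prompt": prompt,      # prompt that produced the artifact
        "generated": datetime.now(timezone.utc).isoformat(),  # date generated
        "source_of_truth": False,  # derived analysis, not a Part 50 record
    }, indent=2), encoding="utf-8")
    return artifact

# Example (model name and prompt are placeholders):
# save_with_provenance("part50_gap_analysis.md", draft_text,
#                      model="[MODEL]", prompt="Summarize Q2 7000-1 filings")
```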

Addendum C: Healthcare Practices (HIPAA-Covered)

For HIPAA-covered entities and their business associates, the following rules apply in addition to the base policy:

  • Protected Health Information (PHI) is explicitly out of scope for all generative AI tools, except where the AI vendor has executed a Business Associate Agreement with [COMPANY NAME] covering the specific data scope.
  • Patient charts, encounter notes, lab results, imaging interpretations, and any document combining a patient identifier with a clinical fact may not be entered into AI prompts.
  • Patient-identified intake forms, claims data, and pharmacy interactions are out of scope for generative AI.
  • Sensitivity labels at the document level (Microsoft Purview or equivalent) must enforce PHI scope-out, and AI tools must refuse to process PHI-labeled content; a minimal pre-send gate is sketched after this list.
  • If PHI is inadvertently entered into an AI tool, the User must report immediately under the incident response procedure (Section 9). The event will be reviewed under the HIPAA Breach Notification Rule.
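
As a sketch of the label-enforcement bullet above: the `get_sensitivity_label` helper below is hypothetical, standing in for however your stack exposes Purview (or equivalent) labels, and the label names are illustrative assumptions.

```python
# Sketch of a pre-send gate keyed on document sensitivity labels.
# Label names below are illustrative assumptions, not Purview defaults.
PHI_LABELS = {"PHI", "Confidential-PHI", "Restricted"}

def get_sensitivity_label(path: str) -> str:
    """Hypothetical helper: return the sensitivity label on a document."""
    raise NotImplementedError("wire this to your labeling system")

def may_send_to_ai(path: str) -> bool:
    """Refuse to forward PHI-labeled content to any generative AI tool."""
    return get_sensitivity_label(path) not in PHI_LABELS
```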

Acceptable administrative AI workflows include vendor and contract review, internal communications drafting, patient-facing template generation (where the templates are abstract and patient-agnostic), and insurance verification process design. Clinical workflows that touch PHI are explicitly excluded until vendor-side capabilities catch up to regulatory expectations. Read more: HIPAA-Aware AI Workflows for Medical Practices.

Disclaimer: This template is provided as an operational starting point. It is not legal advice. Have your attorney review before deployment to confirm fit with your jurisdiction, contracts, and regulatory obligations. Unió Digital makes no representation that adopting this template alone satisfies any specific compliance framework.

Want this customized for your business?

The free AI Readiness Assessment includes a tailored AI Policy aligned to your industry, regulatory environment, and current AI tool inventory. 30 minutes. No commitment.

Book the Free Assessment