AI Usage Risk Assessment: Company Checklist
for Nevada-based Organizations
A preliminary tool for Nevada employers to identify "Shadow AI" gaps and operational risks - before they become legal, financial, or reputational liabilities.
Prepared by Fellowship Intelligence, LLC
March 2026
Why This Matters
The Hidden Risk of Shadow AI
Across Nevada, employees are quietly integrating AI tools into their daily workflows - often without organizational awareness, policy guidance or security review. This phenomenon, known as "Shadow AI," represents one of the fastest-growing compliance blind spots for employers today.
From uploading client contracts into ChatGPT to using personal accounts to process sensitive payroll data, the exposure is real and growing. Nevada employers face compounding risk across data privacy, employment law, intellectual property and fiduciary duty.
This checklist is designed to help compliance, legal, IT, and risk management leaders quickly identify where gaps exist - and prioritize corrective action before an incident forces the conversation.
Four Critical Risk Domains
01
Visibility & Usage Patterns
02
Data Protection & Privacy
03
Policy & Governance
04
Accountability & Oversight
Section 1 of 4
Visibility & Usage Patterns
You cannot govern what you cannot see. The first step in any AI risk assessment is establishing a clear, current picture of how AI tools are actually being used across your organization - not just the tools your IT team approved.
AI Tool Inventory
Do you maintain an up-to-date inventory of all AI tools currently in use by employees? This includes widely used platforms such as ChatGPT, Claude, Midjourney, Canva AI, GitHub Copilot, and Grammarly — as well as any niche or department-specific tools procured without IT review.
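For organizations starting from zero, one practical first pass is to match exported proxy or DNS logs against a watchlist of known AI platform hostnames. The sketch below is illustrative only: the log format (simple `user,host` lines) and the domain watchlist are assumptions, and real deployments would pull from your own gateway logs and a maintained domain list.

```python
# Minimal sketch: discover candidate AI-tool usage from exported access logs.
# The log format ("user,host" per line) and the watchlist are assumptions;
# substitute your own proxy/DNS export and a maintained domain list.
from collections import Counter

# Hypothetical watchlist of hostnames associated with common AI platforms.
AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "claude.ai": "Claude",
    "www.midjourney.com": "Midjourney",
    "copilot.github.com": "GitHub Copilot",
    "app.grammarly.com": "Grammarly",
}

def build_inventory(log_lines):
    """Count requests to known AI hosts, keyed by (user, tool)."""
    usage = Counter()
    for line in log_lines:
        user, host = line.strip().split(",", 1)
        tool = AI_DOMAINS.get(host)
        if tool:
            usage[(user, tool)] += 1
    return usage

sample = [
    "alice,claude.ai",
    "bob,app.grammarly.com",
    "alice,claude.ai",
    "carol,example.com",  # non-AI traffic is ignored
]
inventory = build_inventory(sample)
# Each (user, tool) pair with a nonzero count is a candidate inventory entry
# for follow-up -- not proof of policy violation on its own.
```

A pass like this will not catch personal devices or home networks, which is why it should supplement, not replace, an employee survey.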
Workflow Integration Awareness
Are you aware of which specific business processes are being augmented or automated by AI? Marketing copy generation, customer service scripts, legal research, financial modeling and software development are common adoption points. Understanding integration depth reveals operational dependency and risk concentration.
The "Personal Account" Check
Are employees using personal email addresses or consumer-grade accounts to process company data through AI platforms? Personal accounts fall entirely outside enterprise data agreements, audit trails, and security controls - creating significant exposure under Nevada's data privacy statutes and client confidentiality obligations.
Visibility Diagnostic
How Well Do You Know Your AI Footprint?
78%
of knowledge workers report using AI tools - many without employer knowledge
52%
of AI-using employees access tools via personal, unmonitored accounts at work
1 in 3
organizations have no formal inventory of AI tools in active employee use
Section 2 of 4
Data Protection & Privacy
Data is the fuel that makes AI powerful - and the source of your greatest liability. When employees input sensitive information into unvetted external AI systems, that data may be used to train models, stored on foreign servers, or exposed through platform breaches. Nevada employers have specific obligations under state privacy law and industry regulations that make this risk category particularly acute.
Contractual Risk Exposure
Are employees uploading client contracts, proprietary source code, financial projections, or trade secrets into unvetted external AI systems? Third-party platforms typically claim broad rights to input data under their terms of service — potentially voiding your own NDAs and client confidentiality agreements.
PII Safeguards
Is there a strict, enforced prohibition against entering Personally Identifiable Information - including names, Social Security numbers, health data, or sensitive donor and client records - into public AI models? Under Nevada Revised Statutes Chapter 603A, PII mishandling carries direct legal consequences for covered businesses.
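A prohibition is easier to enforce with a technical gate in front of it. The sketch below shows a pattern-based pre-screen run before any text is sent to an external AI service. The regex patterns are illustrative assumptions, not an exhaustive PII taxonomy; a production control would use a vetted data-loss-prevention tool alongside human review.

```python
# Minimal sketch of a PII pre-screen gate applied before text leaves for an
# external AI service. Patterns are illustrative, not exhaustive -- a real
# control would layer a vetted DLP library on top of checks like these.
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
}

def screen_for_pii(text):
    """Return the list of PII categories whose patterns match `text`."""
    return [name for name, pat in PII_PATTERNS.items() if pat.search(text)]

def safe_to_submit(text):
    """Block submission to an external model if any PII pattern matches."""
    return not screen_for_pii(text)

assert not safe_to_submit("Employee SSN: 123-45-6789")
assert safe_to_submit("Summarize our Q3 marketing themes.")
```

Pattern matching catches obvious structured identifiers (SSNs, emails, phone numbers) but not free-text health or donor details, so the policy ban and employee training remain the primary control.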
Data Ownership Clarity
Does your organization have a clear legal understanding of who owns both the "input" data submitted to AI tools and the "output" generated in response? Ambiguity around the ownership of AI-generated work product can create intellectual property disputes, vendor lock-in and contested deliverables with clients.
Section 3 of 4
Policy & Governance
Awareness without policy is exposure without remedy. Many Nevada organizations have acknowledged AI's presence in their workplace but have not yet codified rules for its use. A well-structured governance framework transforms reactive risk management into proactive organizational control - protecting both the employer and the employee.
1
Acceptable Use Policy (AUP)
Does your Employee Handbook contain specific, enforceable language defining permitted versus prohibited AI use? A robust AUP should address tool categories, data classification rules, disclosure requirements and consequences for violations. Generic technology policies written before 2022 are almost certainly insufficient.
2
Human-in-the-Loop Review
Is there a mandated review procedure requiring human verification of AI-generated outputs before they are used in client deliverables, legal filings, financial documents or public communications? AI models hallucinate - and your organization bears full responsibility for outputs submitted under its name.
3
Attribution Standards
Are there documented rules governing when and how employees must disclose that a work product was AI-assisted? Attribution standards matter for client trust, regulatory compliance in certain industries, academic and grant contexts, and potential future legal liability around authorship.

Nevada Employer Note: Courts and regulators increasingly expect organizations to demonstrate proactive governance. The absence of an AI Acceptable Use Policy may be treated as negligence in post-incident proceedings. Document your policies — and document when employees received training on them.
Section 4 of 4
Accountability & Oversight
Policy without accountability is theater. The final dimension of AI risk assessment asks whether your organization has clearly assigned responsibility for AI governance - and whether that responsibility is backed by training, documentation and defined escalation paths when things go wrong.
Defined Roles & Authorization
Is there a specific person, committee, or department responsible for reviewing and authorizing new AI tools before employees adopt them? Without a formal approval pathway, AI adoption becomes a free-for-all, with legal, security, and compliance implications that fall back on the organization regardless of who made the decision.
The Accountability Void
If an AI-generated error causes a legal loss, financial harm or regulatory violation, is the chain of responsibility clearly documented? Nevada employers need to answer: Who authorized the tool? Who approved its use for this task? Who reviewed the output? Undefined accountability chains create indefensible liability positions.
Training Audit
Has your workforce received formal, documented training on the legal and security risks associated with generative AI? Training records matter - not just for compliance audits, but for demonstrating due diligence in the event of an incident. One-time onboarding is insufficient; AI capabilities and risks evolve rapidly.
Your Complete Assessment at a Glance
Use this consolidated checklist to score your organization's current AI risk posture. Each unchecked item represents an active gap requiring remediation. Share this with your compliance, legal and IT leadership teams to prioritize action.
Section 1: Visibility & Usage Patterns
  • Inventory of all AI tools currently in employee use
  • Awareness of which business workflows are AI-augmented
  • Policy prohibiting use of personal accounts for company data

Section 2: Data Protection & Privacy
  • Prohibition on uploading contracts, code or trade secrets to unvetted AI
  • Strict ban on entering PII or sensitive client data into public AI models
  • Documented understanding of input/output data ownership rights
Section 3: Policy & Governance
  • Employee Handbook AUP with specific AI use language
  • Mandated human-in-the-loop review for AI-generated outputs
  • Attribution and disclosure standards for AI-assisted work products

Section 4: Accountability & Oversight
  • Defined role or department for AI tool authorization
  • Documented accountability chain for AI-related errors or losses
  • Formal, documented workforce training on AI legal and security risks
Understanding Your Risk Score
Fewer checked items signal greater organizational exposure. Use this framework to interpret your results and determine urgency of action across each domain.

Important: This checklist is a preliminary diagnostic tool, not a substitute for legal counsel. Nevada employers with identified gaps in data privacy or employment law compliance should consult qualified legal advisors to assess specific obligations under NRS Chapter 603A and applicable federal regulations.
Take Action Before Exposure Becomes Incident
The organizations that manage AI risk well are not the ones that banned AI - they are the ones that governed it deliberately. A clear policy, defined accountability and a trained workforce transform AI from a liability into a competitive advantage for Nevada employers.
Start with Visibility
Conduct a rapid internal survey to identify which AI tools your team is already using. You cannot build policy around tools you don't know exist.
Update Your Employee Handbook
Draft or update your Acceptable Use Policy to include specific AI language covering data classification, prohibited inputs, disclosure requirements and enforcement consequences.
Assign Ownership
Designate a named individual or cross-functional committee responsible for AI governance, tool authorization and incident response. Accountability starts with a name in the org chart.
Train and Document
Deliver formal AI risk training to your workforce and retain records. Documentation of training is your first line of defense in any regulatory inquiry or litigation.
© 2026 Fellowship Intelligence LLC
A Nevada Limited Liability Company
732 South 6th St. #6433
Las Vegas, NV 89101
(702) 337-3833
fellowshipintelligence.com