2025-2026 Administrative Policy Manual

Acceptable Use of Artificial Intelligence (AI) for Academic and Administrative Purposes


Policy Number: 11.3
Effective Date: September 29, 2025
Revision History: None
Policy Contact: Senior Vice President for Academic and Student Affairs/Provost; Vice President for Information Technology and Institutional Research

I. Purpose and Policy Statement

The rapid advancement and increasing adoption of Artificial Intelligence (“AI”) technologies present new opportunities and challenges for higher education institutions. As AI becomes integrated into academics, scholarship, and administration, Georgia Gwinnett College (“GGC”) recognizes the need to ensure its use aligns with the College’s mission, values, and regulatory obligations.

This policy establishes principles and responsibilities for the ethical, responsible, and effective use of AI in the context of academics, scholarship, and administration at GGC.

This policy seeks to:

  • Uphold academic integrity and human-centered uses of AI.
  • Promote transparency and accountability in AI use.
  • Protect student privacy and institutional data security.
  • Foster responsible adoption and ethical use.
  • Cultivate an AI-literate and AI-enabled campus community.

All community members are responsible for following applicable laws, policies, standards, and guidelines including Ethical Use Principles documented in Section VI of this policy and Principles of Responsible Adoption documented in Section VII of this policy. All community members will report any concerns related to AI misuse or unintended consequences to their supervisor or the appropriate campus office.

II. Scope

This policy applies to all faculty, staff, students, contractors, and affiliates of GGC, and it governs the use of AI technologies in academic activities, including teaching, learning, and scholarship as well as administrative purposes.

Both institutionally provided and externally sourced AI tools are covered by this policy, particularly when their use may impact academic outcomes, student information, research processes, or scholarly communications.

III. Definitions

AI-Generated Content: Any output created wholly or partially by an AI system.
Artificial Intelligence (“AI”): Systems or tools that simulate human intelligence processes such as learning, reasoning, problem-solving, perception, or language understanding.
Bias in AI: Bias refers to systematic favoritism, distortion, or unfairness in the outputs produced by AI tools, often reflecting imbalances, stereotypes, or exclusions present in the data on which the models were trained.
Generative AI: AI models that produce original content (e.g., text, images, code, or audio) based on input prompts; examples include Copilot, ChatGPT, and Gemini.
Hallucinations: Hallucinations occur when a generative AI system produces false, misleading, or entirely fabricated content, even if it appears accurate or convincing. This can happen in text, images, video, audio, and other outputs.
Human Oversight: The ongoing supervision, verification, and decision-making by human users when employing AI technologies.
Human Subjects Research: Systematic investigation involving living individuals from whom an investigator obtains data through direct interaction, intervention, or access to identifiable private information. Such research is subject to ethical review and oversight by the Institutional Review Board (“IRB”) in accordance with federal regulations and institutional policies.
Personally Identifiable Information (“PII”): Data that could be used to identify a specific individual, including, but not limited to, name, student ID, email address, and demographic information.
Sensitive Data: Information that, if compromised, could cause harm to individuals or the institution, including but not limited to PII, health records, financial data, and biometric information.

IV. Roles and Responsibilities
  1. Faculty: Faculty are responsible for communicating course-specific expectations regarding the use of AI tools to students via the course syllabus. When courses incorporate AI tools or include AI learning objectives, faculty should guide students in the ethical use of AI, ensuring proper attribution and encouraging responsible application. Faculty will add language to the course syllabus to disclose the use of AI in preparing and delivering instruction or assessing assignments.
    Faculty are responsible for disclosing any AI assistance used in the creation of research papers, data analysis, or other academic outputs by including an appropriate citation in the reference list.
  2. Students: Students must use AI tools responsibly and only as permitted by their course instructors or supervisors. They are expected to disclose any AI assistance received when submitting coursework or scholarly work and to uphold the highest standards of academic integrity by not engaging in unauthorized AI use, plagiarism, or data falsification. Violations involving unauthorized or deceptive use of AI tools will be addressed under the Academic Integrity Policy for Academic Dishonesty Matters.
  3. Staff: When using AI in administrative, advising, or operational tasks, staff must ensure that any use of student data, employee information, or institutional records complies with privacy and data security guidelines. Staff should exercise human oversight and avoid overreliance on AI-generated outputs.
  4. Contractors and Affiliates of GGC: Contractors and affiliates of GGC must use AI tools in accordance with institutional policies, applicable laws, and ethical standards when performing work for GGC.
  5. Researchers: Researchers must exercise human oversight throughout the research process when using AI tools. Substantive AI contributions to research design, data analysis, or writing must be disclosed in all scholarly communications. Additionally, researchers should seek appropriate review and approval through the Institutional Review Board (“IRB”) when AI tools interact with human subjects or sensitive data.
  6. Information Technology (“IT”): In alignment with the Information Technology Acquisition and Integration Policy, IT will review third-party AI services for privacy, security, accessibility, compliance, and ethical standards prior to procurement or adoption. IT will maintain and publish an inventory of AI tools and provide resources for faculty, staff, and students on the safe, effective, and ethical adoption of campus standard AI tools. IT will also provide guidance on responsible use of AI tools and systems to help ensure compliance with regulations such as the Family Educational Rights and Privacy Act (“FERPA”) and the Health Insurance Portability and Accountability Act (“HIPAA”), as well as data governance standards.
  7. Dean of Students Office: The Dean of Students office will serve as the primary resource for questions, concerns, and guidance regarding AI and academic integrity. This office will collaborate with the AI Steering Committee and other stakeholders to evaluate evolving risks and recommend updates to policies and practices. See also Academic Integrity Policy for Academic Dishonesty Matters.
  8. AI Steering Committee: The AI Steering Committee, composed of administrators, faculty, and staff, guides campus-wide AI strategy. The committee ensures that GGC’s AI standards and practices remain aligned with applicable laws, policies, guidelines, and ethical standards. This committee will initiate, review, and recommend updates to AI-related policies at least annually and more frequently if needed, and the committee will ensure that policies and processes are promoted throughout the GGC community. The committee will facilitate the development and delivery of AI-related events and professional development opportunities for faculty, staff, and students. Members of the committee will serve as primary points of contact for AI-related inquiries and consult with campus subject matter experts as appropriate.
  9. Records and Investigations: In the event of audits or investigations, designated offices and personnel, including but not limited to the Dean of Students Office, Academic and Student Affairs, Human Resources, Business and Finance, Public Safety, Information Technology, Legal Affairs, and Internal Audits, are responsible for retaining relevant records in accordance with the University System of Georgia (“USG”) Records Retention Schedules.

V. Compliance

All AI tools used in academic or administrative contexts must operate in full compliance with applicable federal, state, and international laws, as well as recognized industry standards. This includes, but is not limited to:

  • FERPA
  • General Data Protection Regulation (“GDPR”)
  • HIPAA (where applicable)
  • National Institute of Standards and Technology (“NIST”) AI Risk Management Framework
  • Gramm-Leach-Bliley Act (“GLBA”)

VI. Principles of Ethical Use

AI tools must align with the following guiding principles:

  1. Fairness and Bias Mitigation: Tools must be evaluated for potential biases in their design, training data, and outputs. The College prioritizes AI tools that demonstrate efforts to reduce Bias in AI.
  2. Accountability: Clear lines of accountability must be established for the use of AI tools. The College community is accountable for the ethical use and oversight of AI tools, and vendors must offer transparent documentation on tool functionality, development, and maintenance.
  3. Accuracy: AI tools must be assessed for factual accuracy and reliability. Tools that consistently generate Hallucinations will be subject to heightened scrutiny and may be restricted from use.
  4. Transparency: The functionality, limitations, and, when possible, the decision-making processes of AI tools must be documented and communicated to users. This includes disclosure of AI-generated content in academic work and scholarship.
  5. Human Oversight: AI must assist, not replace, human judgment, with all outputs reviewed by a human decision-maker. Users remain ultimately responsible for decisions and outputs generated with the assistance of AI tools.
  6. Accessibility: AI tools must meet accessibility standards to ensure usability for all members of the College community. Vendors must provide evidence and/or assurances that their tool meets accessibility standards.
  7. Security: AI tools must be evaluated for vulnerabilities, including susceptibility to prompt injection attacks or manipulation. Vendors must provide evidence of security measures or safeguards to mitigate such risks.

VII. Principles of Responsible Adoption

All community members who use AI tools are responsible for their ethical and lawful use. Users must:

  1. Confirm Accuracy and Legality: Do not rely solely on AI-generated content. Always verify information using credible sources, as AI outputs can be inaccurate, biased, misleading, or fabricated. Note: AI-generated content may contain copyrighted material. Users are responsible for any legal consequences of publishing such content.
  2. Evaluate for Bias or Discrimination: Review AI outputs for bias or disparate impacts based on protected classifications (i.e., characteristics protected by institutional policy or state, local, or federal law). Disregard any output that indicates potential bias.
  3. Disclose AI Use Transparently: Clearly disclose when AI tools are used to generate academic or professional work. Transparency ensures academic integrity and fosters trust.
  4. Avoid Misuse or Legal Violations: Do not use AI to plagiarize, mislead, or violate intellectual property rights, laws, or College policies. Confirm that any quoted or paraphrased content is accurate, properly cited, and does not plagiarize or violate intellectual property rights.
  5. Prohibit Malicious Use: AI must not be used to generate harmful content, such as malware or code intended to bypass security controls.
  6. Data Protection: Users must not input Sensitive Data or PII into AI tools unless authorized through an appropriate contract and security review. Prohibited input includes:
     • Confidential Information (e.g., sensitive research data or legally protected information).
     • PII about students, faculty, staff, or other stakeholders.
     • Copyrighted or licensed content without proper usage rights.
     • Intellectual property or proprietary information (e.g., proprietary research or patentable ideas).

Whenever possible, decline permission for AI tools to store or use your inputs for model training.

VIII. Related Regulations, Statutes, Policies, and Procedures

GGC Artificial Intelligence Resource Guide
Artificial Intelligence Guidelines: A USG IT Handbook Companion Guide
USG IT Handbook
4.1.1.1.2 Academic Integrity Policy for Academic Dishonesty Matters
6.1.1 Ethics in Research
8.2.53 Copyright and Fair Use
10.8 Data Management and Classification
10.9 Student Education Records Management Policy
11.1 Information Technology Compliance
11.50.1.1 Acceptable Use of Information Technology Resources
11.7 Information Technology Acquisition and Integration Policy
Family Educational Rights and Privacy Act
Health Insurance Portability and Accountability Act
USG Records Retention Schedules