Last edited on January 18, 2024
To administrative faculty and staff
Generative AI tools are increasingly used across professions and in academic institutions. We have established the following guidelines to ensure the responsible and ethical use of AI tools. These guidelines apply to everyone who performs administrative work, including faculty and staff. Keep these guidelines in mind as you proceed with your tasks and apply the ones relevant to your activities. As generative AI continues to evolve, these recommendations will be periodically reviewed and revised as needed.
Guidelines for implementing AI tools in administrative duties
Generative AI solutions have the capacity to create text, visuals, and various media content. Utilizing these AI tools in your tasks can enhance your productivity and output quality. Yet, it's essential to exercise caution and responsibility when using them. This means grasping the potential and constraints of generative AI and ensuring its use aligns with the Schools’ principles, objectives, and strategic vision.
Bear in mind the following points when using generative AI for administrative purposes:
- The mechanisms behind how generative AI delivers specific results may not always be transparent;
- Outputs from generative AI derive from pre-existing data, which might harbor biases or errors;
- Generative AI tools raise intellectual property considerations, and the sources and ownership of the underlying data may be ambiguous.
Core principles for using AI tools in administrative duties
- Human expertise & judgment:
  - Generative AI should serve as a supportive tool, not a replacement for human expertise.
  - Maintain an active role and continuous engagement with relevant stakeholders.
- Quality & responsibility:
  - Ensure AI-produced content is accurate and of high quality.
  - Anticipate and address concerns about AI-driven processes, clarifying AI's role in your work.
  - Prioritize data protection in all AI-related activities.
- Data privacy & security:
  - Refrain from entering sensitive data into AI tools unless the tool has been vetted by the Information Technology Office and approved by the Legal Counsel.
- Inclusivity & accessibility:
  - Ensure AI outputs are accessible and inclusive.
  - Collaborate with the Office of Diversity & Inclusion for guidance.
- Transparency & disclosure:
  - Declare the use of AI in significant tasks, especially those with ethical or legal implications.
  - Consider whether stakeholders would expect to know about AI's role in your work.
  - Refrain from using AI for hiring, evaluating, or disciplining employees.
- Ongoing education:
  - Stay updated on generative AI advancements.
  - Participate in professional development opportunities to enhance your AI knowledge.
- Sourcing AI tools responsibly:
  - Choose tools that meet privacy and security standards.
  - Collaborate with your unit's Senior IT personnel for guidance and vetting.
  - Verify that tools are reliable through pilot tests, and maintain comprehensive documentation on sourcing and application.