Last edited on January 18, 2024


To administrative faculty and staff

Generative AI tools are increasingly used across professions and in academic institutions. We have has established the following guidelines to ensure the responsible and ethical use of AI tools. These guidelines are for everyone at who is doing administrative work, including faculty and staff. Remember these guidelines as you proceed with your tasks and apply the ones relevant to your activities. As generative AI continues to evolve, these recommendations will be periodically reviewed and revised as needed.

Guidelines for implementing AI tools in administrative duties

Generative AI solutions have the capacity to create text, visuals, and various media content. Utilizing these AI tools in your tasks can enhance your productivity and output quality. Yet, it's essential to exercise caution and responsibility when using them. This means grasping the potential and constraints of generative AI and ensuring its use aligns with the Schools’ principles, objectives, and strategic vision.

Bear in mind the following points when using generative AI for administrative purposes:

  • The mechanisms behind how generative AI delivers specific results may not always be transparent;
  • Outputs from generative AI derive from pre-existing data, which might harbor biases or errors;
  • There are intellectual property considerations when employing generative AI tools, leading to potential ambiguity regarding data sources and ownership.

Core principles for using AI tools in administrative duties

  1. Human expertise & judgment:
    1. Generative AI should serve as a supportive tool, not a replacement for human expertise.
    2. Maintain an active role and continuous engagement with relevant stakeholders.
  2. Quality & responsibility:
    1. Ensure AI-produced content is accurate and of high quality.
    2. Anticipate and address concerns about AI-driven processes, clarifying its role in your work.
    3. Prioritize data protection in all AI-related activities.
  3. Data privacy & security:
    1. Refrain from entering sensitive data into AI tools unless vetted by the Information Technology Office and approved by the Legal Counsel.
  4. Inclusivity & accessibility:
    1. Ensure AI outputs are universally accessible and inclusive.
    2. Collaborate with the Office of Diversity & Inclusion for guidance.
  5. Transparency & disclosure:
    1. Declare the use of AI in significant tasks, especially those with ethical or legal implications.
    2. Evaluate whether stakeholders would expect to know about AI’s role in your work.
    3. Refrain from using AI for hiring, evaluating, or disciplining employees.
  6. Ongoing education:
    1. Stay updated on generative AI advancements.
    2. Participate in professional development opportunities to enhance your AI knowledge.
  7. Sourcing AI tools responsibly:
    1. Choose tools that respect privacy and security standards.
    2. Collaborate with your unit’s Senior IT personnel for guidance and vetting.
    3. Ensure tools are reliable through pilot tests and maintain comprehensive documentation on sourcing and application.

AI usage guidance

Specific examples of how AI can be used in different staff roles (e.g., administrative tasks, communication, data management).

  • Example 1: Using AI to automate administrative tasks, such as scheduling appointments and processing invoices.
  • Example 2: Using AI to manage data and generate reports.
  • Example 3: Using AI to improve customer service by providing personalized support and recommendations.

Potential privacy and security risks of AI 

Staff role Potential risks Mitigation strategies

Administrative Staff

  • Unauthorized access to sensitive data during processing tasks.
  • Data breaches through AI-powered systems
  • Implement data access controls and role-based permissions.
  • Securely store and encrypt sensitive information.
  • Regularly monitor and audit AI systems for potential vulnerabilities.

HR Staff

  • Biases in AI-powered recruitment and employee evaluation tools.
  • Potential misuse of employee data for profiling or surveillance
  • Use diverse and unbiased datasets for AI training.
  • Implement transparent and explainable AI models.
  • Establish clear policies and procedures for data use and employee privacy.

Financial Staff

  • Fraudulent transactions through AI-powered financial systems.
  • Data breaches exposing sensitive financial information.
  • Implement strong authentication and authorization measures.
  • Continuously monitor and update AI models to detect anomalies.
  • Use secure communication channels for financial transactions.

IT Staff

  • Vulnerability of AI systems to cyberattacks and malicious actors.
  • Lack of awareness and training on AI security risks.
  • Regularly update and patch AI software and systems.
  • Conduct security assessments and penetration testing.
  • Provide training and awareness programs for staff on AI security best practices.